CN113344787B - Optimal suture line automatic adjustment algorithm, traffic early warning method and system - Google Patents


Info

Publication number
CN113344787B
CN113344787B · Application CN202110656203.6A
Authority
CN
China
Prior art keywords
vehicle
suture line
optimal suture
optimal
road
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110656203.6A
Other languages
Chinese (zh)
Other versions
CN113344787A (en)
Inventor
文涛
李洋洋
钟连德
赵飞
阿布部音木·阿布都克里木
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhongjiao Huaan Technology Co ltd
Original Assignee
Beijing Zhongjiao Huaan Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhongjiao Huaan Technology Co ltd filed Critical Beijing Zhongjiao Huaan Technology Co ltd
Priority to CN202110656203.6A priority Critical patent/CN113344787B/en
Publication of CN113344787A publication Critical patent/CN113344787A/en
Application granted granted Critical
Publication of CN113344787B publication Critical patent/CN113344787B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4007Interpolation-based scaling, e.g. bilinear interpolation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4046Scaling the whole image or part thereof using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/70
    • G06T5/80
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Abstract

The invention discloses an optimal suture line automatic adjustment algorithm, relating to the technical field of image processing. Based on the optimal suture line algorithm, an optimal suture line is searched in the overlapping area of each image of preprocessed video data, and whether the optimal suture line needs to be adjusted is judged according to preset constraint conditions; if adjustment is needed, the optimal suture line is searched again in the overlapping area of each image of the video data using the optimal suture line algorithm. This improves applicability and lowers the performance requirements on the computer system. In addition, the invention also discloses a panoramic video generation method and a traffic early warning method based on the optimal suture line automatic adjustment algorithm, relating to the technical field of intelligent traffic monitoring.

Description

Optimal suture line automatic adjustment algorithm, traffic early warning method and system
Technical Field
The invention particularly relates to an optimal suture line automatic adjustment algorithm, a traffic early warning method and a system, and belongs to the technical field of image processing and traffic intelligent monitoring.
Background
As highway construction advances year by year, traffic accidents have gradually increased, particularly accidents inside tunnels. Because a tunnel is a semi-enclosed structure, a traffic accident inside it has more serious consequences than one on an open road surface, causing heavy losses of life and property. With the continuous development of information technology, a large number of video monitoring devices have been deployed on highways to monitor passing vehicles, especially at tunnel entrances and exits and inside tunnels, which has to some extent effectively reduced the incidence of traffic accidents. However, current video monitoring mainly targets individual traffic cross-sections rather than visualized monitoring of a whole road section, so a large number of traffic accidents cannot be traced, especially chain-reaction accidents inside tunnels.
In order to solve the above problems, a real-time video stitching method based on an optimal stitching line is mainly adopted at present, such as:
The patent application with publication number CN111553841A discloses a real-time video stitching algorithm based on optimal suture line updating. It uses feature-point-based image registration to estimate the internal and external parameters of each camera from the matched feature points of images taken from different perspectives. For image fusion between different perspectives, an image fusion algorithm based on an optimal suture line is adopted; to address the ghosting and discontinuity that a moving object may produce, a background elimination algorithm based on the K-nearest-neighbor algorithm and a suture line updating algorithm based on dynamic programming are used to avoid the discontinuity and ghosting generated when a moving object crosses the suture line, and an image fusion algorithm based on a convolution pyramid is used to eliminate stitching seams.
The invention patent with publication number CN107580186B discloses a video stitching method based on spatio-temporal optimization of the suture line. It detects and aligns feature points with the scale-invariant feature transform algorithm, refines them with the random sample consensus algorithm, computes the feature-point distances in the vertical direction, and pre-aligns the video frames; computes an optimal suture line based on a graph-cut algorithm with added constraints such as foreground, edges and parallax; smooths the suture line sequence using foreground detection and a Gaussian filter; evaluates the quality of the suture line sequence; and linearly fuses the images on the two sides of the suture line according to its quality to obtain the panoramic video.
In summary, video stitching technologies, especially image-based ones, are mature at the present stage, but there is little research on panoramic video monitoring of tunnels under correspondingly complex and changeable environments; tunnels are not comprehensively supervised and emergencies are not handled in time, so accidents in tunnels are frequent and rescue is difficult. It is therefore especially necessary to stitch panoramic video in complex and changeable environments such as tunnels so as to effectively control tunnel traffic. Because a tunnel is a semi-enclosed structure with a complex, changeable internal environment, many cameras and a large mileage span, existing video stitching algorithms with a single set of fixed parameters cannot meet the panoramic video stitching requirements of multiple cameras in a changeable environment, while stitching based on an optimal suture line updated in real time places high performance requirements on the computer system.
Disclosure of Invention
To address the defects of the prior art, namely that a video stitching algorithm with a single set of fixed parameters cannot meet the panoramic video stitching requirements of multiple cameras in a changeable environment and cannot stitch panoramic video in complex and changeable environments such as tunnels so as to effectively control tunnel traffic, the embodiments of the present invention provide an optimal suture line automatic adjustment algorithm, a traffic early warning method and a traffic early warning system.
In order to achieve the purpose, the embodiment of the invention adopts the following technical scheme:
in a first aspect, an embodiment of the present invention provides an optimal suture line automatic adjustment algorithm, which includes the following steps:
s11, based on the optimal suture line algorithm, searching an optimal suture line in the overlapping area of each image of the preprocessed video data;
s12, judging whether the optimal suture line needs to be adjusted or not according to preset constraint conditions, wherein the constraint conditions comprise a duration constraint expression and a pixel difference constraint expression;
s13, if the optimal suture line needs to be adjusted, searching the optimal suture line again in the overlapping area of each image of the video data by using an optimal suture line algorithm;
s14 repeating the above steps S12-S13, automatically adjusting the optimal suture line.
As a preferred embodiment of the present invention, the constraint conditions are:
$$
\begin{cases}
|E_f(x,y)-E_{f,z}(x,y)| > \Delta P_i \\
T_t - T_0 > \Delta T_i
\end{cases}
$$

wherein $|E_f(x,y)-E_{f,z}(x,y)| > \Delta P_i$ is the pixel difference constraint expression and $T_t - T_0 > \Delta T_i$ is the duration constraint expression; $E_f(x,y)$ is the pixel criterion value of the optimal suture line f at point (x, y); $E_{f,z}(x,y)$ is the pixel criterion value of the optimal suture line f at a neighboring point z around coordinate (x, y); $T_t$ is the current recording time point; $T_0$ is the time point at which the optimal suture line was determined; $\Delta P_i$ is the camera pixel difference threshold sequence; $\Delta T_i$ is the duration threshold sequence; i = 1, 2, …, N, and N is a natural number.
As a preferred embodiment of the present invention, the determining whether the optimal suture line needs to be adjusted according to a preset constraint condition includes:
It is judged whether the duration constraint expression and the pixel difference constraint expression are satisfied simultaneously; if so, it is determined that the optimal suture line currently needs to be adjusted.
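The dual-threshold judgment of steps S12-S13 can be condensed into a single check; a minimal sketch, assuming scalar criterion values, where the function name and the argument names (`dP`, `dT`) are illustrative and not from the patent:

```python
def needs_adjustment(e_f, e_fz, t_now, t_seam, dP, dT):
    """Return True only when BOTH constraints hold simultaneously:
    the pixel difference constraint |E_f - E_{f,z}| > dP and the
    duration constraint (T_t - T_0) > dT.  (Hypothetical helper;
    names and units are illustrative.)"""
    return abs(e_f - e_fz) > dP and (t_now - t_seam) > dT
```

Only when both inequalities hold does the seam get re-searched; a large pixel difference alone, or a long elapsed time alone, leaves the current optimal suture line in place.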
In a second aspect, an embodiment of the present invention provides a method for generating a panoramic video based on the optimal suture line automatic adjustment algorithm described in the first aspect, where the method includes the following steps:
s21, continuously obtaining the optimal suture line by using an optimal suture line automatic adjustment algorithm;
s22, generating images to be spliced according to the optimal suture line;
s23, fusing the images to be spliced by using a Poisson fusion algorithm;
and S24, generating a panoramic video in real time according to the fused images to be spliced.
In a third aspect, an embodiment of the present invention provides a method for performing traffic early warning based on the panoramic video real-time generation method in the second aspect, where the method includes the following steps:
s31, generating a panoramic video in real time by using a panoramic video real-time generation method;
s32, acquiring vehicles with abnormal driving states in the panoramic video in real time according to the first identification model, and acquiring basic information of the vehicles by using an image identification technology;
s33, sending out abnormal vehicle early warning information according to the basic information of the vehicle;
s34, acquiring roads with abnormal traffic states in the panoramic video in real time according to the second recognition model;
and S35, sending road early warning information according to the serial number, the name and the road section of the road.
As a preferred embodiment of the present invention, the functional expression of the first recognition model is:
$$
\begin{cases}
c_{j1}=1, & \alpha_j > \tau_\alpha \\
c_{j2}=2, & \beta_j > \tau_\beta \\
c_{j3}=3, & \delta_j < \tau_\delta \\
c_{j4}=4, & \gamma_j > \tau_\gamma
\end{cases}
$$

wherein $c_{j1}$, $c_{j2}$, $c_{j3}$, $c_{j4}$ respectively denote the abnormality information of the speed, acceleration, front-rear vehicle distance and lane offset of vehicle j, and the values 1, 2, 3 and 4 respectively indicate a speed abnormality, acceleration abnormality, front-rear vehicle distance abnormality and lane offset abnormality of vehicle j; $\tau_\alpha$, $\tau_\beta$, $\tau_\delta$ and $\tau_\gamma$ are respectively the preset vehicle speed threshold, vehicle acceleration threshold, front-rear vehicle distance threshold and lane offset threshold; $\alpha_j$, $\beta_j$, $\delta_j$ and $\gamma_j$ respectively represent the speed, acceleration, front-rear vehicle distance and lane offset of vehicle j; j = 1, 2, …, K, and K is a natural number.
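As a rough sketch of how the first recognition model's threshold tests might be applied in code — the function name, the returned code list, and the direction of each comparison (e.g. treating a too-small front-rear distance as abnormal) are assumptions, not the patent's implementation:

```python
def classify_vehicle(alpha, beta, delta, gamma,
                     t_alpha, t_beta, t_delta, t_gamma):
    """Collect abnormality codes for one vehicle j: 1 = speed, 2 =
    acceleration, 3 = front-rear distance, 4 = lane offset.  The t_*
    arguments stand in for the preset tau thresholds in the text; the
    comparison directions are assumptions."""
    codes = []
    if alpha > t_alpha:      # speed above the preset speed threshold
        codes.append(1)
    if beta > t_beta:        # acceleration above the preset threshold
        codes.append(2)
    if delta < t_delta:      # headway below the preset distance threshold
        codes.append(3)
    if gamma > t_gamma:      # lane offset above the preset threshold
        codes.append(4)
    return codes
```

For example, a vehicle at 130 km/h against a 120 km/h threshold, with the other quantities within bounds, would be flagged with code 1 only.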
As a preferred embodiment of the present invention, the functional expression of the second recognition model is:
$$
St_t=\begin{cases}
1, & \forall k,\ v_k \neq 0 \ \text{and}\ v_{85} \le v_z \\
2, & \forall k,\ v_k \neq 0 \ \text{and}\ v_{85} \ll v_z \\
3, & \exists k,\ v_k = 0 \\
4, & v_{85} > v_z
\end{cases}
$$

wherein $St_t$ denotes the traffic state information of the road at time t, and the values 1, 2, 3 and 4 respectively indicate that the traffic state of the road is normal, congested, accident and overspeed; $v_z$ is the vehicle speed limit of the road; $v_{85}$ is the speed of the 85th-percentile vehicle passing the road within a set period; $\forall k,\ v_k \neq 0$ indicates that the speed of every vehicle in the road is non-zero, and $\exists k,\ v_k = 0$ indicates that there exists a vehicle k in the road whose speed is 0.
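A sketch of how the second recognition model's state decision could be coded. The 85th-percentile computation and the congestion cutoff (`congestion_ratio`) are assumptions filling in details the original figure does not preserve:

```python
def road_state(speeds, v_limit, congestion_ratio=0.5):
    """Classify a road's traffic state from the vehicle speeds observed
    in a set period: 3 = accident if any vehicle is stopped; 4 =
    overspeed if the 85th-percentile speed v85 exceeds the limit v_z;
    2 = congested if v85 is well below the limit (congestion_ratio is
    an assumed cutoff); otherwise 1 = normal."""
    if any(v == 0 for v in speeds):
        return 3                      # a stopped vehicle implies an accident
    v85 = sorted(speeds)[int(0.85 * (len(speeds) - 1))]
    if v85 > v_limit:
        return 4                      # 85th-percentile speed above the limit
    if v85 < congestion_ratio * v_limit:
        return 2                      # traffic flowing but abnormally slow
    return 1
```

Note the precedence chosen here: a stopped vehicle dominates the decision, then overspeed, then congestion; the patent text does not state an ordering, so this is a design assumption.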
In a fourth aspect, an embodiment of the present invention provides a panoramic video generation system, where the system includes:
a calculation module configured to continuously obtain an optimal suture line using an optimal suture line automatic adjustment algorithm;
a first generation module configured to generate images to be stitched according to the optimal stitching line;
the fusion module is configured to fuse the images to be spliced by utilizing a Poisson fusion algorithm;
and the second generation module is configured to generate the panoramic video in real time according to the fused images to be spliced.
In a fifth aspect, an embodiment of the present invention provides a traffic early warning system, where the system includes:
a calculation module configured to generate a panoramic video in real time using a panoramic video real-time generation method;
the first acquisition module is configured to acquire a vehicle with an abnormal driving state in the panoramic video in real time according to a first identification model and acquire basic information of the vehicle by using an image identification technology;
the first early warning module is configured to send out abnormal vehicle early warning information according to the basic information of the vehicle;
the second acquisition module is configured to acquire a road with an abnormal traffic state in the panoramic video in real time according to a second recognition model;
and the second early warning module is configured to send out road early warning information according to the serial number, the name and the road section of the road.
The invention has the following effects:
(1) Combining the change characteristics of the environment in the tunnel, a dual-threshold constraint is introduced and an optimal suture line automatic adjustment algorithm is provided, which meets the panoramic video stitching requirements of multiple cameras in a changeable environment, improves applicability, removes the need to adjust the optimal suture line in real time, and lowers the performance requirements on the computer system;
(2) Based on the optimal suture line automatic adjustment algorithm, the embodiments of the invention also provide a panoramic video generation method and system and a traffic early warning method and system, which stitch panoramic video in complex and changeable environments such as tunnels so as to effectively control tunnel traffic, solve the problems of inconvenient and untimely supervision of traffic conditions in tunnels, fill the gap of panoramic video stitching in changeable environments such as tunnels, and reduce the traffic accident rate in such environments.
Drawings
FIG. 1 is a schematic flow chart of an optimal suture thread auto-adjustment algorithm according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a panoramic video generation method according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of a traffic warning method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a panoramic video generation system according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a traffic early warning system according to an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
Example 1
As shown in fig. 1, an embodiment of the present invention provides an optimal suture thread automatic adjustment algorithm, which includes the following steps:
and S101, collecting video data.
As a specific embodiment of the invention, video data collection near and inside a tunnel comprises two parts: image snapshot and video collection. A collection system that snapshots passing vehicles is arranged at the tunnel entrance, and snapshot points inside the tunnel can be added if needed. The layout of video collection inside the tunnel is combined with the pixel count and resolution of the cameras: to meet the requirement of stitching the video data collected by adjacent cameras, the mounting height, spacing and angle of the cameras inside the tunnel must be kept consistent, so that the obtained video data have a certain similarity and the information of passing vehicles, including color, shape and license plate number, can be recognized.
S102, preprocessing the video data, comprising the following steps:
step 1: and (5) distortion correction.
As a specific embodiment of the present invention, an image in the video data may be distorted due to lens distortion and various external environmental factors; distortion correction is performed according to the camera lens parameters so that the color and brightness of the image are uniform, which is conducive to complete and smooth image stitching.
Step 2: and (5) denoising the image.
As a specific embodiment of the present invention, a typical gaussian filtering denoising method is used to denoise an image in video data, and a filtering result is obtained by weighting and averaging pixel values in a filter window.
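The weighted-average filtering described above can be sketched directly in NumPy; the kernel size and sigma are illustrative values, and a production system would use an optimized library routine rather than this naive loop:

```python
import numpy as np

def gaussian_blur(img, ksize=5, sigma=1.0):
    """Denoise by weighted-averaging the pixel values in each filter
    window with a 2-D Gaussian kernel, as in preprocessing Step 2.
    Edge pixels use reflect padding; ksize and sigma are illustrative."""
    ax = np.arange(ksize) - ksize // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    kernel /= kernel.sum()                    # weights sum to 1
    pad = ksize // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + ksize, j:j + ksize] * kernel).sum()
    return out
```

Because the kernel weights sum to 1, a constant image passes through unchanged, which is a quick sanity check on the normalization.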
Step 3: and (6) color correction.
Under the influence of many factors, differences in color and brightness between images cause the overall brightness of the stitched image to be inconsistent and obvious stitching gaps to appear. A better image fusion effect is therefore obtained through color correction: taking the color and brightness of one image as the reference, the color and brightness of the other image are changed by a color correction algorithm so that the two images become similar. The color correction algorithm transforms the R, G and B channels separately using one-dimensional histogram matching, and then merges the 3 channels to obtain the target image.
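The per-channel one-dimensional histogram matching step can be sketched as follows for a single channel (apply it to each of R, G and B, then recombine); `match_histogram` is a hypothetical helper name, not the patent's:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap the pixel values of one channel of the source image so
    that their cumulative distribution matches that of the same channel
    in the reference image (one-dimensional histogram matching)."""
    s_vals, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size   # source CDF per unique value
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source CDF level, find the reference value at the same level
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(source.shape)
```

Matching the CDFs rather than scaling means/variances handles non-linear brightness differences between adjacent cameras, at the cost of assuming the two channels depict statistically similar scenes.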
Step 4: and (5) matching the features.
Feature matching mainly comprises the following four steps: establishing a scale space, detecting feature points, determining the main direction of the feature points, and generating feature descriptors:
(1) establishment of a scale space
By establishing a scale space to increase scale invariance for the feature points, a box filter can be convolved with the image to obtain a multi-scale image group.
(2) Determining keypoints
And detecting key points by using the maximum value of the Hessian matrix determinant.
(3) Determining characteristic point principal directions
And establishing a circular neighborhood by taking the characteristic point as a center, scanning the circular neighborhood by adopting a fan-shaped sliding window, calculating a wavelet response value of a pixel point in the window, and taking a central angle corresponding to the window with the maximum response value as a direction angle of the characteristic point.
(4) Establishing a feature point description vector
A square area is established with the feature point as the center and the main direction of the feature point as the horizontal direction, and the response values of each sub-window in the horizontal and vertical directions are calculated so as to build a four-dimensional feature description vector for the sub-window.
Step 5: and (5) carrying out interpolation processing on the pixel points.
Integer coordinates are converted by bilinear interpolation, i.e., the change between pixel points is assumed to be linear. The four pixel points surrounding the point P(x, y) to be solved are $P_{11}(x_1, y_1)$, $P_{12}(x_1, y_2)$, $P_{21}(x_2, y_1)$ and $P_{22}(x_2, y_2)$; interpolation is performed first in the horizontal direction and then in the vertical direction to calculate the pixel value of P(x, y).

First, linear interpolation is performed in the horizontal direction:

$$
f(R_1) \approx \frac{x_2-x}{x_2-x_1} f(P_{11}) + \frac{x-x_1}{x_2-x_1} f(P_{21}), \qquad
f(R_2) \approx \frac{x_2-x}{x_2-x_1} f(P_{12}) + \frac{x-x_1}{x_2-x_1} f(P_{22}) \quad (1)
$$

then, in the vertical direction:

$$
f(P) \approx \frac{y_2-y}{y_2-y_1} f(R_1) + \frac{y-y_1}{y_2-y_1} f(R_2) \quad (2)
$$

In formulas (1) and (2), the coordinate of $R_1$ is $(x, y_1)$, the coordinate of $R_2$ is $(x, y_2)$, f is the function mapping a point to its image pixel value, and f(P) is the pixel value of the point P(x, y) to be solved.
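Formulas (1) and (2) translate directly into code; a minimal sketch with the same point layout (P11, P12, P21, P22 around P):

```python
def bilinear(p11, p12, p21, p22, x1, x2, y1, y2, x, y):
    """Bilinear interpolation: interpolate horizontally to R1 = (x, y1)
    and R2 = (x, y2) per formula (1), then vertically to P(x, y) per
    formula (2).  p11..p22 are the pixel values f(P11)..f(P22)."""
    fr1 = (x2 - x) / (x2 - x1) * p11 + (x - x1) / (x2 - x1) * p21
    fr2 = (x2 - x) / (x2 - x1) * p12 + (x - x1) / (x2 - x1) * p22
    return (y2 - y) / (y2 - y1) * fr1 + (y - y1) / (y2 - y1) * fr2
```

At the center of a unit cell the result is simply the average of the four corner values, and at a corner it reduces to that corner's value.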
The video data is preprocessed, so that a foundation is laid for a subsequent image registration process, and the accuracy of image registration can be effectively improved.
S103, searching an optimal suture line in the overlapping area of each image of the video data by using a monitoring-oriented multi-path video stream splicing algorithm.
Among them, the criterion for the optimal suture line requires that the difference in color and geometric structure between the images on the two sides of the line be minimized; therefore, the following formula can be taken as the criterion for searching the optimal suture line:

$$E(x,y) = (E_{color}(x,y))^2 + E_{geometry}(x,y) \quad (3)$$

In formula (3), E(x, y) represents the pixel criterion value of the overlapping region at point (x, y), $E_{color}(x,y)$ denotes the color difference value of the overlapping region at point (x, y), and $E_{geometry}(x,y)$ represents the geometric structure difference of the overlapping region at point (x, y).

The gradient-based geometric structure difference is calculated as:

$$E_{geometry}(x,y) = (S_x \times (I_1(x,y)-I_2(x,y)))^2 + (S_y \times (I_1(x,y)-I_2(x,y)))^2 \quad (4)$$

In formula (4), $I_1(x,y)$ and $I_2(x,y)$ respectively represent the pixel values of the two images at point (x, y) in the overlapping region, and $S_x$ and $S_y$ respectively represent the Sobel operators in the horizontal and vertical directions.
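Formulas (3) and (4) can be evaluated over the whole overlap region as follows; a naive sketch (real implementations vectorize the Sobel convolution), assuming single-channel grayscale images so that the color difference is the plain intensity difference:

```python
import numpy as np

def criterion_map(i1, i2):
    """Per-pixel criterion E = E_color^2 + E_geometry over the overlap:
    E_color is the image difference, and E_geometry applies the
    horizontal and vertical 3x3 Sobel operators to that difference
    (edge pixels use edge padding)."""
    diff = i1.astype(float) - i2.astype(float)
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    sy = sx.T                                  # vertical Sobel operator
    pad = np.pad(diff, 1, mode="edge")
    gx = np.zeros_like(diff)
    gy = np.zeros_like(diff)
    for i in range(diff.shape[0]):
        for j in range(diff.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * sx).sum()
            gy[i, j] = (win * sy).sum()
    return diff**2 + gx**2 + gy**2
```

Where the two images agree exactly, both the color and geometry terms vanish, so the seam search below is drawn toward regions where the overlapping views are most consistent.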
Following the idea of dynamic programming, starting from the initial row, each pixel of that row is used as the starting point of a candidate suture line, which is then expanded row by row to the last row. The specific steps are as follows:
Step 1: take each pixel in the first row as the starting point of a suture line; the initial criterion value of each suture line is the criterion value of its starting pixel;
Step 2: suture line expansion rule: compare the criterion values of the 5 candidate points (the left and right neighbors of the current point and the adjacent points in the next row), and select the point with the minimum value as the next point of the suture line; if that point has already been selected by another line, select the point with the next smallest criterion value;
Step 3: if the current point lies in the last row of the overlapping region, stop expanding and execute Step 4; otherwise, return to Step 2;
Step 4: compute the average criterion value of each suture line over its path length, and take the line with the minimum average criterion value as the optimal suture line.
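Steps 1-4 above can be condensed into a standard dynamic-programming seam search; this sketch simplifies the expansion rule to the three neighbours in the next row and does not model the "already selected point" tie-breaking of Step 2:

```python
import numpy as np

def best_seam(E):
    """Dynamic-programming seam search over a criterion map E (rows top
    to bottom): each seam extends to the cheapest of the up-to-three
    neighbours in the next row; return the column path (one column per
    row) with the lowest cumulative criterion."""
    h, w = E.shape
    cost = E.astype(float).copy()         # cumulative cost per cell
    back = np.zeros((h, w), dtype=int)    # backpointer: best column above
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            k = lo + int(np.argmin(cost[i - 1, lo:hi]))
            back[i, j] = k
            cost[i, j] += cost[i - 1, k]
    j = int(np.argmin(cost[-1]))          # cheapest seam end in last row
    path = [j]
    for i in range(h - 1, 0, -1):
        j = back[i, j]
        path.append(j)
    return [int(p) for p in path[::-1]]
```

Combined with the criterion map of formulas (3)-(4), the returned column path is the seam along which the two overlapping images differ least.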
And S104, judging whether the optimal suture line needs to be adjusted or not according to preset constraint conditions, wherein the constraint conditions comprise a duration constraint expression and a pixel difference constraint expression.
As a specific embodiment of the present invention, because the environment in the tunnel is complex and changeable and the video information of adjacent single-point cameras differs greatly, a single fixed suture line cannot meet the requirements of video stitching; at the same time, adjusting the suture line dynamically in real time places high demands on the computing capability of the computer system and causes the panoramic image to change frequently. Therefore, in order to effectively avoid both the poor applicability of a single optimal suture line and the reduced system efficiency and ghosting caused by adjusting the optimal suture line in real time, the embodiment of the present invention provides an optimal suture line adjustment algorithm based on a dual-threshold constraint, according to the environmental change characteristics of different time periods and different positions in the tunnel.
The method adjusts the optimal suture line of the overlapping images collected by adjacent cameras in the tunnel by setting different duration thresholds and pixel difference thresholds for different time periods according to the environmental characteristics, and specifically comprises the following steps:
step 1: dividing a time period
The cameras in the tunnel operate around the clock in all weather. According to the influence that the light-change characteristics at the tunnel position have on the images shot by the cameras, the whole day is divided into a plurality of time periods: a night period mode, a day period mode and an evening period mode, with a different duration threshold applied in each. For example, the duration applied in the night period mode is longer than that in the day period mode, and the duration in the day period mode is longer than that in the evening period mode.
The division can also be performed according to time points, such as 20:00-6:00, 6:01-10:00, 10:01-14:00, 14:01-17:00 and 17:01-19:59. The time periods can also be divided according to the sunrise and sunset times of different seasons, i.e., according to light changes.
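As an illustration only, the time-point split given above could be mapped to a mode label like this; the three-way grouping of the five ranges into night/day/evening modes is an assumption:

```python
def period_mode(hour, minute=0):
    """Map a clock time to a threshold mode, following the example
    split: 20:00-6:00 night; 6:01-17:00 treated as the day sub-periods;
    17:01-19:59 evening.  The grouping is illustrative."""
    t = hour * 60 + minute
    if t >= 20 * 60 or t <= 6 * 60:
        return "night"
    if t <= 17 * 60:
        return "day"
    return "evening"
```

The selected mode would then index into the duration and pixel difference threshold sequences ΔT_i and ΔP_i for the dual-threshold check.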
Step 2: selecting a threshold value
Because the videos collected by the cameras in the tunnel are changeable, the method avoids having the computer system adjust the optimal suture line in real time by setting a duration threshold, and avoids the poor stitching applicability of a single optimal suture line by setting a pixel difference threshold.
Step 3: determining constraints
Based on the dual-threshold constraint of the duration threshold and the pixel difference threshold, whether the optimal suture line needs to be adjusted is judged, which is specifically as follows:
$$
\begin{cases}
|E_f(x,y)-E_{f,z}(x,y)| > \Delta P_i \\
T_t - T_0 > \Delta T_i
\end{cases}
\quad (5)
$$

In formula (5), $E_f(x,y)$ is the pixel criterion value of the optimal suture line f at point (x, y); $E_{f,z}(x,y)$ is the pixel criterion value of the optimal suture line f at a neighboring point z around coordinate (x, y); $T_t$ is the current recording time point; $T_0$ is the time point at which the optimal suture line was determined; $\Delta P_i$ is the camera pixel difference threshold sequence, whose values change with the time period and with the arrangement position of the camera; $\Delta T_i$ is the duration threshold sequence, whose values depend on the computing capability of the computer system, the degree of change of the climate environment, the time period and the camera position; i = 1, 2, …, N, and N is a natural number.
And S105, if the optimal suture line is determined to need to be adjusted, searching the optimal suture line again in the overlapping area of each image of the video data by using an optimal suture line algorithm.
As an alternative embodiment of the present invention, when the two expressions in formula (5) are satisfied simultaneously, the position of the optimal suture line needs to be determined again; otherwise, the optimal suture line remains unchanged.
And S106, repeating the steps S104-S105 to realize automatic adjustment of the optimal suture line.
Further, the duration threshold value and the camera pixel difference threshold value which are currently determined can be continuously adjusted and optimized by using a machine learning model or a neural network model, and the duration threshold value sequence and the camera pixel difference threshold value sequence are determined.
It should be understood by those skilled in the art that the adjustment algorithm provided in example 1 is suitable for searching for an optimal suture line not only in a complex and variable environment, but also in a normal environment.
The optimal suture line automatic adjustment algorithm provided by the embodiment of the invention searches for an optimal suture line in the overlapping area of each image of the preprocessed video data based on the optimal suture line algorithm, and judges whether the optimal suture line needs to be adjusted according to preset constraint conditions. If adjustment is needed, the optimal suture line is searched again in the overlapping area of each image of the video data by using the optimal suture line algorithm. This improves applicability and reduces the performance requirements on the computer system.
Example 2
As shown in fig. 2, an embodiment of the present invention provides a method for generating a panoramic video, where the method includes the following steps:
S201, continuously acquiring the optimal suture line of the images to be spliced by using the optimal suture line automatic adjustment algorithm provided in embodiment 1;
S202, generating the images to be spliced according to the optimal suture line;
S203, fusing the images to be spliced by using a Poisson fusion algorithm;
and S204, generating a panoramic video in real time according to the fused images to be spliced.
The mask image of the image to be spliced is generated based on the optimal suture line algorithm, but the images still need to be fused. The Poisson fusion algorithm solves for unknown pixel values through a Poisson equation established from known pixel values and gradient information. Guided by the gradients of the source and target images, the target image acquires gradients similar to those of the source image while retaining its own pixel values at the boundary, so the result has no obvious splicing seam. The image fusion problem is thereby converted into minimizing an objective function: the pixels of the synthesis area are solved through the Poisson equation, and the images to be spliced are fused more smoothly.
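To make the gradient-domain idea behind Poisson fusion concrete, here is a minimal 1-D sketch (not the patent's implementation): interior samples are solved so their gradients match the source signal, while the boundary samples keep the target's values.

```python
import numpy as np

def poisson_blend_1d(src, dst, iters=4000):
    """Solve the 1-D discrete Poisson equation by Jacobi iteration:
    interior samples reproduce src's gradients, the two boundary
    samples keep dst's values (the core idea of Poisson fusion)."""
    out = dst.astype(float).copy()
    g = np.diff(src.astype(float))  # source gradients to reproduce
    for _ in range(iters):          # Jacobi update on interior samples only
        out[1:-1] = 0.5 * (out[:-2] + out[2:] + g[:-1] - g[1:])
    return out
```

In practice a 2-D solver over the mask region (or a library routine such as OpenCV's `seamlessClone`) would be used; the Jacobi loop here simply demonstrates the boundary-plus-gradient formulation.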
Further, based on the panoramic video, the track and running-state information of each vehicle can be acquired; meanwhile, information such as visibility, air humidity, and road surface conditions in the tunnel can be acquired to comprehensively guarantee the safety of vehicles in the tunnel.
Example 3
As shown in fig. 3, an embodiment of the present invention provides a traffic early warning method, which includes the following steps:
S301, generating a panoramic video in real time by using the panoramic video real-time generation method provided in the foregoing embodiment 2.
S302, acquiring, in real time according to the first identification model, vehicles with abnormal driving states in the panoramic video, and obtaining basic information of the vehicles by using image identification technology.
As an alternative embodiment of the present invention, the functional expression of the first recognition model is:
(Formula (6) is rendered as an image in the original document.)

In formula (6), c_j1, c_j2, c_j3, c_j4 respectively denote the abnormality information for the speed, acceleration, front-rear vehicle distance, and lane offset of vehicle j, with values 1, 2, 3, and 4 respectively denoting a speed abnormality, acceleration abnormality, front-rear distance abnormality, and lane offset abnormality of vehicle j; τ_α, τ_β, τ_δ, τ_γ are respectively the preset vehicle speed threshold, vehicle acceleration threshold, front-rear vehicle distance threshold, and lane deviation threshold; α_j, β_j, δ_j, γ_j respectively denote the speed, acceleration, front-rear vehicle distance, and lane offset of vehicle j; j = 1, 2, …, K, and K is a natural number.
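A hypothetical reading of the first recognition model (the threshold names follow the text; the exact functional form appears only as an image in the original, so the structure below is our assumption):

```python
def classify_vehicle(speed, accel, gap, lane_offset,
                     tau_a, tau_b, tau_d, tau_g):
    """Return the set of abnormality codes for a vehicle j:
    1 speed, 2 acceleration, 3 front-rear distance, 4 lane offset.
    An empty set means the driving state is normal."""
    codes = set()
    if speed > tau_a:
        codes.add(1)        # speed exceeds the road-segment threshold
    if accel > tau_b:
        codes.add(2)        # acceleration exceeds its threshold
    if gap < tau_d:
        codes.add(3)        # following distance below its threshold
    if lane_offset > tau_g:
        codes.add(4)        # lane deviation exceeds its threshold
    return codes
```

Note the asymmetry, which matches the text in S303 onward: speed, acceleration, and lane offset are abnormal when *above* their thresholds, while the front-rear distance is abnormal when *below* its threshold.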
And S303, sending out abnormal vehicle early warning information according to the basic information of the vehicle.
As a specific embodiment of the invention, the abnormal vehicle early warning is issued dynamically through text prompts on the electronic display screens in the tunnel and through voice warnings.
As a specific embodiment of the present invention, when the current speed of the vehicle is greater than the speed threshold value of the current road segment, it is determined that the current speed of the vehicle is abnormal, and the vehicle is determined to be an abnormal vehicle and an early warning message that the vehicle has a speed abnormality is sent according to the information of the license plate number, the body color, and the like of the vehicle.
As a specific embodiment of the present invention, when the current acceleration (or lane deviation) of the vehicle is greater than the acceleration threshold (or lane deviation threshold) set for the current road section, the current acceleration (or lane deviation) of the vehicle is determined to be abnormal; the vehicle is marked as an abnormal vehicle, and warning information that its acceleration (or lane deviation) is abnormal is issued according to the license plate number, body color, and other information of the vehicle.
Similarly, as a specific embodiment of the present invention, when the distance between the vehicle and the preceding vehicle is smaller than the threshold value of the distance between the vehicle and the preceding vehicle on the current road, it is determined that the distance between the vehicle and the preceding vehicle is abnormal, and the vehicle is determined to be an abnormal vehicle, and the warning information that the distance between the vehicle and the preceding vehicle is abnormal is sent according to the information of the license plate number, the body color, and the like of the vehicle.
And S304, acquiring the road with the abnormal traffic state in the panoramic video in real time according to the second identification model.
As an alternative embodiment of the present invention, the functional expression of the second recognition model is:
(Formula (7) is rendered as an image in the original document.)

In formula (7), St_t denotes the traffic state information of the road at time t, with values 1, 2, 3, and 4 respectively denoting that the traffic state of the road is normal, congested, accident, or overspeed; v_z is the vehicle speed limit of the road; v_85 is the 85th-percentile speed of vehicles passing the road in the set time period. The condition

∀ k: v_k ≠ 0

indicates that no vehicle in the road has a speed of 0, and the condition

∃ k: v_k = 0

indicates that some vehicle k in the road has a speed of 0.
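Similarly, a hypothetical reading of the second recognition model (the decision order among the four states is our assumption, since the functional form is only shown as an image in the original):

```python
def road_state(speeds, v_limit, v85, accident_flag=False):
    """Return the road traffic state code of formula (7):
    1 normal, 2 congested, 3 accident, 4 overspeed.
    speeds  : current vehicle speeds on the road section
    v_limit : the road's speed limit (v_z)
    v85     : 85th-percentile speed over the set period (v_85)"""
    if accident_flag:
        return 3              # accident reported on the section
    if any(v == 0 for v in speeds):
        return 2              # some vehicle k has speed 0 -> congestion
    if v85 > v_limit:
        return 4              # percentile speed above the limit -> overspeed
    return 1                  # all speeds nonzero and within the limit
```
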
And S305, sending road early warning information according to the serial number, the name and the road section of the road.
As a specific embodiment of the invention, the devices for issuing the early warning information comprise a front-end warning module and an intelligent acousto-optic early warning module. The front-end warning module is mainly arranged about 200 m ahead of the tunnel entrance and promptly publishes traffic conditions, accident conditions, and the like inside the tunnel through an electronic display screen and intelligent variable speed limit signs. The intelligent acousto-optic early warning module is mainly arranged inside the tunnel at accident-prone locations, or at equal intervals determined by the tunnel length, and warns abnormal vehicles in the tunnel through loudspeakers, electronic display screens, and the like, issuing voice warnings when necessary.
According to the traffic early warning method provided by the embodiment of the invention, a panoramic video is generated in real time by the panoramic video real-time generation method. Vehicles with abnormal driving states in the panoramic video are acquired in real time according to the first identification model, basic vehicle information is obtained using image identification technology, and abnormal vehicle early warning information is issued according to that basic information. At the same time, roads with abnormal traffic states in the panoramic video are acquired in real time according to the second identification model, and road early warning information is issued according to the serial number, name, and road section of the road, thereby reducing the traffic accident rate in complex and variable environments such as tunnels.
As a specific embodiment of the invention, the road early warning information can be delivered as in-tunnel information prompts and dynamic speed limit control through warning devices arranged before the tunnel entrance.
Example 4
As shown in fig. 4, an embodiment of the present invention provides a panoramic video generation system, including:
a calculation module configured to continuously obtain an optimal suture line using an optimal suture line automatic adjustment algorithm;
a first generation module configured to generate an image to be stitched according to the optimal suture line;
the fusion module is configured to fuse the images to be spliced by utilizing a Poisson fusion algorithm;
and the second generation module is configured to generate the panoramic video in real time according to the fused images to be spliced.
Example 5
As shown in fig. 5, an embodiment of the present invention provides a traffic early warning system, including:
a calculation module configured to generate a panoramic video in real time using a panoramic video real-time generation method;
the first acquisition module is configured to acquire a vehicle with an abnormal driving state in the panoramic video in real time according to a first identification model and acquire basic information of the vehicle by utilizing an image identification technology;
the first early warning module is configured to send out abnormal vehicle early warning information according to the basic information of the vehicle;
the second acquisition module is configured to acquire a road with an abnormal traffic state in the panoramic video in real time according to the second identification model;
and the second early warning module is configured to send out road early warning information according to the serial number, the name and the road section of the road.
It should be understood by those skilled in the art that the first and second acquisition modules may employ any optimal suture line stitching method to obtain the panoramic video, in addition to the method described in embodiment 1 of the present invention.
It will be appreciated that the relevant features of the method and system described above are mutually referenced. In addition, "first", "second", and the like in the above embodiments are for distinguishing the embodiments, and do not represent merits of the embodiments.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In addition, the memory may include volatile memory in a computer readable medium, Random Access Memory (RAM) and/or nonvolatile memory such as Read Only Memory (ROM) or flash memory (flash RAM), and the memory includes at least one memory chip.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
It should be noted that the above-mentioned embodiments do not limit the present invention in any way, and all technical solutions obtained by using equivalent alternatives or equivalent variations fall within the protection scope of the present invention.

Claims (8)

1. An optimal suture line automatic adjustment algorithm, comprising the steps of:
S11, based on the optimal suture line algorithm, searching for an optimal suture line in the overlapping area of each image of the preprocessed video data;
S12, judging whether the optimal suture line needs to be adjusted according to preset constraint conditions, wherein the constraint conditions comprise a duration constraint expression and a pixel difference constraint expression, |E_f(x, y) − E_{f,z}(x, y)| > ΔP_i being the pixel difference constraint expression and T_t − T_0 > ΔT_i being the duration constraint expression, where E_f(x, y) is the pixel criterion value of the optimal suture line f at point (x, y), E_{f,z}(x, y) is the pixel criterion value of the optimal suture line f at a neighboring point z around point (x, y), T_t is the recording time point, T_0 is the time point at which the optimal suture line was determined, ΔP_i is the camera pixel difference threshold sequence, ΔT_i is the duration threshold sequence, i = 1, 2, …, N, and N is a natural number;
S13, if the optimal suture line needs to be adjusted, searching for the optimal suture line again in the overlapping area of each image of the video data by using the optimal suture line algorithm;
S14, repeating the above steps S12-S13 to automatically adjust the optimal suture line.
2. The optimal suture line automatic adjustment algorithm according to claim 1, wherein judging whether the optimal suture line needs to be adjusted according to a preset constraint condition comprises:
and judging whether the duration constraint expression and the pixel difference constraint expression are simultaneously satisfied, and if so, determining that the optimal suture line needs to be adjusted currently.
3. A method for generating a panoramic video based on the optimal stitch line automatic adjustment algorithm of claim 1 or 2, comprising the steps of:
S21, continuously obtaining the optimal suture line by using the optimal suture line automatic adjustment algorithm;
S22, generating images to be spliced according to the optimal suture line;
S23, fusing the images to be spliced by using a Poisson fusion algorithm;
and S24, generating a panoramic video in real time according to the fused images to be spliced.
4. A method for traffic early warning based on the method for generating panoramic video of claim 3, which is characterized by comprising the following steps:
S31, generating a panoramic video in real time by using the method for generating the panoramic video;
S32, acquiring vehicles with abnormal driving states in the panoramic video in real time according to the first identification model, and acquiring basic information of the vehicles by using image identification technology;
S33, issuing abnormal vehicle early warning information according to the basic information of the vehicle;
S34, acquiring roads with abnormal traffic states in the panoramic video in real time according to the second recognition model;
and S35, sending road early warning information according to the serial number, the name and the road section of the road.
5. The method of traffic early warning according to claim 4, wherein the functional expression of the first recognition model is:
(Formula (6) of claim 5 is rendered as an image in the original document.)

wherein c_j1, c_j2, c_j3, c_j4 respectively denote the abnormality information for the speed, acceleration, front-rear vehicle distance, and lane offset of vehicle j, with values 1, 2, 3, and 4 respectively denoting a speed abnormality, acceleration abnormality, front-rear distance abnormality, and lane offset abnormality of vehicle j; τ_α, τ_β, τ_δ, τ_γ are respectively the preset vehicle speed threshold, vehicle acceleration threshold, front-rear vehicle distance threshold, and lane deviation threshold; α_j, β_j, δ_j, γ_j respectively denote the speed, acceleration, front-rear vehicle distance, and lane offset of vehicle j; j = 1, 2, …, K, and K is a natural number.
6. The method for traffic early warning according to claim 4, wherein the functional expression of the second recognition model is:
(Formula (7) of claim 6 is rendered as an image in the original document.)

wherein St_t denotes the traffic state information of the road at time t, with values 1, 2, 3, and 4 respectively denoting that the traffic state of the road is normal, congested, accident, or overspeed; v_z is the vehicle speed limit of the road; v_85 is the 85th-percentile speed of vehicles passing the road in the set time period; the condition ∀ k: v_k ≠ 0 indicates that no vehicle in the road has a speed of 0, and the condition ∃ k: v_k = 0 indicates that some vehicle k in the road has a speed of 0.
7. A panoramic video generation system, comprising:
a calculation module configured to continuously obtain an optimal suture line using the optimal suture line auto-adjustment algorithm of claim 1;
a first generation module configured to generate images to be stitched according to the optimal stitching line;
the fusion module is configured to fuse the images to be spliced by utilizing a Poisson fusion algorithm;
and the second generation module is configured to generate the panoramic video in real time according to the fused images to be spliced.
8. A traffic warning system, comprising:
a computing module configured to generate a panoramic video in real time using the method of generating a panoramic video of claim 3;
the first acquisition module is configured to acquire a vehicle with an abnormal driving state in the panoramic video in real time according to a first identification model and acquire basic information of the vehicle by using an image identification technology;
the first early warning module is configured to send out abnormal vehicle early warning information according to the basic information of the vehicle;
the second acquisition module is configured to acquire a road with an abnormal traffic state in the panoramic video in real time according to a second recognition model;
and the second early warning module is configured to send out road early warning information according to the serial number, the name and the road section of the road.
CN202110656203.6A 2021-06-11 2021-06-11 Optimal suture line automatic adjustment algorithm, traffic early warning method and system Active CN113344787B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110656203.6A CN113344787B (en) 2021-06-11 2021-06-11 Optimal suture line automatic adjustment algorithm, traffic early warning method and system

Publications (2)

Publication Number Publication Date
CN113344787A CN113344787A (en) 2021-09-03
CN113344787B true CN113344787B (en) 2022-02-01

Family

ID=77476957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110656203.6A Active CN113344787B (en) 2021-06-11 2021-06-11 Optimal suture line automatic adjustment algorithm, traffic early warning method and system

Country Status (1)

Country Link
CN (1) CN113344787B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103856727A (en) * 2014-03-24 2014-06-11 北京工业大学 Multichannel real-time video splicing processing system
CN107346536A (en) * 2017-07-04 2017-11-14 广东工业大学 A kind of method and apparatus of image co-registration
CN108922245A (en) * 2018-07-06 2018-11-30 北京中交华安科技有限公司 A kind of bad section method for early warning of highway sighting distance and system
CN109859105A (en) * 2019-01-21 2019-06-07 桂林电子科技大学 A kind of printenv image nature joining method
CN110390640A (en) * 2019-07-29 2019-10-29 齐鲁工业大学 Graph cut image split-joint method, system, equipment and medium based on template
CN111553841A (en) * 2020-04-21 2020-08-18 东南大学 Real-time video stitching algorithm based on optimal suture line updating
CN112333537A (en) * 2020-07-27 2021-02-05 深圳Tcl新技术有限公司 Video integration method and device and computer readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI533675B (en) * 2013-12-16 2016-05-11 國立交通大學 Optimal dynamic seam adjustment system and method for images stitching
CN108205797B (en) * 2016-12-16 2021-05-11 杭州海康威视数字技术股份有限公司 Panoramic video fusion method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Image stitching based on feature extraction techniques: a survey; Ebstam Adel et al.; International Journal of Computer Applications; August 2014; Vol. 99, No. 6; pp. 1-8 *
Research on vehicle abnormal behavior detection technology based on panoramic vision; Pang Chengjun; China Master's Theses Full-text Database, Information Science and Technology; 2011-12-15 (No. S1); pp. I138-1561 *
Image stitching algorithm based on the optimal seam line and the gray-mean-difference correction ratio; Luo Yongtao et al.; Laser Journal; 2018-12-15; Vol. 39, No. 12; pp. 42-46 *

Similar Documents

Publication Publication Date Title
US8050459B2 (en) System and method for detecting pedestrians
CN111967393A (en) Helmet wearing detection method based on improved YOLOv4
WO2017171659A1 (en) Signal light detection
CN110532903B (en) Traffic light image processing method and equipment
CN110689724B (en) Automatic motor vehicle zebra crossing present pedestrian auditing method based on deep learning
CN105205785A (en) Large vehicle operation management system capable of achieving positioning and operation method thereof
AU2019100914A4 (en) Method for identifying an intersection violation video based on camera cooperative relay
CN112991742A (en) Visual simulation method and system for real-time traffic data
CN105354529A (en) Vehicle converse running detection method and apparatus
CN114781479A (en) Traffic incident detection method and device
CN115187946A (en) Multi-scale intelligent sensing method for fusing underground obstacle point cloud and image data
KR101699014B1 (en) Method for detecting object using stereo camera and apparatus thereof
Nguyen et al. Real-time validation of vision-based over-height vehicle detection system
CN113344787B (en) Optimal suture line automatic adjustment algorithm, traffic early warning method and system
Soh et al. Analysis of road image sequences for vehicle counting
CN114897684A (en) Vehicle image splicing method and device, computer equipment and storage medium
CN114372919A (en) Method and system for splicing panoramic all-around images of double-trailer train
JP2016122966A (en) Data reproduction device, data reproduction method, and program
CN102163328B (en) Method for detecting and eliminating glare in traffic video image
US20210150218A1 (en) Method of acquiring detection zone in image and method of determining zone usage
Wang et al. Planning autonomous driving with compact road profiles
CN104376316A (en) License plate image acquisition method and device
CN114554158A (en) Panoramic video stitching method and system based on road traffic scene
CN113920731A (en) Unmanned aerial vehicle-based traffic operation risk real-time identification method
KR101865958B1 (en) Method and apparatus for recognizing speed limit signs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant