CN113784214B - Video integration system and method based on big data analysis - Google Patents

Video integration system and method based on big data analysis

Info

Publication number
CN113784214B
CN113784214B (application CN202111345087.2A)
Authority
CN
China
Prior art keywords
video
target
unit
videos
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111345087.2A
Other languages
Chinese (zh)
Other versions
CN113784214A (en)
Inventor
黄健松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Jinwei Intelligent Technology Co ltd
Original Assignee
Nanjing Jinwei Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Jinwei Intelligent Technology Co ltd
Priority to CN202111345087.2A
Publication of CN113784214A
Application granted
Publication of CN113784214B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/004Diagnosis, testing or measuring for television systems or their details for digital television systems

Abstract

The invention discloses a video integration system and method based on big data analysis. The video integration system comprises a target data acquisition module, a data management platform, a target video integration module, a key video generation module and a video display integration module. Video data of a tracking target are acquired through the target data acquisition module, and the acquired data are stored and called through the data management platform. The target video integration module matches videos of similar targets against the acquired videos, screens out videos that do not contain the tracking target, and integrates the remaining videos. The key video generation module screens out video content exhibiting discontinuity and integrates the interrupted videos. The video display integration module adjusts the display positions of the integrated videos. The system improves the continuity of the videos and the rate of data analysis while strengthening the logic of video display integration.

Description

Video integration system and method based on big data analysis
Technical Field
The invention relates to the technical field of video integration, in particular to a video integration system and method based on big data analysis.
Background
Video integration refers to the process of collecting and converging dispersed video resources. Integrated video helps relevant departments analyze data more conveniently and comprehensively and makes the resulting analysis more accurate; more accurate analysis results in turn promote the rapid development of these departments, realize the effective and reasonable utilization of video resources, and maximize their benefit.
however, in the prior art, video integration has the following problems: firstly, the integrated video contains too much information, and after integration, relevant personnel are required to screen the integrated video again to confirm the target to be tracked, so that too much time is wasted on confirming the target, and the data is not beneficial to being quickly analyzed; secondly, excessive attention is paid to multi-video integration, and the phenomena of discontinuity and information loss of a single video are ignored, so that the data analysis result is influenced; finally, the video is required to be displayed to relevant personnel after being integrated, the integrated video has no logic sequence and is randomly displayed, and the data analysis speed of the relevant personnel is delayed.
Therefore, a video integration system and method based on big data analysis are needed to solve the above problems.
Disclosure of Invention
The present invention is directed to a video integration system and method based on big data analysis, so as to solve the problems mentioned in the background art.
In order to solve the above technical problems, the invention provides the following technical scheme: a video integration system based on big data analysis, comprising a target data acquisition module, a data management platform, a target video integration module, a key video generation module and a video display integration module.
A tracking target is selected through the target data acquisition module and habitual action data of the target are acquired; the acquired target data are collected and stored through the data management platform. Other videos are accessed through the target video integration module, the acquired data are matched with the habitual action data of similar targets in those videos to identify the target, and the videos in which the target appears are retained. Through the key video generation module, a fixed-frame-length motion trajectory is acquired while the target in a randomly chosen accessed video performs a repetitive action; the change of the target's motion trajectory after the current video frame is detected, analyzed and matched against the current trajectory to judge whether a trajectory discontinuity occurs in the corresponding video; video segments exhibiting the discontinuity are screened out, and the videos disconnected by this screening are integrated. Through the video display integration module, the shooting position of each integrated video is located, the display windows of the integrated videos are modeled, the current display distribution of the integrated videos is analyzed, and the integrated videos are adjusted and controlled to be displayed in a centralized, data-chain manner.
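For orientation only, the following minimal Python sketch mirrors this five-module decomposition; every class name, method name, and data shape is an illustrative assumption rather than the patented implementation.

```python
# Structural sketch of the five modules; all names and data shapes are
# illustrative assumptions, not the patent's reference implementation.

class TargetDataAcquisition:
    """Select a tracking target and collect its habitual action data."""
    def collect(self, source_video):
        # Placeholder for detection/tracking over the source video.
        return {"target_id": 1, "habit_features": [0.2, 0.7, 0.1]}

class DataManagementPlatform:
    """Collect, store and serve the acquired target data."""
    def __init__(self):
        self._store = {}
    def save(self, key, data):
        self._store[key] = data
    def load(self, key):
        return self._store[key]

class TargetVideoIntegration:
    """Keep only accessed videos whose similar-target data match the target."""
    def integrate(self, candidate_videos, p_threshold=0.8):
        return [v for v in candidate_videos if v["match"] >= p_threshold]

class KeyVideoGeneration:
    """Screen out segments whose motion trajectory is interrupted."""
    def integrate_continuous(self, videos):
        return [v for v in videos if not v.get("interrupted", False)]

class VideoDisplayIntegration:
    """Order integrated videos for centralized, data-chain display."""
    def arrange(self, videos):
        return sorted(videos, key=lambda v: v.get("priority", 0.0), reverse=True)
```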
Further, the target data acquisition module comprises a target tracking unit and a target information acquisition unit, wherein the target tracking unit is used for confirming a target and tracking the target; the target information acquisition unit is used for acquiring habitual action data of the target in the acquired video.
Further, the target video integration module comprises a video access unit and a tracking target matching unit, wherein the video access unit is used for accessing videos with similar targets; the tracking target matching unit is used for matching the acquired data with the habitual action data of similar targets in the accessed videos, judging whether the tracking target exists in the accessed videos, screening out videos without the tracking target, and integrating the remaining videos.
Further, the key video generation module comprises a motion trajectory generation unit, a trajectory change detection unit and an interrupted video screening unit, wherein the motion trajectory generation unit is used for acquiring a fixed-frame-length motion trajectory of the target in a randomly chosen accessed video; the trajectory change detection unit is used for detecting and analyzing whether the subsequent motion trajectory of the target matches the current trajectory; the interrupted video screening unit is used for screening video content according to the matching result: if the trajectories match, it is judged that no trajectory interruption occurs in the video and no screening is performed; if they do not match, it is judged that a trajectory interruption occurs in the video, the video content with the unmatched trajectory is screened out, and the remaining videos are integrated.
Furthermore, the video display integration module comprises a window modeling unit, a distribution information acquisition unit, a shooting positioning unit, an integrated video display unit and a display position adjustment unit, wherein the window modeling unit is used for establishing a two-dimensional coordinate system by taking the center of the video display equipment as an origin; the distribution information acquisition unit is used for acquiring position distribution data of all display windows in the video display equipment; the shooting positioning unit is used for positioning and integrating video shooting places; the integrated video display unit is used for displaying the distribution position of the current integrated video in the display window; the display position adjusting unit is used for adjusting and controlling the integrated video to be displayed in a data chain type centralized mode.
A video integration method based on big data analysis is characterized in that: the method comprises the following steps:
S11: confirming and tracking the target, and acquiring habitual action data of the target in the acquired video;
S12: accessing videos with similar targets, matching habitual action data, and screening the accessed videos;
S13: matching the motion trajectory of the target in the accessed video, and judging whether the motion trajectory in the video is discontinuous;
S14: adjusting the distribution position of the integrated videos in the display window.
Further, in steps S11-S12: a target is confirmed and tracked by the target tracking unit, and each frame image of the target in the acquired video containing the tracked target is collected and extracted by the target information acquisition unit. Each frame image is fed forward through an image classification model, with parameters shared across the image classification models, to obtain the features of each frame image, and the per-frame image features are averaged to obtain the video feature. Videos containing similar targets are accessed by the video access unit, each frame image of the videos containing similar targets is extracted by the tracking target matching unit, and the similar-target video feature set is obtained in the same way. The similar-target video features are matched with the video feature of the video containing the tracking target to obtain a matching accuracy set P = {P1, P2, ..., Pn}, where n denotes the number of videos containing similar targets. A matching accuracy threshold P_threshold is set, and a random matching accuracy Pi is compared with P_threshold: if Pi ≥ P_threshold, it is judged that the tracking target exists in the video corresponding to Pi; if Pi < P_threshold, it is judged that no tracking target exists in the video corresponding to Pi. The videos without the tracking target are screened out and the remaining videos are integrated. Extracting each frame of image data in the videos and obtaining the matching accuracy through image feature matching makes it quick and simple to find and integrate the remaining videos containing the tracking target.
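As a concrete illustration of this matching step, here is a small Python sketch that averages per-frame features into a video-level feature and thresholds the matching accuracy. The patent does not specify the matching measure, so the cosine-based accuracy, the feature dimensions, the threshold value and all function names are assumptions.

```python
import numpy as np

def video_feature(frame_features: np.ndarray) -> np.ndarray:
    """Average per-frame feature vectors into one video-level feature
    (the 'averaging' step described above)."""
    return frame_features.mean(axis=0)

def matching_accuracy(f_a: np.ndarray, f_b: np.ndarray) -> float:
    """Assumed measure: cosine similarity mapped onto [0, 1]."""
    cos = float(f_a @ f_b / (np.linalg.norm(f_a) * np.linalg.norm(f_b)))
    return (cos + 1.0) / 2.0

def screen_videos(target_frames, candidate_frame_sets, p_threshold=0.9):
    """Keep indices of candidate videos with Pi >= P_threshold
    (judged to contain the tracking target)."""
    target_feat = video_feature(target_frames)
    return [i for i, frames in enumerate(candidate_frame_sets)
            if matching_accuracy(target_feat, video_feature(frames)) >= p_threshold]

# Toy usage: a near-duplicate candidate should pass, an unrelated one should not.
rng = np.random.default_rng(0)
target = rng.normal(size=(5, 8))                       # 5 frames, 8-dim features
near_dup = target + rng.normal(scale=0.05, size=(5, 8))
unrelated = rng.normal(size=(5, 8))
print(screen_videos(target, [near_dup, unrelated]))    # expected: [0]
```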
Further, in step S13: the motion trajectory generation unit is used to acquire the motion trajectory of the current partial frame length while the target in a randomly chosen accessed video performs a repetitive action; the trajectory curve function is f(x), the peak set obtained for the corresponding curve is y = {y1, y2, ..., yk}, and the valley set is y′ = {y′1, y′2, ..., y′m}. The trajectory change detection unit detects that the function of the target's subsequent motion trajectory curve is F(x), and the peak set of the subsequent curve is acquired as Y = {Y1, Y2, ..., Yk} and the valley set as Y′ = {Y′1, Y′2, ..., Y′m}, where k denotes the number of peak points and m the number of valley points. The matching coefficient W of the two curves is calculated from the absolute errors between corresponding peak and valley values (the formula itself appears only as an image in the original publication), where yi and Yi denote a corresponding pair of peak values of the current and subsequent frame-length curves, and y′i and Y′i a corresponding pair of valley values. A matching coefficient threshold W_threshold is set and W is compared with W_threshold: if W < W_threshold, the curve matching degree is low and it is judged that a trajectory interruption occurs in the video; the video content with the low trajectory matching degree is screened out by the interrupted video screening unit and the remaining videos are integrated. If W ≥ W_threshold, the curve matching degree is high and it is judged that no trajectory interruption occurs in the video. Analyzing the absolute errors of the two curves reduces the influence of peak and valley points on the curve matching result and improves the accuracy of the matching result; the purpose of calculating the matching coefficient is to analyze whether a single video is discontinuous or interrupted, so that interrupted videos can be screened out in time, the continuity of the video is improved, and relevant personnel are helped to connect the logic of data analysis in series.
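The matching-coefficient formula itself survives only as an image in this text, so the Python sketch below substitutes an assumed amplitude-normalized mean absolute error over corresponding peaks and valleys. It reproduces the decision rule (W below W_threshold means a trajectory interruption) but should not be read as the patented expression.

```python
def matching_coefficient(peaks_cur, peaks_next, valleys_cur, valleys_next):
    """Assumed stand-in for W: 1 minus the mean absolute error between
    corresponding peak/valley pairs, normalized by the amplitude of the
    current curve and clamped to [0, 1]."""
    pairs = list(zip(peaks_cur, peaks_next)) + list(zip(valleys_cur, valleys_next))
    mae = sum(abs(a - b) for a, b in pairs) / len(pairs)
    amplitude = max(abs(v) for v in list(peaks_cur) + list(valleys_cur))
    return max(0.0, 1.0 - mae / amplitude)

def trajectory_interrupted(w, w_threshold=0.5):
    """W < W_threshold => low curve match => trajectory interruption."""
    return w < w_threshold

# Toy check: the subsequent curve diverges sharply from the current one.
w = matching_coefficient([10, 5, 8], [2, -1, 3], [-2, -6, -8], [4, 1, 0])
print(round(w, 3), trajectory_interrupted(w))  # low W => interruption judged
```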
Further, in step S14: the shooting positioning unit locates the shooting place of each integrated video, acquires the host IP address of the video display equipment and confirms the host position. The set of data transmission paths from all integrated-video shooting places to the video display equipment is obtained as D = {D1, D2, ..., DI}, and the set of differences between each integrated video's shooting time and the shooting time of the acquired video containing the tracking target is obtained as T = {T1, T2, ..., TI}, where I denotes the number of integrated videos. The window modeling unit establishes a two-dimensional coordinate system with the center of the video display equipment as the origin, the distribution information acquisition unit acquires the number of display windows in the video display equipment as M, and I is compared with M: if I ≤ M, the video display equipment can simultaneously display all integrated videos containing the tracking target; if I > M, it cannot. The integrated video display unit displays the position of the acquired video containing the tracking target, and the display priority coefficient Qi of each integrated video is calculated from Di and Ti normalized between their extreme values (the formula itself appears only as an image in the original publication), where Di denotes the data transmission path from the shooting place of a given integrated video to the video display equipment, Dmin and Dmax denote the shortest and longest data transmission paths, Ti denotes the difference between that integrated video's shooting time and the shooting time of the acquired video containing the tracking target, and Tmin and Tmax denote the smallest and largest time differences. The set of display priority coefficients Q = {Q1, Q2, ..., QI} is obtained, and the display position adjustment unit adjusts the distribution of the integrated videos in the display windows: the video corresponding to the maximum priority coefficient Qmax is displayed in the window adjacent to the acquired video containing the tracking target, and the remaining videos are displayed, in descending order of priority coefficient, each adjacent to the previously placed video. When I > M, the display window of the video corresponding to the minimum priority coefficient Qmin is controlled to display in split-screen mode. Since a longer data transmission path means a slower transmission speed, a video shooting-time factor is added on top of the data transmission path length when calculating the display priority coefficient; adjusting the display positions of the integrated videos according to the magnitude of this coefficient improves the data analysis rate while strengthening the logic of the video display.
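The priority formula is likewise published only as an image. The sketch below uses an assumed equal-weight normalization of Di and Ti between their extremes: it matches the endpoint behavior of the second embodiment below (Q = 1 for the longest path with the smallest time difference, Q = 0 for the opposite extreme) but yields 0.4375 rather than the published 0.3 for the middle video, so the actual formula must weight the two terms differently.

```python
def display_priority(d, t):
    """Assumed display-priority coefficients Qi: a longer transmission path and
    a smaller shooting-time difference both raise priority. Requires distinct
    min/max in each list; equal weighting of the two terms is an assumption."""
    d_min, d_max = min(d), max(d)
    t_min, t_max = min(t), max(t)
    return [0.5 * ((di - d_min) / (d_max - d_min)
                   + (t_max - ti) / (t_max - t_min))
            for di, ti in zip(d, t)]

# Inputs from embodiment two: D = {100, 50, 20}, T = {2, 5, 8}.
print(display_priority([100, 50, 20], [2, 5, 8]))  # [1.0, 0.4375, 0.0]
```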
Compared with the prior art, the invention has the following beneficial effects:
the method acquires one video with the tracking target and the video content with the target similar to the tracking target by collecting the selected tracking target data, performs image feature matching by extracting frame image data of the video, judges that the tracking target exists in the video with high matching accuracy, and is favorable for quickly searching and integrating the other videos with the tracking target in an image feature matching mode; by analyzing the curve matching degree of the front frame length and the rear frame length of a target in an individual video, the problem that the influence of a peak point and a valley point on a matching result is reduced during curve matching while the problem that the influence of the discontinuity and information loss of the individual video possibly affects the data analysis result in the prior art is solved, the accuracy of the matching result is improved, whether the discontinuity and interruption of the individual video occur or not is analyzed, the interruption video is screened out in time, the continuity of the video is improved, and related personnel are helped to serially connect data analysis logics; the video display position is adjusted according to the shooting transmission distance and the video shooting time difference, so that the data analysis rate is improved, and the logic of video display is enhanced.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a block diagram of a big data analysis based video integration system and method of the present invention;
FIG. 2 is a flow chart of a video integration system and method based on big data analysis according to the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Referring to figs. 1-2, the present invention provides the following technical solution: a video integration system based on big data analysis, comprising a target data acquisition module S1, a data management platform S2, a target video integration module S3, a key video generation module S4 and a video display integration module S5.
A tracking target is selected through the target data acquisition module S1 and habitual action data of the target are acquired; the acquired target data are collected and stored through the data management platform S2. Other videos are accessed through the target video integration module S3, the acquired data are matched with the habitual action data of similar targets in those videos to identify the target, and the videos in which the target appears are retained. Through the key video generation module S4, a fixed-frame-length motion trajectory is acquired while the target in a randomly chosen accessed video performs a repetitive action; the change of the target's motion trajectory after the current video frame is detected, analyzed and matched against the current trajectory to judge whether a trajectory discontinuity occurs in the corresponding video; video segments exhibiting the discontinuity are screened out, and the videos disconnected by this screening are integrated. Through the video display integration module S5, the shooting position of each integrated video is located, the display windows of the integrated videos are modeled, the current display distribution of the integrated videos is analyzed, and the integrated videos are adjusted and controlled to be displayed in a centralized, data-chain manner.
The target data acquisition module S1 comprises a target tracking unit and a target information acquisition unit, wherein the target tracking unit is used for confirming a target and tracking the target; the target information acquisition unit is used for acquiring habitual action data of the target in the acquired video.
The target video integration module S3 comprises a video access unit and a tracking target matching unit, wherein the video access unit is used for accessing videos with similar targets; the tracking target matching unit is used for matching the acquired data with the habitual action data of similar targets in the accessed videos, judging whether the tracking target exists in the accessed videos, screening out videos without the tracking target, and integrating the remaining videos.
The key video generation module S4 comprises a motion trajectory generation unit, a trajectory change detection unit and an interrupted video screening unit, wherein the motion trajectory generation unit is used for acquiring a fixed-frame-length motion trajectory of the target in a randomly chosen accessed video; the trajectory change detection unit is used for detecting and analyzing whether the subsequent motion trajectory of the target matches the current trajectory; the interrupted video screening unit is used for screening video content according to the matching result: if the trajectories match, it is judged that no trajectory interruption occurs in the video and no screening is performed; if they do not match, it is judged that a trajectory interruption occurs in the video, the video content with the unmatched trajectory is screened out, and the remaining videos are integrated.
The video display integration module S5 comprises a window modeling unit, a distribution information acquisition unit, a shooting positioning unit, an integrated video display unit and a display position adjustment unit, wherein the window modeling unit is used for establishing a two-dimensional coordinate system by taking the center of the video display equipment as an origin; the distribution information acquisition unit is used for acquiring position distribution data of all display windows in the video display equipment; the shooting positioning unit is used for positioning and integrating video shooting places; the integrated video display unit is used for displaying the distribution position of the current integrated video in the display window; the display position adjusting unit is used for adjusting and controlling the integrated video to be displayed in a data chain type centralized mode.
A video integration method based on big data analysis is characterized in that: the method comprises the following steps:
S11: confirming and tracking the target, and acquiring habitual action data of the target in the acquired video;
S12: accessing videos with similar targets, matching habitual action data, and screening the accessed videos;
S13: matching the motion trajectory of the target in the accessed video, and judging whether the motion trajectory in the video is discontinuous;
S14: adjusting the distribution position of the integrated videos in the display window.
In steps S11-S12: a target is confirmed and tracked by the target tracking unit, and each frame image of the target in the acquired video containing the tracked target is collected and extracted by the target information acquisition unit. Each frame image is fed forward through an image classification model, with parameters shared across the image classification models, to obtain the features of each frame image, and the per-frame image features are averaged to obtain the video feature. Videos containing similar targets are accessed by the video access unit, each frame image of the videos containing similar targets is extracted by the tracking target matching unit, and the similar-target video feature set is obtained in the same way. The similar-target video features are matched with the video feature of the video containing the tracking target to obtain a matching accuracy set P = {P1, P2, ..., Pn}, where n denotes the number of videos containing similar targets. A matching accuracy threshold P_threshold is set, and a random matching accuracy Pi is compared with P_threshold: if Pi ≥ P_threshold, it is judged that the tracking target exists in the video corresponding to Pi; if Pi < P_threshold, it is judged that no tracking target exists in the video corresponding to Pi. The videos without the tracking target are screened out and the remaining videos are integrated. Extracting each frame of image data in the videos and obtaining the matching accuracy through image feature matching makes it convenient to quickly find and integrate the remaining videos containing the tracking target.
In step S13: the motion trajectory generation unit is used to acquire the motion trajectory of the current partial frame length while the target in a randomly chosen accessed video performs a repetitive action; the trajectory curve function is f(x), the peak set obtained for the corresponding curve is y = {y1, y2, ..., yk}, and the valley set is y′ = {y′1, y′2, ..., y′m}. The trajectory change detection unit detects that the function of the target's subsequent motion trajectory curve is F(x), and the peak set of the subsequent curve is acquired as Y = {Y1, Y2, ..., Yk} and the valley set as Y′ = {Y′1, Y′2, ..., Y′m}, where k denotes the number of peak points and m the number of valley points. The matching coefficient W of the two curves is calculated from the absolute errors between corresponding peak and valley values (the formula itself appears only as an image in the original publication), where yi and Yi denote a corresponding pair of peak values of the current and subsequent frame-length curves, and y′i and Y′i a corresponding pair of valley values. A matching coefficient threshold W_threshold is set and W is compared with W_threshold: if W < W_threshold, the curve matching degree is low and it is judged that a trajectory interruption occurs in the video; the video content with the low trajectory matching degree is screened out by the interrupted video screening unit and the remaining videos are integrated. If W ≥ W_threshold, the curve matching degree is high and it is judged that no trajectory interruption occurs in the video. Analyzing the absolute errors of the two curves reduces the influence of peak and valley points on the curve matching result and effectively improves the accuracy of the matching result; calculating the matching coefficient serves to analyze whether a single video is discontinuous or interrupted, so that interrupted videos can be screened out in time, the continuity of the video is improved, and relevant personnel are helped to connect the logic of data analysis in series.
In step S14: the shooting positioning unit locates the shooting place of each integrated video, acquires the host IP address of the video display equipment and confirms the host position. The set of data transmission paths from all integrated-video shooting places to the video display equipment is obtained as D = {D1, D2, ..., DI}, and the set of differences between each integrated video's shooting time and the shooting time of the acquired video containing the tracking target is obtained as T = {T1, T2, ..., TI}, where I denotes the number of integrated videos. The window modeling unit establishes a two-dimensional coordinate system with the center of the video display equipment as the origin, the distribution information acquisition unit acquires the number of display windows in the video display equipment as M, and I is compared with M: if I ≤ M, the video display equipment can simultaneously display all integrated videos containing the tracking target; if I > M, it cannot. The integrated video display unit displays the position of the acquired video containing the tracking target, and the display priority coefficient Qi of each integrated video is calculated from Di and Ti normalized between their extreme values (the formula itself appears only as an image in the original publication), where Di denotes the data transmission path from the shooting place of a given integrated video to the video display equipment, Dmin and Dmax denote the shortest and longest data transmission paths, Ti denotes the difference between that integrated video's shooting time and the shooting time of the acquired video containing the tracking target, and Tmin and Tmax denote the smallest and largest time differences. The set of display priority coefficients Q = {Q1, Q2, ..., QI} is obtained, and the display position adjustment unit adjusts the distribution of the integrated videos in the display windows: the video corresponding to the maximum priority coefficient Qmax is displayed in the window adjacent to the acquired video containing the tracking target, and the remaining videos are displayed, in descending order of priority coefficient, each adjacent to the previously placed video. When I > M, the display window of the video corresponding to the minimum priority coefficient Qmin is controlled to display in split-screen mode. Since a longer data transmission path means a slower transmission speed, a video shooting-time factor is added on top of the data transmission path length when calculating the display priority coefficient; adjusting the display positions of the integrated videos according to the magnitude of this coefficient enhances the display logic of the integrated videos while improving the data analysis rate.
The first embodiment: the motion trajectory generation unit is used to acquire the curve function f(x) of the motion trajectory of the current partial frame length while the target in a randomly chosen integrated video performs a repetitive action; the peak set of the corresponding curve is y = {y1, y2, y3} = {10, 5, 8} and the valley set is y′ = {y′1, y′2, y′3} = {-2, -6, -8}. The trajectory change detection unit detects that the function of the target's subsequent motion trajectory curve is F(x), and acquires the peak set of the subsequent curve as Y = {Y1, Y2, Y3} = {7, 4, 8} and the valley set as Y′ = {Y′1, Y′2, Y′3} = {-3, -6, -9}, where k = 3 denotes the number of peak points and m = 3 the number of valley points. The matching coefficient W of the two curves is calculated according to the formula (the formula and the computed value of W appear only as images in the original publication). The matching coefficient threshold is set to W_threshold = 0.5, and comparing W with W_threshold gives W < W_threshold: the curve matching degree is low, it is judged that a trajectory interruption occurs in the video, the video content with the low trajectory matching degree is screened out by the interrupted video screening unit, and the remaining videos are integrated.
Example two: the shooting positioning unit locates the shooting place of each integrated video, acquires the host IP address of the video display equipment and confirms the host position. The set of data transmission distances from all integrated-video shooting places to the video display equipment is D = {D1, D2, D3} = {100, 50, 20}, and the set of differences between the integrated videos' shooting times and the shooting time of the acquired video containing the tracking target is T = {T1, T2, T3} = {2, 5, 8}. The window modeling unit establishes a two-dimensional coordinate system with the center of the video display equipment as the origin, and the distribution information acquisition unit acquires the number of display windows in the video display equipment as M = 3. Comparing I = 3 with M: I ≤ M, so the video display equipment can simultaneously display all integrated videos containing the tracking target. The integrated video display unit displays the position of the acquired video containing the tracking target, and the display priority coefficient formula (given only as an image in the original publication) yields the set Q = {Q1, Q2, Q3} = {1, 0.3, 0}. The display position adjustment unit adjusts the distribution of the integrated videos in the display windows: video 1, corresponding to the maximum priority coefficient Qmax = 1, is displayed in the window adjacent to the acquired video containing the tracking target, and the remaining videos are displayed, in descending order of priority coefficient, each adjacent to the previously placed video: video 2 is displayed adjacent to video 1, and video 3 adjacent to video 2.
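A minimal sketch of the data-chain placement described in this embodiment: the acquired video containing the tracking target occupies the first window, and the integrated videos are chained next to it in descending order of their priority coefficients. The window indexing and labels are illustrative assumptions.

```python
def arrange_windows(q):
    """Chain videos into adjacent windows by descending priority coefficient;
    window 0 holds the acquired video with the tracking target."""
    order = sorted(range(len(q)), key=lambda i: q[i], reverse=True)
    layout = {"acquired video": 0}
    for slot, video_idx in enumerate(order, start=1):
        layout[f"video {video_idx + 1}"] = slot  # adjacent to the previous slot
    return layout

# Priority coefficients from embodiment two: Q = {1, 0.3, 0}.
print(arrange_windows([1.0, 0.3, 0.0]))
# {'acquired video': 0, 'video 1': 1, 'video 2': 2, 'video 3': 3}
```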
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (7)

1. A video integration system based on big data analysis is characterized in that: the system comprises: a target data acquisition module (S1), a data management platform (S2), a target video integration module (S3), a key video generation module (S4) and a video display integration module (S5);
a tracking target is selected through the target data acquisition module (S1) and habitual action data of the target are acquired; the acquired target data are collected and stored through the data management platform (S2); other videos are accessed through the target video integration module (S3), the acquired data are matched with the habitual action data of similar targets in those videos to identify the target, and the videos in which the target appears are retained; through the key video generation module (S4), a fixed-frame-length motion trajectory is acquired while the target in a randomly chosen accessed video performs a repetitive action, the change of the target's motion trajectory after the current video frame is detected, analyzed and matched against the current trajectory to judge whether a trajectory discontinuity occurs in the corresponding video, video segments exhibiting the discontinuity are screened out, and the videos disconnected by this screening are integrated; through the video display integration module (S5), the shooting position of each integrated video is located, the display windows of the integrated videos are modeled, the current display distribution of the integrated videos is analyzed, and the integrated videos are adjusted and controlled to be displayed in a centralized, data-chain manner;
a motion trail generation unit is used for acquiring a function f (x) of a motion trail curve of the length of a current partial frame when a target in a random video performs repetitive actions after access, a peak value set of a corresponding curve is acquired as y = { y1, y 2.. once, yk }, and a valley value set is acquired as y={y1,y2,...,ymAnd detecting that a function of a target subsequent motion track curve is F (x) by using a track change detection unit, and acquiring that a peak value set of the subsequent motion track curve is Y = { Y1, Y2.., Yk }, and a valley value set is Y={Y1,Y2,...,YmWherein k represents the number of peak points, m represents the number of valley points, and the matching coefficient W of the two curves is calculated according to the following formula:
Figure DEST_PATH_IMAGE001
wherein Yi and Yi respectively represent a random corresponding peak value of the current frame length curve and the subsequent frame length curve, YiAnd YiRespectively representing a random corresponding valley value of the current frame length curve and the subsequent frame length curve, and setting the threshold value of the matching coefficient as WThreshold(s)Comparing W with WThreshold(s): if it is
Figure 304460DEST_PATH_IMAGE002
If the curve matching degree is low, judging that the track interruption phenomenon occurs in the video, screening out the video content with the low track matching degree by using an interrupted video screening unit, and integrating the rest videos; if it is
Figure DEST_PATH_IMAGE003
And the curve matching degree is high, and the phenomenon of track interruption of the video is judged to be not generated.
2. The big data analysis-based video integration system according to claim 1, wherein: the target data acquisition module (S1) comprises a target tracking unit and a target information acquisition unit, wherein the target tracking unit is used for confirming a target and tracking the target; the target information acquisition unit is used for acquiring habitual action data of the target in the acquired video.
3. The big data analysis-based video integration system according to claim 1, wherein: the target video integration module (S3) comprises a video access unit and a tracking target matching unit, wherein the video access unit is used for accessing videos with similar targets; the tracking target matching unit is used for matching the acquired data with the habitual action data of similar targets in the accessed videos, judging whether the tracking target exists in the accessed videos, screening out videos without the tracking target, and integrating the remaining videos.
4. The big data analysis-based video integration system according to claim 1, wherein: the key video generation module (S4) comprises a motion trajectory generation unit, a trajectory change detection unit and an interrupted video screening unit, wherein the motion trajectory generation unit is used for acquiring a fixed-frame-length motion trajectory of the target in a randomly chosen accessed video; the trajectory change detection unit is used for detecting and analyzing whether the subsequent motion trajectory of the target matches the current trajectory; the interrupted video screening unit is used for screening video content according to the matching result: if the trajectories match, it is judged that no trajectory interruption occurs in the video and no screening is performed; if they do not match, it is judged that a trajectory interruption occurs in the video, the video content with the unmatched trajectory is screened out, and the remaining videos are integrated.
5. The big data analysis-based video integration system according to claim 1, wherein: the video display integration module (S5) comprises a window modeling unit, a distribution information acquisition unit, a shooting positioning unit, an integrated video display unit and a display position adjustment unit, wherein the window modeling unit is used for establishing a two-dimensional coordinate system by taking the center of the video display equipment as an origin; the distribution information acquisition unit is used for acquiring position distribution data of all display windows in the video display equipment; the shooting positioning unit is used for positioning and integrating video shooting places; the integrated video display unit is used for displaying the distribution position of the current integrated video in the display window; the display position adjusting unit is used for adjusting and controlling the integrated video to be displayed in a data chain type centralized mode.
6. A video integration method based on big data analysis is characterized in that: the method comprises the following steps:
S11: confirming and tracking the target, and acquiring habitual action data of the target in the acquired video;
S12: accessing videos with similar targets, matching habitual action data, and screening the accessed videos;
S13: matching the motion trajectory of the target in the accessed video, and judging whether the motion trajectory in the video is discontinuous;
S14: adjusting the distribution position of the integrated videos in the display window;
in steps S11-S12: a target is confirmed and tracked by the target tracking unit, and each frame image of the target in the acquired video containing the tracked target is collected and extracted by the target information acquisition unit; each frame image is fed forward through an image classification model, with parameters shared across the image classification models, to obtain the features of each frame image, and the per-frame image features are averaged to obtain the video feature; videos containing similar targets are accessed by the video access unit, each frame image of the videos containing similar targets is extracted by the tracking target matching unit, and the similar-target video feature set is obtained in the same way; the similar-target video features are matched with the video feature of the video containing the tracking target to obtain a matching accuracy set P = {P1, P2, ..., Pn}, where n denotes the number of videos containing similar targets; a matching accuracy threshold P_threshold is set, and a random matching accuracy Pi is compared with P_threshold: if Pi ≥ P_threshold, it is judged that the tracking target exists in the video corresponding to Pi; if Pi < P_threshold, it is judged that no tracking target exists in the video corresponding to Pi, the videos without the tracking target are screened out, and the remaining videos are integrated;
in step S13: the motion trajectory generation unit is used to acquire the motion trajectory of the current partial frame length while the target in a randomly chosen accessed video performs a repetitive action; the trajectory curve function is f(x), the peak set obtained for the corresponding curve is y = {y1, y2, ..., yk}, and the valley set is y′ = {y′1, y′2, ..., y′m}; the trajectory change detection unit detects that the function of the target's subsequent motion trajectory curve is F(x), and the peak set of the subsequent curve is acquired as Y = {Y1, Y2, ..., Yk} and the valley set as Y′ = {Y′1, Y′2, ..., Y′m}, where k denotes the number of peak points and m the number of valley points; the matching coefficient W of the two curves is calculated from the absolute errors between corresponding peak and valley values (the formula appears only as an image in the original publication), where yi and Yi denote a corresponding pair of peak values of the current and subsequent frame-length curves, and y′i and Y′i a corresponding pair of valley values; a matching coefficient threshold W_threshold is set and W is compared with W_threshold: if W < W_threshold, the curve matching degree is low, it is judged that a trajectory interruption occurs in the video, the video content with the low trajectory matching degree is screened out by the interrupted video screening unit, and the remaining videos are integrated; if W ≥ W_threshold, the curve matching degree is high and it is judged that no trajectory interruption occurs in the video.
7. The video integration method based on big data analysis according to claim 6, wherein: in step S14: the shooting positioning unit locates the shooting place of each integrated video, acquires the host IP address of the video display equipment and confirms the host position; the set of data transmission paths from all integrated-video shooting places to the video display equipment is obtained as D = {D1, D2, ..., DI}, and the set of differences between each integrated video's shooting time and the shooting time of the acquired video containing the tracking target is obtained as T = {T1, T2, ..., TI}, where I denotes the number of integrated videos; the window modeling unit establishes a two-dimensional coordinate system with the center of the video display equipment as the origin, the distribution information acquisition unit acquires the number of display windows in the video display equipment as M, and I is compared with M: if I ≤ M, the video display equipment can simultaneously display all integrated videos containing the tracking target; if I > M, the video display equipment cannot simultaneously display all integrated videos containing the tracking target; the integrated video display unit displays the position of the acquired video containing the tracking target, and the display priority coefficient Qi of each integrated video is calculated from Di and Ti normalized between their extreme values (the formula appears only as an image in the original publication), where Di denotes the data transmission path from the shooting place of a given integrated video to the video display equipment, Dmin and Dmax denote the shortest and longest data transmission paths, Ti denotes the difference between that integrated video's shooting time and the shooting time of the acquired video containing the tracking target, and Tmin and Tmax denote the smallest and largest time differences; the set of display priority coefficients Q = {Q1, Q2, ..., QI} is obtained, and the display position adjustment unit adjusts the distribution of the integrated videos in the display windows: the video corresponding to the maximum priority coefficient Qmax is displayed in the window adjacent to the acquired video containing the tracking target, and the remaining videos are displayed, in descending order of priority coefficient, each adjacent to the previously placed video; when I > M, the display window of the video corresponding to the minimum priority coefficient Qmin is controlled to display in split-screen mode.
CN202111345087.2A 2021-11-15 2021-11-15 Video integration system and method based on big data analysis Active CN113784214B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111345087.2A CN113784214B (en) 2021-11-15 2021-11-15 Video integration system and method based on big data analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111345087.2A CN113784214B (en) 2021-11-15 2021-11-15 Video integration system and method based on big data analysis

Publications (2)

Publication Number Publication Date
CN113784214A (en) 2021-12-10
CN113784214B (en) 2022-02-11

Family

ID=78873844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111345087.2A Active CN113784214B (en) 2021-11-15 2021-11-15 Video integration system and method based on big data analysis

Country Status (1)

Country Link
CN (1) CN113784214B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7396976B2 (en) * 2006-04-21 2008-07-08 I Did It, Inc. Easy-to-peel securely attaching bandage
CN103559286B (en) * 2013-11-08 2017-04-26 北京奇虎科技有限公司 Processing method and device for video searching results
CN109729392A (en) * 2019-01-15 2019-05-07 深圳市云歌人工智能技术有限公司 The method, apparatus and storage medium of video push
CN109857908B (en) * 2019-03-04 2021-04-09 北京字节跳动网络技术有限公司 Method and apparatus for matching videos

Also Published As

Publication number Publication date
CN113784214A (en) 2021-12-10

Similar Documents

Publication Publication Date Title
Zhang et al. An efficient algorithm for pothole detection using stereo vision
CN111144364B (en) Twin network target tracking method based on channel attention updating mechanism
TWI393074B (en) Apparatus and method for moving object detection
JP2023500969A (en) Target Tracking Method, Apparatus, Electronics, Computer Readable Storage Medium and Computer Program Product
US8634592B2 (en) System and method for predicting object location
JP2018063236A (en) Method and apparatus for annotating point cloud data
US20160048978A1 (en) Method and apparatus for automatic keyframe extraction
US8879894B2 (en) Pixel analysis and frame alignment for background frames
WO2023065395A1 (en) Work vehicle detection and tracking method and system
TW200818916A (en) Wide-area site-based video surveillance system
WO2022135027A1 (en) Multi-object tracking method and apparatus, computer device, and storage medium
US11900676B2 (en) Method and apparatus for detecting target in video, computing device, and storage medium
US10037605B2 (en) Video object tagging using synthetic images and segmentation hierarchies
CN108012202A (en) Video concentration method, equipment, computer-readable recording medium and computer installation
CN105469427B (en) One kind is for method for tracking target in video
Illes et al. Robust estimation for area of origin in bloodstain pattern analysis via directional analysis
CN105261040B (en) A kind of multi-object tracking method and device
WO2015184768A1 (en) Method and device for generating video abstract
CN113784214B (en) Video integration system and method based on big data analysis
Yang et al. Scene adaptive online surveillance video synopsis via dynamic tube rearrangement using octree
CN110849380B (en) Map alignment method and system based on collaborative VSLAM
CN104978731A (en) Information processing method and electronic equipment
CN113850837B (en) Video processing method and device, electronic equipment, storage medium and computer product
CN110276233A (en) A kind of polyphaser collaboration tracking system based on deep learning
CN108073668A (en) Video index establishing method and device applying same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A video integration system and method based on big data analysis

Effective date of registration: 20220530

Granted publication date: 20220211

Pledgee: China Construction Bank Corporation Nanjing Jiangbei new area branch

Pledgor: Nanjing Jinwei Intelligent Technology Co.,Ltd.

Registration number: Y2022980006761