CN110322471A - Panoramic video condensation method, apparatus, device, and storage medium - Google Patents
Panoramic video condensation method, apparatus, device, and storage medium
- Publication number
- CN110322471A CN110322471A CN201910648517.4A CN201910648517A CN110322471A CN 110322471 A CN110322471 A CN 110322471A CN 201910648517 A CN201910648517 A CN 201910648517A CN 110322471 A CN110322471 A CN 110322471A
- Authority
- CN
- China
- Prior art keywords
- moving target
- selection
- motion profile
- video
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
An embodiment of the present application provides a panoramic video condensation method, apparatus, device, and storage medium. The method comprises: obtaining the preselected motion trajectory of each preselected moving target in a first video, where each preselected trajectory passes through the position corresponding to a dividing line, and the first video is obtained by splitting a panoramic video along the dividing line; according to the preselected trajectories, obtaining the motion feature of each preselected target as it passes that position, and, according to the motion features, merging the preselected trajectories that correspond to the same moving target to obtain merged trajectories; and obtaining the condensed panoramic video according to the merged trajectories. The technical solution provided by the embodiments avoids treating the single trajectory of one moving target as multiple trajectories of multiple targets during panoramic video condensation, thereby improving the accuracy of the condensation.
Description
Technical field
This application relates to the technical field of video surveillance, and in particular to a panoramic video condensation method, apparatus, device, and storage medium.
Background technique
With the rapid development of computer networks and digital video technology, video surveillance based on digital networks is widely used in the security of public places and critical facilities, such as banking, electric power, transportation, security screening, and military applications. As the scope of security monitoring grows, the number of monitoring devices is increasing at a remarkable speed, producing massive volumes of surveillance video characterized by large storage size, long retention periods, and heavy storage-space consumption. The traditional approach of manually reviewing recordings to find clues consumes large amounts of manpower, material resources, and time, and its efficiency is extremely low. Therefore, in a video surveillance system, video condensation technology can greatly reduce the storage space required for massive video, improve the utilization of surveillance recordings for analysis, and fully exploit the value hidden in them.
Compared with ordinary video, the panoramic video captured by a panoramic camera has a wider viewing angle and can monitor a large scene globally. However, panoramic video suffers from trajectory discontinuity between the different lenses of the panoramic camera. If an existing video condensation technique is applied directly, the single trajectory of one moving target may be treated as multiple trajectories of multiple moving targets, so that the condensed video does not match the original video, which affects the accuracy of panoramic video condensation.
Summary of the invention
Embodiments of the present application provide a panoramic video condensation method, apparatus, device, and storage medium, to improve the accuracy of panoramic video condensation.
In a first aspect, an embodiment of the present application provides a panoramic video condensation method, comprising: obtaining the preselected motion trajectory of each preselected moving target in a first video, where each preselected trajectory passes through the position corresponding to a dividing line and the first video is obtained by splitting a panoramic video along the dividing line; according to the preselected trajectories, obtaining the motion feature of each preselected target when it passes that position and, according to the motion features, merging the preselected trajectories that correspond to the same moving target to obtain merged trajectories; and obtaining the condensed panoramic video according to the merged trajectories.
With reference to the first aspect, in one possible implementation, the motion feature of a preselected moving target passing the position is any one of the following: gradually approaching the position along a first direction, gradually receding from the position along the first direction, gradually approaching the position along a second direction, or gradually receding from the position along the second direction. Approaching the position along the first direction matches receding from it along the second direction, and receding from the position along the first direction matches approaching it along the second direction, where the first direction and the second direction are opposite.
With reference to the first aspect, in one possible implementation, merging the preselected trajectories that correspond to the same moving target according to the motion features comprises: merging them according to the motion features, the time at which each preselected target passes the position, and the coordinates of each preselected target.
With reference to the first aspect, in one possible implementation, merging according to the motion features, passing times, and coordinates comprises: for a first preselected moving target among the preselected moving targets, determining a first moving-target group, consisting of the preselected targets whose motion features match that of the first preselected target, whose time of passing the position is the same, and whose coordinates match a first coordinate of the first preselected target; using a person re-identification algorithm to determine, within the first moving-target group, the preselected target that is the same moving target as the first preselected target; and merging the preselected trajectories that correspond to the same moving target.
With reference to the first aspect, in one possible implementation, a coordinate matches the first coordinate when its abscissa lies within a preset range and the absolute difference between its ordinate and the ordinate of the first coordinate is less than or equal to a preset threshold.
In a second aspect, an embodiment of the present application provides a video condensation apparatus, comprising: an obtaining module, configured to obtain the preselected motion trajectory of each preselected moving target in a first video, where each preselected trajectory passes through the position corresponding to a dividing line and the first video is obtained by splitting a panoramic video along the dividing line; and a merging module, configured to obtain, according to the preselected trajectories, the motion feature of each preselected target when it passes that position and, according to the motion features, to merge the preselected trajectories that correspond to the same moving target to obtain merged trajectories. The obtaining module is further configured to obtain the condensed panoramic video according to the merged trajectories.
In conjunction with the second aspect, in one possible implementation, the motion feature of a preselected moving target passing the position is any one of the following: gradually approaching the position along a first direction, gradually receding from the position along the first direction, gradually approaching the position along a second direction, or gradually receding from the position along the second direction. Approaching along the first direction matches receding along the second direction, and receding along the first direction matches approaching along the second direction, where the two directions are opposite.
In conjunction with the second aspect, in one possible implementation, the merging module is specifically configured to merge the preselected trajectories that correspond to the same moving target according to the motion features, the time at which each preselected target passes the position, and the coordinates of each preselected target.
In conjunction with the second aspect, in one possible implementation, the merging module is specifically configured to: for a first preselected moving target among the preselected moving targets, determine a first moving-target group, consisting of the preselected targets whose motion features match that of the first preselected target, whose time of passing the position is the same, and whose coordinates match a first coordinate of the first preselected target; use a person re-identification algorithm to determine, within the first moving-target group, the preselected target that is the same moving target as the first preselected target; and merge the preselected trajectories that correspond to the same moving target.
In conjunction with the second aspect, in one possible implementation, a coordinate matches the first coordinate when its abscissa lies within a preset range and the absolute difference between its ordinate and the ordinate of the first coordinate is less than or equal to a preset threshold.
In a third aspect, an embodiment of the present application provides a video condensation device, comprising a processor and a memory. The memory stores computer-executable instructions, and the processor executes the instructions stored in the memory, so that the processor performs the panoramic video condensation method of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer storage medium storing computer-executable instructions which, when executed by a processor, implement the panoramic video condensation method of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product comprising computer-executable instructions which, when executed by a processor, implement the panoramic video condensation method of the first aspect.
In the present application, after the preselected trajectories of the preselected moving targets in the first video (obtained by splitting the panoramic video along the dividing line) are acquired, where each preselected trajectory passes the position of the dividing line, the motion feature of each preselected target as it passes that position is obtained from its preselected trajectory, the preselected trajectories corresponding to the same moving target are merged to obtain merged trajectories, and video condensation is performed according to the merged trajectories. The application therefore merges the trajectories of a moving target that was separated by the dividing line in the panoramic video, avoids treating the trajectory of one moving target as multiple trajectories of multiple targets, and improves the accuracy of panoramic video condensation.
Detailed description of the invention
In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed for describing the embodiments are briefly introduced below. Apparently, the drawings described below show only some embodiments of the application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the video obtained after a panoramic video is split along a dividing line, according to an embodiment of the application;
Fig. 2 is a flowchart of a panoramic video condensation method according to an embodiment of the application;
Fig. 3 is a schematic diagram of a panoramic video condensation apparatus according to an embodiment of the application;
Fig. 4 is a schematic diagram of a panoramic video condensation device according to an embodiment of the application.
Specific embodiment
To make the purposes, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of this application.
Specifically, in this application, "at least one" means one or more, and "multiple" means two or more. "And/or" describes an association between objects and indicates three possible relationships; for example, "A and/or B" can mean: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following" or similar expressions refer to any combination of the listed items, including a single item or any combination of multiple items. For example, "at least one of a, b, or c" can represent: a, b, c, a-b, a-c, b-c, or a-b-c, where each of a, b, and c may be single or multiple. The terms "first", "second", and so on in this application are used to distinguish similar objects and do not describe a particular order or precedence.
The terms "first", "second", "third", "fourth", and so on (if present) in the description, claims, and drawings of this application are used to distinguish similar objects rather than to describe a particular order or precedence. It should be understood that data so described are interchangeable where appropriate, so that the embodiments described herein can be implemented in orders other than those illustrated or described. In addition, the terms "comprise" and "have" and their variants are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that contains a series of steps or units is not necessarily limited to the steps or units expressly listed, and may include other steps or units that are not expressly listed or that are inherent to that process, method, product, or device.
The panoramic camera involved in the embodiments of the present application is a camera with multiple lenses installed around a fixed point; for example, six lenses installed around a fixed point, one every 60 degrees, whose combined fields of view form a full panoramic field of view.
The video output by the panoramic camera is formed by fusing the content captured by the individual lenses; the fused content can be projected onto a spherical surface, and the video so projected can be called a panoramic video. A first video can be obtained by splitting the panoramic video along a dividing line, i.e., the first video is the panoramic video unrolled onto a two-dimensional plane. In the first video, a moving target passing the position corresponding to the dividing line may be recognized as two different moving targets, so that two trajectories are obtained for that target, which affects the accuracy of video condensation.
The panoramic camera involved in this application may be an ordinary fixed-focus panoramic camera, a 3D fixed-focus panoramic camera, or a zoom camera, which is not limited here.
Fig. 1 is a schematic diagram of the video obtained after a panoramic video is split along a dividing line, according to an embodiment of the application. As shown in Fig. 1, after the panoramic video is split along the dividing line, one moving target is divided into two moving targets, and correspondingly that moving target has two trajectories.
To solve the insufficient accuracy that occurs when an existing video condensation technique is applied directly to a panoramic video, the present application provides a panoramic video condensation method: the preselected motion trajectory of each preselected moving target in a first video is obtained, where each preselected trajectory passes the position corresponding to a dividing line and the first video is obtained by splitting the panoramic video along the dividing line; according to the preselected trajectories, the motion feature of each preselected target as it passes that position is obtained, and, according to the motion features, the preselected trajectories that correspond to the same moving target are merged to obtain merged trajectories; and the condensed panoramic video is obtained according to the merged trajectories.
The technical solutions of the present application are described in detail below with specific embodiments. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a flowchart of a panoramic video condensation method according to an embodiment of the application. The method is executed by a video condensation apparatus, which may be part or all of a smart device such as a computer, tablet, laptop, or server. As shown in Fig. 2, the method comprises the following steps:
S201: obtain the preselected motion trajectory of each preselected moving target in a first video, where each preselected trajectory passes the position corresponding to a dividing line and the first video is obtained by splitting a panoramic video along the dividing line.
S202: according to the preselected trajectories, obtain the motion feature of each preselected target as it passes the position corresponding to the dividing line, and, according to the motion features, merge the preselected trajectories that correspond to the same moving target to obtain merged trajectories.
S203: obtain the condensed panoramic video according to the merged trajectories.
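Taken together, steps S201 to S203 form a short pipeline. The sketch below is a hypothetical skeleton of that flow; the function names, the data layout, and the 5-pixel seam tolerance are illustrative assumptions, not details from the application, and the two inner steps are stubs.

```python
def get_preselected_trajectories(first_video, seam_x, tol=5):
    # S201 stub: keep only trajectories that touch the seam column
    return [t for t in first_video["tracks"]
            if any(abs(x - seam_x) <= tol for x, _y in t)]

def merge_same_target(pre_tracks, seam_x):
    # S202 stub: a real implementation matches motion features,
    # seam-crossing times, and coordinates, then merges pairs
    return pre_tracks

def condense(first_video, merged_tracks):
    # S203 stub: synthesize the condensed video from the trajectories
    return {"tracks": merged_tracks}

def condense_panoramic_video(first_video, seam_x):
    pre_tracks = get_preselected_trajectories(first_video, seam_x)  # S201
    merged = merge_same_target(pre_tracks, seam_x)                  # S202
    return condense(first_video, merged)                            # S203
```

Each stub corresponds to one step of Fig. 2; the later sections of the description fill in what S201 and S202 actually do.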
Step S201 is described as follows:
Before obtaining the preselected trajectories of the preselected moving targets in the first video, the video condensation apparatus splits the panoramic video along the dividing line to obtain the first video, i.e., the video obtained by unrolling the panoramic video onto a two-dimensional plane. After obtaining the first video, the apparatus obtains its background, for example by a background modeling method; the background is the image of stationary objects in the first video that carries no motion information. An existing algorithm can be used to obtain the background, for example the Gaussian mixture background modeling algorithm, which classifies the pixels of each video frame into foreground and background and obtains the background model by accumulating statistics of the pixel values at each point of the video image.
Further, the video condensation apparatus performs foreground target detection on the first video to obtain the moving targets in it; the moving targets obtained in this embodiment can be called original moving targets. A tracking algorithm is then used to track the original moving targets in the first video and obtain the motion trajectory of each original moving target.
Specifically, foreground target detection can be implemented with an existing algorithm, for example the You Only Look Once (YOLO) algorithm. YOLO is an end-to-end real-time object detection algorithm based on deep learning; it integrates target region prediction and target category prediction into a single neural network model, achieving fast object detection and recognition with relatively high accuracy.
Further, the trajectory of each original moving target can be obtained with an existing algorithm, for example by combining a Kalman filter with the Hungarian algorithm. Specifically, after a moving target is obtained, its characteristic information, including the motion centroid and bounding rectangle, is computed and used to initialize a Kalman filter (for example, initialized to 0). The Kalman filter predicts the corresponding target region in the next frame; when the next frame arrives, the Hungarian algorithm performs target matching within the predicted region, yielding the trajectory of each original moving target.
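A minimal sketch of the predict-match-update loop just described. The fixed-gain (alpha-beta) update is a simplification of a full Kalman filter, and the greedy nearest-neighbour association stands in for the Hungarian algorithm (which `scipy.optimize.linear_sum_assignment` provides); the gains and the gating distance are assumed values.

```python
class Track:
    """Constant-velocity track for one target centroid; fixed-gain
    (alpha-beta) updates simplify the Kalman filter described above."""

    def __init__(self, tid, x, y):
        self.tid, self.x, self.y = tid, x, y
        self.vx = self.vy = 0.0
        self.points = [(x, y)]

    def predict(self):
        # constant-velocity prediction of the next centroid
        return self.x + self.vx, self.y + self.vy

    def update(self, x, y, alpha=0.85, beta=0.3):
        px, py = self.predict()
        rx, ry = x - px, y - py              # innovation
        self.x, self.y = px + alpha * rx, py + alpha * ry
        self.vx += beta * rx
        self.vy += beta * ry
        self.points.append((x, y))

def associate(tracks, detections, gate=50.0):
    """Greedy nearest-neighbour assignment of detections to predicted
    track positions; a stand-in for the Hungarian algorithm."""
    pairs, used = [], set()
    for t in tracks:
        px, py = t.predict()
        best, best_d = None, gate
        for j, (dx, dy) in enumerate(detections):
            d = ((dx - px) ** 2 + (dy - py) ** 2) ** 0.5
            if j not in used and d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            pairs.append((t, best))
    return pairs
```

Running `associate` then `update` once per frame accumulates `points`, which is the per-target trajectory the description goes on to use.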
Further, after the trajectories of the original moving targets in the first video are obtained, the preselected trajectory of each preselected moving target is selected from them, where each preselected trajectory passes the position corresponding to the dividing line. In other words, the preselected moving targets are the original moving targets whose trajectories pass the position corresponding to the dividing line.
Step S202 is described as follows:
After the preselected trajectories are obtained, the motion feature of each preselected moving target as it passes the position corresponding to the dividing line is obtained from its preselected trajectory. In one mode, the motion feature can be any one of the following: gradually approaching the position along a first direction, gradually receding from it along the first direction, gradually approaching it along a second direction, or gradually receding from it along the second direction. Approaching the position along the first direction matches receding from it along the second direction, and receding from it along the first direction matches approaching it along the second direction, where the first and second directions are opposite.
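The four motion features and their pairing rule can be encoded compactly. In the sketch below a feature is a (toward, direction) pair derived from the x-coordinates of a trajectory segment near the dividing line; this representation is an illustrative choice, not one taken from the application.

```python
def motion_feature(xs, seam_x):
    """Classify a trajectory segment's behaviour at the dividing line.
    Returns (toward, direction): 'toward' is True if the distance to the
    seam shrinks over the segment; 'direction' is the sign of the x-motion."""
    direction = 1 if xs[-1] > xs[0] else -1
    toward = abs(xs[-1] - seam_x) < abs(xs[0] - seam_x)
    return toward, direction

def features_match(f1, f2):
    """Approaching the seam along one direction matches receding from it
    along the opposite direction, per the pairing rule in the text."""
    (t1, d1), (t2, d2) = f1, f2
    return t1 != t2 and d1 != d2
```

With this encoding the matching test is simply "opposite toward/away behaviour and opposite directions", which mirrors the four-way case analysis in the text.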
After the motion feature of each preselected target at the dividing-line position is obtained, the preselected trajectories corresponding to the same moving target can be merged according to those motion features, yielding the merged trajectories.
In one mode, merging the preselected trajectories corresponding to the same moving target according to the motion features comprises: merging them according to the motion feature of each preselected target when it passes the dividing-line position, the time at which it passes that position, and its coordinates.
The preselected targets corresponding to the same moving target can first be determined from the motion features, the times of passing the dividing-line position, and the coordinates of the preselected targets, and their preselected trajectories then merged. This can be realized through steps (1) to (3):
(1) For a first preselected moving target among the preselected moving targets: determine the first moving-target group, consisting of the preselected targets whose motion features match that of the first preselected target, whose time of passing the dividing-line position is the same, and whose coordinates match the first coordinate of the first preselected target.
Specifically, the preselected moving targets are divided into groups by their motion feature at the dividing-line position, yielding four groups. For any first group of the four, the second group whose motion feature matches that of the first group is obtained. For any first preselected target in the first group, the second preselected targets are then determined from the second group: those whose time of passing the dividing-line position is the same as the first preselected target's and whose coordinates match its first coordinate. It can be understood that there is at least one second preselected target, and the second preselected targets form the first moving-target group. A coordinate matches the first coordinate when it satisfies the following conditions: its abscissa lies within a preset range, and the absolute difference between its ordinate and the ordinate of the first coordinate is less than or equal to a preset threshold.
It can be understood that the preset range includes a first preset range and a second preset range. The abscissas in the first preset range are those of the points in a first region of the first video, the region adjacent to a first side of the first video; the abscissas in the second preset range are those of the points in a second region, adjacent to a second side. The first side may be the right border of the first video and the second side its left border.
If the motion feature of the first preselected target at the dividing-line position is receding from the position along the first direction or approaching it along the second direction, the abscissa of a second preselected target should lie in the first preset range; if the feature is approaching the position along the first direction or receding from it along the second direction, the abscissa should lie in the second preset range.
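The range selection just described can be sketched as follows. The feature labels, the band width `margin`, and the ordinate tolerance `y_tol` are illustrative assumptions; the side-per-feature mapping follows the paragraph above.

```python
def candidate_x_range(feature, frame_width, margin=40):
    """Pick the preset abscissa range for the counterpart target.
    'recede_d1' / 'approach_d2' -> first range (near the right border);
    'approach_d1' / 'recede_d2' -> second range (near the left border)."""
    if feature in ("recede_d1", "approach_d2"):
        return frame_width - margin, frame_width
    return 0, margin

def coords_match(candidate, first_coord, feature, frame_width,
                 margin=40, y_tol=20):
    """Abscissa inside the preset range, and ordinate within the
    threshold of the first target's ordinate."""
    lo, hi = candidate_x_range(feature, frame_width, margin)
    return (lo <= candidate[0] <= hi
            and abs(candidate[1] - first_coord[1]) <= y_tol)
```

Here `feature` is the first preselected target's own motion feature, so the function looks for the counterpart on the opposite side of the unrolled frame.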
In one possible mode, for the first preselected target, at least one first moving target whose motion feature at the dividing-line position matches is obtained first; from these, at least one second moving target whose abscissa lies in the preset range is obtained; and from those, at least one third moving target whose ordinate differs from the ordinate of the first preselected target by at most the preset threshold is obtained. The third moving targets are the second preselected targets and form the first moving-target group.
Continuing with Fig. 1, in practice, moving target 11 and moving target 12, separated by the dividing line in the first video, correspond to the same moving target, and both are preselected moving targets. As shown in Fig. 1, the motion feature of target 11 at the dividing-line position is receding from the position along the first direction, while that of target 12 is approaching the position along the second direction, so their motion features match. Then, when target 11 is the first preselected moving target, target 12 is a second preselected moving target in the first moving-target group.
(2) Using a pedestrian re-identification algorithm, determine which pre-selected moving target in the first moving target group is the same moving target as the first pre-selected moving target.
Specifically, the first pre-selected moving target is compared with the moving targets of the first moving target group: features are extracted with pedestrian re-identification technology and, taking into account each pre-selected moving target's motion feature when passing the position corresponding to the dividing line, its crossing time, and its coordinates, the first moving target group is searched to retrieve the pre-selected moving target with the highest similarity. That pre-selected moving target and the first pre-selected moving target are the same moving target.
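As a stand-in for the re-identification retrieval described above, the highest-similarity search over the first moving target group might look like the following sketch. The cosine-similarity measure over flat feature vectors is an illustrative assumption; real pedestrian re-identification systems compare learned appearance embeddings.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two appearance-feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(query_feature, group_features):
    """Return the index of the group member whose feature is most similar to the
    query target's feature -- the 'retrieve the highest-similarity pre-selected
    moving target' step, under the cosine-similarity assumption."""
    scores = [cosine_similarity(query_feature, f) for f in group_features]
    return max(range(len(scores)), key=scores.__getitem__)
```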
(3) Merge the pre-selected motion trajectories of the pre-selected moving targets corresponding to the same moving target.
Specifically, the pre-selected motion trajectories of the pre-selected moving targets of the same moving target are merged to obtain the merged motion trajectory.
The trajectory-merging method itself may follow the prior art and is not described again here.
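The patent defers the merging method to the prior art. As one illustrative possibility (an assumption, not the patent's prescribed method), the two trajectory segments of the same physical target could simply be concatenated and ordered by timestamp:

```python
def merge_tracks(track_a, track_b):
    """Merge two pre-selected trajectories of one physical target into a single
    trajectory ordered by time. Each trajectory is a list of (t, x, y) samples;
    concatenate-and-sort is an illustrative merge strategy."""
    return sorted(track_a + track_b, key=lambda sample: sample[0])
```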
Step S203 is carried out as described below:
After the pre-selected motion trajectories of the pre-selected moving targets corresponding to the same moving target are merged to obtain the merged motion trajectory, the concentrated panoramic video is obtained according to the merged motion trajectory. This specifically includes: obtaining the concentrated panoramic video according to the merged motion trajectory, the first motion trajectories, and the background, where the first motion trajectories are the motion trajectories of the original moving targets other than the pre-selected moving targets, and the background is the background obtained from the first video in step S201.
Specifically, after step S202, the complete motion trajectory of every moving target in the panoramic video is available. At this point an energy-function algorithm may be used to complete the concentration of the panoramic video: the algorithm densely rearranges the motion trajectories without changing their spatial positions, while avoiding collisions between trajectories as far as possible. The static background image obtained in step S201 is used as the background, and the target regions of the rearranged trajectories, kept at their original spatial positions, are fused with the background image to obtain the concentrated panoramic video.
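The final compositing step (pasting the target regions of the rearranged trajectories onto the static background at their original spatial positions) can be sketched as below. The pixel-grid representation is an illustrative simplification of real image buffers.

```python
def composite_frame(background, patches):
    """Paste each target's pixel patch onto a copy of the static background at
    its original spatial position. `background` is a 2-D grid of pixel values;
    each patch is a (top, left, 2-D grid) tuple. The background itself is not
    modified, mirroring the fusion of target regions with the background image."""
    frame = [row[:] for row in background]
    for top, left, patch in patches:
        for dy, patch_row in enumerate(patch):
            for dx, value in enumerate(patch_row):
                frame[top + dy][left + dx] = value
    return frame
```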
In the present embodiment, the pre-selected motion trajectory of each pre-selected moving target in the first video, obtained after the panoramic video is split along the dividing line, is acquired; each pre-selected motion trajectory passes the position of the dividing line. According to these pre-selected motion trajectories, the motion feature of each pre-selected moving target when passing the position corresponding to the dividing line is obtained, the pre-selected motion trajectories corresponding to the same moving target are merged, and video concentration is performed according to the merged motion trajectory. The application therefore merges the motion trajectories of a single moving target that were separated by the dividing line in the panoramic video, avoids tracking the motion of one moving target as multiple trajectories of multiple moving targets, and improves the accuracy of panoramic video concentration.
The panoramic video concentration method of the application has been described above using specific embodiments; the panoramic video concentration apparatus of the application is described below.
Fig. 3 is a schematic diagram of the panoramic video concentration apparatus provided by an embodiment of the application. This embodiment provides a video concentration apparatus, which may be part or all of a smart device such as a computer, tablet computer, or laptop. As shown in Fig. 3, the apparatus includes:
an acquisition module 310, configured to obtain the pre-selected motion trajectory of each pre-selected moving target in a first video, wherein the pre-selected motion trajectory of each pre-selected moving target passes a position corresponding to a dividing line, and the first video is obtained after a panoramic video is split along the dividing line; and
a merging module 320, configured to obtain, according to the pre-selected motion trajectory of each pre-selected moving target, the motion feature of each pre-selected moving target when passing the position, and to merge, according to the motion features, the pre-selected motion trajectories corresponding to the same moving target to obtain a merged motion trajectory.
The acquisition module 310 is further configured to obtain the concentrated panoramic video according to the merged motion trajectory.
Optionally, as one embodiment, the motion feature of each pre-selected moving target when passing the position is any one of the following: gradually approaching the position along a first direction, gradually moving away from the position along the first direction, gradually approaching the position along a second direction, or gradually moving away from the position along the second direction; wherein gradually approaching the position along the first direction matches gradually moving away from the position along the second direction, gradually moving away from the position along the first direction matches gradually approaching the position along the second direction, and the first direction and the second direction are opposite.
Optionally, as one embodiment, the merging module 320 is specifically configured to: merge the pre-selected motion trajectories corresponding to the same moving target according to the motion features, the time at which each pre-selected moving target passes the position, and the coordinates of each pre-selected moving target.
Optionally, as one embodiment, the merging module 320 is specifically configured to: for a first pre-selected moving target among the pre-selected moving targets, determine as a first moving target group the pre-selected moving targets whose motion features match that of the first pre-selected moving target, whose time of passing the position is the same, and whose coordinates match a first coordinate of the first pre-selected moving target; determine, using a pedestrian re-identification algorithm, the pre-selected moving target in the first moving target group that is the same moving target as the first pre-selected moving target; and merge the pre-selected motion trajectories corresponding to the same moving target.
Optionally, as one embodiment, the abscissa of a coordinate that matches the first coordinate lies within a preset range, and the absolute value of the difference between its ordinate and the ordinate of the first coordinate is less than or equal to a preset threshold.
The video concentration apparatus provided by the embodiments of the application may specifically be used to execute the video concentration method described above; its implementation principle and effect can be found in the method embodiments and are not repeated here.
Fig. 4 is a schematic diagram of the panoramic video concentration device provided by an embodiment of the application. As shown in Fig. 4, the video concentration device provided by an embodiment of the application includes:
a memory 410, configured to store computer-executable instructions; and
a processor 420, configured to execute the computer-executable instructions stored in the memory to implement the video concentration method described above.
Optionally, the video concentration device further includes a transceiver 430, configured to communicate with other network devices or terminal devices.
The video concentration device provided by the embodiments of the application may specifically be used to execute the video concentration method described above; its implementation principle and effect can be found in the method embodiments and are not repeated here.
An embodiment of the application also provides a computer-readable storage medium in which computer-executable instructions are stored; when the computer-executable instructions are executed by a processor, any of the video concentration methods described above is implemented.
An embodiment of the application also provides a computer program product that includes computer-executable instructions; when the computer-executable instructions are executed by a processor, any of the video concentration methods described above is implemented.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and other divisions are possible in actual implementation. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or of other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the embodiment's solution.
In addition, the functional units in the embodiments of the application may be integrated into one processing unit, may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
A person of ordinary skill in the art will appreciate that all or part of the steps of the method embodiments above may be completed by hardware related to program instructions. The aforementioned computer program may be stored in a computer-readable storage medium; when executed by a processor, the program performs the steps of the method embodiments above. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical discs.
Finally, it should be noted that the embodiments above are only intended to illustrate the technical solutions of the application, not to limit them. Although the application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, and some or all of the technical features may be replaced by equivalents; such modifications and replacements do not remove the essence of the corresponding technical solutions from the scope of the technical solutions of the embodiments of the application.
Claims (10)
1. A panoramic video concentration method, characterized by comprising:
obtaining a pre-selected motion trajectory of each pre-selected moving target in a first video, wherein the pre-selected motion trajectory of each pre-selected moving target passes a position corresponding to a dividing line, and the first video is obtained after a panoramic video is split along the dividing line;
obtaining, according to the pre-selected motion trajectory of each pre-selected moving target, a motion feature of each pre-selected moving target when passing the position, and merging, according to the motion features, the pre-selected motion trajectories corresponding to a same moving target to obtain a merged motion trajectory; and
obtaining a concentrated panoramic video according to the merged motion trajectory.
2. The method according to claim 1, wherein the motion feature of each pre-selected moving target when passing the position is any one of the following:
gradually approaching the position along a first direction, gradually moving away from the position along the first direction, gradually approaching the position along a second direction, or gradually moving away from the position along the second direction;
wherein gradually approaching the position along the first direction matches gradually moving away from the position along the second direction, gradually moving away from the position along the first direction matches gradually approaching the position along the second direction, and the first direction and the second direction are opposite.
3. The method according to claim 2, wherein merging, according to the motion features, the pre-selected motion trajectories corresponding to the same moving target comprises:
merging the pre-selected motion trajectories corresponding to the same moving target according to the motion features, the time at which each pre-selected moving target passes the position, and the coordinates of each pre-selected moving target.
4. The method according to claim 3, wherein merging the pre-selected motion trajectories corresponding to the same moving target according to the motion features, the time at which each pre-selected moving target passes the position, and the coordinates of each pre-selected moving target comprises:
for a first pre-selected moving target among the pre-selected moving targets: determining, as a first moving target group, the pre-selected moving targets whose motion features match that of the first pre-selected moving target, whose time of passing the position is the same, and whose coordinates match a first coordinate of the first pre-selected moving target;
determining, using a pedestrian re-identification algorithm, the pre-selected moving target in the first moving target group that is the same moving target as the first pre-selected moving target; and
merging the pre-selected motion trajectories corresponding to the same moving target.
5. The method according to claim 4, wherein the abscissa of a coordinate that matches the first coordinate lies within a preset range, and the absolute value of the difference between its ordinate and the ordinate of the first coordinate is less than or equal to a preset threshold.
6. A video concentration apparatus, characterized by comprising:
an acquisition module, configured to obtain a pre-selected motion trajectory of each pre-selected moving target in a first video, wherein the pre-selected motion trajectory of each pre-selected moving target passes a position corresponding to a dividing line, and the first video is obtained after a panoramic video is split along the dividing line; and
a merging module, configured to obtain, according to the pre-selected motion trajectory of each pre-selected moving target, a motion feature of each pre-selected moving target when passing the position, and to merge, according to the motion features, the pre-selected motion trajectories corresponding to a same moving target to obtain a merged motion trajectory;
wherein the acquisition module is further configured to obtain a concentrated panoramic video according to the merged motion trajectory.
7. The apparatus according to claim 6, wherein the merging module is specifically configured to:
merge the pre-selected motion trajectories corresponding to the same moving target according to the motion features, the time at which each pre-selected moving target passes the position, and the coordinates of each pre-selected moving target.
8. The apparatus according to claim 7, wherein the merging module is specifically configured to:
for a first pre-selected moving target among the pre-selected moving targets: determine, as a first moving target group, the pre-selected moving targets whose motion features match that of the first pre-selected moving target, whose time of passing the position is the same, and whose coordinates match a first coordinate of the first pre-selected moving target;
determine, using a pedestrian re-identification algorithm, the pre-selected moving target in the first moving target group that is the same moving target as the first pre-selected moving target; and
merge the pre-selected motion trajectories corresponding to the same moving target.
9. A video concentration device, characterized by comprising a processor and a memory;
wherein the memory is configured to store computer-executable instructions, so that the processor executes the computer-executable instructions to implement the panoramic video concentration method according to any one of claims 1-5.
10. A computer storage medium, characterized by comprising computer-executable instructions, wherein the computer-executable instructions are used to implement the panoramic video concentration method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910648517.4A CN110322471B (en) | 2019-07-18 | 2019-07-18 | Method, device and equipment for concentrating panoramic video and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110322471A true CN110322471A (en) | 2019-10-11 |
CN110322471B CN110322471B (en) | 2021-02-19 |
Family
ID=68123960
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910648517.4A Active CN110322471B (en) | 2019-07-18 | 2019-07-18 | Method, device and equipment for concentrating panoramic video and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110322471B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113689331A (en) * | 2021-07-20 | 2021-11-23 | 中国铁路设计集团有限公司 | Panoramic image splicing method under complex background |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101751677A (en) * | 2008-12-17 | 2010-06-23 | 中国科学院自动化研究所 | Target continuous tracking method based on multi-camera |
CN102256065A (en) * | 2011-07-25 | 2011-11-23 | 中国科学院自动化研究所 | Automatic video condensing method based on video monitoring network |
CN102708182A (en) * | 2012-05-08 | 2012-10-03 | 浙江捷尚视觉科技有限公司 | Rapid video concentration abstracting method |
CN102930061A (en) * | 2012-11-28 | 2013-02-13 | 安徽水天信息科技有限公司 | Video abstraction method and system based on moving target detection |
CN103686095A (en) * | 2014-01-02 | 2014-03-26 | 中安消技术有限公司 | Video concentration method and system |
CN107770484A (en) * | 2016-08-19 | 2018-03-06 | 杭州海康威视数字技术股份有限公司 | A kind of video monitoring information generation method, device and video camera |
Non-Patent Citations (3)
Title |
---|
P. KAEWTRAKULPONG等: "An improved adaptive background mixture model for real-time tracking with shadow detection", 《VIDEO-BASED SURVEILLANCE SYSTEMS: COMPUTER VISION AND DISTRIBUTED PROCESSING》 * |
WEN-NUNG LIE等: "News Video Summarization Based on Spatial", 《PROCEEDINGS OF THE 5TH PACIFIC RIM CONFERENCE ON ADVANCES IN MULTIMEDIA INFORMATION》 * |
梁浩哲等: "基于运动相似性的监控轨迹聚合分析", 《国防科技大学学报》 * |
Also Published As
Publication number | Publication date |
---|---|
CN110322471B (en) | 2021-02-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110135243B (en) | Pedestrian detection method and system based on two-stage attention mechanism | |
US20180114071A1 (en) | Method for analysing media content | |
CN103824070B (en) | A kind of rapid pedestrian detection method based on computer vision | |
CN102378992B (en) | Articulated region detection device and method for same | |
EP2956891B1 (en) | Segmenting objects in multimedia data | |
CN109948497A (en) | A kind of object detecting method, device and electronic equipment | |
CN107624189A (en) | Method and apparatus for generating forecast model | |
Chetverikov et al. | Dynamic texture as foreground and background | |
Kumar et al. | Multiple cameras using real time object tracking for surveillance and security system | |
Gutoski et al. | Detection of video anomalies using convolutional autoencoders and one-class support vector machines | |
Lu et al. | Deep learning methods for human behavior recognition | |
Tyagi et al. | A review of deep learning techniques for crowd behavior analysis | |
CN110532959B (en) | Real-time violent behavior detection system based on two-channel three-dimensional convolutional neural network | |
CN115578770A (en) | Small sample facial expression recognition method and system based on self-supervision | |
François | Real-time multi-resolution blob tracking | |
CN108961287B (en) | Intelligent shelf triggering method, intelligent shelf system, storage medium and electronic equipment | |
CN110322471A (en) | Method, apparatus, equipment and the storage medium of panoramic video concentration | |
Ghani | Robust real-time fire detector using cnn and lstm | |
Jebur et al. | Abnormal Behavior Detection in Video Surveillance Using Inception-v3 Transfer Learning Approaches | |
CN111160255B (en) | Fishing behavior identification method and system based on three-dimensional convolution network | |
Qiu et al. | A methodology review on multi-view pedestrian detection | |
Ahad et al. | Towards Generalized Violence Detection; a Pose Estimation Approach | |
Guraya et al. | Predictive visual saliency model for surveillance video | |
Xiang et al. | Action recognition for videos by long-term point trajectory analysis with background removal | |
Revathi et al. | Hybridisation of feed forward neural network and self-adaptive PSO with diverse of features for anomaly detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||