CN108458691B - Collision detection method and device - Google Patents
Collision detection method and device
- Publication number
- CN108458691B CN108458691B CN201810106578.3A CN201810106578A CN108458691B CN 108458691 B CN108458691 B CN 108458691B CN 201810106578 A CN201810106578 A CN 201810106578A CN 108458691 B CN108458691 B CN 108458691B
- Authority
- CN
- China
- Prior art keywords
- moving object
- collision
- video
- detection result
- collision factor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/04—Interpretation of pictures
Abstract
This application provides a collision detection method and device. In the scheme provided by this application, after a video to be detected is obtained, the moving objects contained in the video are detected based on deep learning, as a first detection result; at the same time, the background and foreground of the video are separated based on background subtraction, and the moving objects contained in the video are detected in the foreground, as a second detection result. The two detection results are then fused to determine the moving objects contained in the video and their corresponding collision factor information, and whether a collision has occurred between the moving objects is determined from that collision factor information. Because the first detection result uses deep learning, its detection accuracy is high, and supplementing it with the second detection result further improves the precision of the detection, so the moving objects and their collision factor information are determined more accurately, thereby improving the precision of collision detection.
Description
Technical field
This application relates to the field of information technology, and in particular to a collision detection method and device.
Background technique
In recent years, with rapid economic development, urban infrastructure construction and the number of vehicles have both grown dramatically. While this brings great convenience, traffic congestion and traffic accidents have increased accordingly, affecting many aspects of people's production and daily life. The study of traffic accidents has therefore become an important field of modern traffic research, and traffic research based on video image technology has become an important means of analyzing traffic accidents. At present, schemes that detect traffic collision accidents based on video image technology have limited detection accuracy, so the accuracy of the detection results is low, and the specific circumstances of a traffic accident cannot be judged.
Summary of the application
The purpose of this application is to provide a collision detection scheme that solves the problem of low detection accuracy.
To this end, this application provides a collision detection method, comprising:
obtaining a video to be detected;
detecting the moving objects contained in the video based on deep learning, as a first detection result;
separating the background and foreground of the video based on background subtraction, and detecting the moving objects contained in the video within the foreground, as a second detection result;
fusing the first detection result and the second detection result to determine the moving objects contained in the video and their corresponding collision factor information;
determining, according to the collision factor information of the moving objects, whether a collision has occurred between them.
In another aspect, this application also provides a collision detection device, comprising:
an input unit, configured to obtain a video to be detected;
a first detection unit, configured to detect the moving objects contained in the video based on deep learning, as a first detection result;
a second detection unit, configured to separate the background and foreground of the video based on background subtraction and to detect the moving objects contained in the video within the foreground, as a second detection result;
a tracking and fusion unit, configured to fuse the first detection result and the second detection result to determine the moving objects contained in the video and their corresponding collision factor information;
a judgment unit, configured to determine, according to the collision factor information of the moving objects, whether a collision has occurred between them.
In addition, this application also provides a collision detection device, comprising:
a processor; and
one or more machine-readable media storing machine-readable instructions which, when executed by the processor, cause the device to perform the method of any one of claims 1 to 8.
In the scheme provided by this application, after a video to be detected is obtained, the moving objects contained in the video are detected based on deep learning, as a first detection result; at the same time, the background and foreground of the video are separated based on background subtraction, and the moving objects contained in the video are detected in the foreground, as a second detection result. The first detection result and the second detection result are then fused to determine the moving objects contained in the video and their corresponding collision factor information, and whether a collision has occurred between the moving objects is then determined from that collision factor information. Because the first detection result uses deep learning, its detection accuracy is high, and supplementing it with the second detection result further improves the precision of the detection, so the moving objects and their collision factor information are determined more accurately, thereby improving the precision of collision detection.
Brief description of the drawings
Other features, objects, and advantages of this application will become more apparent by reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is a flowchart of a collision detection method provided by an embodiment of this application;
Fig. 2 is a schematic diagram of detecting collision-type traffic accidents using the collision detection method provided by an embodiment of this application;
Fig. 3 is a schematic diagram of a collision detection device provided by an embodiment of this application;
Fig. 4 is a schematic diagram of another collision detection device provided by an embodiment of this application.
In the drawings, the same or similar reference numerals denote the same or similar components.
Detailed description of the embodiments
The application is described in further detail below with reference to the accompanying drawings.
In a typical configuration of this application, a terminal or a device of a service network includes one or more processors (CPUs), an input/output interface, a network interface, and memory.
The memory may include computer-readable media in the form of volatile memory, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
An embodiment of this application provides a collision detection method that detects the moving objects in a video to judge whether a collision has occurred. The method may be executed by a user device, a network device, or a device formed by integrating a user device and a network device through a network, or it may be an application program running on such devices. The user device includes, but is not limited to, terminal devices such as computers, mobile phones, and tablets; the network device includes, but is not limited to, implementations such as a network host, a single network server, a cluster of multiple network servers, or a set of computers based on cloud computing. Here, the cloud consists of a large number of hosts or network servers based on cloud computing (Cloud Computing), where cloud computing is a form of distributed computing: a virtual machine consisting of a loosely coupled set of computers.
Fig. 1 shows a flowchart of a collision detection method in an embodiment of this application, which includes the following processing steps:
Step S101: obtain a video to be detected. The video to be detected is a sequence of frames arranged in time order and may be captured by various camera devices. For example, when this scheme is applied to collision accident detection in traffic accidents, the videos to be detected may be captured by surveillance cameras installed along the road.
Step S102: detect the moving objects contained in the video based on deep learning, as a first detection result. Before detection with deep learning, the detection model must be trained on a training set. The training set should consist of videos from a similar domain; for example, when collision accidents in traffic need to be detected, since the moving objects involved in traffic accidents generally include various motor vehicles, pedestrians, bicycles, and so on, the videos in the training set must also contain these moving objects. The more training samples the training set contains, the more accurate the trained model will be at detection time.
In addition to identifying the moving objects contained in the video, deep learning can also identify the entity type of each moving object. The classification of entity types can be set according to the application domain; for example, when applied to collision accident detection in traffic accidents, entity types can be divided into motor vehicles, non-motor vehicles, pedestrians, etc., or further subdivided into cars, buses, bicycles, motorcycles, pedestrians, trucks, tank trucks, etc. Thus, when detecting the moving objects contained in the video based on deep learning, the entity type of each moving object can be determined at the same time.
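The application does not specify a particular detector. Purely as an illustration, the first detection result could take a form like the following sketch, in which the detector output format, the label names, and the coarse type mapping are hypothetical assumptions rather than the claimed scheme:

```python
# Illustrative sketch only: the label names and coarse type mapping are
# hypothetical, not the application's actual classes.
COARSE_TYPE = {
    "car": "motor vehicle", "bus": "motor vehicle", "truck": "motor vehicle",
    "tank truck": "motor vehicle", "motorcycle": "motor vehicle",
    "bicycle": "non-motor vehicle", "person": "pedestrian",
}

def first_detection_result(raw_detections, score_threshold=0.5):
    """Keep confident detections and attach coarse entity types.

    `raw_detections` is assumed to be a per-frame list of dicts with keys
    'label', 'score' and 'box' (x, y, w, h), as a generic deep-learning
    detector might produce.
    """
    result = []
    for det in raw_detections:
        if det["score"] < score_threshold:
            continue
        result.append({
            "box": det["box"],
            "label": det["label"],
            "entity_type": COARSE_TYPE.get(det["label"], "unknown"),
        })
    return result

frame_dets = [
    {"label": "car", "score": 0.91, "box": (120, 80, 60, 40)},
    {"label": "person", "score": 0.88, "box": (300, 90, 20, 50)},
    {"label": "bicycle", "score": 0.30, "box": (10, 10, 15, 30)},  # filtered out
]
for det in first_detection_result(frame_dets):
    print(det["label"], det["entity_type"])
```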
Step S103: separate the background and foreground of the video based on background subtraction, and detect within the foreground the moving objects contained in the video, as a second detection result. This step may be executed in parallel with the processing of step S102; that is, the two ways of detecting moving objects can run simultaneously and produce their detection results independently. The background subtraction used in the embodiments of this application may be any existing mature technique, for example separating foreground and background based on a Gaussian mixture background model and extracting the moving targets in the foreground as the moving objects of the second detection result.
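As an illustrative sketch only, with a simple running-average model standing in for the Gaussian mixture model mentioned above and toy single-channel frames, foreground separation might look like:

```python
import numpy as np

def foreground_mask(frame, background, threshold=25):
    """Mark as foreground every pixel that deviates from the background
    model by more than the threshold (a simplified stand-in for the
    Gaussian mixture model)."""
    return np.abs(frame.astype(int) - background.astype(int)) > threshold

def update_background(background, frame, alpha=0.05):
    """Running-average background update; a fixed camera keeps this model
    stable, which is why a fixedly installed camera is preferred."""
    return (1 - alpha) * background + alpha * frame

# Toy single-channel frames: a static scene plus one bright moving blob.
background = np.zeros((4, 4))
frame = background.copy()
frame[1, 1] = 200  # the "moving object"
mask = foreground_mask(frame, background)
print(int(mask.sum()))  # -> 1 foreground pixel
```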
When applying background subtraction, the stability of the background image affects detection precision: if the background frame remains fixed and unchanged, the separation of background and foreground is more precise. Therefore, in the embodiments of this application, the video to be detected may be obtained from a fixedly installed camera device, so that the background of the video is relatively fixed, improving the detection accuracy of background subtraction.
Step S104: fuse the first detection result and the second detection result to determine the moving objects contained in the video and their corresponding collision factor information. The purpose of detecting the moving objects in the video in two different ways and then fusing the results is to use background subtraction to supplement the detection result of deep learning, avoiding missed detections by the deep learning model that would affect the accuracy of subsequent processing. For example, if moving object A is not detected in the first detection result but is detected in the second detection result, the first detection result is supplemented based on the second detection result, and moving object A is taken as one of the finally determined moving objects contained in the video.
The collision factor information corresponding to these moving objects is the information used to determine whether a collision has occurred between them, for example movement speed, movement direction, trajectory, object contour, etc. During detection, the object contour can be represented as a detection box, expressed as a four-dimensional vector (x, y, w, h), where x and y are the coordinates of the top-left corner of the detection box, w is its width, and h is its height. These collision factors can be obtained by applying Kalman filter tracking to the moving objects detected in the video.
In the embodiments of this application, when supplementing the first detection result based on the second detection result and performing Kalman filter tracking, Kalman filter tracking may first be applied separately to the first and second detection results to determine the per-frame collision factor observations and collision factor predictions of a first moving object set and of a second moving object set. Here, the first moving object set is the set of moving objects in the first detection result, and the second moving object set is the set of moving objects in the second detection result. A collision factor observation is the value actually detected in each video frame, while a collision factor prediction is the value estimated for the current frame based on the preceding frames. The final collision factor information is then determined from the prediction and the observation, combined according to the noise of each.
Take the speed in the collision factor information as an example. To obtain the speed of moving object A at frame k, the speed prediction at frame k is first derived from the speed at frame k-1; if, based on the preceding data, the object is assumed to move at a constant speed, the speed at frame k-1 can serve as the prediction at frame k, say 23 km/h, with a noise deviation of 5 km/h (Gaussian white noise may be used). Meanwhile, the speed actually measured from frame k and the preceding images is the speed observation of moving object A, say 25 km/h, with a noise deviation of 4 km/h.
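Using these assumed numbers, the standard scalar Kalman update fuses the prediction and the observation by weighting each with the inverse of its noise variance; a minimal sketch:

```python
def kalman_update(x_pred, var_pred, z_obs, var_obs):
    """Scalar Kalman update: fuse a prediction and an observation, each
    weighted by the inverse of its noise variance."""
    gain = var_pred / (var_pred + var_obs)
    x_est = x_pred + gain * (z_obs - x_pred)
    var_est = (1 - gain) * var_pred
    return x_est, var_est

# Prediction 23 km/h (sigma 5) and observation 25 km/h (sigma 4), as in
# the example above; the fused estimate lies between the two values,
# closer to the less noisy observation.
speed, var = kalman_update(23.0, 5.0 ** 2, 25.0, 4.0 ** 2)
print(round(speed, 2), round(var, 2))  # -> 24.22 9.76
```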
When performing Kalman filter tracking, an observation must be obtained by actual detection, whereas a prediction is obtained from the data of the preceding video frames; therefore, if a detection is missed in a certain frame, that frame may have only a prediction and no observation. The first moving object set and the second moving object set, together with their corresponding collision factor observations and predictions, can then be matched. If some moving object A in the first moving object set has only a collision factor prediction and no collision factor observation, but, owing to the different detection method, the second moving object set contains the corresponding data for moving object A, that data can be used as a supplement, so that both the collision factor observation and the collision factor prediction of moving object A are obtained.
Furthermore, if a certain moving object A is not detected at all by deep learning, the first moving object set will not contain moving object A, and neither its collision factor observation nor its prediction can be obtained from that set. If background subtraction does detect moving object A, the second moving object set contains it along with its collision factor observation and prediction, so the data of moving object A in the second moving object set can supplement the overall data, preventing missed detections and improving accuracy.
In any of the above cases, when matching the first and second moving object sets together with their corresponding collision factor observations and predictions, the Hungarian matching algorithm may be used, so that the data produced by the two detection methods are matched as completely as possible.
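A minimal sketch of such matching, using SciPy's implementation of the Hungarian algorithm with a hypothetical IoU-based cost between detection boxes; the box values and the 0.1 acceptance threshold are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) detection boxes."""
    iw = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    ih = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def match_sets(first_set, second_set, min_iou=0.1):
    """Match boxes from the two detection results so that total IoU is
    maximized (the Hungarian algorithm minimizes cost, hence the negation)."""
    cost = np.array([[-iou(a, b) for b in second_set] for a in first_set])
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols) if -cost[r, c] > min_iou]

first = [(10, 10, 40, 30), (100, 50, 30, 30)]   # e.g. deep-learning boxes
second = [(102, 52, 30, 30), (12, 11, 40, 30)]  # e.g. background-subtraction boxes
print(match_sets(first, second))  # -> [(0, 1), (1, 0)]
```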
After the moving objects contained in the video and their corresponding collision factor observations and predictions have been determined by matching, the collision factor information of each moving object can be computed from its observations and predictions according to the Kalman filtering algorithm; for example, the final speed of moving object A at frame k is computed by combining the speed observation and the speed prediction through their covariances.
Step S105: determine, according to the collision factor information of the moving objects, whether a collision has occurred between them. In real scenes, moving objects exhibit a series of phenomena when they collide; once these phenomena are turned into rules expressed in terms of specific collision factors, they can serve as the criteria for judging whether a collision has occurred.
Take collisions in traffic accidents as an example. If, during motion, the overlap of the detection boxes of two moving objects exceeds a certain threshold, a collision is considered to have occurred; at this point, attention must also be paid to whether any detection box disappears (can no longer be detected), and if one or more parties disappear, a collision can be concluded. Furthermore, the motion trajectories of both parties can be examined: if the movement directions are parallel, it may not actually be a collision but merely a pass-by, for example a moving vehicle passing a pedestrian walking in the same direction and briefly occluding the pedestrian. But if the angle between the movement directions is large, there is evident relative motion, indicating that a collision has very likely occurred. In addition, after a collision occurs, surrounding moving objects such as other vehicles and pedestrians are affected by the accident and slow down or even stand still for a period of time, so such phenomena can also serve as part of the rules for judging whether a collision has occurred.
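The overlap-plus-direction rule described above can be sketched as follows; the IoU and angle thresholds are illustrative assumptions, not values given in the application:

```python
def box_iou(a, b):
    """Overlap ratio (IoU) of two (x, y, w, h) detection boxes."""
    iw = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    ih = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def is_collision(box_a, box_b, dir_a, dir_b,
                 iou_threshold=0.3, angle_threshold=30.0):
    """Rule from the text: the detection boxes must overlap beyond a
    threshold AND the movement directions (in degrees) must differ enough
    to show relative motion; nearly parallel motion is treated as a
    pass-by rather than a collision."""
    angle = abs(dir_a - dir_b) % 360.0
    angle = min(angle, 360.0 - angle)
    return box_iou(box_a, box_b) > iou_threshold and angle > angle_threshold

# Strongly overlapping boxes moving at right angles: judged a collision.
print(is_collision((0, 0, 10, 10), (2, 2, 10, 10), 0.0, 90.0))  # -> True
# Same overlap but near-parallel motion: judged a pass-by.
print(is_collision((0, 0, 10, 10), (2, 2, 10, 10), 0.0, 5.0))   # -> False
```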
When a traffic accident occurs, collisions between different traffic participants cause accidents of different severity. For example, a traffic accident involving a pedestrian often results in personal injury, so its severity is generally greater than that of an accident between vehicles; accidents involving large vehicles such as tank trucks, buses, and trucks also tend to pose major safety hazards; while a collision between cars at low speed may be considered not a serious accident. Therefore, after determining from the collision factor information of the moving objects whether a collision has occurred between them, if it is determined that a collision has occurred, the severity of the collision can further be determined according to the entity types and collision factor information of the moving objects.
When determining the severity of the collision according to the entity types and collision factor information of the moving objects, the video frames related to the accident can first be extracted from the video, and the severity then determined according only to the entity types and collision factor information of the moving entities in those associated video frames. For example, if a video segment is 300 frames long and the collision process spans frames 80 to 120, only frames 80 to 120 need to be extracted as the associated video frames used to judge the severity of the collision.
In addition, in the collision detection methods provided by some embodiments of this application, additional event information in the associated video frames can also be detected based on deep learning. These additional events can be defined and trained according to the actual application scenario; for example, in traffic accident collision detection, the additional event information may include smoke, fire, and so on. If the collision causes fire or smoke, its severity is likely to be greater, so the additional event information can be taken into account when determining severity; that is, the severity of the collision is determined according to the entity types of the moving objects, the collision factor information of the moving entities in the associated video frames, and the additional event information.
Fig. 2 shows a scheme for detecting collision-type traffic accidents using the scheme of this application, which includes the following processing steps:
S201: input each video frame in time order.
S202: detect the moving objects using deep learning, identifying their specific entity types, including cars, buses, bicycles, motorcycles, pedestrians, trucks, tank trucks, etc.
S203: while the deep-learning detection runs, separate foreground and background using background subtraction and identify the moving objects in the foreground.
S204: apply Kalman filter tracking to each moving object detected in S202, and likewise to the moving objects in the foreground separated out in S203.
S205: match the moving objects detected by deep learning with the tracked objects detected by background subtraction using the Hungarian algorithm, to determine the final moving objects and their collision factor information.
S206: perform rule-based judgment using collision factor information such as the speed, direction, trajectory, and detection box of each moving object; objects that meet the preset conditions are judged to have collided.
S207: for the associated video frames in which a collision is judged to have occurred, grade the severity of the collision accident, for example whether a pedestrian is involved, or whether a vehicle with major safety risks such as a bus or truck is involved.
S208: detect additional event information such as fire and smoke, to assist in determining the severity of injuries and damage.
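The S201 to S208 flow can be sketched as a skeleton in which every component (detector, background model, tracker, matcher, rule judge, grader) is a hypothetical placeholder:

```python
# Hypothetical skeleton of the S201-S208 flow; every component passed in
# is a placeholder, not the application's actual implementation.
def process_video(frames, detect_dl, detect_bg, tracker, match, judge, grade):
    events = []
    for k, frame in enumerate(frames):              # S201: frames in order
        first = detect_dl(frame)                    # S202: deep learning
        second = detect_bg(frame)                   # S203: background subtraction
        tracks = tracker.update(first, second)      # S204: Kalman tracking
        objects = match(tracks)                     # S205: Hungarian matching
        if judge(objects):                          # S206: collision rules
            events.append((k, grade(objects)))      # S207/S208: grading
    return events

# Toy run with trivial placeholders: a "collision" appears on frame 1.
result = process_video(
    [0, 1],
    detect_dl=lambda f: [f],
    detect_bg=lambda f: [f],
    tracker=type("Tracker", (), {"update": staticmethod(lambda a, b: a + b)})(),
    match=lambda t: t,
    judge=lambda objs: 1 in objs,
    grade=lambda objs: "minor",
)
print(result)  # -> [(1, 'minor')]
```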
Based on the same inventive concept, the embodiments of this application also provide a collision detection device. The method corresponding to this device is the method in the preceding embodiments, and its principle for solving the problem is similar.
The collision detection device provided by the embodiments of this application can detect the moving objects in a video to judge whether a collision has occurred. The device may be a user device, a network device, or a device formed by integrating a user device and a network device through a network, or it may be an application program running on such devices. The user device includes, but is not limited to, terminal devices such as computers, mobile phones, and tablets; the network device includes, but is not limited to, implementations such as a network host, a single network server, a cluster of multiple network servers, or a set of computers based on cloud computing. Here, the cloud consists of a large number of hosts or network servers based on cloud computing (Cloud Computing), where cloud computing is a form of distributed computing: a virtual machine consisting of a loosely coupled set of computers.
Fig. 3 shows a structural schematic diagram of a collision detection device in an embodiment of this application. The device includes an input unit 310, a first detection unit 320, a second detection unit 330, a tracking and fusion unit 340, and a judgment unit 350. The input unit 310 is configured to obtain a video to be detected. The video to be detected is a sequence of frames arranged in time order and may be captured by various camera devices. For example, when this scheme is applied to collision accident detection in traffic accidents, the videos to be detected may be captured by surveillance cameras installed along the road.
The first detection unit 320 is configured to detect the moving objects contained in the video based on deep learning, as a first detection result. Before detection with deep learning, the detection model must be trained on a training set. The training set should consist of videos from a similar domain; for example, when collision accidents in traffic need to be detected, since the moving objects involved in traffic accidents generally include various motor vehicles, pedestrians, bicycles, and so on, the videos in the training set must also contain these moving objects, and the more training samples the training set contains, the more accurate the trained model will be at detection time.
In addition to identifying the moving objects contained in the video, deep learning can also identify the entity type of each moving object. The classification of entity types can be set according to the application domain; for example, when applied to collision accident detection in traffic accidents, entity types can be divided into motor vehicles, non-motor vehicles, pedestrians, etc., or further subdivided into cars, buses, bicycles, motorcycles, pedestrians, trucks, tank trucks, etc. Thus, when detecting the moving objects contained in the video based on deep learning, the first detection unit can also determine the entity type of each moving object at the same time.
The second detection unit 330 is configured to separate the background and foreground of the video based on background subtraction and to detect within the foreground the moving objects contained in the video, as a second detection result. The processing in the second detection unit may be executed in parallel with the processing in the first detection unit; that is, the two ways of detecting moving objects can run simultaneously and produce their detection results independently. The background subtraction used in the embodiments of this application may be any existing mature technique, for example separating foreground and background based on a Gaussian mixture background model and extracting the moving targets in the foreground as the moving objects of the second detection result.
When applying background subtraction, the stability of the background image affects detection precision: if the background frame remains fixed and unchanged, the separation of background and foreground is more precise. Therefore, in the embodiments of this application, the input unit may obtain the video to be detected from a fixedly installed camera device, so that the background of the video is relatively fixed, improving the detection accuracy of background subtraction.
The tracking and fusion unit 340 is configured to fuse the first detection result and the second detection result to determine the moving objects contained in the video and their corresponding collision factor information. The purpose of detecting the moving objects in the video in two different ways and then fusing the results is to use background subtraction to supplement the detection result of deep learning, avoiding missed detections by the deep learning model that would affect the accuracy of subsequent processing. For example, if moving object A is not detected in the first detection result but is detected in the second detection result, the first detection result is supplemented based on the second detection result, and moving object A is taken as one of the finally determined moving objects contained in the video.
The collision factor information corresponding to these moving objects refers to information used to determine whether a collision occurs between moving objects, such as movement speed, movement direction, movement trajectory, and object contour. During detection, the object contour may be represented in the form of a detection box, expressed as a four-dimensional vector (x, y, w, h), where x and y are the coordinates of the top-left corner of the detection box, w is its width, and h is its height. The collision factor information may be obtained by performing Kalman filter tracking on the moving objects detected in the video.
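A sketch of how collision factor information (trajectory, speed, direction) can be derived from per-frame (x, y, w, h) detection boxes as described above. The frame rate, units, and function names are illustrative assumptions, not specifics from the patent.

```python
import math

def box_center(box):
    """Center point of an (x, y, w, h) detection box, (x, y) being the top-left corner."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def collision_factors(boxes, fps=25.0):
    """Trajectory of box centers plus speed and heading between the last
    two frames, in pixels per second and degrees."""
    trajectory = [box_center(b) for b in boxes]
    (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
    dx, dy = x1 - x0, y1 - y0
    speed = math.hypot(dx, dy) * fps
    direction = math.degrees(math.atan2(dy, dx))
    return {"trajectory": trajectory, "speed": speed, "direction": direction}

boxes = [(0, 0, 10, 10), (3, 4, 10, 10)]  # the box moved 3 right, 4 down
factors = collision_factors(boxes)
print(factors["speed"], factors["direction"])  # 125.0 px/s, ~53.13 degrees
```

In practice these raw per-frame measurements serve as the observations fed into the Kalman filter tracking discussed next, rather than being used directly.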
In the embodiment of the present application, when the tracking fusion device supplements the first detection result on the basis of the second detection result and performs Kalman filter tracking, it may first perform Kalman filter tracking on the first detection result and the second detection result separately, determining the collision factor observations and collision factor predictions of a first moving object set in each video frame, as well as the collision factor observations and collision factor predictions of a second moving object set in each video frame. Here, the first moving object set is the set of moving objects in the first detection result, and the second moving object set is the set of moving objects in the second detection result. A collision factor observation refers to the value actually detected in a video frame, while a collision factor prediction refers to the value estimated for the current video frame from the preceding video frames. The final collision factor information is then determined from the prediction and the observation, taking the noise of both into account.
Taking the speed in the collision factor information as an example: to obtain the speed of moving object A at frame k, the speed prediction for frame k is first derived from the speed value at frame k-1. If, based on the preceding data, the moving object is assumed to move at a constant velocity, the speed value at frame k-1 may be used as the speed prediction for frame k — say 23 km/h, with a noise deviation of 5 km/h (Gaussian white noise may be assumed). Meanwhile, the speed value obtained from the actual measurement of frame k and the preceding images is the speed observation of moving object A — say 25 km/h, with a noise deviation of 4 km/h.
Since, when Kalman filter tracking is performed, observations must be obtained by actual detection while predictions are derived from the preceding video frames, a frame in which a missed detection occurs may have only a prediction and no observation. Accordingly, the first moving object set and the second moving object set, together with their corresponding collision factor observations and collision factor predictions, may be matched against each other. If, for some moving object A in the first moving object set, only a collision factor prediction is obtained and no collision factor observation, while — owing to the difference in detection methods — the second moving object set contains the collision factor observation of moving object A, the latter may be used as a supplement, so that both the collision factor observation and the collision factor prediction of moving object A are obtained.
Further, it is also possible that a certain moving object A is not detected by the deep learning technique at all, so that the first moving object set does not contain moving object A, and neither its collision factor observation nor its collision factor prediction can be obtained. If the background subtraction technique does detect moving object A, the second moving object set contains moving object A along with its collision factor observation and collision factor prediction, and the data of moving object A in the second moving object set may then be used to supplement the overall data, thereby preventing missed detections and improving accuracy.
In any of the above situations, when the tracking fusion device in the embodiment of the present application matches the first moving object set and the second moving object set together with their corresponding collision factor observations and collision factor predictions, the Hungarian matching algorithm may be used to match the data obtained by the two detection methods as completely as possible.
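The Hungarian algorithm finds the assignment between the two detection sets that minimizes the total matching cost. As a hedged sketch, the tiny example below brute-forces the same optimum over all permutations, which is feasible only for very small sets; a real implementation would use a polynomial-time routine such as `scipy.optimize.linear_sum_assignment`. The cost values are hypothetical distances between the collision factor values of the two sets.

```python
from itertools import permutations

def optimal_assignment(cost):
    """Return the row->column pairing minimizing total cost
    (rows: objects in the first set, columns: objects in the second set).
    Brute force for illustration; the Hungarian algorithm gives the same
    result in O(n^3)."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda cols: sum(cost[r][c] for r, c in enumerate(cols)))
    return list(enumerate(best))

# Hypothetical distances between each object in the first set and each
# object in the second set (small distance = likely the same object).
cost = [
    [1.0, 9.0, 7.0],
    [8.0, 2.0, 6.0],
    [5.0, 4.0, 1.5],
]
print(optimal_assignment(cost))  # [(0, 0), (1, 1), (2, 2)]
```

Objects left unmatched after assignment (or matched at a prohibitively high cost) correspond to the supplementation cases described above, where one detection method saw an object that the other missed.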
After the moving objects included in the video and their corresponding collision factor observations and collision factor predictions have been determined by matching, the collision factor information of each moving object may be calculated according to the Kalman filtering algorithm on the basis of the corresponding collision factor observations and predictions — for example, the final speed value of moving object A at frame k is calculated by combining the covariances of the speed observation and the speed prediction.
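The combination step can be illustrated with a one-dimensional Kalman update using the example values from the text: a predicted speed of 23 km/h (noise deviation 5 km/h) and an observed speed of 25 km/h (noise deviation 4 km/h). This sketch shows only the scalar update; the weighting follows the standard inverse-variance form, and the resulting values are a worked illustration, not figures from the patent.

```python
def kalman_update(pred, pred_dev, obs, obs_dev):
    """Fuse a prediction and an observation; return the corrected
    estimate and its standard deviation."""
    pred_var, obs_var = pred_dev ** 2, obs_dev ** 2
    gain = pred_var / (pred_var + obs_var)   # Kalman gain in [0, 1]
    estimate = pred + gain * (obs - pred)    # pulled toward the observation
    new_var = (1 - gain) * pred_var          # uncertainty shrinks after fusion
    return estimate, new_var ** 0.5

speed, dev = kalman_update(23.0, 5.0, 25.0, 4.0)
print(round(speed, 2), round(dev, 2))  # ~24.22 km/h, deviation ~3.12 km/h
```

Because the observation (deviation 4) is more reliable than the prediction (deviation 5), the fused estimate lands closer to 25 km/h than to 23 km/h, and its uncertainty is lower than either input's.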
The judgment device 350 is configured to determine, according to the collision factor information of the moving objects, whether a collision occurs between them. In an actual scene, moving objects of all kinds exhibit a series of phenomena when a collision occurs; once these phenomena are formalized as rules and expressed in terms of specific collision factor information, they can serve as criteria for determining whether a collision has occurred. Taking a collision in a traffic accident as an example: if, during motion, the overlap of the detection boxes of the moving objects exceeds a certain threshold, a collision may be deemed to have occurred. At that point, attention should be paid to whether any detection box disappears (can no longer be detected); if one or more parties disappear, a collision can be concluded. As another example, the motion trajectories of both parties may be further examined: if the motion directions are parallel, the event may not in fact be a collision but merely a passing — for instance, a moving vehicle passing a pedestrian walking in the same direction and briefly occluding that pedestrian. If, however, the angle between the motion directions is large, there is evident relative motion, indicating that a collision is very likely to have occurred. In addition, after a collision occurs, the surrounding moving objects such as other vehicles and pedestrians are generally affected by the accident, slowing down or even stopping for a certain period of time; such phenomena may therefore also be formalized as part of the rules and serve as a basis for determining whether a collision has occurred.
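The box-overlap and motion-direction rules above can be sketched as follows. The intersection-over-union and angle thresholds are illustrative assumptions; the patent does not fix concrete values.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) detection boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def maybe_collision(box_a, box_b, dir_a, dir_b,
                    iou_threshold=0.3, angle_threshold=30.0):
    """Flag a collision when boxes overlap strongly AND the motion
    directions (degrees) differ by a large angle; near-parallel motion
    is treated as merely passing."""
    angle = abs(dir_a - dir_b) % 360
    angle = min(angle, 360 - angle)
    return iou(box_a, box_b) > iou_threshold and angle > angle_threshold

# Strongly overlapping boxes moving at ~90 degrees to each other: flagged.
print(maybe_collision((0, 0, 10, 10), (3, 3, 10, 10), 0.0, 90.0))  # True
# Same overlap but near-parallel motion: treated as passing, not collision.
print(maybe_collision((0, 0, 10, 10), (3, 3, 10, 10), 0.0, 5.0))   # False
```

A fuller implementation would add the other cues named above — detection-box disappearance and the slowing of surrounding objects — as further conjuncts or as a weighted score.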
When a traffic accident occurs, collisions between different traffic participants cause accidents of different severity. For example, a traffic accident involving a pedestrian often results in personal injury, and its severity is generally greater than that of an accident between vehicles; accidents involving large vehicles such as tank trucks, buses, and trucks also tend to pose major safety hazards; whereas a low-speed collision between two cars may be regarded as not being a serious accident. Accordingly, after determining, according to the collision factor information of the moving objects, whether a collision has occurred between them, and if a collision is determined to have occurred, the judgment device may further determine the severity of the collision according to the entity types and the collision factor information of the moving objects.
When determining the severity of the collision according to the entity types and the collision factor information of the moving objects, the judgment device may extract from the video the associated video frames involved in the accident, and then determine the severity of the collision only according to the entity types and the collision factor information of the moving entities in those associated video frames. For example, if the length of a video segment is 300 frames and the collision process involves frames 80 to 120, only frames 80 to 120 may be extracted as the associated video frames for judging the collision severity.
In addition, in the collision detection method provided by some embodiments of the application, the first detection device may also detect additional event information in the associated video frames based on the deep learning technique. The additional event information may be defined and trained according to the actual application scenario; for example, in the collision detection of traffic accidents, the additional event information may be smoke, fire, and the like. If the collision causes fire or smoke, its severity is likely to be greater; therefore, when determining the severity, the judgment device may further take the additional event information into account — that is, determine the severity of the collision according to the entity types of the moving objects, the collision factor information of the moving entities in the associated video frames, and the additional event information.
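A rule-based severity grading combining entity types, a collision factor (peak speed), and additional event information could look like the following sketch. The categories, weights, and thresholds here are entirely hypothetical assumptions for illustration; the patent only states that these signals are combined, not how they are weighted.

```python
def collision_severity(entity_types, max_speed_kmh, events=()):
    """Grade a detected collision from entity types, the peak speed among
    the colliding objects (km/h), and detected additional events."""
    score = 0
    if "pedestrian" in entity_types:
        score += 3                          # risk of personal injury
    if {"tanker", "bus", "truck"} & set(entity_types):
        score += 2                          # large vehicles involved
    if max_speed_kmh > 60:
        score += 1                          # high-speed impact
    if {"fire", "smoke"} & set(events):
        score += 3                          # fire or smoke after the collision
    return "severe" if score >= 3 else "moderate" if score >= 1 else "minor"

print(collision_severity(["car", "pedestrian"], 40))         # severe
print(collision_severity(["car", "car"], 30))                # minor
print(collision_severity(["car", "truck"], 70, ["smoke"]))   # severe
```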
In summary, in the solution provided by the present application, after the video to be detected is obtained, the moving objects included in the video are detected based on the deep learning technique as the first detection result; at the same time, the background and foreground in the video are separated based on the background subtraction technique, and the moving objects included in the video are detected in the foreground as the second detection result. Integrated processing is then performed according to the first detection result and the second detection result to determine the moving objects included in the video and their corresponding collision factor information, and whether a collision occurs between the moving objects is determined according to that collision factor information. Since the first detection result makes use of the deep learning technique, its detection accuracy is high, and supplementing it with the second detection result further improves the precision of the detection results, so that the moving objects and their corresponding collision factor information determined in this way are more accurate, thereby improving the precision of collision detection.
In addition, the present solution can also use the deep learning technique to identify the entity types of the moving objects involved in a collision, so that the severity of the collision accident can be further judged, providing more definite information for collision detection.
In addition, a part of the present application may be embodied as a computer program product, such as computer program instructions, which, when executed by a computer, may invoke or provide the methods and/or technical solutions according to the present application through the operation of the computer. The program instructions invoking the methods of the present application may be stored in a fixed or removable recording medium, and/or transmitted through broadcast or a data stream in another signal-bearing medium, and/or stored in the working memory of a computer device running according to the program instructions. Here, an embodiment of the present application includes a device as shown in Figure 4, which comprises one or more machine-readable media 410 storing machine-readable instructions and a processor 420 for executing the machine-readable instructions, wherein, when the machine-readable instructions are executed by the processor, the device executes the methods and/or technical solutions according to the foregoing embodiments of the present application.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware; for example, it may be realized using an application-specific integrated circuit (ASIC), a general-purpose computer, or any other similar hardware device. In one embodiment, the software program of the present application may be executed by a processor to realize the steps or functions described above. Likewise, the software program of the present application (including associated data structures) may be stored in a computer-readable recording medium, such as a RAM, a magnetic or optical drive, a floppy disk, or a similar device. In addition, some steps or functions of the present application may be implemented in hardware, for example, as a circuit cooperating with a processor to execute each step or function.
It is obvious to those skilled in the art that the present application is not limited to the details of the above exemplary embodiments, and that the present application can be realized in other specific forms without departing from its spirit or essential characteristics. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the present application being defined by the appended claims rather than by the foregoing description; all changes falling within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. No reference sign in the claims shall be construed as limiting the claim concerned. Furthermore, the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. A plurality of units or devices recited in a device claim may also be implemented by a single unit or device through software or hardware. Words such as "first" and "second" are used to denote names and do not denote any particular order.
Claims (13)
1. A collision detection method, wherein the method comprises:
obtaining a video to be detected;
detecting the moving objects included in the video based on a deep learning technique, as a first detection result;
separating the background and foreground in the video based on a background subtraction technique, and detecting in the foreground the moving objects included in the video, as a second detection result;
supplementing the first detection result based on the second detection result and performing Kalman filter tracking, to determine the moving objects included in the video and their corresponding collision factor information;
determining, according to the collision factor information of the moving objects, whether a collision occurs between the moving objects;
wherein supplementing the first detection result based on the second detection result and performing Kalman filter tracking comprises:
performing Kalman filter tracking on the first detection result and the second detection result separately, and determining the collision factor observations and collision factor predictions of a first moving object set in the video frames and the collision factor observations and collision factor predictions of a second moving object set in the video frames, wherein the first moving object set is the set of moving objects in the first detection result, and the second moving object set is the set of moving objects in the second detection result;
matching the first moving object set and the second moving object set together with the corresponding collision factor observations and collision factor predictions, and determining the moving objects included in the video and their corresponding collision factor observations and collision factor predictions;
calculating, according to a Kalman filtering algorithm, the collision factor information of the moving objects based on the corresponding collision factor observations and collision factor predictions of the moving objects included in the video.
2. The method according to claim 1, wherein, when the moving objects included in the video are detected based on the deep learning technique, the method further comprises:
determining the entity types of the moving objects.
3. The method according to claim 2, wherein, after determining according to the collision factor information of the moving objects whether a collision occurs between the moving objects, the method further comprises:
if it is determined that a collision occurs, determining the severity of the collision according to the entity types and the collision factor information of the moving objects.
4. The method according to claim 3, wherein, if it is determined that a collision occurs, determining the severity of the collision according to the entity types and the collision factor information of the moving objects comprises:
if it is determined that a collision occurs, extracting from the video the associated video frames involved in the accident;
determining the severity of the collision according to the entity types of the moving objects and the collision factor information of the moving entities in the associated video frames.
5. The method according to claim 4, wherein the method further comprises:
detecting additional event information in the associated video frames based on the deep learning technique;
and wherein determining the severity of the collision according to the entity types of the moving objects and the collision factor information of the moving entities in the associated video frames comprises:
determining the severity of the collision according to the entity types of the moving objects, the collision factor information of the moving entities in the associated video frames, and the additional event information.
6. The method according to claim 1, wherein obtaining the video to be detected comprises:
obtaining the video to be detected shot by a fixedly installed photographic device.
7. A collision detection device, wherein the device comprises:
an input unit, for obtaining a video to be detected;
a first detection device, for detecting the moving objects included in the video based on a deep learning technique, as a first detection result;
a second detection device, for separating the background and foreground in the video based on a background subtraction technique, and detecting in the foreground the moving objects included in the video, as a second detection result;
a tracking fusion device, for performing Kalman filter tracking on the first detection result and the second detection result separately, determining the collision factor observations and collision factor predictions of a first moving object set in the video frames and the collision factor observations and collision factor predictions of a second moving object set in the video frames, wherein the first moving object set is the set of moving objects in the first detection result, and the second moving object set is the set of moving objects in the second detection result; matching the first moving object set and the second moving object set together with the corresponding collision factor observations and collision factor predictions, and determining the moving objects included in the video and their corresponding collision factor observations and collision factor predictions; and calculating, according to a Kalman filtering algorithm, the collision factor information of the moving objects based on the corresponding collision factor observations and collision factor predictions of the moving objects included in the video;
a judgment device, for determining, according to the collision factor information of the moving objects, whether a collision occurs between the moving objects.
8. The device according to claim 7, wherein the first detection device is further used to determine the entity types of the moving objects when detecting the moving objects included in the video based on the deep learning technique.
9. The device according to claim 8, wherein the judgment device is further used to determine, upon determining that a collision occurs, the severity of the collision according to the entity types and the collision factor information of the moving objects.
10. The device according to claim 9, wherein the judgment device is used to extract from the video, upon determining that a collision occurs, the associated video frames involved in the accident, and to determine the severity of the collision according to the entity types of the moving objects and the collision factor information of the moving entities in the associated video frames.
11. The device according to claim 10, wherein the first detection device is further used to detect additional event information in the associated video frames based on the deep learning technique;
and the judgment device is used to determine the severity of the collision according to the entity types of the moving objects, the collision factor information of the moving entities in the associated video frames, and the additional event information.
12. The device according to claim 7, wherein the input unit is used to obtain the video to be detected shot by a fixedly installed photographic device.
13. A collision detection device, wherein the device comprises:
a processor; and
one or more machine-readable media storing machine-readable instructions which, when executed by the processor, cause the device to execute the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810106578.3A CN108458691B (en) | 2018-02-02 | 2018-02-02 | A kind of collision checking method and equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108458691A CN108458691A (en) | 2018-08-28 |
CN108458691B true CN108458691B (en) | 2019-04-19 |
Family
ID=63239296
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810106578.3A Active CN108458691B (en) | 2018-02-02 | 2018-02-02 | A kind of collision checking method and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108458691B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110276959A (en) * | 2019-06-26 | 2019-09-24 | 奇瑞汽车股份有限公司 | Processing method, device and the storage medium of traffic accident |
CN112446358A (en) * | 2020-12-15 | 2021-03-05 | 北京京航计算通讯研究所 | Target detection method based on video image recognition technology |
CN112507913A (en) * | 2020-12-15 | 2021-03-16 | 北京京航计算通讯研究所 | Target detection system based on video image recognition technology |
CN112651377B (en) * | 2021-01-05 | 2023-06-09 | 河北建筑工程学院 | Ice and snow sport accident detection method and device and terminal equipment |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103226891B (en) * | 2013-03-26 | 2015-05-06 | 中山大学 | Video-based vehicle collision accident detection method and system |
CN103258432B (en) * | 2013-04-19 | 2015-05-27 | 西安交通大学 | Traffic accident automatic identification processing method and system based on videos |
CN106127114A (en) * | 2016-06-16 | 2016-11-16 | 北京数智源科技股份有限公司 | Intelligent video analysis method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||