CN108458691A - Collision detection method and device - Google Patents

Collision detection method and device

Info

Publication number
CN108458691A
CN108458691A (application number CN201810106578.3A)
Authority
CN
China
Prior art keywords
moving object
collision
video
detection result
collision factor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810106578.3A
Other languages
Chinese (zh)
Other versions
CN108458691B (en)
Inventor
徐常亮
赵两可
刘军宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinhua Zhiyun Technology Co., Ltd.
Original Assignee
Xinhua Zhiyun Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinhua Zhiyun Technology Co., Ltd.
Priority to CN201810106578.3A
Publication of CN108458691A
Application granted
Publication of CN108458691B
Status: Active

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04: Interpretation of pictures

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

This application provides a collision detection method and device. After a video to be detected is obtained, the moving objects contained in the video are detected with a deep-learning technique, as a first detection result; at the same time, background subtraction is used to separate the background and foreground of the video, and the moving objects contained in the video are detected in the foreground, as a second detection result. The first detection result and the second detection result are then fused to determine the moving objects contained in the video and their corresponding collision-factor information, and from the collision-factor information of the moving objects it is determined whether a collision has occurred between them. Because the first detection result uses deep learning, its detection precision is high, and supplementing the first detection result with the second detection result further improves the precision of the detection results, so that the moving objects and their corresponding collision-factor information are determined more accurately, thereby improving the precision of collision detection.

Description

Collision detection method and device
Technical field
This application relates to the field of information technology, and in particular to a collision detection method and device.
Background technology
In recent years, with rapid economic development, urban infrastructure and the number of vehicles have both grown by leaps and bounds. While this brings great convenience, traffic congestion and the frequency of traffic accidents have increased correspondingly, affecting many aspects of people's production and life. Traffic accidents have therefore become an extremely important research field in modern transportation, and traffic-video research based on video-image technology has become an important means of addressing them. At present, schemes that detect collision accidents in traffic from video images have limited detection precision, so the accuracy of their results is low, and they provide no way to judge the specific circumstances of an accident.
Summary of the application
The purpose of the application is to provide a collision detection scheme to solve the problem of low detection precision.
To achieve the above object, the application provides a collision detection method, the method comprising:
obtaining a video to be detected;
detecting the moving objects contained in the video with a deep-learning technique, as a first detection result;
separating the background and foreground of the video by background subtraction, and detecting the moving objects contained in the video in the foreground, as a second detection result;
fusing the first detection result and the second detection result to determine the moving objects contained in the video and their corresponding collision-factor information; and
determining, from the collision-factor information of the moving objects, whether a collision has occurred between the moving objects.
In another aspect, the application further provides a collision detection device, the device comprising:
an input unit for obtaining a video to be detected;
a first detection unit for detecting the moving objects contained in the video with a deep-learning technique, as a first detection result;
a second detection unit for separating the background and foreground of the video by background subtraction and detecting the moving objects contained in the video in the foreground, as a second detection result;
a tracking-and-fusion unit for fusing the first detection result and the second detection result to determine the moving objects contained in the video and their corresponding collision-factor information; and
a judgment unit for determining, from the collision-factor information of the moving objects, whether a collision has occurred between the moving objects.
In addition, the application provides a collision detection device, the device comprising:
a processor; and
one or more machine-readable media storing machine-readable instructions which, when executed by the processor, cause the device to perform the method of any one of claims 1 to 8.
In the scheme provided by the application, after the video to be detected is obtained, the moving objects contained in the video are detected with a deep-learning technique, as the first detection result; at the same time, background subtraction separates the background and foreground of the video, and the moving objects contained in the video are detected in the foreground, as the second detection result. The first detection result and the second detection result are then fused to determine the moving objects contained in the video and their corresponding collision-factor information, and from the collision-factor information of the moving objects it is determined whether a collision has occurred between them. Because the first detection result uses deep learning, its detection precision is high, and supplementing the first detection result with the second detection result further improves the precision of the detection results, so that the moving objects and their corresponding collision-factor information are determined more accurately, thereby improving the precision of collision detection.
Description of the drawings
Other features, objects and advantages of the application will become more apparent upon reading the following detailed description of non-restrictive embodiments made with reference to the accompanying drawings:
Fig. 1 is a flow chart of a collision detection method provided by an embodiment of the application;
Fig. 2 is a schematic diagram of detecting collision-type traffic accidents with the collision detection method provided by an embodiment of the application;
Fig. 3 is a schematic diagram of a collision detection device provided by an embodiment of the application;
Fig. 4 is a schematic diagram of another collision detection device provided by an embodiment of the application;
In the drawings, the same or similar reference numerals denote the same or similar components.
Detailed description
The application is described in further detail below with reference to the accompanying drawings.
In a typical configuration of the application, a terminal or a device of a service network includes one or more processors (CPUs), an input/output interface, a network interface and memory.
The memory may include volatile computer-readable media such as random-access memory (RAM) and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact-disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can store information accessible by a computing device.
An embodiment of the application provides a collision detection method that detects the moving objects in a video to judge whether a collision has occurred. The executing body of the method may be a user device, a network device, or a device formed by integrating a user device and a network device through a network, or it may be an application program running on such a device. The user device includes, but is not limited to, terminal devices such as computers, mobile phones and tablets; the network device includes, but is not limited to, a network host, a single network server, a set of multiple network servers, or a cloud-computing-based set of computers. Here, the cloud is composed of a large number of hosts or network servers based on cloud computing, where cloud computing is a form of distributed computing: a virtual machine composed of a group of loosely coupled computers.
Fig. 1 shows the flow of a collision detection method in an embodiment of the application, which includes the following processing steps:
Step S101: obtain a video to be detected. The video to be detected is a sequence of frames arranged in time order, and may be captured by any kind of camera. For example, when the scheme is applied to detecting collision accidents in traffic, the videos to be detected may be captured by surveillance cameras installed along the road.
Step S102: detect the moving objects contained in the video with a deep-learning technique, as the first detection result. Before detection with the deep-learning technique, the detection model must be trained on a training set. The training set should consist of videos from a similar domain: for example, to detect collision accidents in traffic, where the moving objects involved generally include motor vehicles, pedestrians, bicycles and so on, the training videos must also contain these moving objects; and the more training samples the training set contains, the more accurate the trained model is at detection time.
Besides identifying the moving objects contained in the video, the deep-learning technique can also identify each moving object's entity type. The classification of entity types can be chosen according to the application domain: for collision-accident detection in traffic, the entity types may be divided into motor vehicles, non-motor vehicles, pedestrians and so on, or further subdivided into cars, buses, bicycles, motorcycles, pedestrians, trucks, oil tankers and so on. Thus, when the moving objects contained in the video are detected with the deep-learning technique, their entity types can be determined at the same time.
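By way of illustration only, the following is a minimal sketch of such a per-frame detector, built here on an off-the-shelf torchvision Faster R-CNN; the patent does not name a particular network, and the score threshold and the mapping of COCO labels onto traffic entity types are assumptions.

```python
# Illustrative sketch only: per-frame moving-object detection with a generic,
# pre-trained detector. The network choice, score threshold and label mapping
# are assumptions, not part of the patent.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Assumed mapping of COCO label ids onto the entity types named above.
TRAFFIC_CLASSES = {1: "pedestrian", 2: "bicycle", 3: "car",
                   4: "motorcycle", 6: "bus", 8: "truck"}

@torch.no_grad()
def detect_frame(frame_rgb, score_thresh=0.5):
    """Return a list of (x, y, w, h, entity_type) for one H x W x 3 frame."""
    out = model([to_tensor(frame_rgb)])[0]
    detections = []
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if score >= score_thresh and int(label) in TRAFFIC_CLASSES:
            x1, y1, x2, y2 = box.tolist()
            detections.append((x1, y1, x2 - x1, y2 - y1,
                               TRAFFIC_CLASSES[int(label)]))
    return detections
```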
Step S103: separate the background and foreground of the video by background subtraction, and detect the moving objects contained in the video in the foreground, as the second detection result. This step can run in parallel with step S102; that is, the two detection modes can proceed simultaneously and produce their detection results independently. The background subtraction used in this embodiment may be any existing mature technique, for example separating foreground from background with a Gaussian-mixture background model and extracting the moving targets in the foreground as the moving objects of the second detection result.
The stability of the background image has a definite influence on the precision of background subtraction: the more fixed the background frame is, the better the background and foreground are distinguished. Therefore, in this embodiment the video to be detected may be obtained from a camera that is fixedly installed, so that the video's background is relatively fixed, which improves the detection precision of background subtraction.
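A minimal sketch of this branch, assuming OpenCV's Gaussian-mixture (MOG2) subtractor as the mature technique mentioned above; the history length, variance threshold, morphology kernel and minimum blob area are illustrative assumptions.

```python
# Illustrative sketch only: foreground/background separation with a
# Gaussian-mixture background model, then blob extraction in the foreground.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def detect_foreground_objects(frame_bgr, min_area=400):
    """Return detection boxes (x, y, w, h) of moving blobs in the foreground."""
    mask = subtractor.apply(frame_bgr)
    # Drop shadow pixels (value 127) and speckle noise.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```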
Step S104: fuse the first detection result and the second detection result to determine the moving objects contained in the video and their corresponding collision-factor information. The purpose of detecting the moving objects in the video in two different ways and then fusing the results is to let background subtraction supplement the detection result of the deep-learning technique, avoiding misses in the deep-learning detection that would impair the accuracy of subsequent processing. For example, if moving object A is not detected in the first detection result but is detected in the second detection result, the first detection result is supplemented on the basis of the second detection result, and moving object A is taken as one of the moving objects finally determined to be contained in the video.
The collision-factor information corresponding to these moving objects is the information used to determine whether a collision has occurred between them, for example movement speed, movement direction, trajectory and object outline. At detection time the object outline can be represented by a detection box, expressed as a four-dimensional vector (x, y, w, h), where x and y are the coordinates of the detection box's top-left corner, w is its width and h is its height. The collision-factor information can be obtained by Kalman-filter tracking of the moving objects detected in the video.
In this embodiment, when the first detection result is supplemented on the basis of the second detection result and Kalman-filter tracking is performed, the first detection result and the second detection result can each first be tracked with a Kalman filter, determining the per-frame collision-factor observations and collision-factor predictions of a first moving-object set and of a second moving-object set. The first moving-object set is the set of moving objects in the first detection result; the second moving-object set is the set of moving objects in the second detection result. A collision-factor observation is the value actually detected in each video frame, while a collision-factor prediction is the value estimated for the current frame from the preceding video frames; the final collision-factor information is then determined from the prediction and the observation, combined with the noise of each.
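The following is a sketch of per-object Kalman tracking under a constant-velocity assumption; the state layout and noise levels are illustrative assumptions rather than values prescribed by the patent.

```python
# Illustrative sketch only: one Kalman track per moving object, with
# state (x, y, vx, vy) for the detection box's top-left corner.
import numpy as np

class KalmanTrack:
    def __init__(self, x, y, process_noise=1.0, measure_noise=4.0):
        self.s = np.array([x, y, 0.0, 0.0])               # state estimate
        self.P = np.eye(4) * 10.0                         # state covariance
        self.F = np.array([[1., 0., 1., 0.], [0., 1., 0., 1.],
                           [0., 0., 1., 0.], [0., 0., 0., 1.]])  # motion model
        self.H = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.]])  # observe (x, y)
        self.Q = np.eye(4) * process_noise
        self.R = np.eye(2) * measure_noise

    def predict(self):
        """Collision-factor prediction for the current frame."""
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]

    def update(self, z):
        """Fuse the observation z = (x, y); skipped when the frame is a miss."""
        innovation = np.asarray(z, float) - self.H @ self.s
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)          # Kalman gain
        self.s = self.s + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P
```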
Take the speed in the collision-factor information as an example: to obtain the speed of moving object A at frame k, the speed prediction at frame k is first derived from the speed value at frame k-1. If the earlier data suggest that the object moves at constant velocity, the speed at frame k-1 can serve as the speed prediction at frame k, say 23 km/h with a noise deviation of 5 km/h (Gaussian white noise may be assumed). The speed actually measured from the image of frame k and the preceding frames is the speed observation of moving object A, say 25 km/h with a noise deviation of 4 km/h.
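Under the assumption that the quoted noise deviations are standard deviations of Gaussian errors, the one-dimensional fusion of that example works out as follows:

```python
# Worked fusion of the speed example above (values from the text,
# Gaussian-error assumption added here).
pred, var_pred = 23.0, 5.0 ** 2      # predicted speed and its variance
obs,  var_obs  = 25.0, 4.0 ** 2      # observed speed and its variance
gain  = var_pred / (var_pred + var_obs)    # Kalman gain, about 0.61
fused = pred + gain * (obs - pred)         # about 24.2 km/h
var_f = (1 - gain) * var_pred              # about 9.8, i.e. std about 3.1 km/h
```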
In Kalman-filter tracking, the observation must be obtained by actual detection, while the prediction is computed from the preceding video frames; so if a miss occurs at some frame, that frame may have only a prediction and no observation. Accordingly, the first moving-object set and the second moving-object set, with their corresponding collision-factor observations and collision-factor predictions, can be matched against each other: if some moving object A in the first moving-object set has only a collision-factor prediction and no collision-factor observation, but, owing to the difference between the detection modes, the second moving-object set does contain a collision-factor observation for moving object A, that observation can serve as a supplement, yielding both the collision-factor observation and the collision-factor prediction of moving object A.
Furthermore, a moving object A may be missed entirely by the deep-learning technique, so that the first moving-object set does not contain moving object A and neither its collision-factor observation nor its collision-factor prediction can be obtained. If background subtraction does detect moving object A, the second moving-object set contains moving object A together with its collision-factor observation and collision-factor prediction, and the data of moving object A from the second moving-object set can then supplement the overall data, preventing misses and improving accuracy.
In either of the above cases, when this embodiment matches the first and second moving-object sets together with their corresponding collision-factor observations and collision-factor predictions, the Hungarian matching algorithm may be used so that the data obtained by the two detection modes are matched as fully as possible.
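A sketch of the matching step, using SciPy's implementation of the Hungarian algorithm; taking box-centre distance as the matching cost is an assumption, and any IoU- or distance-based cost could be substituted.

```python
# Illustrative sketch only: Hungarian matching between the two detection sets.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_detections(boxes_a, boxes_b, max_dist=50.0):
    """boxes_*: (N, 4) arrays of (x, y, w, h). Returns matched index pairs."""
    ca = boxes_a[:, :2] + boxes_a[:, 2:] / 2.0    # box centres, first set
    cb = boxes_b[:, :2] + boxes_b[:, 2:] / 2.0    # box centres, second set
    cost = np.linalg.norm(ca[:, None, :] - cb[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)      # Hungarian algorithm
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]
```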
After the moving objects contained in the video and their corresponding collision-factor observations and collision-factor predictions have been determined by matching, the collision-factor information of the moving objects can be computed from those observations and predictions with the Kalman-filter algorithm, for example finally computing the speed of moving object A at frame k by combining the covariances of the speed observation and the speed prediction.
Step S105: determine, from the collision-factor information of the moving objects, whether a collision has occurred between them. In real scenes, moving objects of all kinds exhibit a series of phenomena when they collide; once these phenomena are turned into rules expressed in terms of specific collision-factor information, they can serve as the criteria for deciding whether a collision has occurred.
Take collisions in traffic as an example. If during motion the overlap of the moving objects' detection boxes exceeds a certain threshold, a collision is assumed; at this point one should also check whether any detection box disappears (can no longer be detected), and if one or more parties disappear, a collision can be concluded. As another example, the trajectories of both parties can be further examined: if the movement directions are parallel, there may in fact be no collision and the two merely passed each other, as when a moving vehicle passes a pedestrian walking in the same direction and briefly occludes them; but if the angle between the movement directions is large, there is evident relative motion, and a collision is very likely to have occurred. Moreover, after a typical collision, surrounding moving objects such as other vehicles and pedestrians are affected by the accident and slow down or even stop for a while, so this phenomenon too can form part of the rules for deciding whether a collision has occurred.
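A sketch of such a rule, combining box overlap with the angle between motion directions; the thresholds are illustrative assumptions.

```python
# Illustrative sketch only: rule-based collision test on two tracked objects.
import math

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) detection boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    return inter / (a[2] * a[3] + b[2] * b[3] - inter)

def maybe_collision(box_a, box_b, vel_a, vel_b,
                    iou_thresh=0.3, angle_thresh_deg=20.0):
    if iou(box_a, box_b) < iou_thresh:
        return False
    # Near-parallel motion (e.g. a vehicle passing a pedestrian walking the
    # same way) is excluded; a large angle indicates real relative motion.
    norm = math.hypot(*vel_a) * math.hypot(*vel_b)
    if norm == 0.0:
        return False
    cos_angle = (vel_a[0] * vel_b[0] + vel_a[1] * vel_b[1]) / norm
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle >= angle_thresh_deg
```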
When a traffic accident happens, collisions between different traffic participants cause accidents of different severity. For example, an accident involving a pedestrian often causes injuries, so its severity generally exceeds that of an accident between vehicles; accidents involving large vehicles such as oil tankers, buses and trucks also tend to pose major safety hazards; whereas a low-speed collision between cars may be considered not serious. Therefore, after determining from the collision-factor information whether a collision has occurred between the moving objects, if a collision is determined, the severity of that collision can further be determined from the entity types and collision-factor information of the moving objects.
When determining the severity of the collision from the entity types and collision-factor information of the moving objects, the related video frames involving the accident can be extracted from the video, and the severity can then be determined only from the entity types and collision-factor information of the moving entities in those related frames. For example, if a video is 300 frames long and the collision occurs in frames 80 to 120, only frames 80 to 120 need be extracted as the related video frames for judging the severity of the collision.
In addition, in the collision detection method provided by some embodiments of the application, additional event information in the related video frames can also be detected with the deep-learning technique. This additional event information can be defined and trained according to the actual application scene: in traffic collision detection it may be smoke, fire and the like, and if the collision causes a fire or smoke, the severity is likely to be greater. The severity can therefore be determined by further combining the additional event information, that is, from the entity types of the moving objects, the collision-factor information of the moving entities in the related video frames, and the additional event information.
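An illustrative severity rule following the examples above (pedestrian involvement, large vehicles, fire or smoke); the tiers, labels and speed threshold are assumptions, not part of the claims.

```python
# Illustrative sketch only: rule-based severity grading of a detected collision.
LARGE_VEHICLES = {"bus", "truck", "oil tanker"}

def collision_severity(entity_types, speeds_kmh, extra_events=()):
    """entity_types/speeds_kmh describe the colliding parties; extra_events
    holds additional event labels such as 'fire' or 'smoke'."""
    if "fire" in extra_events or "smoke" in extra_events:
        return "severe"
    if "pedestrian" in entity_types or LARGE_VEHICLES & set(entity_types):
        return "severe"
    if all(t == "car" for t in entity_types) and max(speeds_kmh) < 30:
        return "minor"
    return "moderate"
```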
Fig. 2 shows a scheme for detecting collision-type traffic accidents using the scheme of the application, which includes the following processing steps:
S201: input each video frame in time order.
S202: detect moving objects with the deep-learning technique, which can identify their specific entity types, including cars, buses, bicycles, motorcycles, pedestrians, trucks, oil tankers and so on.
S203: in parallel with the deep-learning detection, separate foreground from background by background subtraction and identify the moving objects in the foreground.
S204: apply Kalman-filter tracking to each moving object detected in S202 and, at the same time, to the moving objects in the foreground separated in S203.
S205: match the moving objects detected by deep learning against the tracked objects detected by background subtraction with the Hungarian algorithm, and determine the final moving objects and their collision-factor information.
S206: apply rule-based judgment with collision-factor information such as the moving objects' speed, direction, trajectory and detection boxes; when the preset conditions are met, a collision is deemed to have occurred.
S207: judge the degree of the collision accident for the related video frames in which a collision is determined, deciding its severity, for instance whether a pedestrian is involved or whether a vehicle with major safety risks such as a bus or truck is involved.
S208: detect additional event information such as fire and smoke to assist in defining the severity.
Based on the same inventive concept, an embodiment of the application further provides a collision detection device; the method corresponding to the device is the method of the foregoing embodiments, and it solves the problem on a similar principle.
The collision detection device provided by the embodiments of the application can detect the moving objects in a video to judge whether a collision has occurred. It may be a user device, a network device, or a device formed by integrating a user device and a network device through a network, or it may be an application program running on such a device. The user device includes, but is not limited to, terminal devices such as computers, mobile phones and tablets; the network device includes, but is not limited to, a network host, a single network server, a set of multiple network servers, or a cloud-computing-based set of computers. Here, the cloud is composed of a large number of hosts or network servers based on cloud computing, where cloud computing is a form of distributed computing: a virtual machine composed of a group of loosely coupled computers.
Fig. 3 shows the structure of a collision detection device in an embodiment of the application. The device includes an input unit 310, a first detection unit 320, a second detection unit 330, a tracking-and-fusion unit 340 and a judgment unit 350. The input unit 310 obtains the video to be detected, a sequence of frames arranged in time order that may be captured by any kind of camera. For example, when the scheme is applied to detecting collision accidents in traffic, the videos to be detected may be captured by surveillance cameras installed along the road.
The first detection unit 320 detects the moving objects contained in the video with a deep-learning technique, as the first detection result. Before detection with the deep-learning technique, the detection model must be trained on a training set consisting of videos from a similar domain: for example, to detect collision accidents in traffic, where the moving objects involved generally include motor vehicles, pedestrians, bicycles and so on, the training videos must also contain these moving objects; and the more training samples the training set contains, the more accurate the trained model is at detection time.
Besides identifying the moving objects contained in the video, the deep-learning technique can also identify their entity types, classified according to the application domain: for collision-accident detection in traffic, the entity types may be divided into motor vehicles, non-motor vehicles, pedestrians and so on, or further subdivided into cars, buses, bicycles, motorcycles, pedestrians, trucks, oil tankers and so on. Thus, when detecting the moving objects contained in the video with the deep-learning technique, the first detection unit can also determine their entity types at the same time.
The second detection unit 330 separates the background and foreground of the video by background subtraction and detects the moving objects contained in the video in the foreground, as the second detection result. Its processing can run in parallel with that of the first detection unit; that is, the two detection modes can proceed simultaneously and produce their detection results independently. The background subtraction used in this embodiment may be any existing mature technique, for example separating foreground from background with a Gaussian-mixture background model and extracting the moving targets in the foreground as the moving objects of the second detection result.
Since the stability of the background image has a definite influence on the precision of background subtraction (the more fixed the background frame, the better background and foreground are distinguished), the input unit in this embodiment may obtain the video to be detected from a fixedly installed camera, so that the background is relatively fixed, improving the detection precision of background subtraction.
The tracking-and-fusion unit 340 fuses the first detection result and the second detection result to determine the moving objects contained in the video and their corresponding collision-factor information. The purpose of detecting the moving objects in the video in two different ways and then fusing the results is to let background subtraction supplement the detection result of the deep-learning technique, avoiding misses in the deep-learning detection that would impair the accuracy of subsequent processing. For example, if moving object A is not detected in the first detection result but is detected in the second detection result, the first detection result is supplemented on the basis of the second detection result, and moving object A is taken as one of the moving objects finally determined to be contained in the video.
The collision-factor information corresponding to these moving objects is the information used to determine whether a collision has occurred between them, for example movement speed, movement direction, trajectory and object outline. At detection time the object outline can be represented by a detection box, expressed as a four-dimensional vector (x, y, w, h), where x and y are the coordinates of the detection box's top-left corner, w is its width and h is its height. The collision-factor information can be obtained by Kalman-filter tracking of the moving objects detected in the video.
In this embodiment, when the tracking-and-fusion unit supplements the first detection result on the basis of the second detection result and performs Kalman-filter tracking, it can first track the first detection result and the second detection result separately with a Kalman filter, determining the per-frame collision-factor observations and collision-factor predictions of a first moving-object set and of a second moving-object set. The first moving-object set is the set of moving objects in the first detection result; the second moving-object set is the set of moving objects in the second detection result. A collision-factor observation is the value actually detected in each video frame, while a collision-factor prediction is the value estimated for the current frame from the preceding video frames; the final collision-factor information is then determined from the prediction and the observation, combined with the noise of each.
Take the speed in the collision-factor information as an example: to obtain the speed of moving object A at frame k, the speed prediction at frame k is first derived from the speed value at frame k-1. If the earlier data suggest that the object moves at constant velocity, the speed at frame k-1 can serve as the speed prediction at frame k, say 23 km/h with a noise deviation of 5 km/h (Gaussian white noise may be assumed); the speed actually measured from the image of frame k and the preceding frames is the speed observation of moving object A, say 25 km/h with a noise deviation of 4 km/h, and the two are fused as in the worked example above.
In Kalman-filter tracking, the observation must be obtained by actual detection, while the prediction is computed from the preceding video frames; so if a miss occurs at some frame, that frame may have only a prediction and no observation. Accordingly, the first and second moving-object sets, with their corresponding collision-factor observations and collision-factor predictions, can be matched against each other: if some moving object A in the first moving-object set has only a collision-factor prediction and no collision-factor observation, but, owing to the difference between the detection modes, the second moving-object set does contain a collision-factor observation for moving object A, that observation can serve as a supplement, yielding both the collision-factor observation and the collision-factor prediction of moving object A.
Furthermore, a moving object A may be missed entirely by the deep-learning technique, so that the first moving-object set does not contain moving object A and neither its collision-factor observation nor its collision-factor prediction can be obtained. If background subtraction does detect moving object A, the second moving-object set contains moving object A together with its collision-factor observation and collision-factor prediction, and the data of moving object A from the second moving-object set can then supplement the overall data, preventing misses and improving accuracy.
In either of the above cases, when the tracking-and-fusion unit in this embodiment matches the first and second moving-object sets together with their corresponding collision-factor observations and collision-factor predictions, the Hungarian matching algorithm may be used so that the data obtained by the two detection modes are matched as fully as possible.
After the moving objects contained in the video and their corresponding collision-factor observations and collision-factor predictions have been determined by matching, the collision-factor information of the moving objects can be computed from those observations and predictions with the Kalman-filter algorithm, for example finally computing the speed of moving object A at frame k by combining the covariances of the speed observation and the speed prediction.
The judgment unit 350 determines, from the collision-factor information of the moving objects, whether a collision has occurred between them. In real scenes, moving objects of all kinds exhibit a series of phenomena when they collide; once these phenomena are turned into rules expressed in terms of specific collision-factor information, they can serve as the criteria for deciding whether a collision has occurred.
Take collisions in traffic as an example. If during motion the overlap of the moving objects' detection boxes exceeds a certain threshold, a collision is assumed; at this point one should also check whether any detection box disappears (can no longer be detected), and if one or more parties disappear, a collision can be concluded. The trajectories of both parties can also be further examined: if the movement directions are parallel, there may in fact be no collision and the two merely passed each other, as when a moving vehicle passes a pedestrian walking in the same direction and briefly occludes them; but if the angle between the movement directions is large, there is evident relative motion, and a collision is very likely to have occurred. Moreover, after a typical collision, surrounding moving objects such as other vehicles and pedestrians are affected by the accident and slow down or even stop for a while, so this phenomenon too can form part of the rules for deciding whether a collision has occurred.
When a traffic accident happens, collisions between different traffic participants cause accidents of different severity. For example, an accident involving a pedestrian often causes injuries, so its severity generally exceeds that of an accident between vehicles; accidents involving large vehicles such as oil tankers, buses and trucks also tend to pose major safety hazards; whereas a low-speed collision between cars may be considered not serious. Therefore, after determining from the collision-factor information whether a collision has occurred between the moving objects, if a collision is determined, the judgment unit can further determine the severity of that collision from the entity types and collision-factor information of the moving objects.
When determining the severity of the collision from the entity types and collision-factor information of the moving objects, the judgment unit can extract from the video the related video frames involving the accident, and then determine the severity only from the entity types and collision-factor information of the moving entities in those related frames. For example, if a video is 300 frames long and the collision occurs in frames 80 to 120, only frames 80 to 120 need be extracted as the related video frames for judging the severity of the collision.
In addition, in the collision detection device provided by some embodiments of the application, the first detection unit can also detect additional event information in the related video frames with the deep-learning technique. This additional event information can be defined and trained according to the actual application scene: in traffic collision detection it may be smoke, fire and the like, and if the collision causes a fire or smoke, the severity is likely to be greater. The judgment unit can therefore further combine the additional event information when determining the severity, that is, determine the severity of the collision from the entity types of the moving objects, the collision-factor information of the moving entities in the related video frames, and the additional event information.
In conclusion in scheme provided by the present application, after obtaining video to be detected, examined based on deep learning technology The moving object for including in the video is surveyed, as the first testing result, while technology is wiped out based on background and detaches the video In background and foreground, and the moving object for including in the video is detected in the foreground, as the second testing result, so Integrated treatment is carried out according to first testing result and second testing result afterwards, determines the movement for including in the video Object and its corresponding collision factor information, and then the moving object is determined according to the collision factor information of the moving object Between whether collide.Since deep learning technology is utilized in the first testing result, accuracy of detection is higher, and with the second inspection It surveys the first testing result of result pair to supplement, further improves the precision of testing result so that the moving object determined with this Body and its corresponding collision factor information are more accurate, which thereby enhance the precision of collision detection.
In addition, the entity type for the moving object that this programme can also clearly be collided using deep learning technology, by This can further judge the severity of collision accident, and information definitely is provided for collision detection.
Furthermore, part of the application may be embodied as a computer program product, such as computer program instructions which, when executed by a computer, can invoke or provide the methods and/or technical schemes of the application through the computer's operation. The program instructions invoking the methods of the application may be stored in fixed or removable recording media, transmitted via broadcast or other signal-carrying media, and/or stored in the working memory of a computer device running according to the program instructions. Here, one embodiment of the application includes a device as shown in Fig. 4, comprising one or more machine-readable media 410 storing machine-readable instructions and a processor 420 for executing the machine-readable instructions, wherein, when the machine-readable instructions are executed by the processor, the device performs the methods and/or technical schemes of the foregoing embodiments of the application.
It should be noted that the application may be implemented in software and/or a combination of software and hardware, for example using an application-specific integrated circuit (ASIC), a general-purpose computer or any other similar hardware device. In one embodiment, the software program of the application may be executed by a processor to realize the steps or functions described above. Likewise, the software program of the application (including related data structures) may be stored in a computer-readable recording medium such as RAM, a magnetic or optical drive, a floppy disk or a similar device. In addition, some steps or functions of the application may be implemented in hardware, for example as circuits that cooperate with a processor to perform each step or function.
It is obvious to those skilled in the art that the application is not limited to the details of the above exemplary embodiments and can be realized in other specific forms without departing from its spirit or essential characteristics. The embodiments should therefore be regarded in every respect as illustrative and not restrictive, the scope of the application being defined by the appended claims rather than by the above description, and all changes falling within the meaning and scope of equivalents of the claims are intended to be embraced therein. No reference sign in a claim should be construed as limiting the claim concerned. Moreover, the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or devices stated in a device claim may also be realized by a single unit or device through software or hardware. Words such as "first" and "second" denote names and do not indicate any particular order.

Claims (17)

1. A collision detection method, wherein the method comprises:
obtaining a video to be detected;
detecting the moving objects contained in the video with a deep-learning technique, as a first detection result;
separating the background and foreground of the video by background subtraction, and detecting the moving objects contained in the video in the foreground, as a second detection result;
fusing the first detection result and the second detection result to determine the moving objects contained in the video and their corresponding collision-factor information; and
determining, from the collision-factor information of the moving objects, whether a collision has occurred between the moving objects.
2. The method according to claim 1, wherein fusing the first detection result and the second detection result to determine the moving objects contained in the video and their corresponding collision-factor information comprises:
supplementing the first detection result on the basis of the second detection result, and performing Kalman-filter tracking, to determine the moving objects contained in the video and their corresponding collision-factor information.
3. The method according to claim 2, wherein supplementing the first detection result on the basis of the second detection result and performing Kalman-filter tracking comprises:
performing Kalman-filter tracking on the first detection result and the second detection result respectively, to determine the per-frame collision-factor observations and collision-factor predictions of a first moving-object set and of a second moving-object set, wherein the first moving-object set is the set of moving objects in the first detection result and the second moving-object set is the set of moving objects in the second detection result;
matching the first moving-object set and the second moving-object set, together with the corresponding collision-factor observations and collision-factor predictions, to determine the moving objects contained in the video and their corresponding collision-factor observations and collision-factor predictions; and
computing, with the Kalman-filter algorithm, the collision-factor information of the moving objects from the collision-factor observations and collision-factor predictions corresponding to the moving objects contained in the video.
4. The method according to claim 1, wherein detecting the moving objects contained in the video with the deep-learning technique further comprises:
determining the entity types of the moving objects.
5. The method according to claim 4, wherein after determining from the collision-factor information of the moving objects whether a collision has occurred between the moving objects, the method further comprises:
if a collision is determined, determining the severity of the collision from the entity types and collision-factor information of the moving objects.
6. The method according to claim 5, wherein determining the severity of the collision from the entity types and collision-factor information of the moving objects if a collision is determined comprises:
if a collision is determined, extracting from the video the related video frames involving the accident; and
determining the severity of the collision from the entity types of the moving objects and the collision-factor information of the moving entities in the related video frames.
7. The method according to claim 6, wherein the method further comprises:
detecting additional event information in the related video frames with the deep-learning technique;
and wherein determining the severity of the collision from the entity types of the moving objects and the collision-factor information of the moving entities in the related video frames comprises:
determining the severity of the collision from the entity types of the moving objects, the collision-factor information of the moving entities in the related video frames, and the additional event information.
8. The method according to claim 1, wherein obtaining the video to be detected comprises:
obtaining a video to be detected captured by a fixedly installed camera.
9. A collision detection device, wherein the device comprises:
an input unit for obtaining a video to be detected;
a first detection unit for detecting the moving objects contained in the video with a deep-learning technique, as a first detection result;
a second detection unit for separating the background and foreground of the video by background subtraction and detecting the moving objects contained in the video in the foreground, as a second detection result;
a tracking-and-fusion unit for fusing the first detection result and the second detection result to determine the moving objects contained in the video and their corresponding collision-factor information; and
a judgment unit for determining, from the collision-factor information of the moving objects, whether a collision has occurred between the moving objects.
10. The device according to claim 9, wherein the tracking-and-fusion unit supplements the first detection result on the basis of the second detection result and performs Kalman-filter tracking to determine the moving objects contained in the video and their corresponding collision-factor information.
11. The device according to claim 10, wherein the tracking-and-fusion unit performs Kalman-filter tracking on the first detection result and the second detection result respectively, to determine the per-frame collision-factor observations and collision-factor predictions of a first moving-object set and of a second moving-object set, wherein the first moving-object set is the set of moving objects in the first detection result and the second moving-object set is the set of moving objects in the second detection result; matches the first moving-object set and the second moving-object set, together with the corresponding collision-factor observations and collision-factor predictions, to determine the moving objects contained in the video and their corresponding collision-factor observations and collision-factor predictions; and computes, with the Kalman-filter algorithm, the collision-factor information of the moving objects from the collision-factor observations and collision-factor predictions corresponding to the moving objects contained in the video.
12. The device according to claim 9, wherein the first detection unit is further configured to determine the entity types of the moving objects when detecting the moving objects contained in the video with the deep-learning technique.
13. The device according to claim 12, wherein the judgment unit is further configured, when a collision is determined, to determine the severity of the collision from the entity types and collision-factor information of the moving objects.
14. The device according to claim 13, wherein the judgment unit is configured, when a collision is determined, to extract from the video the related video frames involving the accident, and to determine the severity of the collision from the entity types of the moving objects and the collision-factor information of the moving entities in the related video frames.
15. The device according to claim 14, wherein the first detection unit is further configured to detect additional event information in the related video frames with the deep-learning technique; and
the judgment unit is configured to determine the severity of the collision from the entity types of the moving objects, the collision-factor information of the moving entities in the related video frames, and the additional event information.
16. The device according to claim 9, wherein the input unit is configured to obtain a video to be detected captured by a fixedly installed camera.
17. A collision detection device, wherein the device comprises:
a processor; and
one or more machine-readable media storing machine-readable instructions which, when executed by the processor, cause the device to perform the method of any one of claims 1 to 8.
CN201810106578.3A 2018-02-02 2018-02-02 Collision detection method and device Active CN108458691B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810106578.3A CN108458691B (en) 2018-02-02 2018-02-02 Collision detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810106578.3A CN108458691B (en) 2018-02-02 2018-02-02 Collision detection method and device

Publications (2)

Publication Number Publication Date
CN108458691A 2018-08-28
CN108458691B (en) 2019-04-19

Family

ID=63239296

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810106578.3A Active CN108458691B (en) 2018-02-02 2018-02-02 A kind of collision checking method and equipment

Country Status (1)

Country Link
CN (1) CN108458691B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276959A (en) * 2019-06-26 2019-09-24 奇瑞汽车股份有限公司 Processing method, device and the storage medium of traffic accident
CN112446358A (en) * 2020-12-15 2021-03-05 北京京航计算通讯研究所 Target detection method based on video image recognition technology
CN112507913A (en) * 2020-12-15 2021-03-16 北京京航计算通讯研究所 Target detection system based on video image recognition technology
CN112651377A (en) * 2021-01-05 2021-04-13 河北建筑工程学院 Ice and snow movement accident detection method and device and terminal equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226891A (en) * 2013-03-26 2013-07-31 中山大学 Video-based vehicle collision accident detection method and system
CN103258432A (en) * 2013-04-19 2013-08-21 西安交通大学 Traffic accident automatic identification processing method and system based on videos
CN106127114A (en) * 2016-06-16 2016-11-16 北京数智源科技股份有限公司 Intelligent video analysis method


Also Published As

Publication number Publication date
CN108458691B (en) 2019-04-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant