CN110910415A - Parabolic detection method, device, server and computer readable medium

Parabolic detection method, device, server and computer readable medium

Info

Publication number
CN110910415A
Authority
CN
China
Prior art keywords
target object
target
determining
video
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911189562.4A
Other languages
Chinese (zh)
Inventor
周学武
张韩宾
张韵东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Zhongxing Micro Artificial Intelligence Chip Technology Co Ltd
Original Assignee
Chongqing Zhongxing Micro Artificial Intelligence Chip Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Zhongxing Micro Artificial Intelligence Chip Technology Co Ltd filed Critical Chongqing Zhongxing Micro Artificial Intelligence Chip Technology Co Ltd
Priority to CN201911189562.4A
Publication of CN110910415A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

Embodiments of the present disclosure disclose a parabola detection method, apparatus, server and computer readable medium. One embodiment of the method comprises: detecting whether a target object is present in a target video; in response to determining that the target object is present, analyzing the target video to determine the moving distance and the moving time of the target object; determining whether the target object is a free-fall body according to the moving distance and the moving time; and, in response to determining that it is a free-fall body, storing at least one video frame of the target video associated with the target object. This embodiment enables a machine to determine whether the target object is a high-altitude thrown object (parabola), and stores the video of the target object, which facilitates later evidence collection.

Description

Parabolic detection method, device, server and computer readable medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a method, an apparatus, a server, and a computer-readable medium for detecting a parabola.
Background
High-altitude thrown objects cause great harm to society, so the problem of high-altitude parabolas has long received attention. Detection still relies on traditional approaches: 1. A falling object is identified manually, or by using ultrasonic waves, to determine whether an object has been thrown. 2. A visual detection scheme is used to determine whether an object is a high-altitude parabola. The first approach cannot record the event, so evidence cannot be accurately obtained afterwards. The second approach adopts target detection based on conventional image processing, and therefore suffers from slow detection speed and low detection accuracy.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a parabola detection method, apparatus, server and computer readable medium to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a method of detecting a parabola, comprising:
detecting whether a target object is present in a target video; in response to determining that the target object is present, analyzing the target video to determine the moving distance and the moving time of the target object; determining whether the target object is a free-fall body according to the moving distance and the moving time; and, in response to determining that it is a free-fall body, storing at least one video frame of the target video associated with the target object.
In a second aspect, some embodiments of the present disclosure provide an apparatus for parabola detection, comprising: a target detection module configured to detect whether a target object is present in a target video; a target calculation module configured to analyze the target video in response to determining that the target object is present, and to determine the moving distance and the moving time of the target object; a target determination module configured to determine whether the target object is a free-fall body according to the moving distance and the moving time; and a target storage module configured to store at least one video frame of the target video associated with the target object in response to determining that it is a free-fall body.
In a third aspect, some embodiments of the present disclosure provide an apparatus for parabola detection, comprising: a monitoring screen configured to display the monitoring picture in real time; a camera configured to collect front-end video data; a server comprising the target detection module, the target calculation module, the target determination module and the target storage module; a wireless router serving as communication equipment connecting the camera and the server; and a network switch serving as communication equipment connecting the monitoring screen and the server.
In a fourth aspect, some embodiments of the present disclosure provide a server comprising:
one or more processors; and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method of detecting a parabola as disclosed in the first aspect.
In a fifth aspect, some embodiments of the present disclosure provide a computer readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method for detecting a parabola disclosed in the first aspect.
One of the above-described embodiments of the present disclosure has the following advantageous effects: a target object is detected, whether the target object is a free-fall body is determined from its moving distance and moving time, and the video of a target object determined to be a free-fall body is stored. This allows a machine to determine whether an object is a high-altitude parabola, and, because the video of the target object is stored, it also facilitates later evidence collection. In addition, optionally, the result of the machine's determination can be used to issue an early warning in advance, and adopting a single-shot detector model to detect the target object further helps to improve the speed and the accuracy of target detection.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is an architectural diagram of an exemplary system in which some embodiments of the present disclosure may be applied;
fig. 2 is a flow diagram of some embodiments of a method of parabolic detection according to the present disclosure;
fig. 3 is a schematic structural diagram of a parabolic detection apparatus according to some embodiments of the present disclosure;
FIG. 4 is a schematic structural diagram of a server according to some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art will understand that they should be read as "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present application will be described in detail below with reference to the embodiments and the accompanying drawings.
Fig. 1 illustrates an exemplary system architecture 100 to which the parabolic detection method of some embodiments of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include a wireless router 101, a plurality of cameras 102, a network switch 103, a server 104, and a monitoring screen 105. The wireless router 101 provides the medium for communication links between the plurality of cameras 102 and the network switch 103, and the network switch 103 provides the medium for communication links between the wireless router 101, the server 104, and the monitoring screen 105. The communication links may be of various connection types, such as wired links, wireless links, or fiber optic cables.
The server 104 may be a server that provides various services, such as a server that processes images captured by the cameras. It should be noted that the parabola detection method provided by the embodiments of the present disclosure may be executed by the camera 102 or by the server 104; this is not particularly limited herein.
The server 104 may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module; this is not particularly limited herein.
Video information is collected by the cameras 102 and sent to the server 104 for processing. The server 104 performs the corresponding calculations on the video information to determine whether an object shown in the video is a free-fall body; if it is determined to be a free-fall body, the relevant information is sent to the monitoring screen and early warning information is issued.
It should be understood that the numbers of cameras, monitoring screens, network switches, wireless routers, and servers in fig. 1 are merely illustrative. There may be any number of cameras, monitoring screens, network switches, wireless routers, and servers, as required by the implementation.
With continued reference to fig. 2, a flow 200 of some embodiments of a method of parabolic detection according to the present disclosure is shown. The parabola detection method comprises the following steps:
Step 201: detecting whether a target object exists in a target video.
In some embodiments, the execution body may be a camera. The target video may be video information collected by the execution body, either in real time or retrieved from storage. The target object is an object thrown through the air. As an example, the execution body may detect whether a glass bottle has been thrown from a certain high building.
In an alternative implementation of some embodiments of the present disclosure, in step 201 a single-shot detector model may be used to detect the target object in the target video. Compared with a standard single-shot detector model, the base network of this single-shot detector model is replaced with a resnet18 model; the resnet18 model comprises 17 convolutional layers and 1 fully connected layer, and the number of channels of each layer in the resnet18 model is set to 32.
The SSD (Single Shot MultiBox Detector) approach is based on a feed-forward convolutional network that generates a set of fixed-size rectangular boxes, scores the object classes present in these boxes, and then produces the final detection result using a non-maximum suppression method. The early network layers follow a standard architecture used for high-quality image classification, referred to here as the base network. resnet18 is a deep neural network whose basic structure has 18 layers, consisting of 17 convolutional layers plus 1 fully connected layer, although the number of convolutional layers may be adjusted by design. The main purpose of resnet18 is to address the performance degradation that appears as network depth increases; because the model is small, replacing the base network of the single-shot detector model with it increases the operation speed without reducing detection accuracy. The number of channels in each layer of resnet18 is normally 64; to increase processing speed, the number of channels is halved to 32, so the corresponding computation parameters are also halved. SSD model training generally takes the following steps: 1. A picture is input into a pre-trained classification network to obtain feature maps of different sizes; the conventional VGG16 network is modified by converting its FC6 and FC7 layers into convolutional layers and removing all Dropout layers and the FC8 layer. 2. The feature maps of the Conv4_3, Conv7, Conv8_2, Conv9_2, Conv10_2 and Conv11_2 layers are extracted, and detection results at 6 different scales are constructed at each point of these feature map layers; the detection results are then detected and classified separately to generate a plurality of detection results. 3. The detection results obtained from the different feature maps are combined, and partially overlapping or incorrect detection results are suppressed by the non-maximum suppression (NMS) method to generate the final detection result set. In the present disclosure, however, in order to increase the detection speed, only the feature maps of the Conv7 and Conv8_2 convolutional layers are extracted in step 2. Non-maximum suppression (NMS) suppresses elements that are not local maxima and can be understood as a local maximum search, where "local" refers to a neighborhood.
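As an illustration of the modification described above, the following is a minimal sketch, not the patented implementation, of a resnet18-style base network with 17 convolutional layers, the fully connected layer dropped, and every stage reduced to 32 channels, feeding two lightweight SSD-style prediction heads. PyTorch is assumed; the class and variable names, the 300x300 input size, the anchor and class counts, and the choice of which two feature maps stand in for the Conv7 and Conv8_2 layers are illustrative assumptions rather than details taken from this disclosure.

    import torch
    import torch.nn as nn

    class BasicBlock(nn.Module):
        """resnet18-style residual block: two 3x3 convolutions plus a shortcut."""
        def __init__(self, in_ch, out_ch, stride=1):
            super().__init__()
            self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(out_ch)
            self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(out_ch)
            self.relu = nn.ReLU(inplace=True)
            self.down = None
            if stride != 1 or in_ch != out_ch:
                self.down = nn.Sequential(
                    nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                    nn.BatchNorm2d(out_ch))

        def forward(self, x):
            identity = x if self.down is None else self.down(x)
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(out + identity)

    class SlimResNet18Backbone(nn.Module):
        """Base network: 1 stem convolution + 8 residual blocks (16 convolutions)
        = 17 convolutional layers (the 1x1 shortcut convolutions are not counted,
        following the usual resnet18 convention). The fully connected layer is
        dropped for detection and every stage uses 32 channels."""
        def __init__(self, ch=32):
            super().__init__()
            self.stem = nn.Sequential(
                nn.Conv2d(3, ch, 7, stride=2, padding=3, bias=False),
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.MaxPool2d(3, stride=2, padding=1))
            self.layer1 = nn.Sequential(BasicBlock(ch, ch), BasicBlock(ch, ch))
            self.layer2 = nn.Sequential(BasicBlock(ch, ch, 2), BasicBlock(ch, ch))
            self.layer3 = nn.Sequential(BasicBlock(ch, ch, 2), BasicBlock(ch, ch))
            self.layer4 = nn.Sequential(BasicBlock(ch, ch, 2), BasicBlock(ch, ch))

        def forward(self, x):
            x = self.stem(x)
            x = self.layer1(x)
            x = self.layer2(x)
            f1 = self.layer3(x)   # finer feature map (plays the role of Conv7)
            f2 = self.layer4(f1)  # coarser feature map (plays the role of Conv8_2)
            return f1, f2

    class TinySSDHead(nn.Module):
        """Per-feature-map prediction head: box offsets and class scores for
        num_anchors default boxes at every spatial location."""
        def __init__(self, ch=32, num_classes=2, num_anchors=4):  # 2 classes: background + thrown object (assumption)
            super().__init__()
            self.loc = nn.Conv2d(ch, num_anchors * 4, 3, padding=1)
            self.cls = nn.Conv2d(ch, num_anchors * num_classes, 3, padding=1)

        def forward(self, f):
            return self.loc(f), self.cls(f)

    # Rough usage: one head per extracted feature map, mirroring the two-scale design.
    backbone = SlimResNet18Backbone()
    heads = nn.ModuleList([TinySSDHead(), TinySSDHead()])
    f1, f2 = backbone(torch.randn(1, 3, 300, 300))
    preds = [head(f) for head, f in zip(heads, (f1, f2))]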
In some optional implementations of some embodiments of the present disclosure, a set of fixed-size rectangular boxes is generated using the feed-forward convolutional network included in the single-shot detector model, and the object classes of objects of the target video present in these rectangular boxes are scored and/or their probabilities determined; each rectangular box in the set is sorted according to the score and/or the probability; based on this ordering, the determining step is executed: starting from the rectangular box with the highest probability and/or the highest score, it is determined for each other rectangular box in the set whether its degree of overlap with that box is greater than a set threshold; the rectangular boxes whose degree of overlap is greater than the threshold are deleted, and the rectangular box with the highest probability and/or the highest score is marked and placed into the detection result set; it is then determined whether the set of rectangular boxes is empty; in response to determining that the set is empty, the detection result set is taken as the final detection result set.
In response to determining that the set of rectangular boxes is not empty, continuing to perform the determining step.
The sorting may be performed according to the score and/or the probability of the object class of the target object in each rectangular box, where the score or probability may be obtained from a classifier such as a support vector machine (SVM). For example, when several rectangular boxes have the same score, those boxes may be sorted by probability; if several rectangular boxes have the same probability, they may be sorted by score; and when no rectangular boxes share the same score or probability, the boxes may be sorted by the score or the probability alone. The degree of overlap (IoU, intersection over union) is the ratio of the area of the overlapping region to the area of the union of the two boxes, and the overlap threshold is generally set to, for example, 0.3 to 0.5.
As an example, the non-maximum suppression (NMS) approach may be used to obtain the final detection result set as follows: a horse is to be located in the video, a pile of rectangular boxes is found on the horse image, and it must be decided which rectangular boxes are useless. First, 6 rectangular boxes form the rectangular box set and are sorted according to the class probability given by the classifier; in order of increasing probability of belonging to the horse they are A, B, C, D, E, F. (1) Starting from rectangular box F, which has the highest probability, it is determined whether the degree of overlap (IoU) of each of A-E with F is greater than a set threshold. (2) If the degree of overlap of B and D with F exceeds the threshold, B and D are deleted from the rectangular box set; F is marked as the first rectangular box and placed into the detection result set. (3) From the remaining boxes A, C, E, the box with the highest probability, E, is selected, and it is determined whether the degree of overlap of A and C with E is greater than the set threshold; if so, A and C are deleted from the rectangular box set, and E is marked as the second rectangular box we keep and placed into the detection result set. This is repeated until the rectangular box set is empty; the detection result set is then the final detection result set we are looking for.
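The suppression procedure walked through above can be captured in a short sketch. The following Python function is a generic NMS implementation under the assumptions stated in the description (boxes sorted by score or probability, an overlap threshold in the 0.3-0.5 range); the box format, the function names, and the six example boxes standing in for A-F are illustrative and not taken from this disclosure.

    def iou(a, b):
        """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    def nms(boxes, scores, threshold=0.4):
        """Return the indices of the boxes kept as the final detection result set."""
        order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
        kept = []
        while order:                      # repeat until the rectangular-box set is empty
            best = order.pop(0)           # box with the highest score/probability
            kept.append(best)             # mark it and place it in the result set
            order = [i for i in order     # delete boxes that overlap it too much
                     if iou(boxes[best], boxes[i]) <= threshold]
        return kept

    # Example in the spirit of the horse walk-through: six candidate boxes.
    boxes = [(10, 10, 100, 100), (12, 12, 98, 96), (11, 9, 99, 101),
             (200, 50, 260, 120), (205, 55, 258, 118), (400, 30, 450, 90)]
    scores = [0.35, 0.45, 0.95, 0.40, 0.90, 0.80]
    print(nms(boxes, scores))  # indices of the boxes that survive suppression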
Step 202: in response to determining that the target object is present, analyzing the target video to determine the moving distance and the moving time of the target object.
In some embodiments, based on the target object obtained in step 201, the execution body (e.g., the server shown in fig. 1) analyzes the target video, for example by counting video frames to determine the moving time, and by using the position information of the target object in the video to calculate its moving distance.
As an alternative in some embodiments, determining the moving distance of the target object comprises the following steps: recording the coordinates of the target object; comparing the coordinates of the target object between a previous frame and a subsequent frame; calculating the distance difference between the coordinates of the previous frame and the coordinates of the subsequent frame as the moving distance; and calculating the time difference between the previous and subsequent frames from the number of frames as the moving time. Here, the previous frame is the first frame in which the target object is detected, and the subsequent frame is any frame selected from the frames after the first frame in which the target object appears. As an example, the tenth frame is selected as the subsequent frame, with coordinates (0, 0.7), while the first frame has coordinates (0, 0.1); the moving distance of the target object can then be obtained by converting the coordinate difference between the frames, here 0.6 m. As an example, if the video is recorded and detected at a stable rate of 40 frames per second, the time difference between two adjacent frames is 0.025 seconds; with the tenth frame again selected as the subsequent frame, the time difference between the previous and subsequent frames is 0.25 seconds.
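A minimal sketch of this distance and time calculation is given below. As in the example, it assumes that the recorded frame coordinates can be converted to metres and that the video runs at a stable 40 frames per second; the function name, the unit-conversion factor and the use of the Euclidean distance between the two coordinates are illustrative assumptions.

    def movement_between_frames(coord_first, coord_later, frame_gap, fps=40.0,
                                metres_per_unit=1.0):
        """Return (moving_distance_m, moving_time_s) between the first frame in
        which the target object is detected and a later frame showing it."""
        dx = (coord_later[0] - coord_first[0]) * metres_per_unit
        dy = (coord_later[1] - coord_first[1]) * metres_per_unit
        distance = (dx ** 2 + dy ** 2) ** 0.5   # coordinate difference converted to metres
        time = frame_gap / fps                  # e.g. 10 frames at 40 fps -> 0.25 s
        return distance, time

    # Example from the description: first frame at (0, 0.1), tenth frame at (0, 0.7).
    print(movement_between_frames((0, 0.1), (0, 0.7), frame_gap=10))  # ~0.6 m, 0.25 s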
Step 203: determining whether the target object is a free-fall body according to the moving distance and the moving time.
In some embodiments, the velocity or acceleration of the target object is calculated from the moving distance and the moving time obtained in step 202, and from these it is determined whether the motion state of the target object is free fall. As an example, from the moving distance and the moving time of a target object (e.g., a glass bottle) thrown from a building in the video information, the velocity or acceleration of the glass bottle is calculated to determine whether it is a free-fall body.
As an alternative in some embodiments, determining whether the target object is a free-fall body comprises: calculating the acceleration of the target object from the moving distance and the moving time; setting an acceleration threshold; and determining that the target object is a free-fall body when the acceleration is greater than the threshold. As an example, the acceleration of a particular object or of a work tool in a preset scene is typically about 10 m/s², where a particular object or a work tool in a preset scene is an item that has been delimited or determined in advance and may be trained as sample data. The acceleration threshold can therefore be set to 8 m/s²; if the acceleration calculated from the offset distance and the time difference between the previous and subsequent frames of a dropped glass bottle is, for example, 9 m/s² or 12 m/s², which is greater than the threshold, the glass bottle is determined to be a free-fall body.
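The acceleration-threshold check can be sketched as follows. Because the description does not state how the acceleration is derived from a single distance/time pair, the sketch assumes the target object is approximately at rest in the first frame in which it is detected, so that d = a·t²/2 and a ≈ 2d/t²; that assumption, the function name and the example numbers are ours, while the 8 m/s² threshold follows the example above.

    def is_free_fall(moving_distance_m, moving_time_s, accel_threshold=8.0):
        """Return True if the estimated acceleration exceeds the configured threshold."""
        if moving_time_s <= 0:
            return False
        # Rest-at-first-detection assumption: d = a * t^2 / 2  =>  a = 2 * d / t^2.
        acceleration = 2.0 * moving_distance_m / (moving_time_s ** 2)
        return acceleration > accel_threshold

    # Example: an object that falls 0.31 m in 0.25 s has a ~= 9.9 m/s^2 > 8 m/s^2.
    print(is_free_fall(0.31, 0.25))  # True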
Step 204: in response to determining that it is a free-fall body, storing at least one video frame of the target video associated with the target object.
In some embodiments, the video frame rate is the number of picture frames transmitted per second, usually denoted FPS (Frames Per Second). A video frame in which the target object appears is an associated video frame, and the execution body may store at least one such video frame. It may be stored in the server or in another associated storage device, such as a hard disk, cloud storage, or a USB disk. As an example, only the video frames of the video that contain the glass bottle are stored in the storage device, and video frames without the glass bottle are not stored.
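A minimal sketch of this storage step is shown below. OpenCV is used for video input/output as an assumed choice; the output directory layout, the file naming and the function name are illustrative, and in practice the associated frames could equally be written to cloud storage or a USB disk as mentioned above.

    import os
    import cv2

    def store_associated_frames(video_path, detected_frame_indices, out_dir="evidence"):
        """Save every frame whose index was reported as containing the target object."""
        os.makedirs(out_dir, exist_ok=True)
        cap = cv2.VideoCapture(video_path)
        wanted = set(detected_frame_indices)
        index = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index in wanted:   # frame associated with the target object
                cv2.imwrite(os.path.join(out_dir, f"frame_{index:06d}.jpg"), frame)
            index += 1
        cap.release()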
In some optional implementations of this embodiment, in response to determining that the target object is a free-fall body, a communicatively connected device is controlled to perform an early warning operation. As an example, when the detected video information shows a falling object, a corresponding alarm device is controlled to send an early warning prompt to security personnel at the monitoring screen, for example by sending information to the monitoring screen, sounding an alarm bell, or flashing an alarm light. The warning modes include, but are not limited to, the above.
In some embodiments of the present disclosure, a target object is detected, whether the target object is a free-fall body is determined according to its moving distance and moving time, and the video of a target object determined to be a free-fall body is stored. This allows a machine to determine whether an object is a high-altitude parabola, and, because the video of the target object is stored, it also facilitates later evidence collection. In addition, optionally, the result of the machine's determination can be used to issue an early warning; optionally, a single-shot detector model is used to detect the target object, which improves the speed and the accuracy of target detection.
With further reference to fig. 3, as an implementation of the methods illustrated in the above figures, the present disclosure provides some embodiments of a parabola detection apparatus. These apparatus embodiments correspond to the method embodiments illustrated in fig. 2, and the apparatus may be applied in various electronic devices.
As shown in fig. 3, the parabola detection apparatus 300 of some embodiments includes: a target detection module 301 configured to detect whether a target object is present in a target video; a target calculation module 302 configured to analyze the target video in response to determining that the target object is present, and to determine the moving distance and the moving time of the target object; a target determination module 303 configured to determine whether the target object is a free-fall body according to the moving distance and the moving time; and a target storage module 304 configured to store at least one video frame of the target video associated with the target object in response to determining that it is a free-fall body.
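A minimal sketch of how the four modules of apparatus 300 might be composed is given below; it illustrates the module structure rather than the patented implementation. The detector callback, the assumption that it returns coordinates already converted to metres, the frame rate, the acceleration estimate and all names are placeholders standing in for the single-shot detector model and the calculations described earlier.

    import os
    import cv2

    class ParabolaDetectionApparatus:
        """Illustrative composition of modules 301-304 of apparatus 300."""
        def __init__(self, detect_fn, fps=40.0, accel_threshold=8.0, out_dir="evidence"):
            self.detect_fn = detect_fn          # target detection module 301 (e.g. the SSD model)
            self.fps = fps
            self.accel_threshold = accel_threshold
            self.out_dir = out_dir

        def run(self, video_path):
            """Process a target video end to end: detect, calculate, determine, store."""
            cap = cv2.VideoCapture(video_path)
            detections = []                     # (frame index, frame, vertical position in metres)
            index = 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                coord = self.detect_fn(frame)   # module 301: None, or (x, y) of the target object
                if coord is not None:
                    detections.append((index, frame, coord[1]))
                index += 1
            cap.release()
            if len(detections) < 2:
                return False
            # Module 302: moving distance and moving time between the first and last detection.
            first, last = detections[0], detections[-1]
            distance = abs(last[2] - first[2])
            time = (last[0] - first[0]) / self.fps
            # Module 303: free-fall decision (rest-at-first-detection assumption, a ~= 2d/t^2).
            free_fall = time > 0 and 2.0 * distance / (time ** 2) > self.accel_threshold
            # Module 304: store the associated video frames for later evidence collection.
            if free_fall:
                os.makedirs(self.out_dir, exist_ok=True)
                for i, frame, _ in detections:
                    cv2.imwrite(os.path.join(self.out_dir, f"frame_{i:06d}.jpg"), frame)
            return free_fall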
In an optional implementation of some embodiments, the target detection module may detect the target object in the target video using a single-shot detector model, where, compared with a standard single-shot detector model, the base network of the single-shot detector model is changed to a resnet18 model, and the number of channels of each layer in the resnet18 model is set to 32.
In an optional implementation of some embodiments, only the feature maps of the Conv7 and Conv8_2 convolutional layers are extracted in SSD model training.
Referring now to fig. 4, a schematic structural diagram of a server suitable for implementing some embodiments of the present disclosure is shown. The server shown in fig. 4 is only an example and should not impose any limitation on the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, the server 400 may include a processing device (e.g., central processing unit, graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)402 or a program loaded from a storage device 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the server 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates a server 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 4 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 409, or from the storage device 408, or from the ROM 402. The computer program, when executed by the processing apparatus 401, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium of some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: detect whether a target object is present in a target video; in response to determining that the target object is present, analyze the target video and determine the moving distance and the moving time of the target object; determine whether the target object is a free-fall body according to the moving distance and the moving time; and, in response to determining that it is a free-fall body, store at least one video frame of the target video associated with the target object.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes a target detection module, a target calculation module, a target determination module, and a target storage module. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves. For example, the target detection module may also be described as a "module that detects whether a target object exists in a target video".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is only of preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combinations of the above-mentioned features, but also encompasses other technical solutions formed by any combination of the above-mentioned features or their equivalents without departing from the inventive concept, for example technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure.

Claims (9)

1. A method of parabolic detection, comprising:
detecting whether a target object exists in a target video;
in response to determining that the target object is present, analyzing the target video to determine a movement distance and a movement time of the target object;
determining whether the target object is a free-fall body according to the movement distance and the movement time;
in response to determining that it is a free-fall body, storing at least one video frame of the target video associated with the target object.
2. The method of claim 1, wherein the detecting whether a target object is present in a target video comprises:
and detecting the target object in the target video by adopting a single-emission detector model, wherein compared with a standard single-emission detector model, a basic network of the single-emission detector model is changed into a resnet18 model, the resnet18 model comprises 17 convolutional layers and 1 full link layer, and the number of channels of each layer is set to be 32 in the resnet18 model.
3. The method of claim 2, wherein the detecting the target object in the target video using a single-shot detector model comprises:
generating a set of rectangular boxes of fixed size using a feed-forward convolutional network comprised by the single-shot detector model, and scoring and/or determining probabilities for object classes of objects of the target video present in these rectangular boxes;
sorting each rectangular box in the set of rectangular boxes according to the score and/or the probability;
based on the sorting, executing a determining step: starting from the rectangular box with the highest probability and/or the highest score, determining, for each other rectangular box in the set of rectangular boxes, whether its degree of overlap with that rectangular box is greater than a set threshold; if the degree of overlap is determined to be greater than the threshold, deleting the rectangular boxes whose degree of overlap is greater than the threshold, marking the rectangular box with the highest probability and/or the highest score, and placing it into a detection result set; determining whether the set of rectangular boxes is empty; in response to determining that the set of rectangular boxes is empty, determining the detection result set as a final detection result set;
in response to determining that the set of rectangular boxes is not empty, continuing to perform the determining step.
4. The method of claim 2, wherein the determining a movement distance and a movement time of the target object comprises:
recording the coordinates of the target object;
comparing the coordinates of the target object between a previous frame and a subsequent frame;
calculating a distance difference between the coordinates of the previous frame and the coordinates of the subsequent frame as the movement distance;
and calculating, from the number of frames, the time difference between the previous frame and the subsequent frame of the target object as the movement time.
5. The method of claim 3, wherein the determining whether the target object is a free-fall body comprises:
calculating the acceleration of the target object according to the moving distance and the moving time;
setting an acceleration threshold value;
and when the acceleration is larger than the threshold value, determining that the target object is a free-fall body.
6. The method of one of claims 1-5, further comprising:
in response to determining that it is a free-fall body, controlling a communicatively connected device to perform an early warning operation.
7. An apparatus for parabolic detection, comprising:
a target detection module configured to detect whether a target object exists in a target video;
a target calculation module configured to analyze the target video in response to determining that the target object is present, and to determine a movement distance and a movement time of the target object;
a target determination module configured to determine whether the target object is a free-fall body according to the movement distance and the movement time;
a target storage module configured to store at least one video frame of the target video associated with the target object in response to determining that it is a free-fall body.
8. A server, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
9. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-6.
CN201911189562.4A 2019-11-28 2019-11-28 Parabolic detection method, device, server and computer readable medium Pending CN110910415A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911189562.4A CN110910415A (en) 2019-11-28 2019-11-28 Parabolic detection method, device, server and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911189562.4A CN110910415A (en) 2019-11-28 2019-11-28 Parabolic detection method, device, server and computer readable medium

Publications (1)

Publication Number Publication Date
CN110910415A true CN110910415A (en) 2020-03-24

Family

ID=69820081

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911189562.4A Pending CN110910415A (en) 2019-11-28 2019-11-28 Parabolic detection method, device, server and computer readable medium

Country Status (1)

Country Link
CN (1) CN110910415A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004326270A (en) * 2003-04-22 2004-11-18 Koito Ind Ltd Falling object on street detection device
CN104378549A (en) * 2014-10-30 2015-02-25 东莞宇龙通信科技有限公司 Snapshot method and device and terminal
CN108960015A (en) * 2017-05-24 2018-12-07 优信拍(北京)信息科技有限公司 A kind of vehicle system automatic identifying method and device based on deep learning
CN108520219A (en) * 2018-03-30 2018-09-11 台州智必安科技有限责任公司 A kind of multiple dimensioned fast face detecting method of convolutional neural networks Fusion Features
CN109801256A (en) * 2018-12-15 2019-05-24 华南理工大学 A kind of image aesthetic quality appraisal procedure based on area-of-interest and global characteristics
CN109872341A (en) * 2019-01-14 2019-06-11 中建三局智能技术有限公司 A kind of throwing object in high sky detection method based on computer vision and system
US10402692B1 (en) * 2019-01-22 2019-09-03 StradVision, Inc. Learning method and learning device for fluctuation-robust object detector based on CNN using target object estimating network adaptable to customers' requirements such as key performance index, and testing device using the same
CN110175649A (en) * 2019-05-28 2019-08-27 南京信息工程大学 It is a kind of about the quick multiscale estimatiL method for tracking target detected again
CN110287806A (en) * 2019-05-30 2019-09-27 华南师范大学 A kind of traffic sign recognition method based on improvement SSD network
CN110188719A (en) * 2019-06-04 2019-08-30 北京字节跳动网络技术有限公司 Method for tracking target and device
CN110334650A (en) * 2019-07-04 2019-10-15 北京字节跳动网络技术有限公司 Object detecting method, device, electronic equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SERBAN OPRISESCU ET AL: "Detection of thrown objects using ToF cameras", 《2013 IEEE 9TH INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTER COMMUNICATION AND PROCESSING (ICCP)》 *
谢渊东: "Design and Implementation of a Public Security Emergency Video Transmission System Based on a 3G Network", China Excellent Master's Theses Full-text Database (Master's), Information Science and Technology *
陈慧岩: "Intelligent Vehicle Theory and Applications", 31 July 2018 *
龙力: "Research on Image Classification and Detection Applications Based on Multi-Feature Fusion", China Excellent Master's Theses Full-text Database (Master's), Information Science and Technology *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553257A (en) * 2020-04-26 2020-08-18 上海天诚比集科技有限公司 High-altitude parabolic early warning method
CN111860195A (en) * 2020-06-25 2020-10-30 郭艺斌 Security detection method and security detection device based on big data
CN111860195B (en) * 2020-06-25 2024-03-01 广州珠江商业经营管理有限公司 Security detection method and security detection device based on big data
CN113673333A (en) * 2020-08-10 2021-11-19 广东电网有限责任公司 Fall detection algorithm in electric power field operation
CN112016414A (en) * 2020-08-14 2020-12-01 熵康(深圳)科技有限公司 Method and device for detecting high-altitude parabolic event and intelligent floor monitoring system
CN112308000A (en) * 2020-11-06 2021-02-02 安徽清新互联信息科技有限公司 High-altitude parabolic detection method based on space-time information
CN112330743A (en) * 2020-11-06 2021-02-05 安徽清新互联信息科技有限公司 High-altitude parabolic detection method based on deep learning
CN112308000B (en) * 2020-11-06 2023-03-07 安徽清新互联信息科技有限公司 High-altitude parabolic detection method based on space-time information
CN112330743B (en) * 2020-11-06 2023-03-10 安徽清新互联信息科技有限公司 High-altitude parabolic detection method based on deep learning


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination