CN116128922A - Object drop detection method, device, medium and equipment based on event camera - Google Patents

Object drop detection method, device, medium and equipment based on event camera

Info

Publication number
CN116128922A
Authority
CN
China
Prior art keywords
event
feature map
event stream
events
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310020378.7A
Other languages
Chinese (zh)
Inventor
王程
顾旭升
林修弘
臧彧
刘伟权
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University
Original Assignee
Xiamen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University
Priority to CN202310020378.7A
Publication of CN116128922A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01V GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V 8/00 Prospecting or detecting by optical means
    • G01V 8/10 Detecting, e.g. by using light barriers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geophysics (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The embodiment of the application provides an object drop detection method, device, medium and equipment based on an event camera. The method comprises the following steps: acquiring an event stream output by an event camera, wherein the event stream comprises event information of a plurality of continuous events, and the event information comprises occurrence coordinates and occurrence time of the events; generating a feature map corresponding to the event stream according to the occurrence coordinates and the occurrence time corresponding to each event, wherein the feature map is used for describing the time features and/or the position features of the events contained in the event stream; performing cluster recognition according to the feature value corresponding to each coordinate in the feature map, and determining the position information of at least one target area with a target object in the feature map; and determining the motion trail of the target object based on the position information of each target area in the feature map corresponding to at least one continuous event stream. According to the technical scheme, the accuracy of the falling object identification is improved, and the identification effect is guaranteed.

Description

Object drop detection method, device, medium and equipment based on event camera
Technical Field
The present application relates to the field of computer technologies, and in particular, to an object drop detection method, device, medium, and apparatus based on an event camera.
Background
The target detection of a falling object refers to detecting an object that falls freely from a height to the ground, and restoring and visually displaying the trajectory of the free fall. Existing technical solutions usually perform falling-object detection with a conventional frame-based camera. During detection, the conventional camera needs a certain exposure time; if the moving object is fast, obvious motion blur is produced, much of the object's appearance and motion information is lost, and recognition becomes difficult. The conventional camera also suffers from insufficient contrast, so clear images are difficult to obtain in challenging illumination environments, which hinders the acquisition of target area information. Therefore, how to improve the accuracy of falling-object recognition and ensure the recognition effect has become an urgent technical problem to be solved.
Disclosure of Invention
The embodiment of the application provides an object falling detection method, device, medium and equipment based on an event camera, so that the accuracy of falling object identification can be improved at least to a certain extent, and the identification effect is ensured.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned in part by the practice of the application.
According to an aspect of the embodiments of the present application, there is provided an object drop detection method based on an event camera, the method including:
acquiring an event stream output by an event camera, wherein the event stream comprises event information of a plurality of continuous events, and the event information comprises occurrence coordinates and occurrence time of the events;
generating a feature map corresponding to the event stream according to the occurrence coordinates and the occurrence time corresponding to each event, wherein the feature map is used for describing the time features and/or the position features of the events contained in the event stream;
performing cluster recognition according to the feature value corresponding to each coordinate in the feature map, and determining the position information of at least one target area with a target object in the feature map;
and determining the motion trail of the target object based on the position information of each target area in the feature map corresponding to at least one continuous event stream.
According to an aspect of the embodiments of the present application, there is provided an object drop detection device based on an event camera, the device including:
the acquisition module is used for acquiring an event stream output by the event camera, wherein the event stream comprises event information of a plurality of continuous events, and the event information comprises occurrence coordinates and occurrence time of the events;
the feature map generating module is used for generating a feature map corresponding to the event stream according to the occurrence coordinates and the occurrence time corresponding to each event, wherein the feature map is used for describing the time features and/or the position features of the events contained in the event stream;
the moving target identification module is used for carrying out cluster identification according to the characteristic value corresponding to each coordinate in the characteristic diagram and determining the position information of at least one target area with a target object in the characteristic diagram;
and the track processing module is used for determining the motion track of the target object based on the position information of each target area in the feature map corresponding to at least one continuous event stream.
According to an aspect of the embodiments of the present application, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements an object drop detection method based on an event camera as described in the above embodiments.
According to an aspect of an embodiment of the present application, there is provided an electronic device including: one or more processors; and a storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the event camera based object drop detection method as described in the above embodiments.
According to an aspect of embodiments of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the object drop detection method based on the event camera provided in the above-described embodiment.
In the technical solutions provided in some embodiments of the present application, an event stream output by an event camera is obtained, where the event stream includes event information of a plurality of continuous events and the event information includes the occurrence coordinates and occurrence time of each event. A feature map corresponding to the event stream is generated according to the occurrence coordinates and occurrence time of each event, where the feature map may be used to describe the temporal features and/or position features of the events contained in the event stream. Cluster recognition is then performed according to the feature value corresponding to each coordinate in the feature map, the position information of at least one target area in which a target object exists is determined, and the motion track of the target object is determined based on the position information of each target area in the feature maps corresponding to at least one continuous event stream. In this way, the accuracy of falling-object recognition is improved and the recognition effect is ensured.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It is apparent that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art. In the drawings:
FIG. 1 illustrates a flow diagram of an event camera based object drop detection method according to one embodiment of the present application;
FIG. 2 illustrates a block diagram of an event camera based object drop detection apparatus according to one embodiment of the present application;
fig. 3 shows a schematic diagram of a computer system suitable for use in implementing the electronic device of the embodiments of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present application. One skilled in the relevant art will recognize, however, that the aspects of the application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the application.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
Fig. 1 shows a flow diagram of an event camera based object drop detection method according to one embodiment of the present application. The method can be applied to terminal equipment, including but not limited to one or more of a smart phone, a tablet computer, a portable computer and a desktop computer, and can also be applied to a server, such as a physical server or a cloud server, etc., which is not particularly limited in the application.
Referring to fig. 1, the method for detecting object drop based on the event camera at least includes steps S110 to S140, and is described in detail as follows (hereinafter, the method is described as applied to a terminal device, which is hereinafter referred to as a terminal):
in step S110, an event stream output by an event camera is acquired, where the event stream includes event information of a plurality of consecutive events, and the event information includes occurrence coordinates and occurrence time of the events.
In this embodiment, the raw data output by the event camera are events. A single event can be represented as e = [x, y, t, p], where (x, y) are the coordinates at which event e occurs, t is the time at which event e occurs, and p is the polarity of event e: p = 1 represents an increase in luminance, and p = -1 represents a decrease in luminance. The event camera generates an event at a pixel position when the change in the logarithmic brightness value at that pixel since the last event response exceeds a certain threshold c.
The terminal may acquire the events output by the event camera according to a fixed time length or a fixed number of events to form an event stream; for example, the events within thirty milliseconds may be used as one event stream, or every ten thousand consecutive events may be used as one event stream. Those skilled in the art may choose the corresponding event stream acquisition mode based on prior experience, which is not particularly limited in this application.
It should be noted that, in order to make effective use of the event information in subsequent processing, adjacent event streams may partially overlap, and the overlap proportion may range from 1% to 99%, for example 1%, 20%, 50%, 80% or 90%; the above numbers are merely exemplary, and the application is not limited thereto. A sketch of this slicing step is given below.
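For illustration only, the following Python sketch shows one possible way to represent events of the form e = [x, y, t, p] and to slice them into overlapping event streams by fixed duration or by fixed count. The function names, the sample events, the 30 ms window, the ten-thousand-event count and the 50% overlap are illustrative assumptions rather than values fixed by this application.

import numpy as np

# Events as a structured array of (x, y, t, p) records, matching e = [x, y, t, p].
# The sample values are made up; real events would come from the camera driver.
events = np.array(
    [(12, 40, 0.001, 1), (13, 40, 0.004, -1), (13, 41, 0.011, 1)],
    dtype=[("x", np.int32), ("y", np.int32), ("t", np.float64), ("p", np.int8)],
)

def slice_by_duration(events, window_s=0.03, overlap=0.5):
    # Fixed-duration streams (e.g. 30 ms) with a configurable overlap ratio.
    step = window_s * (1.0 - overlap)
    start, t_end = events["t"].min(), events["t"].max()
    streams = []
    while start <= t_end:
        mask = (events["t"] >= start) & (events["t"] < start + window_s)
        if mask.any():
            streams.append(events[mask])
        start += step
    return streams

def slice_by_count(events, n=10000, overlap=0.5):
    # Fixed-count streams (e.g. ten thousand events) with a configurable overlap ratio.
    step = max(1, int(n * (1.0 - overlap)))
    return [events[i:i + n] for i in range(0, max(1, len(events) - n + 1), step)]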
In step S120, a feature map corresponding to the event stream is generated according to the occurrence coordinates and occurrence time corresponding to each event, where the feature map describes the time feature and/or the position feature of the event included in the event stream.
In this embodiment, the terminal may obtain the occurrence coordinates and occurrence time corresponding to each event from the event stream, and determine the occurrence time and number of the events at each coordinate location, so as to generate a feature map corresponding to the event stream, where the feature map may be used to describe the time feature and/or the location feature of the event included in the event stream.
It should be noted that one or more feature maps may be generated for the same event stream, and different feature maps may describe different features of the event stream.
In one embodiment of the present application, when generating the feature map, the terminal may count, according to the occurrence coordinates and occurrence time of each event, the number of events occurring at each coordinate and the corresponding time information. The terminal may then generate an average-time-accumulation feature map corresponding to the event stream according to the number of events occurring at each coordinate and the sum of the time information of those events.
Specifically, the feature value at each coordinate in the feature map of the average time accumulation can be calculated by the following formula:
T_ij = (1 / L_ij) * Σ_{t ∈ ξ_ij} t
where L_ij is the number of events generated at pixel (i, j), t is the time information of an event, and ξ_ij is the set of events at pixel (i, j).
It should be noted that t may be the time information after normalization, and the timestamp may be normalized according to the following formula: t = (T_original - T_min) / (T_max - T_min), where T_original is the occurrence time of the event, T_max is the timestamp of the last event in the event stream, and T_min is the timestamp of the earliest event in the event stream.
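As a non-limiting sketch of this step, the following function builds the average-time-accumulation feature map with NumPy; the function name and the image-size arguments are illustrative assumptions.

import numpy as np

def average_time_feature_map(events, height, width):
    # Normalise timestamps: t = (T_original - T_min) / (T_max - T_min).
    t = events["t"].astype(np.float64)
    span = t.max() - t.min()
    t_norm = (t - t.min()) / span if span > 0 else np.zeros_like(t)

    # count: L_ij, the number of events at pixel (i, j); t_sum: sum of t over xi_ij.
    count = np.zeros((height, width), dtype=np.float64)
    t_sum = np.zeros((height, width), dtype=np.float64)
    np.add.at(count, (events["y"], events["x"]), 1.0)
    np.add.at(t_sum, (events["y"], events["x"]), t_norm)

    # T_ij = (sum of t over xi_ij) / L_ij, with pixels that saw no events left at zero.
    fmap = np.zeros_like(t_sum)
    np.divide(t_sum, count, out=fmap, where=count > 0)
    return fmap, count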
Alternatively, the terminal may generate a latest-time feature map corresponding to the event stream, which is updated according to the latest time at which an event occurs at each coordinate.
Specifically, the feature value at each coordinate in the feature map of the latest time accumulation can be calculated according to the following formula:
T_ij = max(t), t ∈ ξ_ij
where t is the time information (likewise, the time after normalization may be used) and ξ_ij is the set of events at pixel (i, j).
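A corresponding sketch for the latest-time feature map, under the same illustrative assumptions as above, may look as follows.

import numpy as np

def latest_time_feature_map(events, height, width):
    # Normalised timestamps, as for the average-time feature map.
    t = events["t"].astype(np.float64)
    span = t.max() - t.min()
    t_norm = (t - t.min()) / span if span > 0 else np.zeros_like(t)

    # T_ij = max(t), t in xi_ij; pixels without events stay at zero.
    fmap = np.zeros((height, width), dtype=np.float64)
    np.maximum.at(fmap, (events["y"], events["x"]), t_norm)
    return fmap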
The feature maps obtained by the above two processing methods are statistics of the event trigger times. In an example, after the feature value at each coordinate is calculated, the determined feature values may be normalized to facilitate subsequent processing.
Specifically, the feature value may be normalized according to the following formula:
(Formula image in the original publication: the feature value is normalized using the time span Δt of the event stream.)
where Δt is the time span of the event stream, i.e. the largest timestamp minus the smallest timestamp in the event stream.
In one embodiment of the present application, after generating the average time-accumulated feature map and/or the latest time-accumulated feature map corresponding to the event stream, the method further includes:
and removing the background and noise from the feature map according to the feature value corresponding to each coordinate in the feature map, the average value of the time stamps of the occurred events and the time span of the event stream.
In this embodiment, the background and partial noise in the feature map can be removed by the following formula:
(Formula image in the original publication: ρ(i, j) is computed from the feature value at coordinate (i, j), the average event timestamp T_avg(i, j), and the time span of the event stream.)
where T_avg(i, j) denotes the average of the event timestamps at pixel (i, j), or at pixel (i, j) together with a certain neighborhood of it, and the neighborhood range can be adjusted according to the practical application scenario. When ρ(i, j) ≤ 0, the event is considered to be caused by the background; when ρ(i, j) > λ, it is considered to be generated by a moving object, where λ > 0 is a threshold set according to the environment.
Removing the background and noise points from the feature map in this way can improve the accuracy of the subsequent clustering result, and thereby the accuracy of the motion track subsequently determined for the target object.
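The exact formula for ρ(i, j) is only available as an image in the original publication, so the following sketch assumes one plausible form, ρ = (T_ij - T_avg(i, j)) / Δt, and approximates T_avg with a local mean filter; the function name, the default threshold λ and the neighborhood size are likewise assumptions.

import numpy as np
from scipy.ndimage import uniform_filter

def remove_background(fmap, lam=0.1, neighborhood=3, dt=1.0):
    # Local mean of the feature map, used here as an approximation of T_avg(i, j).
    t_avg = uniform_filter(fmap, size=neighborhood)
    # Assumed form of rho(i, j); dt = 1.0 if the feature map is already normalised.
    rho = (fmap - t_avg) / dt
    cleaned = fmap.copy()
    cleaned[rho <= 0] = 0.0                   # rho <= 0: caused by the background
    cleaned[(rho > 0) & (rho <= lam)] = 0.0   # not above lambda: treated as noise
    return cleaned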
Alternatively, the terminal may generate a 0/1 feature map corresponding to the event stream according to the event coordinates, where a position at which an event occurs is set to 1 and every other position is set to 0.
Alternatively, the terminal may generate an event-count-accumulation feature map corresponding to the event stream according to the number of events occurring at each coordinate. Specifically, the number of events occurring at each pixel may be multiplied by a fixed amplification factor, and the feature value at the corresponding position in the feature map is given by:
T_ij = L_ij * μ
where L_ij is the number of events generated at pixel (i, j) and μ is the amplification factor, which can be preset by those skilled in the art.
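Sketches of the 0/1 feature map and the event-count feature map, under the same illustrative assumptions as above, are given below; the amplification factor value is a placeholder, not a value specified by this application.

import numpy as np

def binary_feature_map(events, height, width):
    # 0/1 feature map: 1 at positions where at least one event occurred, 0 elsewhere.
    fmap = np.zeros((height, width), dtype=np.uint8)
    fmap[events["y"], events["x"]] = 1
    return fmap

def count_feature_map(events, height, width, mu=10.0):
    # Event-count feature map: T_ij = L_ij * mu; mu = 10.0 is only a placeholder value.
    count = np.zeros((height, width), dtype=np.float64)
    np.add.at(count, (events["y"], events["x"]), 1.0)
    return count * mu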
It should be appreciated that, according to actual implementation needs, a person skilled in the art may determine a generation manner of the feature map, which may include one or more of the generation manners described above, which is not limited in this application.
In one embodiment of the present application, before cluster recognition is performed according to the feature value corresponding to each coordinate in the feature map to determine the position information of at least one target area in which a target object exists, the method further includes:
and carrying out convolution processing on the characteristic values in the characteristic map by adopting a convolution kernel, an activation function and a corresponding activation threshold value so as to remove isolated points from the characteristic map.
In one embodiment, the convolution kernel may be a matrix with all elements of 1, and in practical application, the adjustment and setting of the convolution kernel may be performed according to an application scenario.
In an embodiment, specifically, the feature values in the feature map may be convolved by the following activation function:
(Formula image in the original publication: an activation function applied to the convolution response, controlled by the activation threshold η.)
where η is a threshold (i.e., the activation threshold) determined by the event camera performance parameters, the environment setting, the object to be detected, and so on; it is generally small, and an empirical value may be obtained through a few experiments. The convolution kernel is a matrix with all elements equal to 1, and its size is (2η + 1) × (2η + 1). For example, when η = 1, the size of the convolution kernel is 3 × 3, and the corresponding convolution kernel is:
1 1 1
1 1 1
1 1 1
in other embodiments, other activation functions may be employed by those skilled in the art, as this application is not particularly limited.
In one embodiment of the present application, after the corresponding feature map is generated, a gray value corresponding to each feature value may be obtained according to the feature value at each coordinate in the feature map, so as to visualize the feature map. In one example, the normalized feature values may be multiplied by 255 to determine the corresponding gray values and thereby visualize the feature map.
In another example, the gray values corresponding to the feature values may be obtained in combination with the time sequence; for example, the later an event occurs, the larger (or smaller) the corresponding gray value, i.e. the gray value is a monotonically increasing or monotonically decreasing function of the occurrence time. In this way, the order in which the events occurred can be determined from the brightness relationship of the pixels in the visualized feature map.
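A minimal sketch of the 255-scaling visualization described above (the function name is an assumption):

import numpy as np

def to_gray_image(fmap):
    # Normalised feature values in [0, 1] are mapped to 8-bit gray values by
    # multiplying by 255, so that pixels with later events appear brighter.
    return np.clip(fmap * 255.0, 0.0, 255.0).astype(np.uint8)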
Referring to fig. 1, in step S130, cluster recognition is performed according to the feature value corresponding to each coordinate in the feature map, so as to determine the position information of at least one target area where the target object exists in the feature map.
In this embodiment, the terminal may use a clustering algorithm to cluster the feature values corresponding to each coordinate in the feature map, so as to identify the location information of the target area where the target object (i.e. the falling object) exists from the feature map. In an example, a box may be used to mark the location of the target object in the feature map, and corner information of the box may be determined as the location information.
In one embodiment of the present application, performing cluster recognition according to a feature value corresponding to each coordinate in the feature map, and determining location information of at least one target area where a target object exists in the feature map includes:
and carrying out cluster recognition on the feature value corresponding to each coordinate in the feature map by adopting a DBSACN density clustering algorithm according to a preset neighborhood radius and the minimum sample point number in the neighborhood radius, so that the DBSACN density clustering algorithm feeds back the position information of at least one target area where the target object exists.
In this embodiment, the DBSCAN density clustering algorithm parameters eps and min_samples may be preset, where eps is the neighborhood radius scanned by the algorithm and min_samples is the minimum number of sample points within that radius. Specifically, an unvisited point is selected, and all nearby points within a distance of eps (inclusive) from that point are found.
If the number of nearby points is ≥ min_samples, the current point forms a cluster with its nearby points and the starting point is marked as visited. All points within the cluster that are not yet marked as visited are then processed recursively in the same way, thereby expanding the cluster. If the number of nearby points is less than min_samples, the point is temporarily marked as a noise point. Once the cluster has been fully expanded, i.e. all points within the cluster are marked as visited, the remaining unvisited points are processed in the same way, and clustering of all points is finally achieved.
According to the clustering result, the points with different labels fed back by the algorithm are marked and framed, and the position information of the bounding frame is as follows:
pt1 = (x_min, y_min), pt2 = (x_max, y_max)
where pt1 and pt2 are a pair of diagonally opposite corner points of the marked rectangular frame, taken from the minimum and maximum coordinates of the points in the cluster. After the position of the rectangular frame is determined, the rectangular frame may be visualized in the feature map.
It should be understood that those skilled in the art may also select other clustering methods according to actual implementation requirements, which are not particularly limited in this application.
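For illustration, a possible realization of this clustering and framing step with scikit-learn's DBSCAN is sketched below; the eps and min_samples defaults and the function name are assumptions, not values prescribed by this application.

import numpy as np
from sklearn.cluster import DBSCAN

def detect_target_regions(fmap, eps=3.0, min_samples=10):
    # Cluster the coordinates of non-zero feature-map pixels with DBSCAN.
    ys, xs = np.nonzero(fmap)
    if len(xs) == 0:
        return []
    points = np.stack([xs, ys], axis=1)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)

    boxes = []
    for label in set(labels):
        if label == -1:          # label -1 marks noise points
            continue
        cluster = points[labels == label]
        pt1 = (int(cluster[:, 0].min()), int(cluster[:, 1].min()))  # one corner
        pt2 = (int(cluster[:, 0].max()), int(cluster[:, 1].max()))  # opposite corner
        boxes.append((pt1, pt2))
    return boxes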
Referring to fig. 1, in step S140, a motion trajectory of the target object is determined based on the position information of each target area in the feature map corresponding to at least one continuous event stream.
In this embodiment, it should be understood that the dropping of an object lasts for a period of time. If the time span of a single event stream is small, the motion track of the target object may be determined by combining the position information of each target area in the feature maps corresponding to at least one continuous event stream, for example the feature maps corresponding to three continuous event streams. The terminal may map the position information of each target area in the feature maps corresponding to the at least one event stream into the same map and perform curve fitting on the position information, so as to determine and display the motion track of the target object.
In one embodiment of the present application, determining a motion trajectory of the target object based on position information of each target area in a feature map corresponding to at least one continuous event stream includes:
determining the central position of each target area according to the position information of each target area in the feature map corresponding to at least one continuous event stream;
and interpolating according to the central position of each target area, and fitting a curve to generate the motion trail of the target object.
In this embodiment, the terminal may determine the center position of each target area according to the position information of each target area in the feature maps corresponding to the at least one event stream, interpolate according to the center positions of the target areas, and fit a curve to generate the motion track of the target object. By performing the interpolation and track generation on the center positions, the linearity of the generated motion track can be ensured and its accuracy improved.
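A sketch of this trajectory step is given below; it assumes a single falling object, uses the box centers in time order, and fits second-order polynomials as the curve model, which is an illustrative choice rather than the fitting model fixed by this application.

import numpy as np

def fit_trajectory(boxes_per_stream, samples=100):
    # Centre of each target box, collected over the consecutive event streams.
    centers = []
    for boxes in boxes_per_stream:
        for (x1, y1), (x2, y2) in boxes:
            centers.append(((x1 + x2) / 2.0, (y1 + y2) / 2.0))
    centers = np.asarray(centers, dtype=np.float64)
    if len(centers) < 3:
        return centers  # too few points for a meaningful fit

    # Fit x(s) and y(s) as second-order polynomials of the time-ordered index s
    # and sample the fitted curve densely to obtain the displayed track.
    idx = np.arange(len(centers), dtype=np.float64)
    cx = np.polyfit(idx, centers[:, 0], deg=2)
    cy = np.polyfit(idx, centers[:, 1], deg=2)
    s = np.linspace(idx[0], idx[-1], samples)
    return np.stack([np.polyval(cx, s), np.polyval(cy, s)], axis=1)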
In summary, the event-camera-based object drop detection method operates mainly on the event stream, so the time information of events can be fully utilized; the required amount of computation is small and the computational efficiency is high, so the method can be used for real-time processing and the detection efficiency is improved. Moreover, with this event-stream analysis approach, small moving targets can be accurately detected, and because the event camera only captures motion information when detecting the target area, the privacy of the environment is better protected.
The following describes an embodiment of an apparatus of the present application that may be used to perform the event camera-based object drop detection method of the above-described embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method for detecting object dropping based on an event camera.
Fig. 2 shows a block diagram of an event camera based object drop detection device according to one embodiment of the present application.
Referring to fig. 2, an object drop detection apparatus based on an event camera according to an embodiment of the present application includes:
an obtaining module 210, configured to obtain an event stream output by the event camera, where the event stream includes event information of a plurality of continuous events, and the event information includes occurrence coordinates and occurrence time of the events;
a feature map generating module 220, configured to generate a feature map corresponding to the event stream according to the occurrence coordinates and the occurrence time corresponding to each event, where the feature map describes time features and/or position features of the event included in the event stream;
the moving target identifying module 230 is configured to perform cluster identification according to a feature value corresponding to each coordinate in the feature map, and determine location information of at least one target area where a target object exists in the feature map;
the track processing module 240 is configured to determine a motion track of the target object based on position information of each target area in the feature map corresponding to the continuous at least one event stream.
In one embodiment of the present application, the feature map generating module 220 is configured to: determining the number and time information of the events occurring on the same coordinate according to the occurrence coordinates and the occurrence time corresponding to each event; generating a feature map of average time accumulation corresponding to the event stream according to the sum of the number of the events occurring on the same coordinate and the time information of the events occurring; and/or generating a feature map updated according to the latest time of the event occurring on the same coordinate, wherein the latest time corresponds to the event stream; and/or generating a feature map of event number accumulation corresponding to the event stream according to the number of the events occurring on the same coordinate; and/or generating a 0/1 feature map corresponding to the event stream according to the coordinates of the event, wherein the position corresponding to the coordinates of the event is set to be 1, and otherwise, the position is set to be 0.
In one embodiment of the present application, the feature map generating module 220 is further configured to: and removing the background and noise from the feature map according to the feature value corresponding to each coordinate in the feature map, the average value of the time stamps of the occurred events and the time span of the event stream.
In one embodiment of the present application, the feature map generating module 220 is further configured to: and carrying out convolution processing on the characteristic values in the characteristic graph by adopting a convolution kernel, an activation function and a corresponding activation threshold value to remove isolated points from the characteristic graph, wherein the convolution kernel is a matrix with all elements of 1.
In one embodiment of the present application, the moving object identification module 230 is configured to: perform cluster recognition on the feature value corresponding to each coordinate in the feature map by adopting a DBSCAN density clustering algorithm according to a preset neighborhood radius and the minimum number of sample points within the neighborhood radius, so that the DBSCAN density clustering algorithm feeds back the position information of at least one target area in which the target object exists.
In one embodiment of the present application, the track processing module 240 is configured to determine a center position of each target area according to position information of each target area in the feature map corresponding to the continuous at least one event stream; and interpolating according to the central position of each target area, and fitting a curve to generate the motion trail of the target object.
In one embodiment of the present application, the feature map generating module 220 is further configured to: and according to the characteristic value corresponding to each coordinate in the characteristic diagram, acquiring a gray value corresponding to each characteristic value so as to perform visualization processing on the characteristic diagram.
Fig. 3 shows a schematic diagram of a computer system suitable for use in implementing the electronic device of the embodiments of the present application.
It should be noted that, the computer system of the electronic device shown in fig. 3 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
As shown in fig. 3, the computer system includes a central processing unit (Central Processing Unit, CPU) 301 that can perform various appropriate actions and processes, such as performing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 302 or a program loaded from a storage section 308 into a random access Memory (Random Access Memory, RAM) 303. In the RAM 303, various programs and data required for the system operation are also stored. The CPU 301, ROM 302, and RAM 303 are connected to each other through a bus 304. An Input/Output (I/O) interface 305 is also connected to bus 304.
The following components are connected to the I/O interface 305: an input section 306 including a keyboard, a mouse, and the like; an output portion 307 including a Cathode Ray Tube (CRT), a liquid crystal display (Liquid Crystal Display, LCD), and the like, a speaker, and the like; a storage section 308 including a hard disk or the like; and a communication section 309 including a network interface card such as a LAN (Local Area Network ) card, a modem, or the like. The communication section 309 performs communication processing via a network such as the internet. The drive 310 is also connected to the I/O interface 305 as needed. A removable medium 311 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed on the drive 310 as needed, so that a computer program read therefrom is installed into the storage section 308 as needed.
In particular, according to embodiments of the present application, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising a computer program for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 309, and/or installed from the removable medium 311. When executed by a Central Processing Unit (CPU) 301, performs the various functions defined in the system of the present application.
It should be noted that, the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (Erasable Programmable Read Only Memory, EPROM), flash Memory, an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with a computer-readable computer program embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. A computer program embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Where each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software, or may be implemented by means of hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
As another aspect, the present application also provides a computer-readable medium that may be contained in the electronic device described in the above embodiment; or may exist alone without being incorporated into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the above embodiments.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, in accordance with embodiments of the present application. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a usb disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a touch terminal, or a network device, etc.) to perform the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. An object drop detection method based on an event camera, comprising:
acquiring an event stream output by an event camera, wherein the event stream comprises event information of a plurality of continuous events, and the event information comprises occurrence coordinates and occurrence time of the events;
generating a feature map corresponding to the event stream according to the occurrence coordinates and the occurrence time corresponding to each event, wherein the feature map is used for describing the time features and/or the position features of the events contained in the event stream;
performing cluster recognition according to the feature value corresponding to each coordinate in the feature map, and determining the position information of at least one target area with a target object in the feature map;
and determining the motion trail of the target object based on the position information of each target area in the feature map corresponding to at least one continuous event stream.
2. The method of claim 1, wherein generating a feature map corresponding to the event stream from the occurrence coordinates and the occurrence time corresponding to each event comprises:
determining the number and time information of the events occurring on the same coordinate according to the occurrence coordinates and the occurrence time corresponding to each event;
generating a feature map of average time accumulation corresponding to the event stream according to the sum of the number of the events occurring on the same coordinate and the time information of the events occurring;
and/or generating a feature map updated according to the latest time of the event occurring on the same coordinate, wherein the latest time corresponds to the event stream;
and/or generating a feature map of event number accumulation corresponding to the event stream according to the number of the events occurring on the same coordinate;
and/or generating a 0/1 feature map corresponding to the event stream according to the coordinates of the event, wherein the position corresponding to the coordinates of the event is set to be 1, and otherwise, the position is set to be 0.
3. The method according to claim 2, wherein after generating the average time-accumulated feature map and/or the latest time-accumulated feature map for the event stream, the method further comprises:
and removing the background and noise from the feature map according to the feature value corresponding to each coordinate in the feature map, the average value of the time stamps of the occurred events and the time span of the event stream.
4. The method according to claim 2, wherein before performing cluster recognition according to the feature value corresponding to each coordinate in the feature map, determining that there is position information of at least one target area of the target object in the feature map, the method further includes:
and carrying out convolution processing on the characteristic values in the characteristic map by adopting a convolution kernel, an activation function and a corresponding activation threshold value so as to remove isolated points from the characteristic map.
5. The method according to claim 1, wherein determining the location information of at least one target area in the feature map where the target object exists according to cluster recognition performed on the feature value corresponding to each coordinate in the feature map includes:
and carrying out cluster recognition on the feature value corresponding to each coordinate in the feature map by adopting a DBSACN density clustering algorithm according to a preset neighborhood radius and the minimum sample point number in the neighborhood radius, so that the DBSACN density clustering algorithm feeds back the position information of at least one target area where the target object exists.
6. The method of claim 1, wherein determining the motion trajectory of the target object based on the position information of each target region in the feature map corresponding to the continuous at least one event stream comprises:
determining the central position of each target area according to the position information of each target area in the feature map corresponding to at least one continuous event stream;
and interpolating according to the central position of each target area, and fitting a curve to generate the motion trail of the target object.
7. The method according to any one of claims 1 to 6, wherein after generating a feature map corresponding to the event stream according to occurrence coordinates and occurrence time corresponding to each of the events, the method further comprises:
and according to the characteristic value corresponding to each coordinate in the characteristic diagram, acquiring a gray value corresponding to each characteristic value so as to perform visualization processing on the characteristic diagram.
8. An object drop detection device based on an event camera, comprising:
the acquisition module is used for acquiring an event stream output by the event camera, wherein the event stream comprises event information of a plurality of continuous events, and the event information comprises occurrence coordinates and occurrence time of the events;
the feature map generating module is used for generating a feature map corresponding to the event stream according to the occurrence coordinates and the occurrence time corresponding to each event, wherein the feature map is used for describing the time features and/or the position features of the events contained in the event stream;
the moving target identification module is used for carrying out cluster identification according to the characteristic value corresponding to each coordinate in the characteristic diagram and determining the position information of at least one target area with a target object in the characteristic diagram;
and the track processing module is used for determining the motion track of the target object based on the position information of each target area in the feature map corresponding to at least one continuous event stream.
9. A computer readable medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the event camera based object drop detection method according to any of claims 1 to 7.
10. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which when executed by the one or more processors cause the one or more processors to implement the event camera based object drop detection method of any of claims 1 to 7.
CN202310020378.7A 2023-01-06 2023-01-06 Object drop detection method, device, medium and equipment based on event camera Pending CN116128922A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310020378.7A CN116128922A (en) 2023-01-06 2023-01-06 Object drop detection method, device, medium and equipment based on event camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310020378.7A CN116128922A (en) 2023-01-06 2023-01-06 Object drop detection method, device, medium and equipment based on event camera

Publications (1)

Publication Number Publication Date
CN116128922A true CN116128922A (en) 2023-05-16

Family

ID=86311251

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310020378.7A Pending CN116128922A (en) 2023-01-06 2023-01-06 Object drop detection method, device, medium and equipment based on event camera

Country Status (1)

Country Link
CN (1) CN116128922A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237676A (en) * 2023-11-09 2023-12-15 中核国电漳州能源有限公司 Method for processing small target drop track of nuclear power plant based on event camera
CN117237676B (en) * 2023-11-09 2024-03-01 中核国电漳州能源有限公司 Method for processing small target drop track of nuclear power plant based on event camera


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination