WO2022165735A1 - Moving object detection method and system - Google Patents


Info

Publication number
WO2022165735A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
event data
image block
candidate frame
pixel
Prior art date
Application number
PCT/CN2021/075434
Other languages
English (en)
French (fr)
Inventor
牟晓正
Original Assignee
豪威芯仑传感器(上海)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 豪威芯仑传感器(上海)有限公司
Priority to EP21923755.9A (published as EP4290404A1)
Publication of WO2022165735A1
Priority to US18/226,818 (published as US20230368397A1)

Links

Images

Classifications

    • G06T 7/20 — Image analysis; Analysis of motion
    • G06T 7/215 — Motion-based segmentation
    • G06V 40/20 — Recognition of human movements or behaviour, e.g. gesture recognition
    • G06F 18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/2411 — Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06T 2207/20021 — Dividing image into blocks, subimages or windows

Definitions

  • The invention relates to the technical field of data processing and, in particular, to a moving object detection scheme.
  • Object detection methods based on traditional image sensors usually need to traverse the entire image. These detection methods include traditional machine learning algorithms, such as Adaboost and Random Forest, as well as the currently widely studied deep learning algorithms, such as YOLO, Faster RCNN, and SSD.
  • However, the target object usually occupies a relatively small proportion of the image, so a large amount of redundant computing power is consumed on areas other than the target object, which poses a great challenge to the real-time performance of these algorithms.
  • Moreover, when an object moves too fast, a traditional image usually exhibits motion blur. The features of the moving object in the image then become indistinct or change, causing traditional object detection and recognition algorithms to fail.
  • the present invention provides a moving object detection method and system to try to solve or at least alleviate at least one of the above problems.
  • According to one aspect of the present invention, a method for detecting moving objects is provided, comprising the steps of: dividing a preset image template into a plurality of image blocks of the same size; counting the number of event data within a predetermined duration contained in each image block, where the event data come from a dynamic vision sensor and are triggered by the relative motion between objects in the field of view and the dynamic vision sensor; determining, according to the number of event data corresponding to each image block, at least one image block containing a moving object; and generating, based on the determined image blocks, an object candidate frame that points to the detected moving object.
  • Optionally, the method according to the present invention further comprises the step of: mapping the event data within a predetermined duration onto the preset image template to generate a temporal plane image corresponding to the predetermined duration, wherein each event datum includes the coordinate position and timestamp of the triggered event.
  • Optionally, after the step of generating the object candidate frame, the method further includes the step of: using an image classification algorithm to identify, from the temporal plane image, the category of the moving object pointed to by the object candidate frame.
  • Optionally, the following formula is used to count the event data contained in each image block, where b_ij denotes the image block in row i and column j (i = 1, 2, ..., m; j = 1, 2, ..., n), H(b_ij) is the number of event data within the predetermined duration contained in b_ij, (x_k, y_k) is the coordinate position of the k-th event datum, N is the total number of event data received within the predetermined duration, δ(·) denotes the Dirac δ function, and [·] denotes the rounding function.
  • Optionally, the step of determining at least one image block containing a moving object according to the number of event data corresponding to each image block includes: screening out the image blocks whose number of contained event data is greater than a first preset value as the image blocks containing moving objects.
  • Optionally, the step of generating the object candidate frame includes: searching, among the determined image blocks, for pluralities of consecutive image blocks with shared edges; and, based on the found consecutive image blocks, generating at least one minimum circumscribed rectangle as an object candidate frame.
  • Optionally, the step of generating the object candidate frame further includes: calculating the overlap ratio between each object candidate frame detected within the current predetermined duration and those detected within the previous adjacent predetermined duration; and taking the object candidate frames whose overlap ratio is greater than a second preset value as the final object candidate frames corresponding to the current predetermined duration.
  • Optionally, before the step of dividing the preset image template into a plurality of image blocks of the same size, the method further includes the step of: generating the preset image template based on the pixel unit array in the dynamic vision sensor.
  • Optionally, the step of mapping the event data within a predetermined duration onto the preset image template to generate a temporal plane image corresponding to the duration includes: at each pixel of the preset image template, searching whether there is a triggered event whose coordinate position coincides with that of the pixel; and, according to the search result, assigning a binarized value to the pixel to generate the temporal plane image.
  • Optionally, the step of mapping the event data within a predetermined duration onto the preset image template to generate a temporal plane image corresponding to the duration includes: at each pixel of the preset image template, calculating the relationship value between its coordinate position and the coordinate positions of all triggered events; and updating the pixel value of the pixel based on the relationship value to generate the temporal plane image.
  • the first preset value is positively correlated with the size of the image block and the predetermined duration.
  • According to another aspect of the present invention, a computing device is provided, comprising: one or more processors; a memory; and one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods described above.
  • According to yet another aspect of the present invention, a computer-readable storage medium is provided, storing one or more programs that comprise instructions which, when executed by a computing device, cause the computing device to perform any of the methods described above.
  • According to still another aspect of the present invention, a moving object detection system is provided, comprising: the computing device as described above; and a dynamic vision sensor coupled to the computing device and adapted to trigger events based on the relative motion between objects in the field of view and the dynamic vision sensor, and to output an event data stream to the computing device.
  • According to the solution of the present invention, the preset image template is divided into several image blocks of the same size, and the number of event data in each image block is counted separately. This speeds up locating all possible regions of moving objects and avoids the traversal search over the entire image required by traditional algorithms, greatly reducing the required computing power.
  • In addition, the number of event data is counted per image block rather than pixel by pixel, which greatly reduces the amount of computation and improves the robustness of object localization.
  • FIG. 1 shows a schematic diagram of a moving object detection system 100 according to some embodiments of the present invention
  • FIG. 2 shows a schematic diagram of a computing device 200 according to some embodiments of the present invention
  • FIG. 3 shows a flowchart of a moving object detection method 300 according to an embodiment of the present invention
  • 4A and 4B show schematic diagrams of determining object candidate frames according to an embodiment of the present invention
  • FIG. 5 shows a flowchart of a moving object detection method 500 according to another embodiment of the present invention.
  • FIG. 6 shows a schematic diagram of a moving object detection result according to an embodiment of the present invention.
  • DVS: Dynamic Vision Sensor
  • The sensor has a pixel unit array composed of multiple pixel units; each pixel unit responds to and records areas of rapid light intensity change only when it senses a change in light intensity. That is, each pixel unit in the DVS can independently respond to and record areas where the light intensity changes rapidly.
  • The DVS adopts an event-triggered processing mechanism, so its output is an asynchronous event data stream rather than image frames. The event data stream contains, for example, light intensity change information (such as the light intensity change timestamp and the light intensity threshold) and the coordinate position of the triggered pixel unit.
  • 1) The response speed of the DVS is no longer limited by a traditional exposure time and frame rate, so it can detect high-speed objects moving at rates equivalent to up to 10,000 frames per second; 2) the DVS has a larger dynamic range and can accurately sense and output scene changes in low-light or highly exposed environments; 3) the DVS consumes less power; and 4) since each pixel unit responds to light intensity changes independently, the DVS is not affected by motion blur.
  • Based on this, a DVS-based moving object detection scheme is proposed. Taking the defects of existing moving object detection schemes into account, this scheme exploits the DVS characteristics of low data redundancy, fast response, and freedom from motion blur, and processes the output event data stream with an appropriate algorithm to realize rapid detection and recognition of moving objects, thereby addressing the high computing power and power consumption of traditional moving object detection algorithms and their limited accuracy and response speed.
  • FIG. 1 shows a schematic diagram of a moving object detection system 100 according to an embodiment of the present invention.
  • the system 100 includes a dynamic vision sensor (DVS) 110 and a computing device 200 that are coupled to each other.
  • FIG. 1 is only an example, and the embodiments of the present invention do not limit the number of DVSs and computing devices included in the system 100 .
  • The dynamic vision sensor 110 monitors the movement of objects in the field of view in real time. Once it detects that an object in the field of view is moving relative to the dynamic vision sensor 110 (i.e., the light in the field of view changes), a pixel event (or simply an "event") is triggered, and event data for the dynamic pixels (i.e., the pixel units whose brightness changes) are output.
  • The event data output over a period of time constitute the event data stream. Each event datum in the stream includes at least the coordinate position of the triggered event (i.e., the pixel unit whose brightness changed) and the timestamp of the trigger time.
  • the specific composition of the dynamic vision sensor 110 will not be elaborated here.
  • The computing device 200 receives the event data stream from the dynamic vision sensor 110 and processes it to detect moving objects. Further, the computing device 200 may also identify the category of each detected moving object (e.g., person, car, cat, soccer ball, etc.). Afterwards, the system 100 may perform subsequent processing based on the detection results, such as tracking the moving objects.
  • FIG. 2 shows a schematic block diagram of a computing device 200 according to an embodiment of the present invention.
  • computing device 200 typically includes system memory 206 and one or more processors 204 .
  • Memory bus 208 may be used for communication between processor 204 and system memory 206 .
  • The processor 204 may be any type of processor, including but not limited to a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination of these.
  • Processor 204 may include one or more levels of cache, such as L1 cache 210 and L2 cache 212 , processor core 214 , and registers 216 .
  • Exemplary processor cores 214 may include arithmetic logic units (ALUs), floating point units (FPUs), digital signal processing cores (DSP cores), or any combination thereof.
  • the example memory controller 218 may be used with the processor 204 , or in some implementations, the memory controller 218 may be an internal part of the processor 204 .
  • system memory 206 may be any type of memory including, but not limited to, volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof.
  • System memory 206 may include operating system 220 , one or more applications 222 , and program data 224 .
  • Applications 222 may be arranged to be executed by the one or more processors 204 on the operating system using program data 224.
  • Computing device 200 also includes storage device 232 including removable storage 236 and non-removable storage 238, both of which are connected to storage interface bus 234.
  • Computing device 200 may also include an interface bus 240 that facilitates communication from various interface devices (eg, output device 242 , peripheral interface 244 , and communication device 246 ) to base configuration 202 via bus/interface controller 230 .
  • Example output devices 242 include graphics processing unit 248 and audio processing unit 250 . They may be configured to facilitate communication via one or more A/V ports 252 with various external devices such as displays or speakers.
  • Example peripheral interfaces 244 may include serial interface controller 254 and parallel interface controller 256, which may be configured to facilitate communication via one or more I/O ports 258 and input devices such as keyboard, mouse, pen, etc.
  • the example communication device 246 may include a network controller 260 that may be arranged to facilitate communication via one or more communication ports 264 with one or more other computing devices 262 over a network communication link.
  • a network communication link may be one example of a communication medium.
  • Communication media may typically embody computer readable instructions, data structures, program modules in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
  • A "modulated data signal" may be a signal in which one or more of its characteristics are set or changed in such a manner as to encode information in the signal.
  • communication media may include wired media, such as wired or leased line networks, and various wireless media, such as acoustic, radio frequency (RF), microwave, infrared (IR), or other wireless media.
  • the term computer readable medium as used herein may include both storage media and communication media.
  • Computing device 200 may be implemented as part of a small-form-factor portable (or mobile) electronic device, such as a cellular telephone, digital camera, personal digital assistant (PDA), personal media player, wireless web-browsing device, personal headset, application-specific device, or a hybrid device that includes any of the above.
  • computing device 200 may be implemented as a micro-computing module or the like. The embodiments of the present invention do not limit this.
  • the computing device 200 is configured to perform the moving object detection method according to the present invention.
  • the application 222 of the computing device 200 includes a plurality of program instructions for executing the method 300 and the method 500 according to the present invention.
  • the computing device 200 can also be used as a part of the dynamic vision sensor 110 to process the event data stream to realize moving object detection.
  • FIG. 3 shows a flowchart of a moving object detection method 300 according to an embodiment of the present invention.
  • Method 300 is performed in the computing device 200. Note that, for brevity, the descriptions of method 300 and system 100 complement each other, and repeated parts are not described again.
  • the method 300 begins at step S310.
  • In step S310, the preset image template is divided into a plurality of image blocks of the same size.
  • the method 300 further includes the step of generating a preset image template. Specifically, based on the pixel unit array in the dynamic vision sensor 110, a preset image template is generated. In one embodiment, the size of the preset image template is consistent with the size of the pixel unit array. Assuming that the pixel unit array is a 20 ⁇ 30 array, the size of the preset image template is also 20 ⁇ 30. In other words, each pixel in the preset image template corresponds to a pixel unit in the pixel unit array.
  • the adjacent image blocks may be left and right adjacent image blocks, and/or upper and lower adjacent image blocks. All image blocks may overlap, or only image blocks in a partial area may overlap. The embodiments of the present invention do not limit this.
  • the size of the overlapping portion is smaller than the size of the image block.
  • each event data e(x, y, t) includes the coordinate position (x, y) of the corresponding triggered event and the timestamp t of the triggered time.
  • When acquiring the event data stream, the computing device 200 performs one moving object detection process on each segment of the event data stream within a predetermined duration, detecting the moving objects therein. For example, suppose the timestamp of the first event datum received in the current segment is t0; when the timestamp t of a subsequently received event datum satisfies t − t0 > T, where T is the predetermined duration, reception of event data for this segment stops.
  • step S320 the number of event data within a predetermined duration included in each image block is counted.
  • event data is triggered by the relative motion of objects in the field of view and the dynamic vision sensor 110 .
  • In one embodiment, the following formula is used to count the event data contained in each image block, where H(b_ij) is the number of event data within the predetermined duration contained in image block b_ij, (x_k, y_k) is the coordinate position of the k-th event datum, N is the total number of event data received within the predetermined duration, δ(·) denotes the Dirac δ function, and [·] denotes the rounding function.
  • step S330 at least one image block containing a moving object is determined according to the quantity of event data corresponding to each image block.
  • In one embodiment, all image blocks whose number of contained event data is greater than a first preset value are screened out as image blocks containing moving objects. These image blocks constitute the possible regions of moving objects.
  • the first preset value is positively related to the size of the image block and the predetermined duration. That is, the larger the size of the image block and the longer the predetermined duration T, the larger the first preset value.
  • In one example, when the corresponding image size is 1280×800, the size of each image block is 32×32, and the predetermined duration is 30 ms, the first preset value may take the value 100, although the invention is not limited thereto.
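Steps S320 and S330 can be sketched in a few lines. The sketch below counts events per block with integer division (standing in for the rounding function [·]) and then keeps the blocks whose count exceeds the first preset value; the block size, duration threshold, and function names are illustrative, not from the patent.

```python
# Count events per image block (step S320), then keep blocks whose count
# exceeds the first preset value (step S330). Values 32x32 and 100 follow
# the example in the text.
from collections import Counter

def count_events_per_block(events, block_w=32, block_h=32):
    """events: iterable of (x, y, t) tuples within one predetermined duration.
    Returns a Counter mapping block index (i, j) -> number of events."""
    counts = Counter()
    for x, y, t in events:
        i, j = y // block_h, x // block_w   # rounding [.] via integer division
        counts[(i, j)] += 1
    return counts

def blocks_with_motion(counts, first_preset_value=100):
    """Keep image blocks whose event count exceeds the first preset value."""
    return {ij for ij, n in counts.items() if n > first_preset_value}

# Example: a dense cluster of 150 events falls entirely inside block (1, 1).
events = [(40 + k % 8, 40 + k // 8, k) for k in range(150)]
counts = count_events_per_block(events)
print(blocks_with_motion(counts))  # {(1, 1)}
```

Because only block indices are updated per event, the cost is linear in the number of events and independent of the image resolution, which reflects the computing-power saving described above.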
  • FIG. 4A and 4B illustrate schematic diagrams of determining object candidate frames according to an embodiment of the present invention.
  • FIG. 4A shows a preset image template.
  • As shown, the preset image template is divided into 6 rows and 8 columns, for a total of 48 image blocks; the screened-out image blocks containing moving objects are depicted with bold lines, namely the 10 image blocks a, b, c, d, e, f, g, h, i, and j shown in FIG. 4A.
  • In step S340, an object candidate frame is generated based on the determined image blocks. The object candidate frame should contain the moving object to the greatest extent; that is, the object candidate frame points to the detected moving object. In one embodiment, the object candidate frame is obtained by merging the image blocks determined in step S330.
  • Specifically, among the determined image blocks, pluralities of consecutive image blocks with shared edges are searched for; then, based on the found consecutive image blocks, at least one minimum circumscribed rectangle is generated as a corresponding object candidate frame.
  • In FIG. 4A, image blocks a, b, c, d, e, f, and g form one group of consecutive image blocks sharing edges with one another; image blocks h, i, and j form another such group. That is, a total of two groups of consecutive image blocks are found. For each group, its minimum circumscribed rectangle is generated: the first group yields the object candidate frame 410 shown in FIG. 4B, and the second group yields the object candidate frame 420 shown in FIG. 4B.
  • In other words, the minimum circumscribed rectangle is computed per group of edge-sharing consecutive image blocks, ensuring that multiple separate moving objects are not merged together.
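The grouping and rectangle steps above can be sketched as follows. This is a minimal illustration, assuming 4-connectivity for "shared edges" and the 32×32 block size from the earlier example; the function names are illustrative.

```python
# Step S340 sketch: group screened blocks into edge-connected components,
# then take each group's minimum circumscribed rectangle as a candidate frame.
def connected_groups(blocks):
    """blocks: set of (i, j) block indices. Yields lists of edge-connected blocks."""
    remaining = set(blocks)
    while remaining:
        seed = remaining.pop()
        group, stack = [seed], [seed]
        while stack:
            i, j = stack.pop()
            for nb in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if nb in remaining:          # shared-edge neighbours only
                    remaining.remove(nb)
                    group.append(nb)
                    stack.append(nb)
        yield group

def min_circumscribed_rect(group, block_w=32, block_h=32):
    """Pixel-space bounding rectangle (x0, y0, x1, y1) of a group of blocks."""
    rows = [i for i, _ in group]
    cols = [j for _, j in group]
    return (min(cols) * block_w, min(rows) * block_h,
            (max(cols) + 1) * block_w, (max(rows) + 1) * block_h)

# Two separate groups produce two candidate frames, as in FIG. 4A/4B.
blocks = {(1, 1), (1, 2), (2, 1), (4, 5), (4, 6)}
frames = [min_circumscribed_rect(g) for g in connected_groups(blocks)]
```

Because each component gets its own rectangle, two spatially separate moving objects yield two frames rather than one merged frame.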
  • In summary, the method 300 divides the preset image template into several image blocks of the same size and counts the number of event data in each image block separately, which speeds up locating all possible regions of moving objects and avoids the traversal search over the entire image required by traditional algorithms, greatly reducing the required computing power. In addition, the number of event data is counted per image block rather than pixel by pixel, which greatly reduces the amount of computation and improves the robustness of object localization.
  • According to some embodiments of the present invention, the method further includes the step of: using the object candidate frames generated in two adjacent predetermined durations to further process the object candidate frames within the current predetermined duration and determine the final object candidate frames.
  • the following shows a process of further processing an object candidate frame according to an embodiment of the present invention to determine a final object candidate frame.
  • First, the overlap ratio between each object candidate frame generated within the current predetermined duration and those generated within the previous adjacent predetermined duration is calculated.
  • In one embodiment, the following formula is used to calculate the overlap rate O, where R_curr denotes an object candidate frame obtained in the current predetermined duration, R_prev denotes an object candidate frame obtained in the previous adjacent predetermined duration, ∩ denotes taking the overlapping part of the two object candidate frames, and Area(·) denotes computing the area of a region.
  • The overlap rate is calculated one by one for all pairs of object candidate frames.
  • the object candidate frame whose overlap ratio is greater than the second preset value is used as the final object candidate frame corresponding to the current predetermined time period.
  • the value range of the second preset value is generally [0.5, 0.8] to ensure the continuity of the detected moving objects in space and time.
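The temporal-consistency check can be sketched as below. The exact normalisation of the patent's overlap formula is not reproduced here; this sketch assumes an intersection-over-union ratio, which is one common choice, and the threshold follows the [0.5, 0.8] range given in the text.

```python
# Compute the overlap rate between a candidate frame from the current
# duration and one from the previous duration; keep the current frame only
# if the rate exceeds the second preset value.
def overlap_rate(r_curr, r_prev):
    """Rectangles as (x0, y0, x1, y1). Returns an IoU-style overlap rate."""
    ix0, iy0 = max(r_curr[0], r_prev[0]), max(r_curr[1], r_prev[1])
    ix1, iy1 = min(r_curr[2], r_prev[2]), min(r_curr[3], r_prev[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)   # Area(R_curr ∩ R_prev)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(r_curr) + area(r_prev) - inter
    return inter / union if union else 0.0

second_preset_value = 0.5   # typical range [0.5, 0.8] per the text
# A frame that shifted by half its width between durations overlaps too
# little (rate 1/3) and would be discarded as likely noise.
keep = overlap_rate((0, 0, 100, 100), (50, 0, 150, 100)) > second_preset_value
```

A frame produced by scattered noise events is unlikely to reappear in nearly the same place in the next duration, so it fails this check.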
  • In this way, the final object candidate frames for the current predetermined duration are determined according to their overlap ratio with those of the previous predetermined duration, which effectively prevents noisy event data from interfering with the detection results.
  • FIG. 5 shows a schematic flowchart of a moving object detection method 500 according to still another embodiment of the present invention.
  • the method 500 is executed on the basis of the method 300, so the content consistent with the method 300 is not repeated here, and is represented by the same step identifier.
  • According to this embodiment, when the computing device 200 receives the event data stream from the dynamic vision sensor 110, the method further includes the step of: mapping the event data within a predetermined duration onto the preset image template to generate a temporal plane image corresponding to that duration.
  • As shown in FIG. 5, on the one hand, step S510 is performed to generate a temporal plane image from the event data within a predetermined duration; on the other hand, as described above in steps S320 to S340, the number of event data within the predetermined duration is counted to generate the object candidate frames.
  • In one embodiment, a binarized temporal plane image is generated as follows. Specifically, at each pixel of the preset image template, it is searched whether there exists a triggered event whose coordinate position coincides with that of the pixel; according to the search result (i.e., coincident or not), a binarized value is assigned to the pixel to generate the temporal plane image. In other words, the pixel value is 255 if the coordinate position of some triggered event coincides with that of the pixel, and 0 otherwise. That is, I(x, y) = 255 if (x, y) equals (x_k, y_k) for some k, and I(x, y) = 0 otherwise, where:
  • (x k , y k ) represents the coordinate position of the triggered event
  • (x, y) represents the coordinate position of the pixel
  • I(x, y) represents the pixel value at (x, y).
  • In this embodiment, the pixel values are assigned 0 and 255, so the generated temporal plane image is a grayscale image. The embodiments of the present invention do not limit the specific assignment: it may also be 0 and 1, or 0 and 1023, and so on. It is even possible to assign pixel values directly from the timestamps of the triggered events.
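The binarized mapping above amounts to setting every event pixel to 255 on a zeroed template. A minimal sketch, assuming the 1280×800 template size from the earlier example and using NumPy for brevity:

```python
# Step S510 sketch (binarized variant): pixels whose coordinates match a
# triggered event are set to 255, all others stay 0.
import numpy as np

def binary_time_plane(events, width=1280, height=800):
    """events: iterable of (x, y, t). Returns an (H, W) uint8 image."""
    img = np.zeros((height, width), dtype=np.uint8)
    for x, y, t in events:
        img[y, x] = 255   # I(x, y) = 255 where a triggered event exists
    return img

img = binary_time_plane([(10, 20, 0.001), (11, 20, 0.002)])
```

Replacing 255 with 1, 1023, or the event timestamp, as the text notes, only changes the assignment on the marked pixels.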
  • In another embodiment, the temporal plane image is generated by accumulating event data. Specifically, at each pixel of the preset image template, the relationship value between its coordinate position and the coordinate positions of all triggered events is calculated; based on this relationship value, the pixel value of the corresponding pixel is updated to generate the temporal plane image. This can be expressed by the following formula:
  • (x k , y k ) represents the coordinate position of the triggered event
  • (x, y) represents the coordinate position of the pixel
  • I(x, y) represents the pixel value at (x, y)
  • N is the total number of event data received within the predetermined duration
  • ⁇ ( ) represents the Dirac delta function
  • C is a constant.
  • δ(x + y − x_k − y_k) + C, calculated using the Dirac function, represents the relationship value between the coordinate position of the pixel and the coordinate positions of all triggered events.
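Summing a Dirac-function hit over all events reduces, in the discrete case, to counting how many events were triggered at each pixel and adding the constant C. A minimal sketch under that reading (C and the template size are illustrative):

```python
# Step S510 sketch (accumulation variant): each pixel's value is the number
# of events triggered at its coordinates, plus a constant C.
import numpy as np

def accumulated_time_plane(events, width=1280, height=800, c=0):
    """events: iterable of (x, y, t). Returns an (H, W) int32 accumulation image."""
    img = np.full((height, width), c, dtype=np.int32)
    for x, y, t in events:
        img[y, x] += 1    # one delta-function hit per matching event
    return img

img = accumulated_time_plane([(5, 5, 0.1), (5, 5, 0.2), (6, 5, 0.3)], c=1)
```

Unlike the binarized variant, repeated events at the same pixel raise its value further, so faster-moving edges appear brighter in the accumulated image.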
  • Then, step S520 is executed: an image classification algorithm is used to identify, from the temporal plane image, the category of the moving object pointed to by the object candidate frame.
  • In one embodiment, the object candidate frame is first mapped onto the temporal plane image, and then the category of the object within the frame is identified using an image classification algorithm (e.g., SVM, MobileNet, etc.).
  • For example, a classification model may be trained in advance using training images labeled with object categories; the temporal plane image (or the image of the region corresponding to the object candidate frame, which is not limited in this embodiment of the present invention) is then input into the model, which processes the region corresponding to the object candidate frame and finally outputs the category of the object.
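The crop-and-classify step can be sketched as follows. The model interface, the nearest-neighbour resize, and the fixed input size are illustrative assumptions; any pre-trained classifier exposing a `predict` method (such as an SVM) could be plugged in.

```python
# Step S520 sketch: map a candidate frame onto the temporal plane image,
# crop that region, resize it to a fixed input size, and classify it.
import numpy as np

def classify_candidate(time_plane, frame, model, patch_size=(32, 32)):
    """frame: (x0, y0, x1, y1) in pixel coordinates; returns the predicted label."""
    x0, y0, x1, y1 = frame
    patch = time_plane[y0:y1, x0:x1].astype(np.float32)
    # Nearest-neighbour resample to a fixed size before classification.
    ys = np.linspace(0, patch.shape[0] - 1, patch_size[0]).astype(int)
    xs = np.linspace(0, patch.shape[1] - 1, patch_size[1]).astype(int)
    fixed = patch[np.ix_(ys, xs)]
    return model.predict(fixed.reshape(1, -1))[0]
```

Classifying only the candidate region, rather than the whole temporal plane image, preserves the computing-power saving of the block-based detection stage.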
  • FIG. 6 shows a schematic diagram of a moving object detection result according to an embodiment of the present invention.
  • As shown, a moving object is detected and enclosed by a rectangular frame (i.e., an object candidate frame).
  • In this scheme, the temporal plane image formed from the event data is used to classify and identify the detected moving objects, which reduces the missed and false detections that blurred images caused by fast object motion produce in traditional algorithms.
  • The modules, units, or components of the apparatus in the examples disclosed herein may be arranged in the apparatus as described in this embodiment, or alternatively may be located in one or more devices different from the apparatus in this example.
  • the modules in the preceding examples may be combined into one module or further divided into sub-modules.
  • modules in the device in the embodiment can be adaptively changed and arranged in one or more devices different from the embodiment.
  • The modules, units, or components in the embodiments may be combined into one module, unit, or component, and may further be divided into multiple sub-modules, sub-units, or sub-assemblies. All features disclosed in this specification (including the accompanying claims, abstract, and drawings), and all processes or units of any method or device so disclosed, may be combined in any combination, except where at least some of such features and/or processes or units are mutually exclusive.
  • Each feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.


Abstract

The invention discloses a moving object detection method and system. The moving object detection method includes the steps of: dividing a preset image template into multiple image blocks of the same size; counting the number of event data within a predetermined duration contained in each image block, the event data coming from a dynamic vision sensor and being triggered by the relative motion of objects in the field of view and the dynamic vision sensor; determining at least one image block containing a moving object according to the number of event data corresponding to each image block; and generating, based on the determined image blocks, an object candidate frame that points to the detected moving object. The invention also discloses a corresponding computing device.

Description

Method and System for Detecting a Moving Object
Technical Field
The present invention relates to the technical field of data processing, and in particular to a moving object detection solution.
Background Art
Object detection methods based on traditional image sensors usually require a traversal search over the entire image. These detection methods include traditional machine learning algorithms, such as Adaboost and Random Forest, as well as widely studied deep learning algorithms, such as YOLO, Faster RCNN and SSD. However, the target object usually occupies a relatively small proportion of the image, so a large amount of redundant computing power is spent on regions outside the target object, which poses a great challenge to the real-time performance of the algorithm. Moreover, when the object moves too fast, traditional images usually suffer from motion blur, so that the features of the moving object in the image become weak or changed, which in turn causes traditional object detection and recognition algorithms to fail.
In view of this, a new moving object detection solution is needed.
Summary of the Invention
The present invention provides a moving object detection method and system, in an effort to solve, or at least alleviate, at least one of the above problems.
According to one aspect of the present invention, a moving object detection method is provided, comprising the steps of: dividing a preset image template into multiple image blocks of the same size; counting the number of event data within a predetermined duration contained in each image block, the event data coming from a dynamic vision sensor and being triggered by the relative motion of objects in the field of view and the dynamic vision sensor; determining at least one image block containing a moving object according to the number of event data corresponding to each image block; and generating an object candidate frame based on the determined image blocks, the object candidate frame pointing to the detected moving object.
Optionally, the method according to the present invention further comprises the step of: mapping the event data within the predetermined duration onto the preset image template to generate a temporal plane image corresponding to the predetermined duration, wherein the event data contains the coordinate position and timestamp of the triggered event.
Optionally, in the method according to the present invention, after the step of generating the object candidate frame based on the determined image blocks, the method further comprises the step of: using an image classification algorithm to identify, from the temporal plane image, the category of the moving object pointed to by the object candidate frame.
Optionally, in the method according to the present invention, the event data contained in each image block is counted using the following formula:
H(b_ij) = Σ_{k=1}^{N} δ(i - 1 - [m·y_k/H]) · δ(j - 1 - [n·x_k/W])
where b_ij denotes the j-th image block in row i (i = 1, 2, ..., m; j = 1, 2, ..., n), H(b_ij) is the number of event data within the predetermined duration contained in b_ij, (x_k, y_k) denotes the coordinate position of the k-th event data, N is the total number of event data received within the predetermined duration, δ(·) denotes the Dirac delta function, and [·] denotes the rounding function.
Optionally, in the method according to the present invention, the step of determining at least one image block containing a moving object according to the number of event data corresponding to each image block comprises: selecting the image blocks whose number of contained event data is greater than a first preset value as image blocks containing a moving object.
Optionally, in the method according to the present invention, the step of generating an object candidate frame based on the determined image blocks comprises: finding, among the determined image blocks, multiple contiguous image blocks having shared edges; and generating, based on the found contiguous image blocks, at least one minimum bounding rectangle as an object candidate frame.
Optionally, in the method according to the present invention, the step of generating an object candidate frame based on the determined image blocks further comprises: calculating the overlap rate of each object candidate frame detected within the current predetermined duration and the immediately preceding predetermined duration, respectively; and taking the object candidate frames whose overlap rate is greater than a second preset value as the final object candidate frames corresponding to the current predetermined duration.
Optionally, in the method according to the present invention, before the step of dividing the preset image template into multiple image blocks of the same size, the method further comprises the step of: generating the preset image template based on the pixel unit array in the dynamic vision sensor.
Optionally, in the method according to the present invention, the step of mapping the event data within the predetermined duration onto the preset image template to generate the temporal plane image corresponding to that duration comprises: at each pixel of the preset image template, checking whether there is a triggered event whose coordinate position coincides with that of the pixel; and assigning a binarized value to the pixel value of the pixel according to the result of the check, to generate the temporal plane image.
Optionally, in the method according to the present invention, the step of mapping the event data within the predetermined duration onto the preset image template to generate the temporal plane image corresponding to that duration comprises: at each pixel of the preset image template, calculating a relation value between its coordinate position and the coordinate positions of all triggered events; and updating the pixel value of that pixel based on the relation value, to generate the temporal plane image.
Optionally, in the method according to the present invention, the first preset value is positively correlated with the size of the image blocks and the predetermined duration.
According to another aspect of the present invention, a computing device is provided, comprising: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods described above.
According to yet another aspect of the present invention, a computer-readable storage medium storing one or more programs is provided, the one or more programs comprising instructions which, when executed by a computing device, cause the computing device to perform any of the methods described above.
According to yet another aspect of the present invention, a moving object detection system is provided, comprising: the computing device described above; and a dynamic vision sensor coupled to the computing device and adapted to trigger events based on the relative motion of objects in the field of view and the dynamic vision sensor, and to output an event data stream to the computing device.
In summary, according to the solution of the present invention, the preset image template is divided into a number of image blocks of the same size, and the number of event data in each image block is counted separately. This speeds up the localization of all possible regions of a moving object, avoids the traversal search over the entire image region used in traditional algorithms, and greatly reduces the required computing power. Meanwhile, counting event data per image block rather than pixel by pixel greatly reduces the amount of computation on the one hand, and improves the robustness of object localization on the other.
Brief Description of the Drawings
To achieve the above and related objects, certain illustrative aspects are described herein in conjunction with the following description and drawings. These aspects indicate various ways in which the principles disclosed herein may be practiced, and all aspects and their equivalents are intended to fall within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent by reading the following detailed description in conjunction with the drawings. Throughout the disclosure, the same reference numerals generally refer to the same components or elements.
FIG. 1 shows a schematic diagram of a moving object detection system 100 according to some embodiments of the present invention;
FIG. 2 shows a schematic diagram of a computing device 200 according to some embodiments of the present invention;
FIG. 3 shows a flowchart of a moving object detection method 300 according to an embodiment of the present invention;
FIG. 4A and FIG. 4B show schematic diagrams of determining object candidate frames according to an embodiment of the present invention;
FIG. 5 shows a flowchart of a moving object detection method 500 according to another embodiment of the present invention;
FIG. 6 shows a schematic diagram of a moving object detection result according to an embodiment of the present invention.
Detailed Description of the Embodiments
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly, and so that the scope of the present disclosure can be fully conveyed to those skilled in the art.
In recent years, the Dynamic Vision Sensor (DVS) has received increasing attention and found growing application in the field of computer vision. A DVS is a biomimetic vision sensor that mimics the human retina based on pulse-triggered neurons. The sensor contains a pixel unit array composed of multiple pixel units, in which each pixel unit responds only when it senses a change in light intensity and records only the regions where the light intensity changes rapidly. In other words, each pixel unit within the DVS can independently respond to and record regions of rapid light intensity change. Since the DVS adopts an event-triggered processing mechanism, its output is an asynchronous event data stream rather than image frames; the event data stream contains, for example, light intensity change information (e.g., the timestamp of the light intensity change and the light intensity threshold) and the coordinate position of the triggered pixel unit.
Based on the above working principle, the advantages of dynamic vision sensors over traditional vision sensors can be summarized as follows: 1) the response speed of the DVS is no longer limited by traditional exposure time and frame rate, and it can detect high-speed objects moving at rates of up to ten thousand frames per second; 2) the DVS has a larger dynamic range, and can accurately sense and output scene changes in low-light or high-exposure environments; 3) the DVS has lower power consumption; 4) since each pixel unit of the DVS responds to light intensity changes independently, the DVS is not affected by motion blur.
According to embodiments of the present invention, a DVS-based moving object detection solution is proposed. In view of the shortcomings of existing moving object detection solutions, this solution takes advantage of the DVS's low data redundancy, fast response, and immunity to motion blur, and processes its output event data stream with certain algorithms to achieve fast detection and recognition of moving objects, so as to solve the problems of traditional moving object detection algorithms such as high demands on computing power and power consumption, and low performance in accuracy and response speed.
FIG. 1 shows a schematic diagram of a moving object detection system 100 according to an embodiment of the present invention. As shown in FIG. 1, the system 100 includes a dynamic vision sensor (DVS) 110 and a computing device 200 coupled to each other. It should be understood that FIG. 1 is only an example, and the embodiments of the present invention do not limit the number of DVSs and computing devices included in the system 100.
The dynamic vision sensor 110 monitors motion changes of objects in the field of view in real time. Once it detects that an object in the field of view moves (relative to the dynamic vision sensor 110), i.e., the light in the field of view changes, it triggers a pixel event (or simply an "event") and outputs event data of the dynamic pixels (i.e., the pixel units whose brightness has changed). A number of event data output over a period of time constitute an event data stream. Each event data in the event data stream contains at least the coordinate position of the triggered event (i.e., the pixel unit whose brightness has changed) and the timestamp of the triggering moment. The specific composition of the dynamic vision sensor 110 is not elaborated here.
The computing device 200 receives the event data stream from the dynamic vision sensor 110 and processes it to detect moving objects. Furthermore, the computing device 200 can also identify the category of the detected moving objects (e.g., person, car, cat, football, etc.). Afterwards, the system 100 can perform subsequent processing based on the detection result, such as tracking the moving objects.
According to an embodiment of the present invention, FIG. 2 shows a schematic block diagram of a computing device 200.
As shown in FIG. 2, in a basic configuration 202, the computing device 200 typically includes a system memory 206 and one or more processors 204. A memory bus 208 may be used for communication between the processors 204 and the system memory 206.
Depending on the desired configuration, the processor 204 may be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor 204 may include one or more levels of cache, such as a level-1 cache 210 and a level-2 cache 212, a processor core 214, and registers 216. An exemplary processor core 214 may include an arithmetic logic unit (ALU), a floating-point unit (FPU), a digital signal processing core (DSP core), or any combination thereof. An exemplary memory controller 218 may be used with the processor 204, or in some implementations, the memory controller 218 may be an internal part of the processor 204.
Depending on the desired configuration, the system memory 206 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. The system memory 206 may include an operating system 220, one or more applications 222, and program data 224. In some implementations, the applications 222 may be arranged to be executed by the one or more processors 204 with the program data 224 on the operating system.
The computing device 200 also includes a storage device 232, which includes removable storage 236 and non-removable storage 238, both of which are connected to a storage interface bus 234.
The computing device 200 may also include an interface bus 240 that facilitates communication from various interface devices (e.g., output devices 242, peripheral interfaces 244, and communication devices 246) to the basic configuration 202 via a bus/interface controller 230. Exemplary output devices 242 include a graphics processing unit 248 and an audio processing unit 250, which may be configured to facilitate communication with various external devices such as displays or speakers via one or more A/V ports 252. Exemplary peripheral interfaces 244 may include a serial interface controller 254 and a parallel interface controller 256, which may be configured to facilitate communication via one or more I/O ports 258 with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.). An exemplary communication device 246 may include a network controller 260, which may be arranged to facilitate communication with one or more other computing devices 262 over a network communication link via one or more communication ports 264.
A network communication link may be one example of a communication medium. A communication medium may typically be embodied as computer-readable instructions, data structures, or program modules in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery medium. A "modulated data signal" may be a signal in which one or more of its characteristics are set or changed in such a way as to encode information in the signal. As non-limiting examples, communication media may include wired media such as wired networks or dedicated networks, and various wireless media such as sound, radio frequency (RF), microwave, infrared (IR), or other wireless media. The term computer-readable medium as used herein may include both storage media and communication media.
Generally, the computing device 200 may be implemented as part of a small-sized portable (or mobile) electronic device, such as a cellular phone, a digital camera, a personal digital assistant (PDA), a personal media player device, a wireless web-browsing device, a personal head-mounted device, an application-specific device, or a hybrid device that includes any of the above functions. In an embodiment according to the present invention, the computing device 200 may be implemented as a micro computing module or the like. The embodiments of the present invention are not limited in this respect.
In an embodiment according to the present invention, the computing device 200 is configured to perform the moving object detection method according to the present invention. The applications 222 of the computing device 200 contain multiple program instructions for performing the method 300 and method 500 according to the present invention.
It should be understood that, provided the dynamic vision sensor 110 has sufficient storage space and computing power, the computing device 200 may also be part of the dynamic vision sensor 110 to process the event data stream and implement moving object detection.
FIG. 3 shows a flowchart of a moving object detection method 300 according to an embodiment of the present invention. The method 300 is executed in the computing device 200. It should be noted that, due to space limitations, the descriptions of the method 300 and the system 100 complement each other, and repeated parts are not elaborated.
As shown in FIG. 3, the method 300 begins at step S310.
In step S310, the preset image template is divided into multiple image blocks of the same size.
According to an embodiment of the present invention, the method 300 further includes a step of generating the preset image template. Specifically, the preset image template is generated based on the pixel unit array in the dynamic vision sensor 110. In one embodiment, the size of the preset image template is consistent with that of the pixel unit array. Assuming the pixel unit array is a 20×30 array, the size of the preset image template is also 20×30. In other words, each pixel in the preset image template corresponds to one pixel unit in the pixel unit array. In one embodiment, assuming the preset image template has length H and width W, and it is divided into m rows and n columns to obtain multiple image blocks, the size of each image block is: length H/m, width W/n. FIG. 4 shows a division manner according to an embodiment of the present invention, where b_ij (i = 1, 2, ..., m; j = 1, 2, ..., n) denotes the number of the image block.
In other embodiments, when dividing the preset image template, adjacent image blocks may also overlap to a certain extent. Adjacent image blocks may be horizontally adjacent image blocks and/or vertically adjacent image blocks. All image blocks may overlap, or only the image blocks within some regions may overlap; the embodiments of the present invention are not limited in this respect. Usually, the size of the overlapping part is smaller than the size of an image block. This division manner makes the processing granularity of the algorithm finer and can improve the precision of the finally generated object candidate frames to a certain extent, but inevitably adds some computing cost to the subsequent steps.
Within a preset time period T, the computing device 200 receives and processes the event data stream output by the DVS, continuously or with sampling. Each event data e(x, y, t) contains the coordinate position (x, y) of its corresponding triggered event and the timestamp t of the triggering moment. According to an embodiment, when acquiring the event data stream, the computing device 200 performs a moving object detection process once for each predetermined duration of event data, detecting the moving objects therein. Denote the timestamp of the first event data received in this time period as t_0; when the timestamp t of a subsequently received event data satisfies t - t_0 > T, reception of event data stops; T is the predetermined duration.
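The windowing rule above (collect events until a timestamp satisfies t - t_0 > T, then start a new window) can be sketched as follows. This is a minimal illustration, assuming events arrive as time-ordered (x, y, t) tuples; the function name and field order are not from the patent:

```python
def split_into_windows(events, T):
    """Group a time-ordered event stream of (x, y, t) tuples into
    consecutive windows of duration T (same time unit as t)."""
    windows = []
    current, t0 = [], None
    for x, y, t in events:
        if t0 is None:
            t0 = t  # timestamp of the first event in this window
        if t - t0 > T:  # window is full: close it and start a new one
            windows.append(current)
            current, t0 = [], t
        current.append((x, y, t))
    if current:
        windows.append(current)
    return windows
```

For example, with T = 30 and timestamps 0, 10, 40, 45, 80, the stream splits into three windows of 2, 2 and 1 events.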
Therefore, in step S320, the number of event data within the predetermined duration contained in each image block is counted. As mentioned above, the event data are triggered by the relative motion of objects in the field of view and the dynamic vision sensor 110.
According to an embodiment, the event data contained in each image block is counted using the following formula:
H(b_ij) = Σ_{k=1}^{N} δ(i - 1 - [m·y_k/H]) · δ(j - 1 - [n·x_k/W])
where b_ij denotes the j-th image block in row i (i = 1, 2, ..., m; j = 1, 2, ..., n), H(b_ij) is the number of event data within the predetermined duration contained in b_ij, (x_k, y_k) denotes the coordinate position of the k-th event data, N is the total number of event data received within the predetermined duration, δ(·) denotes the Dirac delta function, and [·] denotes the rounding function.
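The per-block count above amounts to a 2D histogram: each event is floored into a block index and the block's counter is incremented. A minimal sketch, assuming an H×W template split into m×n blocks and events given as (x, y, t) tuples (names are illustrative, not from the patent):

```python
def block_histogram(events, H, W, m, n):
    """Count events per image block b_ij: an event at (x, y) falls in
    row i = [y * m / H] and column j = [x * n / W] (0-based here),
    mirroring the Dirac-delta / rounding formulation."""
    counts = [[0] * n for _ in range(m)]
    for x, y, _t in events:
        i = int(y * m / H)  # block row from the vertical coordinate
        j = int(x * n / W)  # block column from the horizontal coordinate
        counts[i][j] += 1
    return counts
```

The image blocks whose count exceeds the first preset value can then be selected directly from the returned grid.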
Then, in step S330, at least one image block containing a moving object is determined according to the number of event data corresponding to each image block.
According to an embodiment of the present invention, all image blocks whose number of contained event data is greater than a first preset value are selected as image blocks containing a moving object. These image blocks constitute the possible regions of the moving objects.
The first preset value is positively correlated with the size of the image blocks and the predetermined duration. That is, the larger the image block size and the longer the predetermined duration T, the larger the first preset value. In one embodiment, the first preset value is 100, with a corresponding image size of 1280×800, an image block size of 32×32, and a predetermined duration of 30 ms, but it is not limited thereto.
FIG. 4A and FIG. 4B show schematic diagrams of determining object candidate frames according to an embodiment of the present invention. FIG. 4A shows a preset image template. As shown in FIG. 4A, the preset image template is divided into 6 rows and 8 columns, i.e., 48 image blocks in total, and the selected image blocks containing moving objects are outlined with bold lines, namely the 10 image blocks a, b, c, d, e, f, g, h, i and j shown in FIG. 4A.
Subsequently, in step S340, an object candidate frame is generated based on the determined image blocks. The object candidate frame contains the moving object to the greatest extent; that is, the object candidate frame points to the detected moving object.
In one embodiment, when there is more than one moving object in the field of view at the same time, there will usually also be more than one object candidate frame, each pointing to a respective moving object.
According to an embodiment of the present invention, the object candidate frames are obtained by merging all the image blocks determined in step S330. In one embodiment, contiguous image blocks that share edges are first found among the determined image blocks; then, based on the found contiguous image blocks, at least one minimum bounding rectangle is generated as the corresponding object candidate frame.
Continuing with FIG. 4A, image blocks a, b, c, d, e, f and g are contiguous image blocks sharing edges with one another, and image blocks h, i and j are contiguous image blocks sharing edges with one another. That is, two groups of contiguous image blocks are found in total. Then, based on these two groups of contiguous image blocks, their minimum bounding rectangles are generated respectively. Based on the group of contiguous image blocks "image block a to image block g", one minimum bounding rectangle can be generated, namely the object candidate frame 410 shown in FIG. 4B; based on the group "image block h to image block j", another minimum bounding rectangle can be generated, namely the object candidate frame 420 shown in FIG. 4B.
That is to say, when generating object candidate frames from the determined image blocks, the minimum bounding rectangle is not computed over all determined image blocks, but over contiguous image blocks sharing edges, so as to ensure that multiple separate moving objects are not merged together.
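Grouping edge-sharing blocks and taking one minimum bounding rectangle per group is a connected-component labeling over the selected block grid. A minimal sketch under the assumption that selected blocks are given as a set of (row, column) indices (4-connectivity, since only shared edges count):

```python
from collections import deque

def candidate_boxes(selected):
    """Merge edge-sharing (4-connected) selected blocks into groups and
    return one minimum bounding rectangle per group, as
    (row0, col0, row1, col1) in block coordinates, inclusive."""
    seen, boxes = set(), []
    for start in selected:
        if start in seen:
            continue
        # BFS over blocks that share an edge with the current group
        queue, group = deque([start]), []
        seen.add(start)
        while queue:
            i, j = queue.popleft()
            group.append((i, j))
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = (i + di, j + dj)
                if nb in selected and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        rows = [i for i, _ in group]
        cols = [j for _, j in group]
        boxes.append((min(rows), min(cols), max(rows), max(cols)))
    return boxes
```

Two separated groups of blocks thus yield two rectangles, so separate moving objects stay in separate candidate frames.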
At this point, through the one or more generated object candidate frames, the one or more moving objects in the field of view within the predetermined duration can be confirmed.
Based on the above description, the method 300 divides the preset image template into a number of image blocks of the same size and counts the number of event data in each image block separately. This speeds up the localization of all possible regions of a moving object, avoids the traversal search over the entire image region used in traditional algorithms, and greatly reduces the required computing power. Meanwhile, counting event data per image block rather than pixel by pixel greatly reduces the amount of computation on the one hand, and improves the robustness of object localization on the other.
According to other embodiments of the present invention, considering that noise event data can interfere with the detection result, after the object candidate frames are generated in the above manner, the method further includes the step of: further processing the object candidate frames of the current predetermined duration by using the object candidate frames generated in two adjacent predetermined durations, to determine the final object candidate frames.
The following shows the process of further processing the object candidate frames to determine the final object candidate frames according to an embodiment of the present invention.
In one embodiment, the overlap rate of each object candidate frame generated within the current predetermined duration and the immediately preceding predetermined duration is first calculated. The overlap rate O is calculated with the following formula:
O = Area(R_curr ∩ R_prev) / (Area(R_curr) + Area(R_prev) - Area(R_curr ∩ R_prev))
where R_curr denotes an object candidate frame obtained within the current predetermined duration, R_prev denotes an object candidate frame obtained within the immediately preceding predetermined duration, ∩ denotes the operation of taking the overlapping part of the two object candidate frames, and Area(·) denotes computing the area.
Optionally, when calculating the overlap rate, if there is more than one object candidate frame within the current predetermined duration and/or the immediately preceding predetermined duration, the overlap rate is calculated one by one for all object candidate frames. Of course, it is also possible to select, from the preceding predetermined duration, only those object candidate frames that fall within a region of interest or within a certain distance of the object candidate frames of the current predetermined duration for the overlap rate calculation; the embodiments of the present invention are not limited in this respect.
Afterwards, the object candidate frames whose overlap rate is greater than a second preset value are taken as the final object candidate frames corresponding to the current predetermined duration. In some preferred embodiments, the second preset value is generally in the range [0.5, 0.8], to ensure the spatial and temporal continuity of the detected moving objects.
Determining the final object candidate frames of the current predetermined duration according to the overlap rate with the object candidate frames of the preceding predetermined duration can effectively avoid interference of some noise event data with the detection result.
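The temporal filtering step above can be sketched as follows. The overlap-rate formula appears only as an image in the source, so intersection-over-union is used here as one plausible reading; boxes are assumed to be axis-aligned (x0, y0, x1, y1) tuples:

```python
def overlap_rate(a, b):
    """Overlap rate of two axis-aligned boxes (x0, y0, x1, y1);
    intersection-over-union is assumed as the definition."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def keep_stable_boxes(curr, prev, second_preset=0.5):
    """Keep a current-window box only if it overlaps some box from the
    previous window by more than the second preset value."""
    return [c for c in curr
            if any(overlap_rate(c, p) > second_preset for p in prev)]
```

A box that appears in the current window but has no sufficiently overlapping counterpart in the previous window is discarded as probable noise.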
FIG. 5 shows a flow diagram of a moving object detection method 500 according to yet another embodiment of the present invention. The method 500 is performed on the basis of the method 300, so content consistent with the method 300 is not repeated here, and the same step numbers are used.
According to further embodiments of the present invention, when receiving the event data stream from the dynamic vision sensor 110, the computing device 200 further performs the step of: mapping the event data within the predetermined duration onto the preset image template to generate the temporal plane image corresponding to that predetermined duration.
As shown in FIG. 5, after the computing device 200 performs step S310 to generate the preset image template, on the one hand it performs step S510 to generate the temporal plane image from the event data within the predetermined duration; on the other hand, as described in steps S320 to S340 above, it counts the number of event data within the predetermined duration to generate the object candidate frames.
According to an embodiment, a binarized temporal plane image is generated as follows. Specifically, at each pixel of the preset image template, whether there is a triggered event whose coordinate position coincides with that of the pixel is checked, and according to the result of the check (i.e., the coordinate positions coincide or not), a binarized value is assigned to the pixel value of that pixel, to generate the temporal plane image. In other words, if the coordinate position of a triggered event coincides with the coordinate position of the pixel, the pixel value of that pixel is 255; if not, the pixel value of that pixel is 0. As shown in the following formula:
I(x, y) = 255, if (x, y) = (x_k, y_k) for some k; I(x, y) = 0, otherwise
where (x_k, y_k) denotes the coordinate position of a triggered event, (x, y) denotes the coordinate position of a pixel, and I(x, y) denotes the pixel value at (x, y).
It should be understood that assigning pixel values of 0 and 255 here, so that the generated temporal plane image is a grayscale image, is only an example. The embodiments of the present invention do not restrict the specific assignment; it could also be 0 and 1, or 0 and 1023, and so on. The timestamp of the triggered event can even be used directly to assign the pixel value.
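The binarized mapping above can be sketched as follows, assuming an H×W template and events given as (x, y, t) tuples (function and parameter names are illustrative):

```python
def binary_time_plane(events, H, W, on=255, off=0):
    """Binarized temporal plane image: a pixel gets `on` if some event
    was triggered at its coordinates within the window, else `off`."""
    triggered = {(x, y) for x, y, _t in events}
    return [[on if (x, y) in triggered else off for x in range(W)]
            for y in range(H)]
```

Passing `on=1` or `on=1023` reproduces the alternative assignments mentioned in the text.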
According to another embodiment, the temporal plane image is generated by accumulating event data. Specifically, at each pixel of the preset image template, a relation value between its coordinate position and the coordinate positions of all triggered events is calculated; then, based on the relation value, the pixel value of the corresponding pixel is updated, to generate the temporal plane image. This can be expressed by the following formula:
I(x, y) = Σ_{k=1}^{N} δ(x + y - x_k - y_k) + C
where (x_k, y_k) denotes the coordinate position of a triggered event, (x, y) denotes the coordinate position of a pixel, I(x, y) denotes the pixel value at (x, y), N is the total number of event data received within the predetermined duration, δ(·) denotes the Dirac delta function, and C is a constant. Here, δ(x + y - x_k - y_k) + C calculated with the Dirac function represents the relation value between the coordinate position of the pixel and the coordinate positions of all triggered events.
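The accumulation mapping can be sketched as follows; the Dirac term is read here as contributing 1 when an event's coordinates match the pixel exactly, which is one plausible discrete reading of the formula:

```python
def accumulated_time_plane(events, H, W, C=0):
    """Event-accumulation temporal plane image: each event at (x, y)
    increments the pixel value at (x, y); C is a constant offset."""
    img = [[C] * W for _ in range(H)]
    for x, y, _t in events:
        img[y][x] += 1  # discrete delta: +1 where coordinates match
    return img
```

Unlike the binarized variant, repeated events at the same pixel accumulate, so busier pixels receive larger values.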
The above only exemplarily shows ways of generating the temporal plane image according to some embodiments of the present invention. It should be understood that, on this basis, any method of generating a temporal plane image from an event data stream can be combined with the embodiments of the present invention to implement the moving object detection solution.
Afterwards, once the object candidate frames corresponding to the predetermined duration have been generated, step S520 is performed: using an image classification algorithm, the category of the moving object pointed to by each object candidate frame is identified from the temporal plane image.
According to an embodiment, the object candidate frames are first mapped onto the temporal plane image, and then an image classification algorithm (e.g., SVM, MobileNet, etc.) is used to identify the category of the object in each object candidate frame. In one embodiment, a classification model can be trained in advance using training images labeled with categories; the temporal plane image (or the image of the region corresponding to the object candidate frame, which is not limited in the embodiments of the present invention) is then input into the classification model, which processes the region corresponding to the object candidate frame and finally outputs the category of the object.
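The mapping-and-classification step can be sketched as follows. The classifier is a stand-in callable for any pre-trained model (e.g., an SVM or MobileNet wrapper); nothing about its interface is specified by the patent:

```python
def classify_candidates(image, boxes, classifier):
    """Crop each candidate-frame region from the temporal plane image
    (a list of rows) and pass it to a classifier callable, which
    returns a category label for the crop."""
    results = []
    for (x0, y0, x1, y1) in boxes:
        crop = [row[x0:x1] for row in image[y0:y1]]  # candidate region
        results.append(classifier(crop))
    return results
```

In practice the classifier would be the model trained on labeled images; here any callable taking a crop and returning a label works.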
FIG. 6 shows a schematic diagram of a moving object detection result according to an embodiment of the present invention. As shown in FIG. 6, in this temporal plane image, one moving object is detected and enclosed with a rectangular frame (i.e., an object candidate frame). After classification and recognition, the category of the moving object is confirmed and output as "person".
According to the method 500 of the present invention, using the temporal plane image formed from the event data to classify and recognize the detected moving objects can reduce the missed detections and false detections caused in traditional algorithms by image blur due to fast object motion.
In the description provided herein, numerous specific details are set forth. However, it will be understood that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be understood that, in order to streamline the present disclosure and aid in the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the invention, the various features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art should understand that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in the embodiments, or alternatively may be located in one or more devices different from the devices in the examples. The modules in the preceding examples may be combined into one module or may furthermore be divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components in the embodiments may be combined into one module or unit or component, and furthermore they may be divided into multiple sub-modules or sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings), and all processes or units of any method or device so disclosed, may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
Furthermore, some of the embodiments are described herein as methods or combinations of method elements that can be implemented by a processor of a computer system or by other means of carrying out the functions. Thus, a processor having the necessary instructions for implementing such a method or method element forms a means for implementing the method or method element. Furthermore, an element described herein of an apparatus embodiment is an example of a means for carrying out the function performed by the element for the purpose of implementing the invention.
As used herein, unless otherwise specified, the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of the above description, will appreciate that other embodiments can be devised within the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and not to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. With respect to the scope of the invention, the disclosure made herein is illustrative rather than restrictive, and the scope of the invention is defined by the appended claims.

Claims (14)

  1. A moving object detection method, comprising the steps of:
    dividing a preset image template into multiple image blocks of the same size;
    counting the number of event data within a predetermined duration contained in each of the image blocks, the event data coming from a dynamic vision sensor and being triggered by the relative motion of objects in the field of view and the dynamic vision sensor;
    determining at least one image block containing a moving object according to the number of event data corresponding to each image block; and
    generating an object candidate frame based on the determined image blocks, the object candidate frame pointing to the detected moving object.
  2. The method of claim 1, further comprising the step of:
    mapping the event data within the predetermined duration onto the preset image template to generate a temporal plane image corresponding to the predetermined duration,
    wherein the event data contains the coordinate position and timestamp of the triggered event.
  3. The method of claim 2, wherein after the step of generating an object candidate frame based on the determined image blocks, the method further comprises the step of:
    using an image classification algorithm to identify, from the temporal plane image, the category of the moving object pointed to by the object candidate frame.
  4. The method of any one of claims 1-3, wherein the step of counting the number of event data within the predetermined duration contained in each image block comprises:
    counting the event data contained in each image block using the following formula:
    H(b_ij) = Σ_{k=1}^{N} δ(i - 1 - [m·y_k/H]) · δ(j - 1 - [n·x_k/W])
    where b_ij denotes the j-th image block in row i (i = 1, 2, ..., m; j = 1, 2, ..., n), H(b_ij) is the number of event data within the predetermined duration contained in b_ij, (x_k, y_k) denotes the coordinate position of the k-th event data, N is the total number of event data received within the predetermined duration, δ(·) denotes the Dirac delta function, and [·] denotes the rounding function.
  5. The method of any one of claims 1-4, wherein the step of determining at least one image block containing a moving object according to the number of event data corresponding to each image block comprises:
    selecting the image blocks whose number of contained event data is greater than a first preset value as image blocks containing a moving object.
  6. The method of any one of claims 1-5, wherein the step of generating an object candidate frame based on the determined image blocks comprises:
    finding, among the determined image blocks, multiple contiguous image blocks having shared edges;
    generating, based on the found contiguous image blocks, at least one minimum bounding rectangle as an object candidate frame.
  7. The method of any one of claims 1-6, wherein the step of generating an object candidate frame based on the determined image blocks further comprises:
    calculating the overlap rate of each object candidate frame detected within the current predetermined duration and the immediately preceding predetermined duration, respectively;
    taking the object candidate frames whose overlap rate is greater than a second preset value as the final object candidate frames corresponding to the current predetermined duration.
  8. The method of any one of claims 1-7, wherein before the step of dividing the preset image template into multiple image blocks of the same size, the method further comprises the step of:
    generating the preset image template based on the pixel unit array in the dynamic vision sensor.
  9. The method of claim 2, wherein the step of mapping the event data within the predetermined duration onto the preset image template to generate the temporal plane image corresponding to that duration comprises:
    at each pixel of the preset image template, checking whether there is a triggered event whose coordinate position coincides with that of the pixel; and
    assigning a binarized value to the pixel value of the pixel according to the result of the check, to generate the temporal plane image.
  10. The method of claim 2, wherein the step of mapping the event data within the predetermined duration onto the preset image template to generate the temporal plane image corresponding to that duration comprises:
    at each pixel of the preset image template, calculating a relation value between its coordinate position and the coordinate positions of all triggered events;
    updating the pixel value of that pixel based on the relation value, to generate the temporal plane image.
  11. The method of claim 5, wherein the first preset value is positively correlated with the size of the image blocks and the predetermined duration.
  12. A computing device, comprising:
    one or more processors; and
    a memory;
    one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing any one of the methods of claims 1-11.
  13. A computer-readable storage medium storing one or more programs, the one or more programs comprising instructions which, when executed by a computing device, cause the computing device to perform any one of the methods of claims 1-11.
  14. A moving object detection system, comprising:
    the computing device of claim 12; and
    a dynamic vision sensor coupled to the computing device and adapted to trigger events based on the relative motion of objects in the field of view and the dynamic vision sensor, and to output an event data stream to the computing device.
PCT/CN2021/075434 2021-02-02 2021-02-05 一种运动物体检测方法及系统 WO2022165735A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21923755.9A EP4290404A1 (en) 2021-02-02 2021-02-05 Method and system for detecting moving object
US18/226,818 US20230368397A1 (en) 2021-02-02 2023-07-27 Method and system for detecting moving object

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110145826.7 2021-02-02
CN202110145826.7A CN112966556B (zh) 2021-02-02 2021-02-02 一种运动物体检测方法及系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/226,818 Continuation US20230368397A1 (en) 2021-02-02 2023-07-27 Method and system for detecting moving object

Publications (1)

Publication Number Publication Date
WO2022165735A1 true WO2022165735A1 (zh) 2022-08-11

Family

ID=76272097

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/075434 WO2022165735A1 (zh) 2021-02-02 2021-02-05 一种运动物体检测方法及系统

Country Status (4)

Country Link
US (1) US20230368397A1 (zh)
EP (1) EP4290404A1 (zh)
CN (1) CN112966556B (zh)
WO (1) WO2022165735A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115588121A (zh) * 2022-11-03 2023-01-10 腾晖科技建筑智能(深圳)有限公司 基于传感数据和图像序列的塔吊吊物类别检测方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130322766A1 (en) * 2012-05-30 2013-12-05 Samsung Electronics Co., Ltd. Method of detecting global motion and global motion detector, and digital image stabilization (dis) method and circuit including the same
CN108765454A (zh) * 2018-04-25 2018-11-06 深圳市中电数通智慧安全科技股份有限公司 一种基于视频的烟雾检测方法、装置及设备终端
CN109409288A (zh) * 2018-10-25 2019-03-01 北京市商汤科技开发有限公司 图像处理方法、装置、电子设备和存储介质
CN109716392A (zh) * 2018-05-22 2019-05-03 上海芯仑光电科技有限公司 一种光流计算方法及计算设备
CN111462155A (zh) * 2020-03-26 2020-07-28 深圳万拓科技创新有限公司 移动侦测方法、装置、计算机设备和存储介质

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8537219B2 (en) * 2009-03-19 2013-09-17 International Business Machines Corporation Identifying spatial locations of events within video image data
CN101901087B (zh) * 2010-07-27 2013-04-10 广东威创视讯科技股份有限公司 基于线性图像传感器的表面定位装置及方法
CN102496164B (zh) * 2011-11-14 2013-12-11 通号通信信息集团有限公司 一种事件检测方法及系统
KR102248404B1 (ko) * 2014-11-17 2021-05-07 삼성전자주식회사 움직임 분석 방법 및 움직임 분석 장치
CN108073929B (zh) * 2016-11-15 2023-11-24 北京三星通信技术研究有限公司 基于动态视觉传感器的物体检测方法及设备
CN108764078B (zh) * 2018-05-15 2019-08-02 上海芯仑光电科技有限公司 一种事件数据流的处理方法及计算设备
CN108777794B (zh) * 2018-06-25 2022-02-08 腾讯科技(深圳)有限公司 图像的编码方法和装置、存储介质、电子装置
CN111200735B (zh) * 2018-11-19 2023-03-17 华为技术有限公司 一种帧间预测的方法及装置
CN109492609B (zh) * 2018-11-27 2020-05-15 上海芯仑光电科技有限公司 一种检测车道线的方法和车辆、及计算设备
CN109544590B (zh) * 2018-11-27 2020-05-15 上海芯仑光电科技有限公司 一种目标跟踪方法及计算设备
CN110084253A (zh) * 2019-05-05 2019-08-02 厦门美图之家科技有限公司 一种生成物体检测模型的方法
CN112084826A (zh) * 2019-06-14 2020-12-15 北京三星通信技术研究有限公司 图像处理方法、图像处理设备以及监控系统
CN112101373A (zh) * 2019-06-18 2020-12-18 富士通株式会社 基于深度学习网络的对象检测方法、装置和电子设备
CN110428397A (zh) * 2019-06-24 2019-11-08 武汉大学 一种基于事件帧的角点检测方法
CN110334687A (zh) * 2019-07-16 2019-10-15 合肥工业大学 一种基于行人检测、属性学习和行人识别的行人检索增强方法
CN111696044B (zh) * 2020-06-16 2022-06-10 清华大学 一种大场景动态视觉观测方法及装置
CN112052837A (zh) * 2020-10-09 2020-12-08 腾讯科技(深圳)有限公司 基于人工智能的目标检测方法以及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130322766A1 (en) * 2012-05-30 2013-12-05 Samsung Electronics Co., Ltd. Method of detecting global motion and global motion detector, and digital image stabilization (dis) method and circuit including the same
CN108765454A (zh) * 2018-04-25 2018-11-06 深圳市中电数通智慧安全科技股份有限公司 一种基于视频的烟雾检测方法、装置及设备终端
CN109716392A (zh) * 2018-05-22 2019-05-03 上海芯仑光电科技有限公司 一种光流计算方法及计算设备
CN109409288A (zh) * 2018-10-25 2019-03-01 北京市商汤科技开发有限公司 图像处理方法、装置、电子设备和存储介质
CN111462155A (zh) * 2020-03-26 2020-07-28 深圳万拓科技创新有限公司 移动侦测方法、装置、计算机设备和存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115588121A (zh) * 2022-11-03 2023-01-10 腾晖科技建筑智能(深圳)有限公司 基于传感数据和图像序列的塔吊吊物类别检测方法及系统
CN115588121B (zh) * 2022-11-03 2023-07-04 腾晖科技建筑智能(深圳)有限公司 基于传感数据和图像序列的塔吊吊物类别检测方法及系统

Also Published As

Publication number Publication date
EP4290404A1 (en) 2023-12-13
CN112966556B (zh) 2022-06-10
US20230368397A1 (en) 2023-11-16
CN112966556A (zh) 2021-06-15

Similar Documents

Publication Publication Date Title
CN110738101B (zh) 行为识别方法、装置及计算机可读存储介质
US11379996B2 (en) Deformable object tracking
US11450146B2 (en) Gesture recognition method, apparatus, and device
CN109544590B (zh) 一种目标跟踪方法及计算设备
US10452893B2 (en) Method, terminal, and storage medium for tracking facial critical area
CN108960163B (zh) 手势识别方法、装置、设备和存储介质
CN109492609B (zh) 一种检测车道线的方法和车辆、及计算设备
US20210042938A1 (en) Data processing method and computing device
WO2022174523A1 (zh) 一种提取行人的步态特征的方法、步态识别方法及系统
WO2019218388A1 (zh) 一种事件数据流的处理方法及计算设备
US20130342636A1 (en) Image-Based Real-Time Gesture Recognition
US10157327B2 (en) Image processing device, image processing method, and program
WO2019222911A1 (zh) 一种光流计算方法及计算设备
WO2022002262A1 (zh) 基于计算机视觉的字符序列识别方法、装置、设备和介质
JP2022513214A (ja) シーン内の物体を追跡する方法
WO2022165735A1 (zh) 一种运动物体检测方法及系统
WO2022048578A1 (zh) 图像内容检测方法、装置、电子设备和可读存储介质
CN109815902B (zh) 一种行人属性区域信息获取方法、装置及设备
CN111177811A (zh) 一种应用于云平台的消防点位自动布图的方法
JP2020109644A (ja) 転倒検出方法、転倒検出装置及び電子機器
CN108875488B (zh) 对象跟踪方法、对象跟踪装置以及计算机可读存储介质
CN113743199A (zh) 工装穿戴检测方法、装置、计算机设备和存储介质
CN112818908A (zh) 关键点检测方法、装置、终端及存储介质
CN111753796A (zh) 图像中关键点的识别方法、装置、电子设备及存储介质
WO2022206679A1 (zh) 图像处理方法、装置、计算机设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21923755

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021923755

Country of ref document: EP

Effective date: 20230904