WO2018058530A1 - Target detection method and apparatus, and image processing device - Google Patents

Target detection method and apparatus, and image processing device

Info

Publication number
WO2018058530A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
frame
detection
result
moving object
Prior art date
Application number
PCT/CN2016/101093
Other languages
English (en)
French (fr)
Inventor
白向晖
伍健荣
Original Assignee
富士通株式会社
白向晖
伍健荣
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 富士通株式会社, 白向晖, 伍健荣 filed Critical 富士通株式会社
Priority to CN201680087593.7A priority Critical patent/CN109478333A/zh
Priority to PCT/CN2016/101093 priority patent/WO2018058530A1/zh
Publication of WO2018058530A1 publication Critical patent/WO2018058530A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion

Definitions

  • Embodiments of the present invention relate to the field of graphics and image technologies, and in particular to a target detection method, a target detection apparatus, and an image processing device.
  • In current solutions, target recognition based on a single image is employed, for example vehicle target recognition performed on the entire image.
  • Each recognized target can be marked (e.g., with a rectangular marker).
  • Target recognition can include two functional aspects: region extraction and target classification; region extraction detects the region of an object from the image (for example, its position in the image), and target classification classifies the pixel information within each extracted region to determine whether the object is a target of interest (e.g., whether it is a vehicle).
  • The embodiments of the present invention provide a target detection method, a target detection apparatus, and an image processing device, which are expected to reduce the amount of computation for target recognition in video processing so that it can be applied to real-time image processing with strict processing-time requirements.
  • a target detection method which detects a certain first frame and a subsequent one or more second frames in a video, where the target detection method includes:
  • a target detecting apparatus for detecting a certain first frame and a subsequent one or more second frames in a video, the target detecting apparatus comprising:
  • a target identification unit that performs image-based target recognition on the first frame to obtain a detection target in the first frame
  • a target tracking unit that performs target tracking on the detection target in the second frame
  • a motion detecting unit that performs moving object detection on the second frame
  • a target determining unit that determines a detection target in the second frame based on a result of the moving object detection and a result of the target tracking.
  • an image processing apparatus wherein the image processing apparatus includes the object detecting means as described above.
  • a computer readable program, wherein when the program is executed in a target detection apparatus or an image processing device, the program causes the target detection apparatus or the image processing device to perform the target detection method described above.
  • a storage medium storing a computer readable program, wherein the computer readable program causes a target detecting device or an image processing device to perform a target detecting method as described above.
  • The beneficial effects of the embodiments of the present invention are: image-based object recognition is performed on a certain first frame in the video; moving object detection and target tracking are performed on the subsequent second frame; and the detection target in the second frame is determined according to the result of the moving object detection and the result of the target tracking.
  • FIG. 1 is a schematic diagram of a target detecting method according to Embodiment 1 of the present invention.
  • FIG. 2 is another schematic diagram of a target detecting method according to Embodiment 1 of the present invention.
  • FIG. 3 is a diagram showing an example of target recognition in Embodiment 1 of the present invention.
  • FIG. 4 is a diagram showing an example of target tracking in Embodiment 1 of the present invention.
  • Figure 5 is a view showing an example of moving object detection according to Embodiment 1 of the present invention.
  • FIG. 6 is a schematic diagram of multiple frames in a video according to Embodiment 1 of the present invention.
  • Figure 7 is a schematic diagram of a target detecting device according to a second embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a target determining unit according to Embodiment 2 of the present invention.
  • Figure 9 is a diagram showing an image processing apparatus according to a third embodiment of the present invention.
  • the embodiment of the invention provides a target detection method for detecting a certain first frame and a subsequent one or more second frames in a video.
  • FIG. 1 is a schematic diagram of a target detection method according to an embodiment of the present invention. As shown in FIG. 1, the target detection method includes:
  • Step 101 Perform image-based target recognition on the first frame, and obtain a detection target in the first frame.
  • Step 102 Perform target tracking on the detection target in the second frame.
  • Step 103 Perform moving object detection on the second frame.
  • Step 104 Determine a detection target in the second frame according to a result of the moving object detection and a result of the target tracking.
  • a certain frame in the video may be arbitrarily selected as the first frame, and the image-based target recognition is performed on the first frame.
  • Thereby the detection target in the first frame can be obtained; for example, the positions in the first frame of a plurality of objects whose target type is vehicle can be obtained.
  • The first frame and the second frame in the embodiments of the present invention do not refer to the frames numbered 1 and 2 in the video; the two names are used only for convenience of description, to distinguish the two kinds of frames that undergo different image processing.
  • The first frame of the embodiments of the present invention may also be referred to, for example, as a key frame or an important frame, and the second frame may also be referred to, for example, as a normal frame or an ordinary frame; the present invention is not limited thereto.
  • For example, the first frame may be the image frame numbered N in the video (N may be any positive integer), and the second frame may be one or more image frames following frame N (e.g., the frames numbered N+1, N+2, ..., N+10).
  • In addition, the plurality of second frames are not necessarily consecutive; for example, the frames numbered N+1, N+4, N+7, and N+10 may be used as second frames, while the image frames numbered N+2, N+3, and so on are discarded.
  • For the one or more second frames following the first frame, moving object detection and target tracking may be performed, and the detection target in the second frame is determined according to the result of the moving object detection and the result of the target tracking.
  • the detection targets in the first frame and the second frame may each be one or more.
  • Since image-based object recognition is no longer performed on the second frame, and only moving object detection and target tracking are performed, the amount of computation can be greatly reduced. Moreover, since the moving object detection and the target tracking are performed on the basis of the target recognition result of the first frame, the accuracy of target recognition in the second frame can still be satisfied.
  • the present invention will be further described below by taking a first frame and a plurality of second frames as an example.
  • FIG. 2 is another schematic diagram of a target detection method according to an embodiment of the present invention. As shown in FIG. 2, the target detection method includes:
  • Step 201 Perform image-based object recognition on the first frame to obtain one or more detection targets in the first frame.
  • FIG. 3 is a diagram showing an example of target recognition according to an embodiment of the present invention; for example, image-based object recognition may be performed on one frame of the image captured by a camera mounted on a highway. As shown in FIG. 3, a plurality of detection targets (e.g., vehicles) may be obtained, and each detection target is marked with a rectangular frame.
  • the locations of these targets may be stored for processing of subsequent frames.
  • the detection target of the video may also be updated to the detection target in the current frame (eg, the first frame).
  • Step 202 Perform target tracking on the one or more detection targets in the second frame.
  • Target tracking of the second frame may be performed based on the target recognition result of the previous frame (for example, the first frame), thereby marking, in the second frame, the positions of the detection targets obtained in the previous frame.
  • FIG. 4 is a diagram showing an example of target tracking according to an embodiment of the present invention.
  • target tracking may be performed on a second frame subsequent to the first frame shown in FIG. 3.
  • the position of a plurality of detection targets (e.g., vehicles) in the second frame may be obtained, wherein each of the tracked detection targets is marked using a rectangular frame.
  • As shown in FIG. 4, for ease of observation, the detection targets obtained in step 201 are marked with a dashed frame, while the moving object that newly appears relative to FIG. 3 is marked with another dashed frame.
  • Step 203 Perform moving object detection on the second frame.
  • Based on the target recognition result of the previous frame (for example, the first frame), foreground detection or background detection may be performed on the second frame; moving object detection can thereby obtain one or more moving objects relative to the previous frames, and the positions of the detected moving objects can be marked.
  • However, the present invention is not limited thereto; for example, a method of comparing the second frame with one or more previous frames may also be employed to detect moving objects in the second frame.
  • FIG. 5 is a diagram showing an example of moving object detection according to an embodiment of the present invention.
  • moving object detection may be performed on the second frame subsequent to the first frame shown in FIG. 3.
  • As shown in FIG. 5, a plurality of moving objects (for example, vehicles) may be obtained, and each moving object is marked with a rectangular frame.
  • As shown in FIG. 5, for ease of observation, the detection targets obtained in step 201 are marked with a dashed frame, while the moving object that newly appears relative to FIG. 3 is marked with another dashed frame.
  • Step 204 Compare the result of the moving object detection with the result of the target tracking.
  • In this embodiment, the moving object detection can obtain one or more moving objects, and the target tracking can obtain one or more tracked detection targets. For each moving object obtained by the moving object detection, it can be judged whether the moving object appears in the result of the target tracking; if it appears in the result of the target tracking, the moving object can be considered to have already been recognized; if it does not appear in the result of the target tracking, the moving object can be considered to be a newly appearing object.
  • For example, for each obtained moving object, if the moving object has already been detected in the previous frame, its position will substantially coincide with that of one of the targets in the tracking result; if a moving object does not coincide with any of the targets in the tracking result, this indicates that the moving object was not detected in the previous frame and is a moving object that has newly entered the frame.
  • Therefore, one or more objects that are obtained by the moving object detection but are not present in the result of the target tracking can be taken as newly appearing objects.
  • Step 205 Determine a newly appearing object belonging to the target type as a newly appearing target.
  • For each newly appearing object, a classification function may be applied to judge whether it belongs to the target type (for example, whether it is a vehicle); if it belongs to the target type, the newly appearing object may be determined as a newly appearing target; if it does not belong to the target type, the newly appearing object may be left unprocessed.
  • For example, the region where the newly appearing moving object is located may be fed directly into a classifier to judge whether the moving object is a target that needs to be detected.
  • Step 206 Take the newly appearing targets and the targets obtained by the target tracking as the detection targets in the second frame.
  • The positions of the newly appearing targets obtained by the moving object detection, and the positions of the targets obtained by the target tracking, can be obtained.
  • These targets (one or more) can be used as the detection targets in the second frame; in addition, the positions of these targets can also be stored for the processing of subsequent frames.
  • That is, the moving object detection result for the new targets, together with the target tracking result for the old targets, can be taken as the target detection result of the current frame (for example, the second frame).
  • the detection target of the video may also be updated to the detection target in the current frame (eg, the second frame).
  • Step 207 Judge whether there are further second frames; if yes, continue with step 202; if not, the process may be ended, and, for example, target recognition for the next first frame may be carried out.
  • In this embodiment, the video may include a plurality of first frames and a plurality of second frames. After image-based target recognition is performed on a first frame (which may also be referred to as a key frame), moving object detection and target tracking may respectively be performed on multiple subsequent second frames (which may also be referred to as normal frames), and the detection target of each second frame is determined according to the result of the moving object detection and the result of the target tracking.
  • As shown in FIG. 6, the video may include multiple first frames and multiple second frames, for example with a predetermined number (e.g., N = 10) of second frames after each first frame.
  • For each first frame, image-based object recognition may be performed using step 101 or 201 above; for each second frame, the processing based on moving object detection and target tracking may be performed using steps 202 to 206 above.
  • In this way, consecutive frames in the video can be divided into two types (key frames and normal frames) for processing, with several normal frames included between every two key frames.
  • the key frame has higher algorithm complexity and longer processing time; the normal frame has lower algorithm complexity and shorter processing time.
  • the overall average processing time can be reduced by a mixture of fewer key frames and more normal frames.
  • It is worth noting that, for each frame of the video, the target recognition result of the preceding frames can be used.
  • For example, the M-th second frame can use the detection results of the preceding M-1 second frames, and so on.
  • FIG. 2 only schematically illustrates an embodiment of the present invention, but the present invention is not limited thereto.
  • For example, the order of execution of the various steps can be appropriately adjusted, and other steps can be added or some of the steps can be omitted.
  • Those skilled in the art can appropriately modify the above based on the above contents, and are not limited to the description of the above drawings.
  • As can be seen from the above, image-based object recognition is performed on a certain first frame in the video; moving object detection and target tracking are performed on the subsequent second frames; and the detection target in the second frame is determined according to the result of the moving object detection and the result of the target tracking.
  • Embodiments of the present invention provide a target detecting apparatus that detects a certain first frame and a subsequent one or more second frames in a video.
  • This embodiment 2 corresponds to the target detection method in Embodiment 1, and the same content will not be described again.
  • FIG. 7 is a schematic diagram of a target detecting apparatus according to an embodiment of the present invention. As shown in FIG. 7, the target detecting apparatus 700 includes:
  • a target identification unit 701 which performs image-based target recognition on the first frame to obtain a detection target in the first frame
  • a target tracking unit 702 which performs target tracking on the detection target in a second frame
  • a motion detecting unit 703 which performs moving object detection on the second frame
  • the target determining unit 704 determines the detection target in the second frame based on the result of the moving object detection and the result of the target tracking.
  • FIG. 8 is a schematic diagram of a target determining unit 704 according to an embodiment of the present invention. As shown in FIG. 8, the target determining unit 704 may include:
  • a result comparison unit 801 that compares the result of the moving object detection with the result of the target tracking
  • a new object obtaining unit 802 that takes one or more objects obtained by the moving object detection but not in the result of the target tracking as a newly appearing object;
  • a new target determining unit 803 that determines a newly appearing object belonging to the target type as a newly appearing target
  • the target obtaining unit 804 takes the newly appearing target and the target obtained by the target tracking as the detection target in the second frame.
  • the target detecting apparatus 700 may further include:
  • a location storage unit 705 stores the location of the newly appearing target obtained by the moving object detection and the location of the target obtained by the target tracking.
  • the target detecting apparatus 700 may further include:
  • the target update unit 706 updates the detection target of the video to the detection target in the current frame after obtaining the detection target in the current frame.
  • In this embodiment, a predetermined number of second frames may follow the first frame; for each second frame, moving object detection and target tracking may be performed separately, and the detection target of each second frame is determined according to the result of the moving object detection and the result of the target tracking.
  • the object detection device may also include other components or modules, and reference may be made to the prior art for the specific content of these components or modules.
  • As can be seen from the above, image-based object recognition is performed on a certain first frame in the video; moving object detection and target tracking are performed on the subsequent second frames; and the detection target in the second frame is determined according to the result of the moving object detection and the result of the target tracking.
  • An embodiment of the present invention provides an image processing apparatus including the object detecting apparatus as described in Embodiment 2.
  • FIG. 9 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention.
  • the image processing apparatus 900 may include a central processing unit (CPU) 100 and a memory 110; the memory 110 is coupled to the central processing unit 100.
  • the memory 110 can store various data; in addition, a program for information processing is stored, and the program is executed under the control of the central processing unit 100.
  • the functionality of the target detection device 700 can be integrated into the central processor 100.
  • the central processing unit 100 can be configured to implement the target detection method as described in Embodiment 1.
  • the target detecting device 700 can be configured separately from the central processing unit 100.
  • the target detecting device can be configured as a chip connected to the central processing unit 100, and the target detecting device can be realized by the control of the central processing unit 100.
  • The central processing unit 100 may be configured to perform the following control: detecting a certain first frame and one or more subsequent second frames in the video, wherein image-based target recognition is performed on the first frame to obtain a detection target in the first frame; target tracking is performed on the detection target in the second frame; moving object detection is performed on the second frame; and the detection target in the second frame is determined according to the result of the moving object detection and the result of the target tracking.
  • the image processing apparatus 900 may further include: an input/output (I/O) device 120, a display 130, and the like; wherein the functions of the above components are similar to those of the prior art, and are not described herein again. It is to be noted that the image processing apparatus 900 does not necessarily have to include all of the components shown in FIG. 9; in addition, the image processing apparatus 900 may further include components not shown in FIG. 9, and reference may be made to the related art.
  • An embodiment of the present invention provides a computer readable program, wherein when the program is executed in a target detecting device or an image processing device, the program causes the target detecting device or the image processing device to perform the method as described in Embodiment 1. Target detection method.
  • An embodiment of the present invention provides a storage medium storing a computer readable program, wherein the computer readable program causes a target detecting device or an image processing device to perform the target detecting method as described in Embodiment 1.
  • the above apparatus and method of the present invention may be implemented by hardware or by hardware in combination with software.
  • The present invention relates to a computer readable program that, when executed by a logic component, enables the logic component to implement the apparatus or constituent components described above, or to implement the various methods or steps described above.
  • the present invention also relates to a storage medium for storing the above program, such as a hard disk, a magnetic disk, an optical disk, a DVD, a flash memory, or the like.
  • the method/apparatus described in connection with the embodiments of the invention may be embodied directly in hardware, a software module executed by a processor, or a combination of both.
  • For example, one or more of the functional blocks shown in FIG. 7 and/or one or more combinations of the functional blocks may correspond to software modules of the computer program flow, or may correspond to hardware modules.
  • These software modules may correspond respectively to the steps shown in FIG. 1.
  • These hardware modules can be implemented, for example, by solidifying these software modules using a Field Programmable Gate Array (FPGA).
  • the software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, removable disk, CD-ROM, or any other form of storage medium known in the art.
  • a storage medium can be coupled to the processor to enable the processor to read information from, and write information to, the storage medium; or the storage medium can be an integral part of the processor.
  • the processor and the storage medium can be located in an ASIC.
  • the software module can be stored in the memory of the mobile terminal or in a memory card that can be inserted into the mobile terminal.
  • For example, if the device (such as the mobile terminal) uses a relatively large-capacity MEGA-SIM card or a large-capacity flash memory device, the software module can be stored in that MEGA-SIM card or large-capacity flash memory device.
  • One or more of the functional blocks described with respect to the figures and/or one or more combinations of the functional blocks may be implemented as a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any suitable combination thereof for performing the functions described herein.
  • One or more of the functional blocks described with respect to the figures and/or one or more combinations of the functional blocks may also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP, or any other such configuration.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A target detection method and apparatus, and an image processing device. The target detection method comprises: performing image-based target recognition on a first frame to obtain a detection target in the first frame; performing target tracking on the detection target in a second frame; performing moving object detection on the second frame; and determining a detection target in the second frame according to the result of the moving object detection and the result of the target tracking. In this way, not only can the accuracy of target recognition be satisfied, but the amount of computation for target recognition in video processing can also be reduced, so that the method can be applied to real-time image processing with strict processing-time requirements.

Description

Target detection method and apparatus, and image processing device
Technical field
Embodiments of the present invention relate to the field of graphics and image technologies, and in particular to a target detection method and apparatus, and an image processing device.
Background art
In the field of video surveillance, it is generally necessary to detect targets of interest. For example, in vehicle detection for a parking lot, the vehicles appearing in the video need to be monitored in real time.
In current solutions, target recognition based on a single image is employed, for example vehicle target recognition based on the entire image. Each recognized target can be marked (for example, with a rectangular marker). Target recognition can include two functional aspects: region extraction and target classification. Region extraction detects the region of an object from the image (for example, its position in the image), and target classification classifies the pixel information within each extracted region to determine whether the object is a target of interest (for example, whether it is a vehicle).
It should be noted that the above introduction to the technical background is provided merely for the convenience of a clear and complete description of the technical solutions of the present invention and to facilitate the understanding of those skilled in the art. It should not be considered that the above technical solutions are known to those skilled in the art merely because they are described in the background section of the present invention.
Summary of the invention
However, the inventors have found that current image-based target recognition involves a very high amount of computation and is difficult to apply to real-time image processing with strict processing-time requirements (for example, a highway vehicle detection scenario).
Embodiments of the present invention provide a target detection method and apparatus, and an image processing device, which are expected to reduce the amount of computation for target recognition in video processing so that it can be applied to real-time image processing with strict processing-time requirements.
According to a first aspect of the embodiments of the present invention, a target detection method is provided, which detects a certain first frame and one or more subsequent second frames in a video, the target detection method comprising:
performing image-based target recognition on the first frame to obtain a detection target in the first frame;
performing target tracking on the detection target in the second frame;
performing moving object detection on the second frame; and
determining a detection target in the second frame according to the result of the moving object detection and the result of the target tracking.
According to a second aspect of the embodiments of the present invention, a target detection apparatus is provided, which detects a certain first frame and one or more subsequent second frames in a video, the target detection apparatus comprising:
a target recognition unit that performs image-based target recognition on the first frame to obtain a detection target in the first frame;
a target tracking unit that performs target tracking on the detection target in the second frame;
a motion detection unit that performs moving object detection on the second frame; and
a target determining unit that determines a detection target in the second frame according to the result of the moving object detection and the result of the target tracking.
According to a third aspect of the embodiments of the present invention, an image processing device is provided, wherein the image processing device comprises the target detection apparatus described above.
According to a further aspect of the embodiments of the present invention, a computer-readable program is provided, wherein when the program is executed in a target detection apparatus or an image processing device, the program causes the target detection apparatus or the image processing device to perform the target detection method described above.
According to a further aspect of the embodiments of the present invention, a storage medium storing a computer-readable program is provided, wherein the computer-readable program causes a target detection apparatus or an image processing device to perform the target detection method described above.
A beneficial effect of the embodiments of the present invention is that image-based target recognition is performed on a certain first frame in the video; moving object detection and target tracking are performed on the subsequent second frame; and the detection target in the second frame is determined according to the result of the moving object detection and the result of the target tracking. In this way, not only can the accuracy of target recognition be satisfied, but the amount of computation for target recognition in video processing can also be reduced, so that the method can be applied to real-time image processing with strict processing-time requirements.
With reference to the following description and drawings, specific embodiments of the present invention are disclosed in detail, indicating the manner in which the principles of the present invention may be employed. It should be understood that the embodiments of the present invention are not thereby limited in scope. Within the spirit and scope of the appended claims, the embodiments of the present invention include many changes, modifications, and equivalents.
Features described and/or illustrated for one embodiment may be used in the same or a similar manner in one or more other embodiments, combined with features in other embodiments, or substituted for features in other embodiments.
It should be emphasized that the term "comprise/include", when used herein, refers to the presence of a feature, whole piece, step, or component, but does not exclude the presence or addition of one or more other features, whole pieces, steps, or components.
Brief description of the drawings
Many aspects of the present invention can be better understood with reference to the following drawings. The components in the drawings are not drawn to scale and are only intended to illustrate the principles of the present invention. To facilitate illustrating and describing some parts of the present invention, corresponding parts in the drawings may be enlarged or reduced.
Elements and features described in one drawing or one embodiment of the present invention may be combined with elements and features shown in one or more other drawings or embodiments. In addition, in the drawings, like reference numerals denote corresponding components in the several drawings and may be used to indicate corresponding components used in more than one embodiment.
FIG. 1 is a schematic diagram of the target detection method according to Embodiment 1 of the present invention;
FIG. 2 is another schematic diagram of the target detection method according to Embodiment 1 of the present invention;
FIG. 3 is an example diagram of target recognition according to Embodiment 1 of the present invention;
FIG. 4 is an example diagram of target tracking according to Embodiment 1 of the present invention;
FIG. 5 is an example diagram of moving object detection according to Embodiment 1 of the present invention;
FIG. 6 is a schematic diagram of multiple frames in a video according to Embodiment 1 of the present invention;
FIG. 7 is a schematic diagram of the target detection apparatus according to Embodiment 2 of the present invention;
FIG. 8 is a schematic diagram of the target determining unit according to Embodiment 2 of the present invention;
FIG. 9 is a schematic diagram of the image processing device according to Embodiment 3 of the present invention.
Detailed description of the embodiments
The foregoing and other features of the present invention will become apparent from the following description with reference to the drawings. In the description and drawings, specific embodiments of the present invention are disclosed, indicating some of the embodiments in which the principles of the present invention may be employed. It should be understood that the present invention is not limited to the described embodiments; on the contrary, the present invention includes all modifications, variations, and equivalents falling within the scope of the appended claims.
Embodiment 1
An embodiment of the present invention provides a target detection method, which detects a certain first frame and one or more subsequent second frames in a video.
FIG. 1 is a schematic diagram of the target detection method according to an embodiment of the present invention. As shown in FIG. 1, the target detection method includes:
Step 101: performing image-based target recognition on the first frame to obtain a detection target in the first frame;
Step 102: performing target tracking on the detection target in the second frame;
Step 103: performing moving object detection on the second frame; and
Step 104: determining a detection target in the second frame according to the result of the moving object detection and the result of the target tracking.
In this embodiment, any frame in the video may be selected as the above first frame, and image-based target recognition is performed on the first frame. The detection target in the first frame can thereby be obtained; for example, the positions in the first frame of a plurality of objects whose target type is vehicle can be obtained. As for how the image-based target recognition is specifically performed, reference may be made to the related art, and the specific implementation details are not repeated here.
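The embodiment does not prescribe a particular recognition algorithm for the key frame. The following is a minimal sketch only, assuming an OpenCV cascade classifier with a hypothetical model file cars.xml; any detector that returns bounding boxes for the target type could be substituted.

```python
import cv2

def detect_targets(frame, model_path="cars.xml"):
    """Image-based target recognition on a key (first) frame.

    Returns a list of (x, y, w, h) bounding boxes. The cascade model file
    'cars.xml' is a hypothetical placeholder, not part of the patent.
    """
    detector = cv2.CascadeClassifier(model_path)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    return [tuple(b) for b in boxes]
```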
It is worth noting that the first frame and the second frame in the embodiments of the present invention do not refer to the frames whose sequence numbers are 1 and 2 in the video; the two names are used merely for convenience of description, to distinguish the two kinds of frames that undergo different image processing. The first frame of the embodiments of the present invention may also be referred to, for example, as a key frame or an important frame, and the second frame may also be referred to, for example, as a normal frame or an ordinary frame; the present invention is not limited thereto.
For example, the first frame may be the image frame numbered N in the video (N may be any positive integer), and the second frame may be one or more image frames following frame N (for example, the frames numbered N+1, N+2, ..., N+10). In addition, the plurality of second frames are not necessarily consecutive; for example, the frames numbered N+1, N+4, N+7, and N+10 may be used as second frames, while the image frames numbered N+2, N+3, and so on are discarded.
In this embodiment, for the one or more second frames following the first frame, moving object detection and target tracking may be performed, and the detection target in the second frame is determined according to the result of the moving object detection and the result of the target tracking. The detection targets in the first frame and in the second frame may each be one or more.
Since image-based target recognition is no longer performed on the second frame, and only moving object detection and target tracking are performed, the amount of computation can be greatly reduced. Moreover, since the moving object detection and the target tracking are performed on the basis of the target recognition result of the first frame, the accuracy of target recognition in the second frame can still be satisfied.
The present invention is further described below, taking one first frame and a plurality of second frames as an example.
FIG. 2 is another schematic diagram of the target detection method according to an embodiment of the present invention. As shown in FIG. 2, the target detection method includes:
Step 201: performing image-based target recognition on the first frame to obtain one or more detection targets in the first frame.
FIG. 3 is an example diagram of target recognition according to an embodiment of the present invention; for example, image-based target recognition may be performed on one frame of the image captured by a camera mounted on a highway. As shown in FIG. 3, a plurality of detection targets (for example, vehicles) may be obtained, and each detection target is marked with a rectangular frame.
In this embodiment, after the detection targets in the current frame (for example, the first frame) are obtained, the positions of these targets may be stored for the processing of subsequent frames. In addition, the detection targets of the video may also be updated to the detection targets in the current frame (for example, the first frame).
Step 202: performing target tracking on the one or more detection targets in the second frame.
In this embodiment, target tracking may be performed on the second frame on the basis of the target recognition result of the previous frame (for example, the first frame), so that the positions in the second frame of the detection targets obtained in the previous frame can be marked. As for how the target tracking is specifically performed, reference may be made to the related art, and the specific implementation details are not repeated here.
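The tracking algorithm itself is left open by the text. As one hedged illustration, each stored target can be located in the new frame by template matching inside a local search window; the window margin and the matching method below are assumptions, not requirements of the method.

```python
import cv2

def track_target(prev_frame, frame, box, margin=40):
    """Locate one previously detected target (x, y, w, h) in the new frame.

    A simple template-matching tracker used only as an illustration; any
    tracker that maps old boxes to new positions would serve step 202.
    """
    x, y, w, h = box
    template = prev_frame[y:y + h, x:x + w]
    # Search only in a window around the old position to keep the cost low.
    x0, y0 = max(0, x - margin), max(0, y - margin)
    x1 = min(frame.shape[1], x + w + margin)
    y1 = min(frame.shape[0], y + h + margin)
    search = frame[y0:y1, x0:x1]
    scores = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(scores)
    return (x0 + best[0], y0 + best[1], w, h)
```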
FIG. 4 is an example diagram of target tracking according to an embodiment of the present invention; for example, target tracking may be performed on a second frame following the first frame shown in FIG. 3. As shown in FIG. 4, the positions in the second frame of a plurality of detection targets (for example, vehicles) may be obtained, and each tracked detection target is marked with a rectangular frame.
As shown in FIG. 4, for ease of observation, the detection targets obtained in step 201 are marked with a dashed frame, while the moving object that newly appears relative to FIG. 3 is marked with another dashed frame.
Step 203: performing moving object detection on the second frame.
In this embodiment, foreground detection or background detection may be performed on the second frame on the basis of the target recognition result of the previous frame (for example, the first frame); moving object detection can thereby be performed to obtain one or more moving objects relative to the previous frames, and the positions of the detected moving objects can be marked.
However, the present invention is not limited thereto. For example, a method of comparing the second frame with one or more previous frames may also be employed to detect moving objects in the second frame. As for how the moving object detection is specifically performed, reference may be made to the related art, and the specific implementation details are not repeated here.
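One common realization of such frame-comparison motion detection, sketched here purely as an assumption rather than as the algorithm prescribed by the patent, is background subtraction followed by contour extraction (OpenCV 4.x API assumed):

```python
import cv2

# A background-subtraction based moving object detector; MOG2 and the
# area threshold below are illustrative choices, not part of the claims.
subtractor = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=25)

def detect_moving_objects(frame, min_area=500):
    """Return bounding boxes (x, y, w, h) of moving regions in the frame."""
    mask = subtractor.apply(frame)
    mask = cv2.medianBlur(mask, 5)                               # suppress noise
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)   # drop shadow pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```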
FIG. 5 is an example diagram of moving object detection according to an embodiment of the present invention; for example, moving object detection may be performed on a second frame following the first frame shown in FIG. 3. As shown in FIG. 5, a plurality of moving objects (for example, vehicles) may be obtained, and each moving object is marked with a rectangular frame.
As shown in FIG. 5, for ease of observation, the detection targets obtained in step 201 are marked with a dashed frame, while the moving object that newly appears relative to FIG. 3 is marked with another dashed frame.
Step 204: comparing the result of the moving object detection with the result of the target tracking.
In this embodiment, the moving object detection can obtain one or more moving objects, and the target tracking can obtain one or more tracked detection targets. For each moving object obtained by the moving object detection, it can be judged whether the moving object appears in the result of the target tracking. If it appears in the result of the target tracking, the moving object can be considered to have already been recognized; if it does not appear in the result of the target tracking, the moving object can be considered to be a newly appearing object.
For example, for each obtained moving object, if the moving object has already been detected in the previous frame, its position will substantially coincide with that of one of the targets in the tracking result; if a moving object does not coincide with any of the targets in the tracking result, this indicates that the moving object was not detected in the previous frame and is a moving object that has newly entered the frame.
Therefore, one or more objects that are obtained by the moving object detection but are not present in the result of the target tracking can be taken as newly appearing objects.
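The text only requires that "substantially coincide" be decided somehow; a common proxy, used here purely as an assumption, is the intersection-over-union (IoU) of the two boxes compared against a threshold:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def find_new_objects(moving_boxes, tracked_boxes, thresh=0.3):
    """Moving objects that do not substantially coincide with any tracked target."""
    return [m for m in moving_boxes
            if all(iou(m, t) < thresh for t in tracked_boxes)]
```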
Step 205: determining a newly appearing object belonging to the target type as a newly appearing target.
In this embodiment, for each newly appearing object, a classification function may be applied to judge whether it belongs to the target type (for example, whether it is a vehicle). If it belongs to the target type, the newly appearing object may be determined as a newly appearing target; if it does not belong to the target type, the newly appearing object may be left unprocessed.
For example, the region where the newly appearing moving object is located may be fed directly into a classifier to judge whether the moving object is a target that needs to be detected.
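The classifier itself is not specified. The sketch below assumes some pre-trained binary classifier (classify_region is a hypothetical callable returning True for the target type) and only shows how step 205 filters the new objects:

```python
def confirm_new_targets(frame, new_boxes, classify_region):
    """Keep only the newly appearing objects whose image region belongs to the
    target type (e.g., vehicle). classify_region is a hypothetical classifier
    callable that takes an image patch and returns True or False."""
    new_targets = []
    for (x, y, w, h) in new_boxes:
        patch = frame[y:y + h, x:x + w]
        if classify_region(patch):
            new_targets.append((x, y, w, h))
    return new_targets
```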
Step 206: taking the newly appearing targets and the targets obtained by the target tracking as the detection targets in the second frame.
In this embodiment, the positions of the newly appearing targets obtained by the moving object detection, and the positions of the targets obtained by the target tracking, can be obtained. These targets (one or more) can be taken as the detection targets in the second frame; in addition, the positions of these targets can also be stored for the processing of subsequent frames.
That is, the result of the moving object detection for the new targets, together with the result of the target tracking for the old targets, can be taken as the target detection result of the current frame (for example, the second frame).
In this embodiment, after the detection targets in the current frame (for example, the second frame) are obtained, the detection targets of the video may also be updated to the detection targets in the current frame (for example, the second frame).
Step 207: judging whether there are further second frames; if yes, continue with step 202; if not, the process may be ended, and, for example, target recognition for the next first frame may be continued.
The above is a schematic description of one first frame and a plurality of second frames. In this embodiment, the video may include a plurality of first frames and a plurality of second frames; after image-based target recognition is performed on a first frame (which may also be referred to as a key frame), moving object detection and target tracking may respectively be performed on a plurality of subsequent second frames (which may also be referred to as normal frames), and the detection target of each second frame is determined according to the result of the moving object detection and the result of the target tracking.
FIG. 6 is a schematic diagram of multiple frames in a video according to an embodiment of the present invention. As shown in FIG. 6, the video may include multiple first frames and multiple second frames, for example with a predetermined number N (for example, N = 10) of second frames after each first frame. For each first frame, image-based target recognition may be performed using step 101 or 201 above; for each second frame, the processing based on moving object detection and target tracking may be performed using steps 202 to 206 above.
In this embodiment, the consecutive frames of the video can be divided into two types (key frames and normal frames) that are processed separately, with several normal frames contained between every two key frames. A key frame has higher algorithm complexity and a longer processing time; a normal frame has lower algorithm complexity and a shorter processing time. Therefore, through a mixed structure of fewer key frames together with more normal frames, the overall average processing time can be reduced.
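Putting the steps together, a per-frame scheduling loop consistent with FIG. 6 might look as follows. This is a sketch only: detect_targets, track_target, detect_moving_objects, find_new_objects, and confirm_new_targets are the illustrative helpers assumed above, and a period of 10 is simply the example value of N. With hypothetical timings of, say, 200 ms per key frame and 20 ms per normal frame, the average over one period would drop to about (200 + 10*20)/11 ≈ 36 ms per frame.

```python
def process_video(frames, classify_region, period=10):
    """Key-frame / normal-frame scheduling of the target detection method."""
    targets, prev_frame = [], None
    for i, frame in enumerate(frames):
        if i % (period + 1) == 0:
            # Key (first) frame: full image-based target recognition (step 201).
            targets = detect_targets(frame)
        else:
            # Normal (second) frame: track the stored targets (step 202) ...
            tracked = [track_target(prev_frame, frame, b) for b in targets]
            # ... detect moving objects (step 203) ...
            moving = detect_moving_objects(frame)
            # ... and merge newly classified objects with tracked targets (steps 204-206).
            new_boxes = find_new_objects(moving, tracked)
            targets = tracked + confirm_new_targets(frame, new_boxes, classify_region)
        prev_frame = frame
        yield targets
```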
It is worth noting that, for each frame of the video, the target recognition result of the preceding frames can be used; for example, the M-th second frame can use the detection results of the preceding M-1 second frames, and so on. In addition, FIG. 2 only schematically illustrates the embodiment of the present invention, and the present invention is not limited thereto; for example, the order of execution of the steps may be adjusted appropriately, other steps may be added, or some of the steps may be omitted. Those skilled in the art can make appropriate variations on the basis of the above content and are not limited to the description of the above drawings.
As can be seen from the above embodiment, image-based target recognition is performed on a certain first frame in the video; moving object detection and target tracking are performed on the subsequent second frame; and the detection target in the second frame is determined according to the result of the moving object detection and the result of the target tracking. In this way, not only can the accuracy of target recognition be satisfied, but the amount of computation for target recognition in video processing can also be reduced, so that the method can be applied to real-time image processing with strict processing-time requirements.
Embodiment 2
An embodiment of the present invention provides a target detection apparatus, which detects a certain first frame and one or more subsequent second frames in a video. This Embodiment 2 corresponds to the target detection method of Embodiment 1, and the same content is not repeated here.
FIG. 7 is a schematic diagram of the target detection apparatus according to an embodiment of the present invention. As shown in FIG. 7, the target detection apparatus 700 includes:
a target recognition unit 701, which performs image-based target recognition on the first frame to obtain a detection target in the first frame;
a target tracking unit 702, which performs target tracking on the detection target in the second frame;
a motion detection unit 703, which performs moving object detection on the second frame; and
a target determining unit 704, which determines a detection target in the second frame according to the result of the moving object detection and the result of the target tracking.
FIG. 8 is a schematic diagram of the target determining unit 704 according to an embodiment of the present invention. As shown in FIG. 8, the target determining unit 704 may include:
a result comparison unit 801, which compares the result of the moving object detection with the result of the target tracking;
a new object obtaining unit 802, which takes one or more objects that are obtained by the moving object detection but are not present in the result of the target tracking as newly appearing objects;
a new target determining unit 803, which determines a newly appearing object belonging to the target type as a newly appearing target; and
a target obtaining unit 804, which takes the newly appearing target and the target obtained by the target tracking as the detection targets in the second frame.
As shown in FIG. 7, the target detection apparatus 700 may further include:
a position storage unit 705, which stores the position of the newly appearing target obtained by the moving object detection and the position of the target obtained by the target tracking.
As shown in FIG. 7, the target detection apparatus 700 may further include:
a target update unit 706, which, after the detection target in the current frame is obtained, updates the detection target of the video to the detection target in the current frame.
In this embodiment, a predetermined number of second frames may follow the first frame; for each second frame, moving object detection and target tracking may be performed separately, and the detection target of each second frame is determined according to the result of the moving object detection and the result of the target tracking.
It is worth noting that only the components related to the present invention have been described above, but the present invention is not limited thereto. The target detection apparatus may further include other components or modules; for the specific content of these components or modules, reference may be made to the related art.
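As an illustration of how the units of FIG. 7 and FIG. 8 could be composed in software (a hedged sketch only; the unit-to-function mapping reuses the hypothetical helpers from Embodiment 1 and is not mandated by the apparatus claims):

```python
class TargetDetectionApparatus:
    """Thin software analogue of apparatus 700 with units 701-706."""

    def __init__(self, classify_region):
        self.classify_region = classify_region   # used by the new target determining unit 803
        self.stored_targets = []                 # position storage unit 705
        self.prev_frame = None

    def process_key_frame(self, frame):
        # Target recognition unit 701 plus target update unit 706.
        self.stored_targets = detect_targets(frame)
        self.prev_frame = frame
        return self.stored_targets

    def process_normal_frame(self, frame):
        # Target tracking unit 702 and motion detection unit 703.
        tracked = [track_target(self.prev_frame, frame, b) for b in self.stored_targets]
        moving = detect_moving_objects(frame)
        # Target determining unit 704 (sub-units 801-804).
        new_boxes = find_new_objects(moving, tracked)
        self.stored_targets = tracked + confirm_new_targets(frame, new_boxes, self.classify_region)
        self.prev_frame = frame
        return self.stored_targets
```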
As can be seen from the above embodiment, image-based target recognition is performed on a certain first frame in the video; moving object detection and target tracking are performed on the subsequent second frame; and the detection target in the second frame is determined according to the result of the moving object detection and the result of the target tracking. In this way, not only can the accuracy of target recognition be satisfied, but the amount of computation for target recognition in video processing can also be reduced, so that the method can be applied to real-time image processing with strict processing-time requirements.
Embodiment 3
An embodiment of the present invention provides an image processing device, which includes the target detection apparatus described in Embodiment 2.
FIG. 9 is a schematic diagram of the image processing device according to an embodiment of the present invention. As shown in FIG. 9, the image processing device 900 may include a central processing unit (CPU) 100 and a memory 110, the memory 110 being coupled to the central processing unit 100. The memory 110 can store various data; in addition, it stores a program for information processing, and the program is executed under the control of the central processing unit 100.
In one implementation, the functions of the target detection apparatus 700 may be integrated into the central processing unit 100, and the central processing unit 100 may be configured to implement the target detection method described in Embodiment 1.
In another implementation, the target detection apparatus 700 may be configured separately from the central processing unit 100; for example, the target detection apparatus may be configured as a chip connected to the central processing unit 100, and the functions of the target detection apparatus are realized under the control of the central processing unit 100.
In this embodiment, the central processing unit 100 may be configured to perform the following control: detecting a certain first frame and one or more subsequent second frames in the video, wherein image-based target recognition is performed on the first frame to obtain a detection target in the first frame; target tracking is performed on the detection target in the second frame; moving object detection is performed on the second frame; and a detection target in the second frame is determined according to the result of the moving object detection and the result of the target tracking.
In addition, as shown in FIG. 9, the image processing device 900 may further include an input/output (I/O) device 120, a display 130, and the like; the functions of these components are similar to those in the prior art and are not repeated here. It is worth noting that the image processing device 900 does not necessarily have to include all of the components shown in FIG. 9; furthermore, the image processing device 900 may also include components not shown in FIG. 9, for which reference may be made to the prior art.
An embodiment of the present invention provides a computer-readable program, wherein when the program is executed in a target detection apparatus or an image processing device, the program causes the target detection apparatus or the image processing device to perform the target detection method described in Embodiment 1.
An embodiment of the present invention provides a storage medium storing a computer-readable program, wherein the computer-readable program causes a target detection apparatus or an image processing device to perform the target detection method described in Embodiment 1.
The above apparatus and method of the present invention may be implemented by hardware, or by hardware in combination with software. The present invention relates to a computer-readable program which, when executed by a logic component, enables the logic component to implement the apparatus or constituent components described above, or to implement the various methods or steps described above. The present invention also relates to a storage medium for storing the above program, such as a hard disk, a magnetic disk, an optical disk, a DVD, a flash memory, or the like.
The method/apparatus described in connection with the embodiments of the present invention may be embodied directly as hardware, as a software module executed by a processor, or as a combination of the two. For example, one or more of the functional blocks shown in FIG. 7 and/or one or more combinations of the functional blocks (for example, the target recognition unit, the target tracking unit, the motion detection unit, and the target determining unit) may correspond to software modules of a computer program flow, or may correspond to hardware modules. These software modules may correspond respectively to the steps shown in FIG. 1. These hardware modules may be implemented, for example, by solidifying these software modules using a field programmable gate array (FPGA).
A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to the processor, so that the processor can read information from and write information to the storage medium; or the storage medium may be an integral part of the processor. The processor and the storage medium may reside in an ASIC. The software module may be stored in the memory of a mobile terminal, or in a memory card insertable into the mobile terminal. For example, if a device (such as the mobile terminal) uses a relatively large-capacity MEGA-SIM card or a large-capacity flash memory device, the software module may be stored in that MEGA-SIM card or large-capacity flash memory device.
One or more of the functional blocks described with respect to the figures and/or one or more combinations of the functional blocks may be implemented as a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any suitable combination thereof for performing the functions described in this application. One or more of the functional blocks described with respect to the figures and/or one or more combinations of the functional blocks may also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP, or any other such configuration.
The present invention has been described above with reference to specific embodiments, but it should be clear to those skilled in the art that these descriptions are exemplary and do not limit the scope of protection of the present invention. Those skilled in the art may make various variations and modifications to the present invention in accordance with the spirit and principles of the present invention, and such variations and modifications also fall within the scope of the present invention.

Claims (11)

  1. A target detection method for detecting a certain first frame and one or more subsequent second frames in a video, the target detection method comprising:
    performing image-based target recognition on the first frame to obtain a detection target in the first frame;
    performing target tracking on the detection target in the second frame;
    performing moving object detection on the second frame; and
    determining a detection target in the second frame according to a result of the moving object detection and a result of the target tracking.
  2. The target detection method according to claim 1, wherein determining a detection target in the second frame according to the result of the moving object detection and the result of the target tracking comprises:
    comparing the result of the moving object detection with the result of the target tracking;
    taking one or more objects that are obtained by the moving object detection but are not present in the result of the target tracking as newly appearing objects;
    determining a newly appearing object belonging to a target type as a newly appearing target; and
    taking the newly appearing target and the target obtained by the target tracking as detection targets in the second frame.
  3. The target detection method according to claim 2, wherein the target detection method further comprises:
    storing a position of the newly appearing target obtained by the moving object detection and a position of the target obtained by the target tracking.
  4. The target detection method according to claim 1, wherein a predetermined number of second frames follow the first frame;
    moving object detection and target tracking are performed separately for each second frame, and the detection target of each second frame is determined according to the result of the moving object detection and the result of the target tracking.
  5. The target detection method according to claim 1, wherein the target detection method further comprises:
    after the detection target in a current frame is obtained, updating the detection target of the video to the detection target in the current frame.
  6. A target detection apparatus for detecting a certain first frame and one or more subsequent second frames in a video, the target detection apparatus comprising:
    a target recognition unit that performs image-based target recognition on the first frame to obtain a detection target in the first frame;
    a target tracking unit that performs target tracking on the detection target in the second frame;
    a motion detection unit that performs moving object detection on the second frame; and
    a target determining unit that determines a detection target in the second frame according to a result of the moving object detection and a result of the target tracking.
  7. The target detection apparatus according to claim 6, wherein the target determining unit comprises:
    a result comparison unit that compares the result of the moving object detection with the result of the target tracking;
    a new object obtaining unit that takes one or more objects that are obtained by the moving object detection but are not present in the result of the target tracking as newly appearing objects;
    a new target determining unit that determines a newly appearing object belonging to a target type as a newly appearing target; and
    a target obtaining unit that takes the newly appearing target and the target obtained by the target tracking as detection targets in the second frame.
  8. The target detection apparatus according to claim 7, wherein the target detection apparatus further comprises:
    a position storage unit that stores a position of the newly appearing target obtained by the moving object detection and a position of the target obtained by the target tracking.
  9. The target detection apparatus according to claim 6, wherein a predetermined number of second frames follow the first frame;
    moving object detection and target tracking are performed separately for each second frame, and the detection target of each second frame is determined according to the result of the moving object detection and the result of the target tracking.
  10. The target detection apparatus according to claim 6, wherein the target detection apparatus further comprises:
    a target update unit that, after the detection target in a current frame is obtained, updates the detection target of the video to the detection target in the current frame.
  11. An image processing device, wherein the image processing device comprises the target detection apparatus according to claim 6.
PCT/CN2016/101093 2016-09-30 2016-09-30 Target detection method and apparatus, and image processing device WO2018058530A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680087593.7A 2016-09-30 2016-09-30 Target detection method and apparatus, and image processing device
PCT/CN2016/101093 2016-09-30 2016-09-30 Target detection method and apparatus, and image processing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/101093 WO2018058530A1 (zh) 2016-09-30 2016-09-30 目标检测方法、装置以及图像处理设备

Publications (1)

Publication Number Publication Date
WO2018058530A1 true WO2018058530A1 (zh) 2018-04-05

Family

ID=61762528

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/101093 WO2018058530A1 (zh) Target detection method and apparatus, and image processing device

Country Status (2)

Country Link
CN (1) CN109478333A (zh)
WO (1) WO2018058530A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111597917A (zh) * 2020-04-26 2020-08-28 河海大学 Target detection method based on the frame-difference method
CN111915639A (zh) * 2020-08-06 2020-11-10 广州市百果园信息技术有限公司 Target detection and tracking method and apparatus, electronic device, and storage medium
CN113223043A (zh) * 2021-03-26 2021-08-06 西安闻泰信息技术有限公司 Moving target detection method, apparatus, device, and medium
CN113269007A (zh) * 2020-02-14 2021-08-17 富士通株式会社 Target tracking apparatus and method for road surveillance video
CN111915639B (zh) * 2020-08-06 2024-05-31 广州市百果园信息技术有限公司 Target detection and tracking method and apparatus, electronic device, and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112445318A (zh) * 2019-08-30 2021-03-05 龙芯中科技术股份有限公司 Object display method and apparatus, electronic device, and storage medium
CN112651263A (zh) * 2019-10-09 2021-04-13 富士通株式会社 Method and apparatus for filtering background objects
CN115731516A (zh) * 2022-11-21 2023-03-03 国能九江发电有限公司 Behavior recognition method, device, and storage medium based on target tracking

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012073971A (ja) * 2010-09-30 2012-04-12 Fujifilm Corp Moving image object detection apparatus, method, and program
CN102521841A (zh) * 2011-11-22 2012-06-27 四川九洲电器集团有限责任公司 Multi-target object tracking method
CN103489199A (zh) * 2012-06-13 2014-01-01 通号通信信息集团有限公司 Video image target tracking processing method and system
CN104574440A (zh) * 2014-12-30 2015-04-29 安科智慧城市技术(中国)有限公司 Video moving target tracking method and apparatus
CN104680555A (zh) * 2015-02-13 2015-06-03 电子科技大学 Line-crossing detection method and line-crossing monitoring system based on video surveillance

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008259161A (ja) * 2007-03-13 2008-10-23 Victor Co Of Japan Ltd Target tracking apparatus
CN101399969B (zh) * 2007-09-28 2012-09-05 三星电子株式会社 System, device, and method for detecting and tracking moving targets based on a moving camera
JP5227639B2 (ja) * 2008-04-04 2013-07-03 富士フイルム株式会社 Object detection method, object detection apparatus, and object detection program
CN101937563B (zh) * 2009-07-03 2012-05-30 深圳泰山在线科技有限公司 Target detection method and device, and image acquisition apparatus used therewith
CN103942536B (zh) * 2014-04-04 2017-04-26 西安交通大学 Multi-target tracking method with iteratively updated trajectory models
CN104966045B (zh) * 2015-04-02 2018-06-05 北京天睿空间科技有限公司 Video-based automatic detection method for aircraft entering and leaving a stand

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012073971A (ja) * 2010-09-30 2012-04-12 Fujifilm Corp Moving image object detection apparatus, method, and program
CN102521841A (zh) * 2011-11-22 2012-06-27 四川九洲电器集团有限责任公司 Multi-target object tracking method
CN103489199A (zh) * 2012-06-13 2014-01-01 通号通信信息集团有限公司 Video image target tracking processing method and system
CN104574440A (zh) * 2014-12-30 2015-04-29 安科智慧城市技术(中国)有限公司 Video moving target tracking method and apparatus
CN104680555A (zh) * 2015-02-13 2015-06-03 电子科技大学 Line-crossing detection method and line-crossing monitoring system based on video surveillance

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269007A (zh) * 2020-02-14 2021-08-17 富士通株式会社 Target tracking apparatus and method for road surveillance video
CN111597917A (zh) * 2020-04-26 2020-08-28 河海大学 Target detection method based on the frame-difference method
CN111597917B (zh) * 2020-04-26 2022-08-05 河海大学 Target detection method based on the frame-difference method
CN111915639A (zh) * 2020-08-06 2020-11-10 广州市百果园信息技术有限公司 Target detection and tracking method and apparatus, electronic device, and storage medium
WO2022028592A1 (zh) * 2020-08-06 2022-02-10 百果园技术(新加坡)有限公司 Target detection and tracking method and apparatus, electronic device, and storage medium
CN111915639B (zh) * 2020-08-06 2024-05-31 广州市百果园信息技术有限公司 Target detection and tracking method and apparatus, electronic device, and storage medium
CN113223043A (zh) * 2021-03-26 2021-08-06 西安闻泰信息技术有限公司 Moving target detection method, apparatus, device, and medium

Also Published As

Publication number Publication date
CN109478333A (zh) 2019-03-15

Similar Documents

Publication Publication Date Title
WO2018058530A1 (zh) Target detection method and apparatus, and image processing device
CN110414507B (zh) License plate recognition method and apparatus, computer device, and storage medium
US10212397B2 Abandoned object detection apparatus and method and system
US9953225B2 Image processing apparatus and image processing method
WO2018153211A1 (zh) Method and apparatus for obtaining traffic condition information, and computer storage medium
CN110634153A (zh) Target tracking template updating method and apparatus, computer device, and storage medium
WO2015184899A1 (zh) Vehicle license plate recognition method and apparatus
CN111429483A (zh) High-speed cross-camera multi-target tracking method, system, apparatus, and storage medium
CN108647587B (zh) People counting method, apparatus, terminal, and storage medium
CN111553234B (zh) Pedestrian tracking method and apparatus fusing facial features and Re-ID feature ranking
CN111104925B (zh) Image processing method and apparatus, storage medium, and electronic device
US11017552B2 Measurement method and apparatus
CN110909699A (zh) Method and apparatus for detecting non-guided driving of vehicles in video, and readable storage medium
CN110826484A (zh) Vehicle re-identification method and apparatus, computer device, and model training method
CN110647818A (zh) Method and apparatus for recognizing occluded target objects
CN113343985B (zh) License plate recognition method and apparatus
CN113657434A (zh) Face-body association method and system, and computer-readable storage medium
US11250269B2 Recognition method and apparatus for false detection of an abandoned object and image processing device
CN110298302B (zh) Human target detection method and related device
CN113869137A (zh) Event detection method, apparatus, terminal device, and storage medium
CN104239847A (zh) Driving warning method and vehicle electronic device
WO2018058573A1 (zh) Object detection method, object detection apparatus, and electronic device
CN113256683B (zh) Target tracking method and related device
CN113160272B (zh) Target tracking method and apparatus, electronic device, and storage medium
CN113076851A (zh) Method and apparatus for collecting vehicle violation data, and computer device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16917280

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16917280

Country of ref document: EP

Kind code of ref document: A1