WO2018058573A1 - Object detection method, object detection apparatus and electronic device - Google Patents

Object detection method, object detection apparatus and electronic device

Info

Publication number
WO2018058573A1
Authority
WO
WIPO (PCT)
Prior art keywords
video image
image frame
interest
region
unit
Prior art date
Application number
PCT/CN2016/101204
Other languages
English (en)
Chinese (zh)
Inventor
伍健荣
刘晓青
白向晖
谭志明
东明浩
Original Assignee
富士通株式会社
伍健荣
刘晓青
白向晖
谭志明
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 富士通株式会社, 伍健荣, 刘晓青, 白向晖, 谭志明
Priority to PCT/CN2016/101204
Priority to CN201680087601.8A
Publication of WO2018058573A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present application relates to the field of information technology, and in particular, to a video image based object detecting method, an object detecting device, and an electronic device.
  • object detection can be performed on a video surveillance image, thereby identifying an object such as a specific vehicle, and further implementing functions such as object recognition, tracking, and control.
  • object detection can be performed over the entire image range of the video image frame, so that detection blind areas are avoided; however, the detection range is large, and the amount of data to be processed during detection is correspondingly large.
  • a Region of Interest (ROI) may be preset in a video image frame, and object detection may be performed only within the region of interest of each video image frame, thereby reducing the amount of data processing and increasing the detection speed.
  • the region of interest is preset, and the locations of the regions of interest in each video image frame are the same unless a new region of interest is re-set.
  • the object to be detected usually moves, and when it moves outside the region of interest, it is difficult to be detected, thereby causing a missed detection.
  • An embodiment of the present application provides an object detecting method, an object detecting apparatus, and an electronic device that extract a region of interest based on the motion information of a video image frame and perform object detection according to the extracted region of interest, thereby improving both the accuracy and the speed of object detection.
  • an object detection apparatus for detecting a target object from a video image frame, the apparatus comprising:
  • An extracting unit that extracts a region of interest from the video image frame based on motion information of a video image frame
  • a detecting unit that performs object detection in the video image frame according to the region of interest extracted by the extracting unit.
  • an object detecting method for detecting a target object from a video image frame, the method comprising:
  • extracting a region of interest from the video image frame based on motion information of the video image frame; and performing object detection in the video image frame based on the extracted region of interest.
  • an electronic device comprising the object detecting device of the first aspect of the above embodiment.
  • the beneficial effect of the embodiments of the present application is that, according to the implementation of the present application, both the accuracy and the speed of object detection can be improved.
  • FIG. 1 is a schematic diagram of an object detecting device according to Embodiment 1 of the present application.
  • FIG. 2 is a schematic diagram of an extracting unit of Embodiment 1 of the present application.
  • FIG. 3 is a schematic diagram of a video image frame according to Embodiment 1 of the present application.
  • FIG. 4 is a schematic diagram of a binarized moving image corresponding to the video image frame of FIG. 3;
  • FIG. 5 is a schematic diagram of performing a connected domain segmentation process on the binarized moving image of FIG. 4 and generating a circumscribed rectangle;
  • FIG. 6 is a schematic diagram of merging connected domains according to Embodiment 1 of the present application.
  • FIG. 7 is another schematic diagram of merging connected domains according to Embodiment 1 of the present application.
  • FIG. 8 is a schematic diagram of a detecting unit of Embodiment 1 of the present application.
  • FIG. 9 is a schematic diagram of combining detection results according to Embodiment 1 of the present application.
  • FIG. 10 is a schematic flowchart of object detection according to Embodiment 1 of the present application.
  • FIG. 11 is a schematic flow chart of an object detecting method according to Embodiment 2 of the present application.
  • FIG. 12 is a schematic diagram of a method for extracting a region of interest according to Embodiment 2 of the present application.
  • FIG. 13 is a schematic diagram of a method for performing object detection according to Embodiment 2 of the present application.
  • FIG. 14 is a schematic diagram showing the configuration of an electronic device according to Embodiment 3 of the present application.
  • Embodiment 1 of the present application provides an object detection device for detecting a target object from a video image frame.
  • the detecting device 100 includes an extracting unit 101 and a detecting unit 102.
  • the extracting unit 101 extracts a region of interest from the video image frame based on the motion information of the video image frame, and the detecting unit 102 performs object detection in the video image frame according to the region of interest extracted by the extracting unit 101.
  • the object detecting apparatus can extract the region of interest based on the motion information of the video image frame and perform object detection based on the extracted region of interest; it can thus extract, for each video image frame, the corresponding regions of interest more accurately, thereby improving the accuracy of object detection and increasing the detection speed.
  • the video image frame may be, for example, an image frame in a video captured by the surveillance camera.
  • the video image frame may also be from other devices, which is not limited in this embodiment.
  • the extracting unit 101 includes a motion detecting unit 201, a region dividing unit 202, and a generating unit 203.
  • the motion detecting unit 201 is configured to detect motion information in the video image frame; the region dividing unit 202 is configured to divide, according to the motion information detected by the motion detecting unit 201, the area occupied by each moving object in the video image frame; and the generating unit 203 generates at least one region of interest according to the area occupied by each moving object in the video image frame, the at least one region of interest covering the area where each moving object in the video image frame is located.
  • the motion detecting unit 201 may perform foreground detection on the video image frame to generate a binarized motion image of the video image frame, and the motion information of the video image frame can be obtained from the binarized motion image. The first pixels in the binarized motion image reflect the motion information of the video image frame, wherein a first pixel may be, for example, a white pixel.
  • FIG. 3 is a schematic diagram of a video image frame, and FIG. 4 is a schematic diagram of the binarized motion image corresponding to the video image frame of FIG. 3; the white pixels in the binarized motion image 400 of FIG. 4 reflect the motion information of the video image frame 300.
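The patent does not fix a particular foreground detector; as an illustrative stand-in, a simple per-pixel frame difference already yields a binarized motion image of the kind described above (the function name and threshold are our assumptions, not the patent's):

```python
def binarize_motion(prev_frame, curr_frame, thresh=25):
    """Binarize motion by frame differencing: pixels whose grayscale value
    changed by more than `thresh` between frames become 'first pixels'
    (255, white); all other pixels become 0 (black).
    Frames are row-major lists of grayscale values."""
    return [[255 if abs(c - p) > thresh else 0
             for p, c in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev_frame, curr_frame)]
```

In practice a background-subtraction model would typically replace the naive difference, but the output, a white-on-black motion mask, is the same kind of image as the one shown in FIG. 4.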
  • the region dividing unit 202 may perform connected domain segmentation processing on the binarized motion image to obtain at least one connected domain of first pixels, where each connected domain may correspond to the area occupied by a moving object in the video image frame. For example, in the binarized motion image each connected domain includes a plurality of first pixels; within a connected domain the first pixels are connected, while between different connected domains the first pixels are not connected, so that different connected domains are isolated from each other.
  • the region dividing unit 202 may further generate, for each connected domain in the binarized motion image, a circumscribed polygon of the connected domain; the circumscribed polygon may be used to represent the contour of the connected domain and may be, for example, a rectangle.
  • FIG. 5 is a schematic diagram of performing the connected domain segmentation process on the binarized motion image of FIG. 4 and generating the circumscribed rectangles. As shown in FIG. 5, each circumscribed rectangle 501 represents the contour of a connected domain, and the area enclosed by each circumscribed rectangle 501 corresponds to the area occupied by a moving object in the video image frame 300.
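A minimal sketch of this segmentation step, using a 4-connected flood fill over the white pixels and returning each domain's circumscribed rectangle (the names are illustrative; production code would typically use a library routine such as OpenCV's connected-components labeling):

```python
from collections import deque

def connected_domains(binary):
    """Label 4-connected domains of white (255) pixels in a binarized motion
    image and return the circumscribed rectangle of each domain as
    (x_min, y_min, x_max, y_max) in pixel coordinates."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    rects = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] != 255 or seen[y][x]:
                continue
            # Flood-fill one connected domain, tracking its bounding box.
            queue = deque([(y, x)])
            seen[y][x] = True
            x0 = x1 = x
            y0 = y1 = y
            while queue:
                cy, cx = queue.popleft()
                x0, x1 = min(x0, cx), max(x1, cx)
                y0, y1 = min(y0, cy), max(y1, cy)
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and binary[ny][nx] == 255 and not seen[ny][nx]):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            rects.append((x0, y0, x1, y1))
    return rects
```

Each returned rectangle plays the role of a rectangle 501 in FIG. 5.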
  • the region dividing unit 202 may also merge connected domains whose distance from each other is less than or equal to a first threshold into a new connected domain.
  • the first threshold may be a value greater than 0, and the distance between connected domains may refer to the distance between the boundaries of the connected domains, or to the distance between the geometric centers or centroids of the connected domains.
  • the region dividing unit 202 may also generate a circumscribed polygon for a new connected domain formed by merging at least two connected domains.
  • FIG. 6 is a schematic diagram of merging the connected domains.
  • the circumscribed rectangles of the two connected domains are 6011 and 6012, respectively, and the circumscribed rectangles 6011 and 6012 partially overlap.
  • the two connected domains are merged into the connected domain 6020, and the circumscribed rectangle of the connected domain 6020 is 6021, which may be the circumscribed rectangle of the circumscribed rectangles 6011 and 6012.
  • Figure 7 is another schematic diagram of the merging of connected domains.
  • the circumscribed rectangles of the four connected domains are 7011, 7012, 7013, and 7014, respectively, and the boundary-to-boundary distance between each of the four circumscribed rectangles and its adjacent circumscribed rectangle is smaller than the first threshold.
  • the four connected domains are merged into the connected domain 7020, and the circumscribed rectangle of the connected domain 7020 is 7021; the circumscribed rectangle 7021 may be the circumscribed rectangle of the circumscribed rectangles 7011, 7012, 7013, and 7014.
  • the distance between the circumscribed rectangle 7016 and the circumscribed rectangles 7011 to 7014 is large, for example, greater than the first threshold; therefore, the connected domain corresponding to the circumscribed rectangle 7016 is not merged with the connected domains corresponding to the circumscribed rectangles 7011 to 7014.
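The merging rule of FIG. 6 and FIG. 7 can be sketched as follows: rectangles whose boundary-to-boundary distance is at most the first threshold are repeatedly replaced by their common circumscribed rectangle (the function names and the greedy pairwise strategy are our illustrative choices, not the patent's):

```python
def rect_gap(a, b):
    """Boundary-to-boundary distance between two axis-aligned rectangles
    (x0, y0, x1, y1); 0 if they touch or overlap."""
    dx = max(a[0] - b[2], b[0] - a[2], 0)
    dy = max(a[1] - b[3], b[1] - a[3], 0)
    return (dx * dx + dy * dy) ** 0.5

def merge_close(rects, first_threshold):
    """Repeatedly merge rectangles whose gap is <= `first_threshold` into
    the circumscribed rectangle of the pair, as in FIG. 6 and FIG. 7."""
    rects = list(rects)
    merged = True
    while merged:
        merged = False
        for i in range(len(rects)):
            for j in range(i + 1, len(rects)):
                if rect_gap(rects[i], rects[j]) <= first_threshold:
                    a, b = rects[i], rects[j]
                    rects[i] = (min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3]))
                    del rects[j]
                    merged = True
                    break
            if merged:
                break
    return rects
```

Overlapping rectangles have a gap of 0, so the FIG. 6 case (partial overlap) merges for any non-negative threshold, while a distant rectangle such as 7016 in FIG. 7 stays separate.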
  • the generating unit 203 can generate at least one region of interest according to the distances between the areas occupied by the moving objects in the video image frame, whereby areas close to each other can fall within the scope covered by the same region of interest.
  • for example, the generating unit 203 can generate the region of interest according to the distances between the connected domains in the binarized motion image; for instance, it can make connected domains whose distance from each other is less than or equal to a second threshold be covered by the same region of interest.
  • as shown in FIG. 7, the distance between the connected domain corresponding to the circumscribed rectangle 7016 and the connected domain 7020 is less than or equal to the second threshold; therefore, both are covered by the same region of interest 703, whose boundary 7031 is identified by a rectangular frame. Of course, this embodiment is not limited thereto, and the region of interest may be identified in other manners; for example, the boundary 7031 may be another polygonal frame.
  • the size of the boundary of the region of interest 703 may be larger than the size of the circumscribed polygon of each connected domain it covers; for example, the boundary 7031 of the region of interest 703 may be larger than the circumscribed rectangle 7016 and the circumscribed rectangle 7021, and the former can be, for instance, 10% larger than the latter.
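Inflating a circumscribed rectangle by a margin (the 10% figure above) to obtain the region-of-interest boundary could look like this sketch (splitting the margin evenly between the two sides of each axis is our assumption):

```python
def roi_boundary(rect, margin=0.10):
    """Expand a circumscribed rectangle (x0, y0, x1, y1) so that the ROI
    boundary is `margin` (e.g. 10%) larger than the rectangle it covers,
    split evenly between the two sides of each axis."""
    x0, y0, x1, y1 = rect
    dw = (x1 - x0) * margin / 2  # half the extra width
    dh = (y1 - y0) * margin / 2  # half the extra height
    return (x0 - dw, y0 - dh, x1 + dw, y1 + dh)
```

A 100x100 rectangle thus becomes a 110x110 ROI boundary centered on the same point.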
  • the generating unit 203 may use the region in the video image frame corresponding to the region of interest generated in the binarized motion image as the region of interest of the video image frame, whereby the extracting unit 101 can extract the region of interest from the video image frame.
  • Block 301 of FIG. 3 illustrates the boundaries of the region of interest extracted from the video image frame 300 by the extraction unit 101 in accordance with the present application.
  • the detecting unit 102 can perform object detection in the video image frame based on the region of interest extracted by the extracting unit 101.
  • FIG. 8 is a schematic diagram of the detecting unit 102. As shown in FIG. 8, the detecting unit 102 may include a determining unit 801 and an object detecting unit 802.
  • the determining unit 801 is configured to determine whether the number of regions of interest in the video image frame is less than or equal to a third threshold, and whether the total area of the regions of interest is less than or equal to a fourth threshold; according to the determination result of the determining unit 801, the object detecting unit 802 performs object detection either in the regions of interest of the video image frame or in the entire image range of the video image frame.
  • when both determinations are affirmative, the object detecting unit 802 performs object detection in each region of interest of the video image frame, whereby fast object detection can be performed.
  • if no region of interest exists in the video image frame, that is, no motion is detected, the object detecting unit 802 may not perform object detection on the video image frame.
  • if the determining unit 801 determines that the number of regions of interest in the video image frame is greater than the third threshold, or that the sum of the areas of the regions of interest in the video image frame is greater than the fourth threshold, the object detecting unit 802 performs object detection in the entire image range of the video image frame, thereby preventing missed detections.
  • the specific method for the object detection unit 802 to perform the object detection may refer to the prior art, and is not described in this embodiment.
  • a specific video image frame in the video may be used as a key frame, and the other video image frames in the video may be used as normal frames; for example, video image frames separated by a predetermined time or by a predetermined number of frames may be used as key frames.
  • other methods may be used to set the key frame.
  • the determining unit 801 can determine whether the video image frame is a normal frame; for a normal frame, whether to perform object detection in the regions of interest of the normal frame or in the entire image range of the normal frame is further decided according to the determination result of the determining unit 801.
  • for a key frame, the determining unit 801 need not make a further determination, and object detection can be performed directly in the entire image range of the key frame; performing object detection in the entire image range of key frames makes it possible to prevent missed detections.
  • the detecting unit 102 may further include a merging unit 803.
  • the merging unit 803 may merge the detection result in the region of interest of the current video image frame with the detection result of the video image frame before the current video image frame; the merged detection result may include, for example, the detection result in the region of interest of the current video image frame and the detection result of the previous video image frame outside the region of interest of the current video image frame.
  • FIG. 9 is a schematic diagram of merging detection results. In FIG. 9, 901 is the video image frame before the current video image frame 902, and 9011 and 9012 are target objects detected in the video image frame 901; the region of interest of the current video image frame 902 is 9021, and the target object 9022 is detected in the region of interest 9021.
  • the detection result of the current video image frame 902 is merged with the detection result of the previous video image frame 901 to obtain the merged detection result 903, which includes: the target object 9022 detected in the region of interest 9021 of the current video image frame 902, and, in the region of the current video image frame 902 outside the region of interest 9021, the target object 9012 detected in the previous video image frame 901.
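One way to sketch this merging rule: keep the current frame's in-ROI detections and carry over previous-frame detections that do not touch any current ROI (the box format and function names are our illustrative choices):

```python
def intersects(a, b):
    """Axis-aligned overlap test for (x0, y0, x1, y1) boxes."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def merge_results(curr_roi_dets, rois, prev_dets):
    """Merged result per FIG. 9: the current frame's in-ROI detections,
    plus previous-frame detections lying outside every current ROI
    (object 9012 is kept, object 9011 inside the ROI is replaced)."""
    kept = [d for d in prev_dets if not any(intersects(d, r) for r in rois)]
    return list(curr_roi_dets) + kept
```

A previous detection overlapping a current ROI is dropped because the fresh in-ROI detection supersedes it.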
  • Step 1001 The determining unit 801 determines whether the current video image frame is a normal frame, and if yes, proceeds to step 1002, and if no, proceeds to step 1005.
  • Step 1002 The determining unit 801 determines whether the number of regions of interest in the current video image frame is less than or equal to a third threshold, and if yes, proceeds to step 1003, and if no, proceeds to step 1005.
  • Step 1003 The determining unit 801 determines whether the total area of the region of interest in the current video image frame is less than or equal to the fourth threshold. If yes, proceed to step 1004. If no, proceed to step 1005.
  • Step 1004 The object detecting unit 802 performs object detection in the region of interest of the video image frame.
  • Step 1005 The object detecting unit 802 performs object detection in the entire image range of the video image frame.
  • Step 1006 The merging unit 803 combines the detection result of the region of interest of the current video image frame with the detection result of the previous video image frame.
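Steps 1001 to 1006 above can be condensed into one decision function; `detect_in_roi` and `detect_full` are hypothetical detector callbacks standing in for whatever object detector (per the prior art) is used:

```python
def detect_frame(frame, rois, is_key_frame, third_threshold, fourth_threshold,
                 detect_in_roi, detect_full):
    """Decision flow of steps 1001-1006: key frames (step 1001), too many
    ROIs (step 1002), or too large a total ROI area (step 1003) trigger a
    full-frame scan (step 1005); otherwise only the ROIs are scanned
    (step 1004)."""
    total_area = sum((x1 - x0) * (y1 - y0) for x0, y0, x1, y1 in rois)
    if (is_key_frame or len(rois) > third_threshold
            or total_area > fourth_threshold):
        return detect_full(frame)
    results = []
    for roi in rois:
        results.extend(detect_in_roi(frame, roi))
    return results
```

The merging of step 1006 would then be applied to the per-ROI results together with the previous frame's detections.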
  • the object detecting apparatus can extract the region of interest based on the motion information of the video image frame and perform object detection based on the extracted region of interest; it can thus extract, for each video image frame, the corresponding regions of interest more accurately, thereby improving the accuracy of object detection and increasing the detection speed.
  • the embodiment of the present application further provides an object detecting method for detecting a target object from a video image frame, corresponding to the object detecting device of Embodiment 1.
  • FIG. 11 is a schematic flowchart of the object detecting method in the second embodiment. As shown in FIG. 11, the detecting method may include:
  • Step 1101 Extract a region of interest from the video image frame based on the motion information of the video image frame;
  • Step 1102 Perform object detection in the video image frame according to the extracted region of interest.
  • FIG. 12 is a schematic diagram of a method for extracting a region of interest according to the second embodiment. As shown in FIG. 12, the method includes:
  • Step 1201 Detect motion information in the video image frame.
  • Step 1202 According to the detected motion information, divide an area occupied by each moving object in the video image frame;
  • Step 1203 Generate at least one region of interest according to an area occupied by each moving object in the video image frame, where the at least one region of interest covers an area where each moving object in the video image frame is located.
  • a binarized motion image of the video image frame may be generated based on foreground detection, so as to obtain the motion information of the video image frame.
  • the connected domain segmentation process may be performed on the binarized moving image to obtain at least one connected domain of the pixel, where the at least one connected domain corresponds to each moving object in the video image frame. Occupied area.
  • a circumscribed polygon of each of the connected domains may also be generated.
  • the connected domains whose distances from each other are less than or equal to the first threshold may also be merged into one new connected domain.
  • the at least one region of interest may be generated according to the distances between the areas occupied by the moving objects.
  • FIG. 13 is a schematic diagram of a method for performing object detection in the video image frame according to the extracted region of interest according to the second embodiment. As shown in FIG. 13, the method includes:
  • Step 1301 Determine whether the number of the regions of interest in the video image frame is less than or equal to a third threshold, and whether the total area of the regions of interest is less than or equal to a fourth threshold;
  • Step 1302 Perform object detection in the region of interest of the video image frame or the entire image range of the video image frame according to the result of the determining.
  • the method further includes:
  • Step 1303 In the case of performing object detection in the region of interest of the current video image frame, combine the detection result in the region of interest of the current video image frame with the detection result of the video image frame before the current video image frame.
  • the object detecting method can extract the region of interest based on the motion information of the video image frame and perform object detection based on the extracted region of interest; it can thus extract, for each video image frame, the corresponding regions of interest more accurately, thereby improving the accuracy of object detection and increasing the detection speed.
  • Embodiment 3 of the present application provides an electronic device including the object detecting device as described in Embodiment 1.
  • FIG. 14 is a schematic diagram showing the configuration of an electronic device according to Embodiment 3 of the present application.
  • the electronic device 1400 can include a central processing unit (CPU) 1401 and a memory 1402, the memory 1402 being coupled to the central processing unit 1401.
  • the memory 1402 can store various data; in addition, a program for performing object detection is stored, and the program is executed under the control of the central processing unit 1401.
  • the functionality in the object detection device can be integrated into the central processor 1401.
  • the central processing unit 1401 can be configured to: extract a region of interest from the video image frame based on motion information of the video image frame, and perform object detection in the video image frame based on the extracted region of interest.
  • the central processor 1401 can also be configured to: generate a binarized motion image of the video image frame based on foreground detection, thereby obtaining the motion information of the video image frame.
  • the central processor 1401 can also be configured to:
  • Connected domain segmentation processing is performed on the binarized moving image to obtain at least one connected domain of the pixel, the at least one connected domain corresponding to an area occupied by each moving object in the video image frame.
  • the central processor 1401 can also be configured to: merge the connected domains whose distances from each other are less than or equal to the first threshold into a new connected domain.
  • the central processor 1401 can also be configured to: generate the at least one region of interest according to the distances between the areas occupied by the moving objects.
  • the central processor 1401 can also be configured to: perform object detection, according to the result of the determination, in the regions of interest of the video image frame or in the entire image range of the video image frame.
  • the central processor 1401 can also be configured to:
  • the detection results in the region of interest of the current video image frame and the detection results of the video image frames preceding the current video image frame are combined.
  • the electronic device 1400 may further include: an input and output unit 1403, a display unit 1404, and the like; wherein the functions of the above components are similar to those of the prior art, and details are not described herein again. It should be noted that the electronic device 1400 does not necessarily have to include all the components shown in FIG. 14; in addition, the electronic device 1400 may further include components not shown in FIG. 14, and reference may be made to the prior art.
  • the embodiment of the present application further provides a computer readable program, wherein, when the program is executed in an object detecting device or an electronic device, the program causes the object detecting device or the electronic device to perform the object detecting method described in Embodiment 2.
  • the embodiment of the present application further provides a storage medium storing a computer readable program, wherein the computer readable program causes the object detecting device or the electronic device to perform the object detecting method described in Embodiment 2.
  • the object detecting apparatus described in connection with the embodiments of the present invention may be directly embodied as hardware, a software module executed by a processor, or a combination of both.
  • one or more of the functional blocks shown in Figures 1, 2, and 8 and/or one or more combinations of functional blocks may correspond to individual software modules of a computer program flow, or to individual hardware.
  • These software modules may correspond to the respective steps shown in Embodiment 2, respectively.
  • These hardware modules can be implemented, for example, by curing these software modules using a Field Programmable Gate Array (FPGA).
  • the software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, removable disk, CD-ROM, or any other form of storage medium known in the art.
  • a storage medium can be coupled to the processor to enable the processor to read information from, and write information to, the storage medium; or the storage medium can be an integral part of the processor.
  • the processor and the storage medium can be located in an ASIC.
  • the software module can be stored in the memory of the mobile terminal or in a memory card that can be inserted into the mobile terminal.
  • the software module can be stored in the MEGA-SIM card or a large-capacity flash memory device.
  • One or more of the functional blocks described with respect to Figures 1, 2, and 8, and/or one or more combinations of the functional blocks, can be implemented as a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or any suitable combination thereof for performing the functions described herein.
  • One or more of the functional blocks described with respect to Figures 1, 2, and 8, and/or one or more combinations of the functional blocks, may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention relate to an object detection method, an object detection apparatus and an electronic device for detecting an object from a video image frame. The object detection apparatus comprises: an extracting unit for extracting, based on motion information of a video image frame, a region of interest from the video image frame; and a detecting unit for performing, according to the region of interest extracted by the extracting unit, object detection on the video image frame. According to the present invention, the accuracy and speed of object detection are improved.
PCT/CN2016/101204 2016-09-30 2016-09-30 Object detection method, object detection apparatus and electronic device WO2018058573A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2016/101204 WO2018058573A1 (fr) 2016-09-30 2016-09-30 Object detection method, object detection apparatus and electronic device
CN201680087601.8A CN109479118A (zh) 2016-09-30 2016-09-30 Object detection method, object detection apparatus and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/101204 WO2018058573A1 (fr) 2016-09-30 2016-09-30 Object detection method, object detection apparatus and electronic device

Publications (1)

Publication Number Publication Date
WO2018058573A1 true WO2018058573A1 (fr) 2018-04-05

Family

ID=61762403

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/101204 WO2018058573A1 (fr) 2016-09-30 2016-09-30 Object detection method, object detection apparatus and electronic device

Country Status (2)

Country Link
CN (1) CN109479118A (fr)
WO (1) WO2018058573A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584266A (zh) * 2018-11-15 2019-04-05 腾讯科技(深圳)有限公司 Target detection method and apparatus
CN110738101A (zh) * 2019-09-04 2020-01-31 平安科技(深圳)有限公司 Behavior recognition method, apparatus and computer-readable storage medium
CN111191730A (zh) * 2020-01-02 2020-05-22 中国航空工业集团公司西安航空计算技术研究所 Ultra-large-size image target detection method and system for embedded deep learning

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3819811A1 (fr) * 2019-11-06 2021-05-12 Ningbo Geely Automobile Research & Development Co. Ltd. Vehicle object detection

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101198033A (zh) * 2007-12-21 2008-06-11 北京中星微电子有限公司 Method and apparatus for locating a foreground image in a binary image
CN101799968A (zh) * 2010-01-13 2010-08-11 任芳 Oil well intrusion detection method and apparatus based on intelligent video image analysis
CN103020608A (zh) * 2012-12-28 2013-04-03 南京荣飞科技有限公司 Method for recognizing prison uniforms in prison video surveillance images
CN103971381A (zh) * 2014-05-16 2014-08-06 江苏新瑞峰信息科技有限公司 Multi-target tracking system and method
CN104167004A (zh) * 2013-05-16 2014-11-26 上海分维智能科技有限公司 Fast moving-vehicle detection method for an embedded DSP platform
US20150131851A1 (en) * 2013-11-13 2015-05-14 Xerox Corporation System and method for using apparent size and orientation of an object to improve video-based tracking in regularized environments

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101325690A (zh) * 2007-06-12 2008-12-17 上海正电科技发展有限公司 Method and system for analyzing pedestrian flow and detecting crowd gathering in surveillance video streams
CN104573697B (zh) * 2014-12-31 2017-10-31 西安丰树电子科技发展有限公司 Method for counting persons in a construction hoist cage based on multi-information fusion
CN105957110B (zh) * 2016-06-29 2018-04-13 上海小蚁科技有限公司 Apparatus and method for detecting objects

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101198033A (zh) * 2007-12-21 2008-06-11 北京中星微电子有限公司 Method and apparatus for locating a foreground image in a binary image
CN101799968A (zh) * 2010-01-13 2010-08-11 任芳 Oil well intrusion detection method and apparatus based on intelligent video image analysis
CN103020608A (zh) * 2012-12-28 2013-04-03 南京荣飞科技有限公司 Method for recognizing prison uniforms in prison video surveillance images
CN104167004A (zh) * 2013-05-16 2014-11-26 上海分维智能科技有限公司 Fast moving-vehicle detection method for an embedded DSP platform
US20150131851A1 (en) * 2013-11-13 2015-05-14 Xerox Corporation System and method for using apparent size and orientation of an object to improve video-based tracking in regularized environments
CN103971381A (zh) * 2014-05-16 2014-08-06 江苏新瑞峰信息科技有限公司 Multi-target tracking system and method

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584266A (zh) * 2018-11-15 2019-04-05 腾讯科技(深圳)有限公司 Target detection method and apparatus
CN110738101A (zh) * 2019-09-04 2020-01-31 平安科技(深圳)有限公司 Behavior recognition method, apparatus and computer-readable storage medium
CN110738101B (zh) * 2019-09-04 2023-07-25 平安科技(深圳)有限公司 Behavior recognition method, apparatus and computer-readable storage medium
CN111191730A (zh) * 2020-01-02 2020-05-22 中国航空工业集团公司西安航空计算技术研究所 Ultra-large-size image target detection method and system for embedded deep learning
CN111191730B (zh) * 2020-01-02 2023-05-12 中国航空工业集团公司西安航空计算技术研究所 Ultra-large-size image target detection method and system for embedded deep learning

Also Published As

Publication number Publication date
CN109479118A (zh) 2019-03-15

Similar Documents

Publication Publication Date Title
CN109086691B (zh) Three-dimensional face liveness detection method, face authentication and recognition method, and apparatus
US10192107B2 (en) Object detection method and object detection apparatus
JP6511149B2 (ja) Method for calculating the area of an overlapping fingerprint region, electronic device performing the same, computer program, and recording medium
US9619708B2 (en) Method of detecting a main subject in an image
CN108875723B (zh) Object detection method, apparatus and system, and storage medium
US9311533B2 (en) Device and method for detecting the presence of a logo in a picture
WO2021051604A1 (fr) OSD text region identification method, device and storage medium
WO2018058595A1 (fr) Target detection method and device, and computer system
US20190156499A1 (en) Detection of humans in images using depth information
TWI514327B (zh) Target detection and tracking method and system
WO2018058573A1 (fr) Object detection method, object detection apparatus and electronic device
TWI772757B (zh) Target detection method, electronic device and computer-readable storage medium
WO2018058530A1 (fr) Target detection method and device, and image processing apparatus
JP2012038318A (ja) Target detection method and apparatus
CN109948521B (zh) Image deskewing method and apparatus, device and storage medium
WO2019076187A1 (fr) Video blocking region selection method and apparatus, electronic device and system
TW201432620A (zh) 具有邊緣選擇功能性之影像處理器
JP6338429B2 (ja) Subject detection device, subject detection method and program
CN111046845A (zh) Liveness detection method, apparatus and system
US9947106B2 (en) Method and electronic device for object tracking in a light-field capture
CN108960247B (zh) Image saliency detection method, apparatus and electronic device
JP2016053763A (ja) Image processing apparatus, image processing method and program
TW201944353A (zh) Object image recognition system and object image recognition method
JP6580201B2 (ja) Subject detection device, subject detection method and program
US10713808B2 (en) Stereo matching method and system using rectangular window

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16917323

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16917323

Country of ref document: EP

Kind code of ref document: A1