WO2017219875A1 - Human hand detection and tracking method and device - Google Patents

Human hand detection and tracking method and device

Info

Publication number
WO2017219875A1
WO2017219875A1 (PCT/CN2017/087658)
Authority
WO
WIPO (PCT)
Prior art keywords
human hand
tracking
detection
tracking result
frame
Prior art date
Application number
PCT/CN2017/087658
Other languages
English (en)
French (fr)
Inventor
杜志军
王楠
Original Assignee
Alibaba Group Holding Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Limited
Priority to ES17814613T priority Critical patent/ES2865403T3/es
Priority to EP17814613.0A priority patent/EP3477593B1/en
Priority to JP2018567694A priority patent/JP6767516B2/ja
Priority to KR1020197001955A priority patent/KR102227083B1/ko
Priority to PL17814613T priority patent/PL3477593T3/pl
Publication of WO2017219875A1 publication Critical patent/WO2017219875A1/zh
Priority to US16/229,810 priority patent/US10885638B2/en
Priority to US16/721,449 priority patent/US10885639B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/2433Single-class perspective, e.g. one-against-all classification; Novelty detection; Outlier detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/223Analysis of motion using block-matching
    • G06T7/231Analysis of motion using block-matching using full search
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20072Graph-based image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance

Definitions

  • the invention relates to the field of visual object detection and tracking and human-computer interaction technology, and in particular to a human hand detection and tracking method and device.
  • the human hand can be used as a tool for human-computer interaction.
  • applications need to detect and track the human hand in real time to obtain its position in each frame of the video.
  • a strategy of running detection on every frame can be adopted.
  • the problem with this strategy is that detection is time-consuming and cannot achieve real-time performance.
  • moreover, occasional false detections cause the hand position to jump, which degrades the subsequent interaction.
  • to solve this, a tracking mechanism is introduced in the prior art, and real-time performance is achieved through tracking.
  • however, tracking often loses the target.
  • a common prior-art remedy is to introduce skin-color information; although skin color can avoid some erroneous tracking, it still causes tracking errors when the background color is similar to skin color.
  • the embodiment of the present application provides a human hand detection and tracking method, including: performing human hand detection on images frame by frame; when a human hand is detected in a frame image, performing position tracking on the detected hand to obtain a tracking result; and verifying whether the tracking result is valid, to track the hand in the next frame, or to perform local detection of the current frame on the hand according to the tracking result.
  • the embodiment of the present application further provides a human hand detection and tracking device, including: a human hand detection unit configured to perform human hand detection on images frame by frame;
  • a position tracking unit configured to, when a human hand is detected in a frame image, perform position tracking on the detected hand to obtain a tracking result;
  • a tracking result processing unit configured to verify whether the tracking result is valid, to track the hand in the next frame, or to perform local detection of the current frame on the hand according to the tracking result.
  • by adding a verification step during tracking, the tracking result can be corrected in real time, ensuring fast and accurate human hand detection.
  • FIG. 1 is a flowchart of a human hand detection and tracking method according to an embodiment of the present application;
  • FIG. 2 is a schematic diagram of different scales of human hand detection according to an embodiment of the present application.
  • FIG. 3 is a flowchart of a method for verifying a tracking result according to an embodiment of the present application
  • FIG. 4 is a schematic diagram of verifying whether a tracking result is valid according to an embodiment of the present application.
  • FIG. 5 is a flowchart of a method for locally detecting a current frame according to a tracking result according to an embodiment of the present application
  • FIG. 6 is a schematic diagram of a combination of blocks in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a combination of blocks in another embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a human hand detection and tracking device according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a tracking result processing unit according to an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a tracking result processing unit according to another embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a tracking result processing unit according to still another embodiment of the present application.
  • FIG. 1 is a flowchart of an embodiment of the human hand detection and tracking method proposed by the present application.
  • although the present application provides the method steps or device structures shown in the following embodiments or drawings, the method or device may include more or fewer steps or modules based on conventional or non-inventive effort.
  • for steps or structures with no necessary logical causal relationship, the execution order of the steps or the module structure of the device is not limited to that provided in the embodiments of the present application.
  • when the method or module structure is executed in an actual device or terminal product, it may be executed sequentially or in parallel according to the connections shown in the embodiments or drawings (for example, in a parallel-processor or multi-threaded processing environment).
  • the human hand detection and tracking method of the present application may include:
  • S101: Perform human hand detection on images frame by frame.
  • S102: When a human hand is detected in a frame image, perform position tracking on the detected hand to obtain a tracking result.
  • S103: Verify whether the tracking result is valid, to track the hand in the next frame, or to perform local detection of the current frame on the hand according to the tracking result.
  • as shown in FIG. 1, the present application first performs hand detection, performs position tracking once a hand is detected, and verifies the validity of the tracking result; invalid tracking results can be corrected to prevent false positives, so that hand detection is fast and accurate and the amount of computation is reduced.
  • in S101, detection generally starts from the first frame. a specific detection method is to traverse the full image of each frame and perform hand detection using the HOG+SVM method, a detection approach commonly used in the prior art that is not described further here.
  • in addition, hand detection needs to be performed at different scales so as to match the hand in the frame image well and detect it accurately and quickly; see FIG. 2 for the different scales.
  • the detected human hand can be tracked.
  • in an embodiment, the detected hand can be position-tracked using a template-matching strategy to obtain a tracking result.
  • this tracking result is preliminary; whether the tracking is valid cannot yet be determined from it, so the tracking result needs to be verified.
  • the tracking result generally corresponds to a positioning block (block) in the frame image, and verifying whether the tracking result is valid amounts to determining whether that block is a human hand.
  • the method for verifying whether the tracking result is valid includes the following steps:
  • S301: Adjust the positioning block to the size determined when the hand classifier was trained.
  • the classifier needs to be trained before classification; since the classifier has a fixed input size at training time, the hand block in the video must be resized to the size determined during hand training before classification.
  • S302: Send the resized positioning block to the classifier and determine whether it is a human hand; if the positioning block is a human hand, the tracking result is valid, otherwise the tracking result is invalid.
  • FIG. 4 is a schematic diagram of verifying whether the tracking result is valid according to an embodiment of the present application.
  • the hand box in the video (positioning block S1) needs to be resized to the size determined during hand training, yielding block S2.
  • block S2 is then sent to the classifier, which outputs a decision; from this decision it can be determined whether block S2 is a human hand. if block S2 is a human hand, the tracking result is valid, otherwise the tracking result is invalid.
  • the above classifier may be an SVM, ANN, BOOST, or the like; the application is not limited in this respect.
  • if the tracking result is valid, tracking may continue with the next frame, that is, S102 and S103 of FIG. 1 are repeated without performing the detection of S101; compared with the prior art, which performs hand detection on every frame of the image, this reduces the workload.
  • if the tracking result is invalid, local detection of the current frame may be performed on the hand according to the tracking result, which specifically includes:
  • S501: Determine the center of the block, and define a plurality of neighborhood blocks using a set step size and set block scales.
  • S502: Adjust each of the neighborhood blocks to the size determined during hand training.
  • the classifier needs to be trained before classification; since the classifier has a fixed input size, the hand block in the video must be resized to the size determined during hand training before classification.
  • S503: Send the resized neighborhood blocks to the classifier and determine the number of neighborhood blocks that are a human hand.
  • specifically, the block center of the current tracking result may be denoted (x, y), and the block width and height (w, h).
  • the block was judged not to be a human hand, possibly because the tracking result deviates somewhat from the true position, or because the hand's imaged size was scaled by the shooting distance. the present application therefore adopts the following strategy to solve this problem.
  • for clarity, in the following the step size is set to 2, the number of neighborhood blocks to 8, and the block scales to the three scales (0.8w, 0.8h), (w, h), (1.2w, 1.2h); these values are illustrative, not limiting.
  • hand detection is performed in the 8-neighborhood of (x, y) with a step size of 2; that is, the centers of the 8 candidate neighborhood blocks are (x-2, y-2), (x, y-2), (x+2, y-2), (x-2, y), (x+2, y), (x-2, y+2), (x, y+2), (x+2, y+2). with the three scales, this gives 3*8 = 24 neighborhood blocks; the different scales cover the effect of zooming.
  • a hand decision can then be made for each of the 24 neighborhood blocks.
  • each neighborhood block is resized to the size determined during hand training, the resized neighborhood blocks are sent to the classifier, each is judged as hand or not, and finally the number of hand blocks is counted.
  • this strategy requires 3*8 resize and classifier-decision operations; compared with the per-frame detection of the prior art, it greatly reduces the amount of computation.
  • blocks 601 and 602 are the detected blocks, and the result of block 601 is (left1, top1, right1, bottom1), where (left1, top1) identifies the upper-left vertex of block 601 and (right1, bottom1) identifies its lower-right vertex.
  • the result of block 602 is (left2, top2, right2, bottom2), where (left2, top2) identifies the coordinates of the upper left vertex of block 602, and (right2, bottom2) identifies the coordinates of the lower right vertex of block 602.
  • Block 601 and block 602 are combined to obtain block 603.
  • the result of block 603 is ((left1+left2)/2, (top1+top2)/2, (right1+right2)/2, (bottom1+bottom2)/2); this merged result (block 603) is output as the final tracking result.
  • when the number of hand blocks among the 24 neighborhood blocks is greater than or equal to 2, this is equivalent to performing a hand detection operation in a limited area, and the output is the detection result.
  • if only one of the 24 neighborhood blocks is a human hand, that neighborhood block is merged with the positioning block obtained in S102 and output as the final tracking result, and the next frame is then tracked; that is, S102 and S103 of FIG. 1 are repeated without performing the detection of S101.
  • block 701 is the detected block, and the result of block 701 is (left3, top3, right3, bottom3), where (left3, top3) identifies the upper-left vertex of block 701 and (right3, bottom3) identifies its lower-right vertex.
  • block 702 is the block obtained in S102, and the result of block 702 is (left4, top4, right4, bottom4), where (left4, top4) identifies the upper-left vertex of block 702 and (right4, bottom4) identifies its lower-right vertex.
  • Block 701 and block 702 are combined to obtain block 703.
  • the result of block 703 is ((left3+left4)/2, (top3+top4)/2, (right3+right4)/2, (bottom3+bottom4)/2); this merged result (block 703) is output as the final tracking result.
  • by verifying the validity of the tracking result, the human hand detection and tracking method of the embodiments of the present application can correct invalid tracking results to prevent false positives, so hand detection can be performed quickly and accurately.
  • by performing local detection of the current frame according to the tracking result, the amount of computation can be greatly reduced.
  • the present application further provides a human hand detection and tracking device, as described in the following embodiments. since the principle by which the device solves the problem is similar to that of the human hand detection and tracking method, the implementation of the device can refer to the implementation of the method, and repeated descriptions are omitted.
  • FIG. 8 is a schematic structural diagram of a human hand detection and tracking device according to an embodiment of the present invention.
  • the human hand detection and tracking device includes a human hand detection unit 801, a position tracking unit 802, and a tracking result processing unit 803.
  • the human hand detection unit 801 is configured to perform human hand detection on images frame by frame;
  • the position tracking unit 802 is configured to perform position tracking on the detected human hand when a human hand is detected in a certain frame image, to obtain a tracking result;
  • the tracking result processing unit 803 is configured to verify whether the tracking result is valid, to perform tracking of the next frame on the human hand, or perform local detection of the current frame on the human hand according to the tracking result.
  • the human hand detection unit 801 is specifically configured to: traverse the full frame image and perform hand detection at different scales using the HOG+SVM method.
  • hand detection is performed at different scales so as to match the hand in the frame image well and detect it accurately and quickly.
  • the location tracking unit 802 is specifically configured to perform location tracking on the detected human hand by using a template matching policy to obtain a tracking result.
  • the tracking result processing unit includes: a size adjustment module 901 and a hand determination module 902.
  • the size adjustment module 901 is configured to resize the positioning block to the size determined during hand training; the classifier needs to be trained before classification, and since it has a fixed input size at training time, the hand block in the video must be resized to that size before classification.
  • the hand determination module 902 is configured to send the resized positioning block to the classifier and determine whether it is a human hand; if the positioning block is a human hand, the tracking result is valid, otherwise the tracking result is invalid.
  • if the tracking result is valid, the position tracking unit 802 tracks the hand in the next frame.
  • the tracking result processing unit 803 further includes: an information determination module 1001 configured to determine the center of the positioning block and define a plurality of neighborhood blocks using a set step size and set block scales.
  • the size adjustment module 901 resizes the neighborhood blocks to the size determined during hand training, and the hand determination module 902 sends the resized neighborhood blocks to the classifier and determines the number of neighborhood blocks that are a human hand.
  • the tracking result processing unit 803 further includes: a merging module 1101 configured to, when the number of hand blocks among the neighborhood blocks is greater than or equal to 2, merge all hand neighborhood blocks and output the result as the final tracking result, after which the next frame is tracked.
  • if only one neighborhood block is a human hand, the merging module 1101 is further configured to merge that neighborhood block with the positioning block and output the result as the final tracking result, after which the next frame is tracked.
  • if none of the neighborhood blocks is a human hand, the human hand detection unit 801 needs to perform hand detection on images frame by frame anew.
  • by verifying the validity of the tracking result, the human hand detection and tracking device of the embodiments of the present invention can correct invalid tracking results to prevent false positives, and can perform hand detection quickly and accurately.
  • the amount of calculation can be greatly reduced.
  • embodiments of the present invention can be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • the computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

A human hand detection and tracking method and device, comprising: performing human hand detection on images frame by frame (S101); when a human hand is detected in a frame image, performing position tracking on the detected hand to obtain a tracking result (S102); and verifying whether the tracking result is valid, so as to track the hand in the next frame, or to perform local detection of the current frame on the hand according to the tracking result (S103). By adding a verification step during tracking, the method can correct the tracking result in real time, ensuring fast and accurate human hand detection.

Description

Human hand detection and tracking method and device
This application claims priority to Chinese Patent Application No. 201610461515.0, filed on June 23, 2016 and entitled "Human hand detection and tracking method and device", which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates to the field of visual object detection and tracking and human-computer interaction, and in particular to a human hand detection and tracking method and device.
Background
The human hand can serve as a tool for human-computer interaction. Applications need to detect and track the hand in real time to obtain its position in every frame of a video. A strategy of running detection on every frame can be adopted; the problem with this strategy is that detection is time-consuming and real-time performance cannot be achieved. Moreover, occasional false detections cause the hand position to jump, degrading the subsequent interaction.
To solve the above problems, the prior art introduces a tracking mechanism and achieves real-time performance through tracking. However, tracking often loses the target. To address tracking loss, a common prior-art approach is to introduce skin-color information; although skin color can avoid some erroneous tracking, it still leads to tracking errors when the background color is close to skin color.
Summary
An embodiment of the present application provides a human hand detection and tracking method, comprising:
performing human hand detection on images frame by frame;
when a human hand is detected in a frame image, performing position tracking on the detected hand to obtain a tracking result; and
verifying whether the tracking result is valid, so as to track the hand in the next frame, or to perform local detection of the current frame on the hand according to the tracking result.
An embodiment of the present application further provides a human hand detection and tracking device, comprising:
a hand detection unit configured to perform human hand detection on images frame by frame;
a position tracking unit configured to, when a human hand is detected in a frame image, perform position tracking on the detected hand to obtain a tracking result; and
a tracking result processing unit configured to verify whether the tracking result is valid, so as to track the hand in the next frame, or to perform local detection of the current frame on the hand according to the tracking result.
In the embodiments of the present application, by adding a verification step during tracking, the tracking result can be corrected in real time, ensuring fast and accurate human hand detection.
Of course, any product or method implementing the present application need not achieve all of the above advantages at the same time.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for the description of the embodiments or the prior art are briefly introduced below. Apparently, the drawings described below show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of an embodiment of the human hand detection and tracking method of the present application;
FIG. 2 is a schematic diagram of hand detection at different scales according to an embodiment of the present application;
FIG. 3 is a flowchart of a method for verifying a tracking result according to an embodiment of the present application;
FIG. 4 is a schematic diagram of verifying whether a tracking result is valid according to an embodiment of the present application;
FIG. 5 is a flowchart of a method for performing local detection of the current frame on the hand according to the tracking result, according to an embodiment of the present application;
FIG. 6 is a schematic diagram of merging blocks in an embodiment of the present application;
FIG. 7 is a schematic diagram of merging blocks in another embodiment of the present application;
FIG. 8 is a schematic structural diagram of the human hand detection and tracking device of an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a tracking result processing unit according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a tracking result processing unit according to another embodiment of the present application;
FIG. 11 is a schematic structural diagram of a tracking result processing unit according to still another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The human hand detection and tracking method and device of the present application are described in detail below with reference to the drawings. FIG. 1 is a flowchart of an embodiment of the human hand detection and tracking method proposed by the present application. Although the present application provides the method steps or device structures shown in the following embodiments or drawings, the method or device may include more or fewer steps or modules based on conventional or non-inventive effort. For steps or structures with no necessary logical causal relationship, the execution order of the steps or the module structure of the device is not limited to that provided in the embodiments of the present application. When the method or module structure is executed in an actual device or terminal product, it may be executed sequentially or in parallel according to the connections shown in the embodiments or drawings (for example, in a parallel-processor or multi-threaded processing environment).
Because tracking a detected hand in the prior art often loses the target, the present application introduces a tracking verification mechanism that can correct the tracking result in real time, ensuring fast and accurate hand detection. Specifically, as shown in FIG. 1, the human hand detection and tracking method of the present application may include:
S101: performing human hand detection on images frame by frame;
S102: when a human hand is detected in a frame image, performing position tracking on the detected hand to obtain a tracking result;
S103: verifying whether the tracking result is valid, so as to track the hand in the next frame, or to perform local detection of the current frame on the hand according to the tracking result.
As the flow in FIG. 1 shows, the present application first performs hand detection, performs position tracking once a hand is detected, and verifies the validity of the tracking result; invalid tracking results can be corrected to prevent misjudgment, so hand detection is fast and accurate and the amount of computation is reduced.
In S101, hand detection generally starts from the first frame. A specific detection method is to traverse the full image of each frame and perform hand detection using the HOG+SVM method, a detection approach commonly used in the prior art for human-body detection, which is not described further here. In addition, in the present application, hand detection needs to be performed at different scales so as to match the hand in the frame image well and detect it accurately and quickly; see FIG. 2 for the different scales.
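The patent publishes no reference code; purely as an illustration of S101, a multi-scale sliding-window HOG+SVM detector might look like the following sketch. The window size WIN, the scale set, the stride, and the HOG parameters are assumptions, and svm stands for a pre-trained binary hand classifier (for example, scikit-learn's LinearSVC with label 1 meaning "hand"):

```python
# Illustrative sketch only: multi-scale sliding-window hand detection (S101).
# WIN, SCALES, STRIDE and the HOG parameters are assumed values; `svm` is a
# pre-trained binary hand classifier (label 1 = hand).
import cv2
from skimage.feature import hog

WIN = (64, 64)            # assumed classifier training window (width, height)
SCALES = (0.8, 1.0, 1.2)  # search at several scales, cf. FIG. 2
STRIDE = 8                # sliding-window step in pixels

def detect_hand(gray, svm):
    """Scan the full grayscale frame; return the first hand window as
    (left, top, right, bottom) in original image coordinates, or None."""
    for s in SCALES:
        scaled = cv2.resize(gray, None, fx=s, fy=s)
        h, w = scaled.shape
        for y in range(0, h - WIN[1] + 1, STRIDE):
            for x in range(0, w - WIN[0] + 1, STRIDE):
                patch = scaled[y:y + WIN[1], x:x + WIN[0]]
                feat = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                           cells_per_block=(2, 2)).reshape(1, -1)
                if svm.predict(feat)[0] == 1:
                    # map the window back to original image coordinates
                    return (int(x / s), int(y / s),
                            int((x + WIN[0]) / s), int((y + WIN[1]) / s))
    return None
```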
After hand detection succeeds (that is, a hand is detected in a frame image), the detected hand can be position-tracked. In an embodiment, a template-matching strategy can be used to track the position of the detected hand and obtain a tracking result.
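The text specifies a template-matching strategy but no particular matcher; as one hedged possibility, OpenCV's normalized cross-correlation can play that role:

```python
# Illustrative template-matching tracker (S102). Normalized cross-correlation
# is an assumption; the patent only requires "a template-matching strategy".
import cv2

def track_hand(gray, prev_gray, box):
    """Track the hand block `box` = (left, top, right, bottom) from the
    previous grayscale frame into the current one."""
    l, t, r, b = box
    template = prev_gray[t:b, l:r]              # hand appearance in last frame
    scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(scores)     # best-match top-left corner
    return (x, y, x + (r - l), y + (b - t))     # preliminary tracking result
```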
The tracking result obtained above is preliminary; whether the tracking is valid cannot yet be determined from it, so the tracking result needs to be verified.
The tracking result generally corresponds to a positioning block (block) in the frame image, and verifying whether the tracking result is valid amounts to determining whether that block is a human hand. As shown in FIG. 3, the method for verifying whether the tracking result is valid includes the following steps:
S301: Adjust the positioning block to the size determined when the hand classifier was trained. The classifier needs to be trained before classification; since the classifier has a fixed input size at training time, the hand block in the video must be resized to the size determined during hand training before classification.
S302: Send the resized positioning block to the classifier and determine whether it is a human hand; if the positioning block is a human hand, the tracking result is valid, otherwise the tracking result is invalid.
FIG. 4 is a schematic diagram of verifying whether the tracking result is valid according to an embodiment of the present application. As shown in FIG. 4, the hand box in the video (positioning block S1) is first resized to the size determined during hand training to obtain block S2; block S2 is then sent to the classifier, which outputs a decision. From this decision it can be determined whether block S2 is a human hand: if block S2 is a human hand, the tracking result is valid, otherwise the tracking result is invalid. The classifier may be an SVM, ANN, BOOST, or the like; the present application is not limited in this respect.
When the tracking result is valid, tracking can continue with the next frame; that is, S102 and S103 of FIG. 1 are repeated without performing the hand detection of S101. Compared with the prior art, which performs hand detection on every frame, this reduces the workload.
When the tracking result is invalid, the cause may be that the tracking result deviates from the position and size of the hand in the image; jumping straight back to S101 to continue hand detection could cause misjudgment. To solve this problem, as shown in FIG. 5, local detection of the current frame can be performed on the hand according to the tracking result, specifically including:
S501: Determine the center of the above block, and define a plurality of neighborhood blocks using a set step size and set block scales.
S502: Adjust each of the neighborhood blocks to the size determined during hand training. The classifier needs to be trained before classification; since the classifier has a fixed input size at training time, the hand block in the video must be resized to the size determined during hand training before classification.
S503: Send the resized neighborhood blocks to the classifier and determine the number of neighborhood blocks that are a human hand.
Specifically, the block center of the current tracking result may be denoted (x, y), and the block width and height (w, h). As described above, this block was judged not to be a hand, possibly because the tracking result deviates somewhat from the true position, or because the hand's imaged size was scaled by the shooting distance. The present application therefore adopts the following strategy to solve this problem. For clarity, in the following the step size is set to 2, the number of neighborhood blocks to 8, and the block scales to the three scales (0.8w, 0.8h), (w, h), (1.2w, 1.2h); these values are illustrative, not limiting.
First, hand detection is performed in the 8-neighborhood of (x, y) with a step size of 2; that is, the centers of the 8 candidate neighborhood blocks are (x-2, y-2), (x, y-2), (x+2, y-2), (x-2, y), (x+2, y), (x-2, y+2), (x, y+2), (x+2, y+2). With the three scales (0.8w, 0.8h), (w, h), (1.2w, 1.2h), there are 3*8 = 24 neighborhood blocks; the different scales are meant to cover the effect of zooming.
After these operations, a hand decision can be made for each of the 24 neighborhood blocks: each neighborhood block is resized to the size determined during hand training, the resized blocks are sent to the classifier, each is judged as hand or not, and finally the number of hand blocks is counted. This strategy requires 3*8 resize and classifier-decision operations; compared with the per-frame detection of the prior art, it greatly reduces the amount of computation.
Based on the counted number of hand neighborhood blocks, further actions can be taken, as follows:
If the number of hand blocks among the 24 neighborhood blocks is greater than or equal to 2, all hand neighborhood blocks can be merged and output as the final tracking result; the next frame is then tracked, i.e., S102 and S103 of FIG. 1 are repeated without performing the detection of S101.
Suppose the number of hand blocks among the 24 neighborhood blocks equals 2. As shown in FIG. 6, the two dashed boxes (blocks 601 and 602) are the detected blocks. The result of block 601 is (left1, top1, right1, bottom1), where (left1, top1) identifies the upper-left vertex of block 601 and (right1, bottom1) its lower-right vertex; the result of block 602 is (left2, top2, right2, bottom2), defined likewise. Merging blocks 601 and 602 yields block 603, whose result is ((left1+left2)/2, (top1+top2)/2, (right1+right2)/2, (bottom1+bottom2)/2); this merged result (block 603) is output as the final tracking result.
For the case where the number of hand blocks among the 24 neighborhood blocks is greater than or equal to 2, this is equivalent to performing a hand detection operation within a limited area, and the output is the detection result.
If only one of the 24 neighborhood blocks is a human hand, that hand neighborhood block is merged with the positioning block obtained in S102 and output as the final tracking result; the next frame is then tracked, i.e., S102 and S103 of FIG. 1 are repeated without performing the detection of S101.
Suppose the classifier judges that only one of the 24 neighborhood blocks is a human hand. As shown in FIG. 7, block 701 is the detected block, with result (left3, top3, right3, bottom3), where (left3, top3) identifies the upper-left vertex of block 701 and (right3, bottom3) its lower-right vertex. Block 702 is the block obtained in S102, with result (left4, top4, right4, bottom4), defined likewise. Merging blocks 701 and 702 yields block 703, whose result is ((left3+left4)/2, (top3+top4)/2, (right3+right4)/2, (bottom3+bottom4)/2); this merged result (block 703) is output as the final tracking result.
For the case where only one of the 24 neighborhood blocks is a human hand, it can be understood that both the tracking and the detection are valid; the tracking result merely deviates slightly from the true position, so merging suffices.
If none of the 24 neighborhood blocks is a human hand, the possible causes are that the hand is no longer present, or that its shape differs substantially from the shapes defined at training time; hand detection is then performed on images frame by frame anew.
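Putting the pieces together, the whole flow of FIG. 1 can be sketched end to end. This reuses the illustrative helpers sketched above (detect_hand, track_hand, verify_block, neighborhood_blocks, merge_blocks); it is a reading aid under the stated assumptions, not the patent's reference implementation:

```python
# End-to-end sketch of FIG. 1, reusing the illustrative helpers above.
def process_video(frames, svm):
    positions, prev, hand = [], None, None
    for gray in frames:
        if hand is None:
            hand = detect_hand(gray, svm)                    # S101
        else:
            cand = track_hand(gray, prev, hand)              # S102
            if verify_block(gray, cand, svm):                # S103: valid
                hand = cand
            else:                                            # local detection
                cx, cy = (cand[0] + cand[2]) // 2, (cand[1] + cand[3]) // 2
                w, h = cand[2] - cand[0], cand[3] - cand[1]
                hits = [b for b in neighborhood_blocks(cx, cy, w, h)
                        if verify_block(gray, b, svm)]
                if len(hits) >= 2:
                    hand = merge_blocks(hits)                # FIG. 6 case
                elif len(hits) == 1:
                    hand = merge_blocks([hits[0], cand])     # FIG. 7 case
                else:
                    hand = None                              # redetect next frame
        prev = gray
        positions.append(hand)
    return positions
```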
By verifying the validity of the tracking result, the human hand detection and tracking method of the embodiments of the present application can correct invalid tracking results to prevent misjudgment, performing hand detection quickly and accurately. By performing local detection of the current frame according to the tracking result, the amount of computation is greatly reduced.
Based on the same inventive concept as the above human hand detection and tracking method, the present application provides a human hand detection and tracking device, described in the following embodiments. Since the principle by which the device solves the problem is similar to that of the method, the implementation of the device can refer to the implementation of the method, and repeated descriptions are omitted.
FIG. 8 is a schematic structural diagram of the human hand detection and tracking device of an embodiment of the present application. As shown in FIG. 8, the device includes: a hand detection unit 801, a position tracking unit 802, and a tracking result processing unit 803.
The hand detection unit 801 is configured to perform human hand detection on images frame by frame;
the position tracking unit 802 is configured to, when a human hand is detected in a frame image, perform position tracking on the detected hand to obtain a tracking result;
the tracking result processing unit 803 is configured to verify whether the tracking result is valid, so as to track the hand in the next frame, or to perform local detection of the current frame on the hand according to the tracking result.
In an embodiment, the hand detection unit 801 is specifically configured to: traverse the full frame image and perform hand detection at different scales using the HOG+SVM method, so as to match the hand in the frame image well and detect it accurately and quickly.
In an embodiment, the position tracking unit 802 is specifically configured to: track the position of the detected hand using a template-matching strategy to obtain a tracking result.
In an embodiment, as shown in FIG. 9, the tracking result processing unit includes: a size adjustment module 901 and a hand determination module 902.
The size adjustment module 901 is configured to resize the positioning block to the size determined during hand training; the classifier needs to be trained before classification, and since it has a fixed input size at training time, the hand block in the video must be resized to that size before classification.
The hand determination module 902 is configured to send the resized positioning block to the classifier and determine whether it is a human hand; if the positioning block is a human hand, the tracking result is valid, otherwise the tracking result is invalid.
In an embodiment, if the hand determination module 902 determines that the tracking result is valid, the position tracking unit 802 tracks the hand in the next frame.
In an embodiment, as shown in FIG. 10, the tracking result processing unit 803 further includes: an information determination module 1001 configured to determine the center of the positioning block and define a plurality of neighborhood blocks using a set step size and set block scales. The size adjustment module 901 resizes the neighborhood blocks to the size determined during hand training, and the hand determination module 902 sends the resized neighborhood blocks to the classifier and determines the number of neighborhood blocks that are a human hand.
In an embodiment, as shown in FIG. 11, the tracking result processing unit 803 further includes: a merging module 1101 configured to, when the number of hand blocks among the neighborhood blocks is greater than or equal to 2, merge all hand neighborhood blocks and output the result as the final tracking result, after which the next frame is tracked.
In an embodiment, if the number of hand blocks among the neighborhood blocks is 1, the merging module 1101 is further configured to merge the hand neighborhood block with the positioning block and output the result as the final tracking result, after which the next frame is tracked.
In an embodiment, if none of the neighborhood blocks is a human hand, the hand detection unit 801 needs to perform hand detection on images frame by frame anew.
By verifying the validity of the tracking result, the human hand detection and tracking device of the embodiments of the present application can correct invalid tracking results to prevent misjudgment, performing hand detection quickly and accurately. By performing local detection of the current frame according to the tracking result, the amount of computation is greatly reduced.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The principles and implementations of the present invention have been described herein with reference to specific embodiments; the above description is intended only to help understand the method of the present invention and its core idea. Meanwhile, a person of ordinary skill in the art may make changes to the specific implementation and application scope according to the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (18)

  1. A human hand detection and tracking method, comprising:
    performing human hand detection on images frame by frame;
    when a human hand is detected in a frame image, performing position tracking on the detected human hand to obtain a tracking result; and
    verifying whether the tracking result is valid, so as to track the human hand in the next frame, or to perform local detection of the current frame on the human hand according to the tracking result.
  2. The human hand detection and tracking method according to claim 1, wherein performing human hand detection on images frame by frame comprises: traversing the full frame image and performing human hand detection at different scales using the HOG+SVM method.
  3. The human hand detection and tracking method according to claim 1, wherein performing position tracking on the detected human hand to obtain a tracking result comprises:
    performing position tracking on the detected human hand using a template-matching strategy to obtain the tracking result.
  4. The human hand detection and tracking method according to claim 1, wherein the tracking result is the coordinates of a positioning block identifying the position of the human hand in the frame image, and verifying whether the tracking result is valid comprises:
    adjusting the positioning block to the size determined during hand training; and
    sending the resized positioning block to a classifier and determining whether the positioning block is a human hand, wherein if the positioning block is a human hand, the tracking result is valid, and otherwise the tracking result is invalid.
  5. The human hand detection and tracking method according to claim 4, wherein if the tracking result is valid, the human hand is tracked in the next frame.
  6. The human hand detection and tracking method according to claim 4, wherein if the tracking result is invalid, performing local detection of the current frame on the human hand according to the tracking result comprises:
    determining the center of the positioning block, and defining a plurality of neighborhood blocks using a set step size and set block scales;
    adjusting each of the plurality of neighborhood blocks to the size determined during hand training; and
    sending the resized neighborhood blocks to the classifier and determining the number of neighborhood blocks, among the plurality of neighborhood blocks, that are a human hand.
  7. The human hand detection and tracking method according to claim 6, wherein if the number of hand neighborhood blocks among the plurality of neighborhood blocks is greater than or equal to 2, all hand neighborhood blocks are merged and output as the final tracking result, and the human hand is then tracked in the next frame.
  8. The human hand detection and tracking method according to claim 6, wherein if the number of hand neighborhood blocks among the plurality of neighborhood blocks is 1, the hand neighborhood block is merged with the positioning block and output as the final tracking result, and the human hand is then tracked in the next frame.
  9. The human hand detection and tracking method according to claim 6, wherein if none of the plurality of neighborhood blocks is a human hand, human hand detection is performed on images frame by frame anew.
  10. A human hand detection and tracking device, comprising:
    a human hand detection unit configured to perform human hand detection on images frame by frame;
    a position tracking unit configured to, when a human hand is detected in a frame image, perform position tracking on the detected human hand to obtain a tracking result; and
    a tracking result processing unit configured to verify whether the tracking result is valid, so as to track the human hand in the next frame, or to perform local detection of the current frame on the human hand according to the tracking result.
  11. The human hand detection and tracking device according to claim 10, wherein the human hand detection unit is specifically configured to: traverse the full frame image and perform human hand detection at different scales using the HOG+SVM method.
  12. The human hand detection and tracking device according to claim 10, wherein the position tracking unit is specifically configured to:
    perform position tracking on the detected human hand using a template-matching strategy to obtain the tracking result.
  13. The human hand detection and tracking device according to claim 10, wherein the tracking result is the coordinates of a positioning block identifying the position of the human hand in the frame image, and the tracking result processing unit comprises:
    a size adjustment module configured to adjust the positioning block to the size determined during hand training; and
    a hand determination module configured to send the resized positioning block to a classifier and determine whether the positioning block is a human hand, wherein if the positioning block is a human hand, the tracking result is valid, and otherwise the tracking result is invalid.
  14. The human hand detection and tracking device according to claim 13, wherein if the tracking result is valid, the position tracking unit tracks the human hand in the next frame.
  15. The human hand detection and tracking device according to claim 13, wherein the tracking result processing unit further comprises: an information determination module configured to determine the center of the positioning block and define a plurality of neighborhood blocks using a set step size and set block scales;
    the size adjustment module adjusts each of the plurality of neighborhood blocks to the size determined during hand training; and
    the hand determination module is configured to send the resized neighborhood blocks to the classifier and determine the number of neighborhood blocks, among the plurality of neighborhood blocks, that are a human hand.
  16. The human hand detection and tracking device according to claim 15, wherein the tracking result processing unit further comprises: a merging module configured to, when the number of hand neighborhood blocks among the plurality of neighborhood blocks is greater than or equal to 2, merge all hand neighborhood blocks and output the result as the final tracking result, after which the human hand is tracked in the next frame.
  17. The human hand detection and tracking device according to claim 16, wherein if the number of hand neighborhood blocks among the plurality of neighborhood blocks is 1, the merging module is further configured to merge the hand neighborhood block with the positioning block and output the result as the final tracking result, after which the human hand is tracked in the next frame.
  18. The human hand detection and tracking device according to claim 15, wherein if none of the plurality of neighborhood blocks is a human hand, the human hand detection unit performs human hand detection on images frame by frame anew.
PCT/CN2017/087658 2016-06-23 2017-06-09 Human hand detection and tracking method and device WO2017219875A1 (zh)

Priority Applications (7)

Application Number Priority Date Filing Date Title
ES17814613T ES2865403T3 (es) 2016-06-23 2017-06-09 Hand detection and tracking method and device
EP17814613.0A EP3477593B1 (en) 2016-06-23 2017-06-09 Hand detection and tracking method and device
JP2018567694A JP6767516B2 (ja) 2016-06-23 2017-06-09 Hand detection and tracking method and device
KR1020197001955A KR102227083B1 (ko) 2016-06-23 2017-06-09 Hand detection and tracking method and device
PL17814613T PL3477593T3 (pl) 2016-06-23 2017-06-09 Method and device for detecting and tracking a hand
US16/229,810 US10885638B2 (en) 2016-06-23 2018-12-21 Hand detection and tracking method and device
US16/721,449 US10885639B2 (en) 2016-06-23 2019-12-19 Hand detection and tracking method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610461515.0 2016-06-23
CN201610461515.0A CN106920251A (zh) 2016-06-23 2016-06-23 Human hand detection and tracking method and device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/229,810 Continuation US10885638B2 (en) 2016-06-23 2018-12-21 Hand detection and tracking method and device

Publications (1)

Publication Number Publication Date
WO2017219875A1 true WO2017219875A1 (zh) 2017-12-28

Family

ID=59453270

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/087658 WO2017219875A1 (zh) 2016-06-23 2017-06-09 Human hand detection and tracking method and device

Country Status (9)

Country Link
US (2) US10885638B2 (zh)
EP (1) EP3477593B1 (zh)
JP (1) JP6767516B2 (zh)
KR (1) KR102227083B1 (zh)
CN (1) CN106920251A (zh)
ES (1) ES2865403T3 (zh)
PL (1) PL3477593T3 (zh)
TW (1) TWI703507B (zh)
WO (1) WO2017219875A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106920251A (zh) 2016-06-23 2017-07-04 Alibaba Group Holding Limited Human hand detection and tracking method and device
WO2018223295A1 (en) * 2017-06-06 2018-12-13 Midea Group Co., Ltd. Coarse-to-fine hand detection method using deep neural network
CN108121971B (zh) * 2017-12-25 2018-10-26 Harbin Tuoxun Technology Co., Ltd. Human hand detection method and device based on temporal action features
CN108229360B (zh) * 2017-12-26 2021-03-19 Midea Group Co., Ltd. Image processing method, device, and storage medium
CN108717522A (zh) * 2018-04-18 2018-10-30 Shanghai Jiao Tong University Human target tracking method based on deep learning and correlation filtering
TWI719591B (zh) * 2019-08-16 2021-02-21 Wistron Corporation Object tracking method and computer system thereof
CN111046844B (zh) * 2019-12-27 2020-11-27 China University of Geosciences (Beijing) Hyperspectral image classification method based on neighborhood-selection constraints
CN111568197A (zh) * 2020-02-28 2020-08-25 Foshan Viomi Electrical Technology Co., Ltd. Intelligent detection method, system, and storage medium
JP2023161209 (ja) 2022-04-25 2023-11-07 Sharp Corporation Input device, input method, and input program


Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4267648B2 (ja) * 2006-08-25 2009-05-27 Toshiba Corporation Interface device and method therefor
JP2010039788A (ja) * 2008-08-05 2010-02-18 Toshiba Corp Image processing apparatus and method, and image processing program
TWI397840B (zh) * 2009-07-23 2013-06-01 Ind Tech Res Inst Trajectory-based control method and apparatus
TW201201090A (en) * 2010-06-30 2012-01-01 Chunghwa Telecom Co Ltd Virtual keyboard input system
JP2012098771A (ja) 2010-10-29 2012-05-24 Sony Corp Image processing device and method, and program
JP2012203439A (ja) * 2011-03-23 2012-10-22 Sony Corp Information processing device, information processing method, recording medium, and program
US9141196B2 (en) * 2012-04-16 2015-09-22 Qualcomm Incorporated Robust and efficient learning object tracker
JP6030430B2 (ja) * 2012-12-14 2016-11-24 Clarion Co., Ltd. Control device, vehicle, and portable terminal
KR101436050B1 (ko) * 2013-06-07 2014-09-02 Korea Institute of Science and Technology Method for building a hand-shape depth-image database, hand-shape recognition method, and hand-shape recognition device
US10474921B2 (en) * 2013-06-14 2019-11-12 Qualcomm Incorporated Tracker assisted image capture
TWI499966B (zh) * 2013-10-08 2015-09-11 Univ Nat Taiwan Science Tech Interactive operation method
JP6235414B2 (ja) * 2014-06-06 2017-11-22 Denso IT Laboratory, Inc. Feature computation device, feature computation method, and feature computation program
JP6471934B2 (ja) * 2014-06-12 2019-02-20 Panasonic IP Management Co., Ltd. Image recognition method and camera system
JP6487642B2 (ja) * 2014-07-01 2019-03-20 University of Tsukuba Method for detecting finger shape, program therefor, storage medium for the program, and system for detecting finger shape
US9665804B2 (en) * 2014-11-12 2017-05-30 Qualcomm Incorporated Systems and methods for tracking an object
US9922244B2 (en) * 2015-09-03 2018-03-20 Gestigon Gmbh Fast and robust identification of extremities of an object within a scene
CN106920251A (zh) 2016-06-23 2017-07-04 Alibaba Group Holding Limited Human hand detection and tracking method and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103376890A (zh) * 2012-04-16 2013-10-30 Fujitsu Limited Vision-based gesture remote-control system
CN102831439A (zh) * 2012-08-15 2012-12-19 Shenzhen Institutes of Advanced Technology Gesture tracking method and system
CN104731323A (zh) * 2015-02-13 2015-06-24 Beihang University Gesture tracking method based on HOG features and multi-rotation-direction SVM models
CN104821010A (zh) * 2015-05-04 2015-08-05 Graduate School at Shenzhen, Tsinghua University Method and system for real-time extraction of three-dimensional hand information based on binocular vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3477593A4 *

Also Published As

Publication number Publication date
EP3477593A4 (en) 2019-06-12
KR102227083B1 (ko) 2021-03-16
TW201800975A (zh) 2018-01-01
ES2865403T3 (es) 2021-10-15
JP2019519049A (ja) 2019-07-04
EP3477593B1 (en) 2021-02-17
US10885638B2 (en) 2021-01-05
JP6767516B2 (ja) 2020-10-14
US20190188865A1 (en) 2019-06-20
KR20190020783A (ko) 2019-03-04
PL3477593T3 (pl) 2021-07-12
TWI703507B (zh) 2020-09-01
CN106920251A (zh) 2017-07-04
EP3477593A1 (en) 2019-05-01
US10885639B2 (en) 2021-01-05
US20200134838A1 (en) 2020-04-30

Similar Documents

Publication Publication Date Title
WO2017219875A1 (zh) 2017-12-28 Human hand detection and tracking method and device
JP6871314B2 (ja) 2021-05-12 Object detection method, device, and storage medium
US10438077B2 (en) Face liveness detection method, terminal, server and storage medium
CN110210302B (zh) 2023-06-16 Multi-target tracking method and apparatus, computer device, and storage medium
JP5959951B2 (ja) 2016-08-02 Video processing device, video processing method, and program
TW202011733A (zh) 2020-03-16 Method and apparatus for performing target sampling on images
KR102476897B1 (ko) 2022-12-13 Object tracking method and apparatus, and 3D display device using the same
WO2021139197A1 (zh) 2021-07-15 Image processing method and apparatus
US11688078B2 (en) Video object detection
WO2022083123A1 (zh) 2022-04-28 Certificate positioning method
WO2017107345A1 (zh) 2017-06-29 Image processing method and apparatus
US10803295B2 (en) Method and device for face selection, recognition and comparison
KR20200096426A (ko) 2020-08-12 Moving-body detection device, moving-body detection method, and moving-body detection program
JP2007025902A (ja) 2007-02-01 Image processing device and image processing method
US9727145B2 (en) Detecting device and detecting method
JP2006323779A (ja) 2006-11-30 Image processing method and image processing device
US20230114980A1 (en) System and method for processing media for facial manipulation
CN109146916A (zh) 2019-01-04 Moving-object tracking method and apparatus
JP5778983B2 (ja) 2015-09-16 Data processing device, control method for data processing device, and program
CN113921412A (zh) 2022-01-11 Method, apparatus, and device for calculating the die period on a wafer
JP2013029996A (ja) 2013-02-07 Image processing device
CN114445570A (zh) 2022-05-06 Method for rapidly extracting strip-shaped local map elements from a high-precision map
JP2004110543A (ja) 2004-04-08 Face image processing device and program
Lim et al. A Methodology for Estimating the Assembly Position of the Process Based on YOLO and Regression of Operator Hand Position and Time Information
BR102023005151A2 (pt) 2023-10-03 Welding control method, apparatus and system, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17814613

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018567694

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20197001955

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2017814613

Country of ref document: EP

Effective date: 20190123