WO2018177153A1 - Pedestrian tracking method and electronic device - Google Patents

Pedestrian tracking method and electronic device

Info

Publication number
WO2018177153A1
Authority
WO
WIPO (PCT)
Prior art keywords: frame, detection, tracking, period, tracked
Application number
PCT/CN2018/079514
Other languages
English (en)
French (fr)
Inventor
杨怡
陈茂林
周剑辉
白博
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to KR1020197026991A priority Critical patent/KR102296088B1/ko
Priority to EP18774563.3A priority patent/EP3573022A4/en
Priority to JP2019553258A priority patent/JP6847254B2/ja
Publication of WO2018177153A1 publication Critical patent/WO2018177153A1/zh
Priority to US16/587,941 priority patent/US20200027240A1/en

Classifications

    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248: Feature-based analysis of motion involving reference images or patches
    • G06T 7/11: Region-based segmentation
    • G06T 7/194: Segmentation involving foreground-background segmentation
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74: Feature-based determination of position or orientation involving reference images or patches
    • G06V 10/255: Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06V 40/23: Recognition of whole body movements, e.g. for sport training
    • G06T 2207/10016: Video; image sequence
    • G06T 2207/30196: Human being; person
    • G06T 2210/12: Bounding box

Definitions

  • the present invention relates to the field of communications technologies, and in particular, to a pedestrian tracking method and an electronic device.
  • Intelligent video analysis systems are receiving more and more attention.
  • Such a system needs to analyze the pedestrians in massive video data automatically and intelligently, for example to compute pedestrian trajectories, to detect pedestrian anomalies in restricted areas, or to detect pedestrians on the road automatically and remind the driver to avoid them.
  • Image-based search that helps public security find criminal suspects can likewise greatly improve work efficiency and reduce labor costs.
  • Pedestrian detection takes an image as input; a detection algorithm automatically finds the pedestrians in the image and gives each pedestrian's position as a rectangle, called the pedestrian detection frame. Because pedestrians move through a video, a pedestrian tracking algorithm is also needed to obtain the pedestrian's position in every frame of the video; that position is likewise given as a rectangle, called the pedestrian tracking frame.
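  • For concreteness, every frame in this description is a rectangle given by its upper-left and lower-right corners. The sketch below is a minimal Python representation used by the later examples; the class and field names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """A rectangular frame: upper-left (left, top), lower-right (right, bottom)."""
    left: float
    top: float
    right: float
    bottom: float

    @property
    def width(self) -> float:
        return self.right - self.left

    @property
    def height(self) -> float:
        return self.bottom - self.top

    @property
    def aspect_ratio(self) -> float:
        """Width-to-height ratio; fixed-ratio detectors assume this is constant."""
        return self.width / self.height
```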
  • The defects of the prior art are as follows:
  • 1. The detection frame is not accurate enough: because the aspect ratio of the pedestrian detection frame is fixed and the whole body is detected, when a pedestrian appears in an abnormal posture, for example with the legs spread especially wide, the aspect ratio becomes larger, and a detection frame with a fixed aspect ratio becomes less accurate.
  • 2. The detection frame and the tracking frame cannot capture posture changes while the pedestrian walks: because the pedestrian moves through the video, the posture may change greatly during walking, and in the video image this change appears as a change in the aspect ratio of the pedestrian's minimum circumscribed rectangle.
  • A detection frame and tracking frame based on a fixed aspect ratio therefore cannot capture posture changes during pedestrian walking.
  • The present invention provides a pedestrian tracking method and an electronic device that can track a pedestrian accurately regardless of changes in the posture of the pedestrian to be tracked.
  • A first aspect of the embodiments of the present invention provides a pedestrian tracking method, including:
  • Step A: determine a detection period and a tracking period of the to-be-tracked video.
  • In one implementation, the detection period is included in the tracking period, and the duration of the detection period is less than the duration of the tracking period.
  • Alternatively, the detection period may not be included in the tracking period; in that case the detection period precedes the tracking period, and its duration is still less than the duration of the tracking period.
  • Step B: obtain an upper body detection frame of the pedestrian to be tracked.
  • During the detection period, an upper body detection frame of the pedestrian to be tracked appearing in the to-be-tracked video is acquired.
  • To do so, a target image frame is determined first; the target image frame is an image frame in which the pedestrian to be tracked appears.
  • The upper body detection frame may then be acquired within the target image frame.
  • Step C: obtain the whole body frame of the detection period of the pedestrian to be tracked.
  • The whole body frame of the detection period of the pedestrian to be tracked is acquired according to the upper body detection frame.
  • Step D: obtain an upper body tracking frame of the pedestrian to be tracked.
  • In the tracking period, an upper body tracking frame of the to-be-tracked pedestrian appearing in the to-be-tracked video is acquired.
  • The whole body frame acquired during the detection period is initialized as the tracking target, so that the to-be-tracked pedestrian can be tracked as that target during the tracking period.
  • Step E: obtain the whole body frame of the tracking period.
  • The whole body frame of the tracking period corresponding to the upper body tracking frame is acquired according to the whole body frame of the detection period;
  • the whole body frame of the tracking period is used to track the pedestrian to be tracked.
  • Because the whole body frame of the detection period is obtained according to the upper body detection frame of the pedestrian to be tracked, its aspect ratio is variable. Even if the pedestrian to be tracked appears in an abnormal posture during the detection period, the method shown in this embodiment can still obtain an accurate whole body frame of the tracking period, so the pedestrian can still be tracked accurately when an abnormal posture occurs.
  • Step C01: acquire a lower body scanning area.
  • The lower body scanning area of the pedestrian to be tracked may be acquired according to the upper body detection frame of the pedestrian to be tracked.
  • Step C02: acquire the whole body frame of the detection period.
  • The whole body frame of the detection period is acquired according to the upper body detection frame and the lower body detection frame.
  • The whole body frame obtained in this way combines the upper body detection frame and the lower body detection frame of the pedestrian to be tracked, so its aspect ratio is variable. Even if the pedestrian appears in an abnormal posture during the detection period, for example with the legs spread especially wide so that the proportion between the upper body and the lower body changes, the combination of the two frames still yields an accurate whole body frame of the detection period, and the pedestrian can be tracked accurately.
  • The upper body detection frame is written here as (L_d, T_d, R_d, B_d), where L_d is the upper-left abscissa of the upper body detection frame, T_d its upper-left ordinate, R_d its lower-right abscissa, and B_d its lower-right ordinate;
  • Step C01 specifically includes:
  • Step C011: determine the first parameter.
  • The first parameter is determined using Ratio_default, where Ratio_default is a preset ratio.
  • Ratio_default is stored in advance and may be set according to the aspect ratio of a typical human body detection frame.
  • For example, if the aspect ratio of the human body detection frame is determined to be 3:7, Ratio_default can be set to 3/7 and stored, so that it can be retrieved in this step to compute the first parameter.
  • Step C012: determine the second parameter.
  • Step C013: determine the third parameter.
  • Step C014: determine the lower body scanning area.
  • The lower body scanning area may be determined according to the first parameter, the second parameter, and the third parameter.
  • Detection of the lower body detection frame of the pedestrian to be tracked can then be confined to the acquired lower body scanning area, which improves the accuracy and efficiency of obtaining the lower body detection frame and thus the efficiency of tracking the pedestrian to be tracked.
  • L_s is the upper-left abscissa of the lower body scanning area;
  • T_s is the upper-left ordinate of the lower body scanning area;
  • R_s is the lower-right abscissa of the lower body scanning area;
  • B_s is the lower-right ordinate of the lower body scanning area.
  • paral1, paral2, and paral3 are preset values; they may be empirical values, and the operator can obtain different lower body scanning areas by setting different values of paral1, paral2, and paral3.
  • imgW is the width of any image frame within the detection period of the to-be-tracked video;
  • imgH is the height of any image frame within the detection period of the to-be-tracked video.
  • Detection of the lower body detection frame of the pedestrian to be tracked can thus be performed within the acquired lower body scanning area, which improves the accuracy and efficiency of obtaining the lower body detection frame and the efficiency of tracking. Moreover, different settings of the parameters (paral1, paral2, and paral3) yield different lower body scanning areas, which makes the method widely applicable: in different application scenarios, different lower body detection frames can be located according to different parameter settings, improving the accuracy of detecting the pedestrian to be tracked.
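  • The patent presents the exact scan-area formulas as figures, so the sketch below is only a plausible reading of them: it assumes the first parameter estimates the full-body height from the upper-body width and Ratio_default, that paral1 widens the area horizontally, and that paral2 and paral3 set its vertical start and extent, clipped to the image. It reuses the Frame class sketched earlier.

```python
def lower_body_scan_area(upper, ratio_default, paral1, paral2, paral3, img_w, img_h):
    """Hypothetical lower-body scan region below an upper-body frame `upper`."""
    est_body_height = upper.width / ratio_default               # assumed first parameter
    l_s = max(upper.left - paral1 * upper.width, 0.0)           # widen to the left
    r_s = min(upper.right + paral1 * upper.width, img_w - 1.0)  # widen to the right
    t_s = upper.top + paral2 * upper.height                     # start below the torso
    b_s = min(upper.top + paral3 * est_body_height, img_h - 1.0)  # extend downward
    return Frame(l_s, t_s, r_s, b_s)
```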
  • Step C includes:
  • Step C11: determine the upper-left abscissa of the whole body frame of the detection period.
  • Step C12: determine the upper-left ordinate of the whole body frame of the detection period.
  • Step C13: determine the lower-right abscissa of the whole body frame of the detection period.
  • Step C14: determine the lower-right ordinate of the whole body frame of the detection period.
  • Step C15: determine the whole body frame of the detection period from these four coordinates.
  • The whole body frame of the detection period is obtained by combining the upper body detection frame and the lower body detection frame of the pedestrian to be tracked. Even if the pedestrian appears in an abnormal posture, for example with the legs spread especially wide so that the aspect ratio becomes large, the upper body and the lower body are detected separately in this embodiment, so the two frames are acquired independently. That is, depending on the pedestrian's posture, the upper body detection frame and the lower body detection frame contribute in different proportions to the whole body frame of the detection period, so an accurate whole body frame can be acquired. Based on this variable proportion, the change of posture while the pedestrian walks can be captured accurately, effectively avoiding situations where the pedestrian cannot be tracked because of differences in posture.
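  • A natural reading of steps C11 to C15, sketched below under that assumption (the patent gives the exact formulas as figures), is that the whole body frame is the bounding rectangle of the upper body detection frame and the lower body detection frame, which is what lets its aspect ratio vary with posture.

```python
def detection_period_full_frame(upper, lower):
    """Bounding rectangle of the upper- and lower-body detection frames."""
    return Frame(
        left=min(upper.left, lower.left),        # step C11: upper-left abscissa
        top=min(upper.top, lower.top),           # step C12: upper-left ordinate
        right=max(upper.right, lower.right),     # step C13: lower-right abscissa
        bottom=max(upper.bottom, lower.bottom),  # step C14: lower-right ordinate
    )
```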
  • After step C, the following also needs to be performed:
  • Step D01: determine the ratio of the width of the whole body frame of the detection period to its height.
  • Step D02: determine the ratio of the height of the upper body detection frame to the height of the whole body frame of the detection period.
  • Step D03: determine the whole body frame of the tracking period.
  • With the method shown in this embodiment, the whole body frame of the tracking period can be determined from the width-to-height ratio of the whole body frame of the detection period and the ratio of the upper body detection frame's height to the whole body frame's height. Because the whole body frame of the detection period captures changes in the pedestrian's posture accurately, the whole body frame of the tracking period derived from it also captures posture changes during walking accurately, which improves tracking accuracy and effectively avoids situations where the pedestrian cannot be tracked because of differences in posture.
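  • As a sketch of how the two ratios from steps D01 and D02 might be applied (the patent's formulas are figures; the anchoring below is an assumption): the upper body tracking frame is scaled to a full height using the upper-to-full height ratio, and the full width then follows from the width-to-height ratio.

```python
def tracking_period_full_frame(upper_track, width_over_height, upper_over_full):
    """Expand an upper-body tracking frame into a whole-body frame.

    width_over_height: width/height of the detection-period whole-body frame.
    upper_over_full:   upper-body height / whole-body height.
    """
    full_height = upper_track.height / upper_over_full
    full_width = width_over_height * full_height
    cx = (upper_track.left + upper_track.right) / 2.0  # keep horizontally centered
    return Frame(cx - full_width / 2.0, upper_track.top,
                 cx + full_width / 2.0, upper_track.top + full_height)
```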
  • After step C01, if no lower body detection frame is acquired in the lower body scanning area, the following also needs to be executed:
  • Step C21: determine the upper-left abscissa of the whole body frame of the detection period.
  • Step C22: determine the upper-left ordinate of the whole body frame of the detection period.
  • Step C23: determine the lower-right abscissa of the whole body frame of the detection period.
  • Step C24: determine the lower-right ordinate of the whole body frame of the detection period.
  • Step C25: determine the whole body frame of the detection period from these four coordinates.
  • In this case the lower body detection frame is calculated from the upper body detection frame, so that even when no lower body is detected, the whole body frame of the detection period can still be obtained. This effectively ensures that the pedestrian can be tracked and avoids situations where the pedestrian cannot be tracked because the lower body cannot be detected.
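  • One way to realize this fallback, assuming Ratio_default is the preset width-to-height ratio of a whole-body frame (the patent's formulas are figures):

```python
def full_frame_from_upper_only(upper, ratio_default):
    """Extend the upper-body frame downward to the preset full-body aspect ratio."""
    full_height = upper.width / ratio_default  # width kept; height from the ratio
    return Frame(upper.left, upper.top, upper.right, upper.top + full_height)
```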
  • The method further includes:
  • Step C31: obtain a preset ratio of the width of the whole body frame of the detection period to its height.
  • Step C32: determine the ratio of the height of the upper body detection frame to the height of the whole body frame of the detection period.
  • Step C33: determine the whole body frame of the tracking period according to the two ratios.
  • The upper body tracking frame is written here as (L_t, T_t, R_t, B_t), where L_t is the upper-left abscissa of the upper body tracking frame, T_t its upper-left ordinate, R_t its lower-right abscissa, and B_t its lower-right ordinate.
  • Step C33 specifically includes:
  • Step C331: determine the upper-left abscissa of the whole body frame of the tracking period.
  • Step C332: determine the upper-left ordinate of the whole body frame of the tracking period.
  • Step C333: determine the lower-right abscissa of the whole body frame of the tracking period.
  • Step C334: determine the lower-right ordinate of the whole body frame of the tracking period.
  • Step C335: determine the whole body frame of the tracking period from these four coordinates.
  • In this case, the upper-left abscissa of the whole body frame of the detection period equals the upper-left abscissa of the upper body detection frame.
  • The whole body frame of the tracking period can thus be calculated, so that even if the posture of the pedestrian to be tracked changes greatly, the whole body frame of the tracking period can still be obtained; this avoids situations where the pedestrian cannot be tracked and improves tracking accuracy.
  • Step D specifically includes:
  • Step D11: scatter a plurality of particles around the upper body detection frame.
  • A plurality of particles are scattered around the upper body detection frame; the ratio of the width of any particle to its height is the same as the ratio of the width of the upper body detection frame to its height.
  • To track the pedestrian during the tracking period, a plurality of particles need to be scattered around the upper body detection frame of the pedestrian to be tracked.
  • Step D12: determine the upper body tracking frame.
  • The upper body tracking frame is the particle, among the plurality of particles, that is most similar to the upper body detection frame.
  • Because a plurality of particles are scattered around the upper body detection frame, the upper body tracking frame can be obtained accurately during the tracking period, and the upper body tracking frame obtained from the upper body detection frame can match the different postures of the pedestrian, achieving accurate tracking.
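  • A minimal particle-style sketch of steps D11 and D12; the similarity callback and the Gaussian perturbation parameters are assumptions, not part of the patent:

```python
import random

def track_upper_body(prev_box, similarity, num_particles=200, spread=0.1):
    """Scatter particles around the previous upper-body frame and keep the best.

    Each particle keeps the frame's width-to-height ratio; `similarity`
    is an assumed callback scoring a candidate Frame against the target.
    """
    w, h = prev_box.width, prev_box.height
    best, best_score = prev_box, similarity(prev_box)
    for _ in range(num_particles):
        dx = random.gauss(0.0, spread * w)             # perturb position
        dy = random.gauss(0.0, spread * h)
        s = max(0.5, 1.0 + random.gauss(0.0, spread))  # perturb scale only
        cand = Frame(prev_box.left + dx, prev_box.top + dy,
                     prev_box.left + dx + s * w, prev_box.top + dy + s * h)
        score = similarity(cand)
        if score > best_score:
            best, best_score = cand, score
    return best
```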
  • Step E specifically includes:
  • Step E11: determine the upper-left abscissa of the whole body frame of the tracking period.
  • Step E12: determine the upper-left ordinate of the whole body frame of the tracking period.
  • Step E13: determine the lower-right abscissa of the whole body frame of the tracking period.
  • Step E14: determine the lower-right ordinate of the whole body frame of the tracking period.
  • Step E15: determine the whole body frame of the tracking period from these four coordinates.
  • In this case, the upper-left abscissa of the whole body frame of the detection period equals the upper-left abscissa of the lower body detection frame.
  • The whole body frame of the tracking period can thus be calculated, so that even if the posture of the pedestrian to be tracked changes greatly, the whole body frame of the tracking period can still be obtained; this avoids situations where the pedestrian cannot be tracked and improves tracking accuracy.
  • The method further includes:
  • Step E21: obtain a target image frame sequence of the to-be-tracked video.
  • The target image frame sequence includes one or more consecutive image frames and is located before the detection period.
  • Step E22: acquire a background area of the to-be-tracked video according to the target image frame sequence.
  • Stationary objects are obtained through a static background model, and the stationary objects are determined to be the background area of the to-be-tracked video.
  • Step E23: obtain a foreground area of any image frame of the to-be-tracked video.
  • Any image frame of the to-be-tracked video is differenced against the background area to obtain the foreground area of that image frame.
  • Each region of an image frame is compared with the background area to obtain a value, so different regions of the image frame correspond to different values.
  • A region whose value indicates a change relative to the background is a motion region.
  • The motion region is determined to be the foreground area of that image frame of the video to be tracked.
  • Step E24: obtain the pedestrian to be tracked.
  • The foreground area of any image frame of the to-be-tracked video is detected to obtain the pedestrian to be tracked.
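  • Steps E22 and E23 amount to classic background subtraction. A minimal OpenCV sketch, with the threshold value and morphological cleanup chosen as assumptions:

```python
import cv2
import numpy as np

def foreground_mask(frame_gray, background_gray, thresh=25):
    """Difference a frame against the static background; bright mask pixels
    mark the motion (foreground) region in which detection is then run."""
    diff = cv2.absdiff(frame_gray, background_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle noise
```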
  • Step B includes:
  • Step B11: determine a target image frame.
  • The target image frame is an image frame in which the pedestrian to be tracked appears.
  • Step B12: acquire the upper body detection frame in the foreground area of the target image frame.
  • Detection and tracking of the pedestrian can thus be performed in the foreground area of any image frame of the to-be-tracked video; that is, both the detection process and the tracking process are executed on the foreground area of the image. This greatly reduces the number of image windows that need to be processed, that is, it reduces the search space, which shortens the time needed to track the pedestrian and improves tracking efficiency.
  • a second aspect of the embodiments of the present invention provides an electronic device, including:
  • a first determining unit configured to determine a detection period and a tracking period of the video to be tracked
  • The first determining unit shown in this embodiment is used to perform step A shown in the first aspect of the embodiments of the present invention; the specific implementation process is shown there and is not described again.
  • a first acquiring unit configured to acquire an upper body detecting frame of a pedestrian to be tracked that appears in the to-be-tracked video during the detecting period
  • The first obtaining unit shown in this embodiment is used to perform step B shown in the first aspect of the embodiments of the present invention; the specific implementation process is shown there and is not described again.
  • a second acquiring unit configured to acquire a whole body frame of the detection period of the pedestrian to be tracked according to the upper body detection frame
  • The second obtaining unit shown in this embodiment is used to perform step C shown in the first aspect of the embodiments of the present invention; the specific implementation process is shown there and is not described again.
  • a third acquiring unit configured to acquire, in the tracking period, an upper body tracking frame of the to-be-tracked pedestrian that appears in the to-be-tracked video;
  • the third obtaining unit shown in this embodiment is used to perform the step D shown in the first aspect of the present invention.
  • the specific implementation process is shown in the first aspect of the embodiment of the present invention, and details are not described herein.
  • a fourth acquiring unit configured to acquire a tracking period full body frame corresponding to the upper body tracking frame according to the detection period full body frame, where the tracking period full body frame is used to track the to-be-tracked pedestrian.
  • The fourth obtaining unit shown in this embodiment is used to perform step E shown in the first aspect of the embodiments of the present invention; the specific implementation process is shown there and is not described again.
  • The whole body frame of the detection period is obtained according to the upper body detection frame of the pedestrian to be tracked, and its aspect ratio is variable. Therefore, even if the pedestrian appears in an abnormal posture during the detection period, the electronic device shown in this embodiment can still obtain an accurate whole body frame of the tracking period, so the pedestrian can still be tracked accurately when an abnormal posture occurs.
  • the electronic device further includes:
  • The second acquiring unit is configured to acquire a lower body scanning area according to the upper body detection frame and, if a lower body detection frame is acquired in the lower body scanning area, to acquire the whole body frame of the detection period according to the upper body detection frame and the lower body detection frame.
  • The second obtaining unit shown in this embodiment is used to perform step C01 and step C02 shown in the first aspect of the embodiments of the present invention; for details, refer to the first aspect, which are not repeated here.
  • The whole body frame of the detection period is obtained by combining the upper body detection frame and the lower body detection frame of the pedestrian to be tracked, so its aspect ratio is variable. Even if the pedestrian appears in an abnormal posture during the detection period, for example with the legs spread especially wide so that the proportion between the upper body and the lower body changes, the combination of the two frames still yields an accurate whole body frame of the detection period, and the pedestrian can be tracked accurately.
  • The upper body detection frame is written here as (L_d, T_d, R_d, B_d), where L_d is the upper-left abscissa of the upper body detection frame, T_d its upper-left ordinate, R_d its lower-right abscissa, and B_d its lower-right ordinate;
  • when acquiring the lower body scanning area according to the upper body detection frame, the second acquiring unit is specifically configured to determine a first parameter, where Ratio_default used in the first parameter is a preset ratio;
  • to determine a second parameter and a third parameter;
  • and to determine the lower body scanning area according to the first parameter, the second parameter, and the third parameter.
  • The second obtaining unit shown in this embodiment is used to perform step C011, step C012, step C013, and step C014 shown in the first aspect of the embodiments of the present invention; for details, refer to the first aspect, which are not described again.
  • The lower body scanning area can thus be determined, so that detection of the lower body detection frame of the pedestrian to be tracked is performed within it, which improves the accuracy and efficiency of obtaining the lower body detection frame and the efficiency of tracking the pedestrian to be tracked.
  • The second acquiring unit is configured to acquire the lower body scanning area according to the upper body detection frame, where:
  • L_s is the upper-left abscissa of the lower body scanning area;
  • T_s is the upper-left ordinate of the lower body scanning area;
  • R_s is the lower-right abscissa of the lower body scanning area;
  • B_s is the lower-right ordinate of the lower body scanning area;
  • paral1, paral2, and paral3 are preset values; imgW is the width of any image frame within the detection period of the to-be-tracked video, and imgH is the height of any image frame within the detection period of the to-be-tracked video.
  • The second obtaining unit shown in this embodiment is used to perform step C014 shown in the first aspect of the embodiments of the present invention; for details, refer to the first aspect.
  • Detection of the lower body detection frame of the pedestrian to be tracked can thus be performed within the acquired lower body scanning area, which improves the accuracy and efficiency of obtaining the lower body detection frame and the efficiency of tracking. Moreover, different settings of the parameters (paral1, paral2, and paral3) yield different lower body scanning areas, which makes the method widely applicable: in different application scenarios, different lower body detection frames can be located according to different parameter settings, improving the accuracy of detecting the pedestrian to be tracked.
  • The lower body detection frame is written here as (L_l, T_l, R_l, B_l), where L_l is the upper-left abscissa of the lower body detection frame, T_l its upper-left ordinate, R_l its lower-right abscissa, and B_l its lower-right ordinate.
  • When acquiring the whole body frame of the detection period of the pedestrian to be tracked according to the upper body detection frame, the second obtaining unit is specifically configured to determine the upper-left abscissa, the upper-left ordinate, the lower-right abscissa, and the lower-right ordinate of the whole body frame of the detection period, and to determine the whole body frame of the detection period from these coordinates.
  • The second obtaining unit shown in this embodiment is used to perform step C11, step C12, step C13, step C14, and step C15 shown in the first aspect of the embodiments of the present invention.
  • The whole body frame of the detection period is obtained by combining the upper body detection frame and the lower body detection frame of the pedestrian to be tracked. Even when the pedestrian appears in an abnormal posture, for example with the legs spread especially wide so that the aspect ratio increases, the upper body and the lower body are detected separately in this embodiment, so the two frames are acquired independently. That is, depending on the pedestrian's posture, the upper body detection frame and the lower body detection frame contribute in different proportions to the whole body frame of the detection period, so an accurate whole body frame can be acquired. Based on this variable proportion, the change of posture while the pedestrian walks can be captured accurately, effectively avoiding situations where the pedestrian cannot be tracked because of differences in posture.
  • The fourth obtaining unit is specifically configured to determine the ratio of the width of the whole body frame of the detection period to its height, determine the ratio of the height of the upper body detection frame to the height of the whole body frame of the detection period, and determine the whole body frame of the tracking period according to the two ratios.
  • The fourth obtaining unit shown in this embodiment is used to perform step D01, step D02, and step D03 shown in the first aspect of the embodiments of the present invention; for the specific implementation process, refer to the first aspect, which is not described again.
  • With the electronic device shown in this embodiment, the whole body frame of the tracking period can be determined from the width-to-height ratio of the whole body frame of the detection period and the ratio of the upper body detection frame's height to the whole body frame's height. Because the whole body frame of the detection period captures changes in the pedestrian's posture accurately, the whole body frame of the tracking period derived from it also captures posture changes during walking accurately, which improves tracking accuracy and effectively avoids situations where the pedestrian cannot be tracked because of differences in posture.
  • When acquiring the whole body frame of the detection period of the pedestrian to be tracked according to the upper body detection frame, the second acquisition unit is specifically configured to, if no lower body detection frame is acquired in the lower body scanning area, determine the upper-left abscissa, the upper-left ordinate, the lower-right abscissa, and the lower-right ordinate of the whole body frame of the detection period, and determine the whole body frame of the detection period from these coordinates.
  • The second obtaining unit shown in this embodiment is used to perform step C21, step C22, step C23, and step C24 shown in the first aspect of the embodiments of the present invention; for details, refer to the first aspect, which are not described again.
  • In this case the lower body detection frame can be calculated according to the upper body detection frame, so that even when no lower body is detected, the whole body frame of the detection period can still be obtained. This effectively ensures that the pedestrian can be tracked, avoids situations where the pedestrian cannot be tracked because the lower body cannot be detected, and allows the change of posture while the pedestrian walks to be captured accurately.
  • The fourth acquiring unit is specifically configured to obtain the preset ratio of the width of the whole body frame of the detection period to its height, determine the ratio of the height of the upper body detection frame to the height of the whole body frame of the detection period, and determine the whole body frame of the tracking period according to the two ratios.
  • The fourth obtaining unit shown in this embodiment is used to perform step C31, step C32, and step C33 shown in the first aspect of the embodiments of the present invention; for details, refer to the first aspect, which are not described again.
  • The upper body tracking frame is written here as (L_t, T_t, R_t, B_t), where L_t is the upper-left abscissa of the upper body tracking frame, T_t its upper-left ordinate, R_t its lower-right abscissa, and B_t its lower-right ordinate;
  • when determining the whole body frame of the tracking period according to the two ratios, the fourth obtaining unit determines the upper-left abscissa, the upper-left ordinate, the lower-right abscissa, and the lower-right ordinate of the whole body frame of the tracking period, and determines the whole body frame of the tracking period from these coordinates.
  • The fourth obtaining unit shown in this embodiment is used to perform step C331, step C332, step C333, step C334, and step C335 shown in the first aspect of the embodiments of the present invention; for details, refer to the first aspect, which are not described again.
  • In this case, the upper-left abscissa of the whole body frame of the detection period equals the upper-left abscissa of the upper body detection frame.
  • The whole body frame of the tracking period can thus be calculated, so that even if the posture of the pedestrian to be tracked changes greatly, the whole body frame of the tracking period can still be obtained; this avoids situations where the pedestrian cannot be tracked and improves tracking accuracy.
  • The third acquiring unit is configured to scatter a plurality of particles centered on the upper body detection frame, where the ratio of the width of any particle to its height is the same as the ratio of the width of the upper body detection frame to its height, and to determine the upper body tracking frame, which is the particle among the plurality of particles most similar to the upper body detection frame.
  • The third obtaining unit shown in this embodiment is used to perform step D11 and step D12 shown in the first aspect of the embodiments of the present invention; for details, refer to the first aspect.
  • Because a plurality of particles are scattered around the upper body detection frame, the upper body tracking frame can be obtained accurately during the tracking period, and the upper body tracking frame obtained from the upper body detection frame can match the different postures of the pedestrian, achieving accurate tracking.
  • The fourth acquiring unit is specifically configured to determine the upper-left abscissa, the upper-left ordinate, the lower-right abscissa, and the lower-right ordinate of the whole body frame of the tracking period, and to determine the whole body frame of the tracking period from these coordinates.
  • The fourth obtaining unit shown in this embodiment is used to perform step E11, step E12, step E13, step E14, and step E15 shown in the first aspect of the embodiments of the present invention; the specific implementation process is shown there and is not described again.
  • In this case, the upper-left abscissa of the whole body frame of the detection period equals the upper-left abscissa of the lower body detection frame.
  • The whole body frame of the tracking period can thus be calculated, so that even if the posture of the pedestrian to be tracked changes greatly, the whole body frame of the tracking period can still be obtained; this avoids situations where the pedestrian cannot be tracked and improves tracking accuracy.
  • the electronic device also includes:
  • a fifth acquiring unit configured to acquire a target image frame sequence of the to-be-tracked video, the target image frame sequence includes one or more consecutive image frames, and the target image frame sequence is located before the detection period;
  • The fifth obtaining unit shown in this embodiment is used to perform step E21 shown in the first aspect of the embodiments of the present invention; the specific implementation process is shown there and is not described again.
  • a sixth acquiring unit configured to acquire a background area of the to-be-tracked video according to the target image frame sequence
  • the sixth obtaining unit shown in this embodiment is used to perform the step E22 shown in the first embodiment of the present invention.
  • the specific implementation process is shown in the first aspect of the embodiment of the present invention, and details are not described herein.
  • a seventh acquiring unit configured to subtract any image frame of the to-be-tracked video from the background area to obtain a foreground area of any image frame of the to-be-tracked video during the detecting period;
  • the seventh obtaining unit shown in this embodiment is used to perform the step E23 shown in the first embodiment of the present invention.
  • the specific implementation process is shown in the first aspect of the embodiment of the present invention, and details are not described herein.
  • an eighth acquiring unit configured to detect a foreground area of any image frame of the to-be-tracked video to obtain the to-be-tracked pedestrian.
  • the eighth obtaining unit shown in this embodiment is used to perform the step E24 shown in the first aspect of the present invention.
  • the specific implementation process is shown in the first aspect of the embodiment of the present invention, and details are not described herein.
  • the first acquiring unit is configured to determine a target image frame, where the target image frame is an image frame in which the pedestrian to be tracked appears, and the upper body detection frame is acquired in a foreground area of the target image frame.
  • The first obtaining unit shown in this embodiment is used to perform step B11 and step B12 shown in the first aspect of the embodiments of the present invention; for details, refer to the first aspect.
  • Detection and tracking of the pedestrian can thus be performed in the foreground area of any image frame of the to-be-tracked video; that is, both the detection process and the tracking process are executed on the foreground area of the image. This greatly reduces the number of image windows that need to be processed, that is, it reduces the search space, which shortens the time needed to track the pedestrian and improves tracking efficiency.
  • An embodiment of the present invention provides a pedestrian tracking method and an electronic device, which acquire an upper body detection frame of a pedestrian to be tracked appearing in a to-be-tracked video during a detection period, acquire the whole body frame of the detection period of that pedestrian according to the upper body detection frame, and, in the tracking period, acquire the whole body frame of the tracking period corresponding to the upper body tracking frame according to the whole body frame of the detection period; the whole body frame of the tracking period can be used to track the pedestrian.
  • Because the aspect ratio of the whole body frame of the detection period is variable, even if the pedestrian appears in an abnormal posture, the method can still obtain an accurate whole body frame of the tracking period, so the pedestrian can still be tracked accurately when an abnormal posture occurs.
  • FIG. 1 is a schematic structural view of an embodiment of an electronic device according to the present invention.
  • FIG. 2 is a schematic structural diagram of an embodiment of a processor provided by the present invention.
  • FIG. 3 is a flow chart of steps of an embodiment of a pedestrian tracking method according to the present invention.
  • FIG. 4 is a schematic diagram of application of an embodiment of a pedestrian tracking method according to the present invention.
  • FIG. 5 is a schematic diagram of application of another embodiment of a pedestrian tracking method according to the present invention.
  • FIG. 6 is a schematic diagram of application of another embodiment of a pedestrian tracking method according to the present invention.
  • FIG. 7 is a schematic diagram of application of another embodiment of a pedestrian tracking method according to the present invention.
  • FIG. 8 is a schematic diagram of application of another embodiment of a pedestrian tracking method according to the present invention.
  • FIG. 9 is a flow chart of steps of an embodiment of a pedestrian query method according to the present invention.
  • FIG. 10 is a schematic diagram of execution steps of an embodiment of a pedestrian query method according to the present invention.
  • FIG. 11 is a schematic structural diagram of another embodiment of an electronic device according to the present invention.
  • An embodiment of the present invention provides a pedestrian tracking method. To better understand it, the specific structure of an electronic device capable of implementing the method is described in detail first:
  • FIG. 1 is a schematic structural diagram of an embodiment of an electronic device according to the present invention.
  • The electronic device 100 can vary considerably depending on configuration or performance and can include one or more processors 122.
  • The processor 122 is not limited, as long as it has the computing and image-processing capability needed to implement the pedestrian tracking method shown in this embodiment.
  • For example, the processor 122 can be a central processing unit (CPU).
  • The electronic device includes one or more storage media 130 (for example, one or more mass storage devices) for storing applications 142 or data 144.
  • The storage medium 130 may provide transient or persistent storage.
  • the program stored on storage medium 130 may include one or more modules (not shown), each of which may include a series of instruction operations in the electronic device.
  • the processor 122 can be configured to communicate with the storage medium 130 to perform a series of instruction operations in the storage medium 130 on the electronic device 100.
  • The electronic device 100 may also include one or more power sources 126, one or more input/output interfaces 158, and/or one or more operating systems 141, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
  • The electronic device may be any device having image processing and computing capability, including but not limited to a server, a video camera, a mobile computer, a tablet computer, and the like.
  • The input/output interface 158 shown in this embodiment can be used to receive a large amount of surveillance video, and it can also display the detection process, display the tracking result for a pedestrian, and so on.
  • The processor 122 is configured to perform pedestrian detection and the pedestrian tracking algorithm.
  • The storage medium 130 is configured to store the operating system, applications, and so on, and it can save intermediate results of the pedestrian tracking process.
  • By executing the method shown in this embodiment, the electronic device can track a target pedestrian in massive surveillance video and give information such as the time and location of the target pedestrian in the surveillance video.
  • the processor 122 includes a metadata extracting unit 21 and a query unit 22.
  • the metadata extracting unit 21 includes: an object extracting module 211, a feature extracting module 212, and an index building module 213;
  • the query unit 22 includes a feature extraction module 221, a feature fusion module 222, and an index and query module 223.
  • the processor 122 is capable of executing a program stored in the storage medium 130, thereby implementing the functions of any one of the units included in the processor 122 shown in FIG. 2.
  • FIG. 3 is a flow chart of steps of an embodiment of a pedestrian tracking method according to the present invention.
  • the execution subject of the pedestrian tracking method shown in this embodiment is the electronic device, and specifically may be one or more modules of the processor 122, such as the object extraction module 211.
  • Step 301 Acquire a video to be tracked.
  • the object extraction module 211 included in the processor 122 is configured to acquire the to-be-tracked video.
  • If the electronic device shown in this embodiment does not include a camera, for example, when the electronic device is a server, the electronic device shown in this embodiment communicates with multiple cameras through the input/output interface 158.
  • the camera is used to take pictures of pedestrians to be tracked to generate a video to be tracked.
  • The electronic device receives the to-be-tracked video sent by the cameras through the input/output interface 158, and the object extraction module 211 of the processor 122 then acquires the to-be-tracked video received by the input/output interface 158.
  • If the electronic device shown in this embodiment includes a camera, for example, when the electronic device is a video camera, the object extraction module 211 of the processor 122 acquires the to-be-tracked video captured by the camera of the electronic device.
  • The to-be-tracked video shown in this embodiment is generally massive video. The method for obtaining the to-be-tracked video in this embodiment is an optional example and is not limited, as long as the object extraction module 211 can obtain the to-be-tracked video for performing pedestrian tracking.
  • Step 302 Acquire a sequence of target image frames.
  • the object extraction module 211 shown in this embodiment acquires the target image frame sequence.
  • the object extraction module 211 shown in this embodiment determines the target image frame sequence in the to-be-tracked video after acquiring the to-be-tracked video.
  • The target image frame sequence is the first M image frames of the to-be-tracked video. The specific value of M is not limited in this embodiment, as long as M is a positive integer greater than 1.
  • the sequence of target image frames includes one or more consecutive image frames.
  • Step 303 Acquire a background area of the to-be-tracked video.
  • the object extraction module 211 shown in this embodiment learns the target image frame sequence of the to-be-tracked video to obtain a background area of the to-be-tracked video.
  • The background area of the to-be-tracked video is shown in FIG. 4.
  • A specific manner in which the object extraction module 211 acquires the background area of the to-be-tracked video may be as follows: the object extraction module 211 obtains stationary objects in any image frame in the target image frame sequence through a static background model, and determines that the stationary objects are the background area of the video to be tracked.
  • This description of how to obtain the background area of the to-be-tracked video is an optional example and is not limited; the object extraction module 211 may also adopt a frame difference method, an optical flow field method, or the like, as long as the object extraction module 211 can acquire the background area.
  • step 303 shown in this embodiment is an optional step.
  • Step 304 Determine a detection period and a tracking period of the video to be tracked.
  • the object extraction module 211 determines the detection period T1 and the tracking period T2.
  • the detection period T1 shown in this embodiment is included in the tracking period T2, and the duration of the detection period T1 is less than the duration of the tracking period T2.
  • For example, the duration of the tracking period T2 may be 10 minutes, the duration of the detection period T1 may be 2 seconds, and the first 2 seconds of the 10-minute tracking period T2 constitute the detection period T1.
  • Alternatively, the detection period T1 shown in this embodiment may not be included in the tracking period T2; in that case the detection period T1 is before the tracking period T2, and the duration of the detection period T1 is less than the duration of the tracking period T2. For example, if the duration of the detection period T1 is 2 seconds and the duration of the tracking period T2 is 10 minutes, the tracking period T2 continues after the detection period T1 ends.
  • the description of the detection period T1 and the duration of the tracking period T2 in this embodiment is an optional example and is not limited.
  • This embodiment is exemplified by taking the detection period T1 included in the tracking period T2 as an example.
  • The start frame of the detection period T1 shown in this embodiment is the t-th frame of the to-be-tracked video, where t is greater than M, so the target image frame sequence described in this embodiment is located before the detection period T1.
  • Step 305 Acquire a foreground area of any image frame of the to-be-tracked video.
  • During the detection period, the object extraction module 211 shown in this embodiment subtracts the background area from any image frame of the to-be-tracked video to obtain the foreground area of that image frame.
  • The acquired foreground area of an image frame of the to-be-tracked video is shown in FIG. 5; the white pixels in FIG. 5 constitute the foreground area of the image frame of the to-be-tracked video.
  • When the background area of the to-be-tracked video has been acquired, the object extraction module 211 shown in this embodiment computes, for each region of any image frame of the to-be-tracked video, the difference between that region and the background area to obtain a target value; thus, different regions of any image frame of the to-be-tracked video each correspond to one target value.
  • If the target value is greater than or equal to a preset threshold, the region of the image frame corresponding to that target value is a motion region. The size of the preset threshold is not limited in this embodiment, as long as the motion regions of any image frame of the video to be tracked can be determined according to the preset threshold.
  • The motion region is determined to be the foreground area of the image frame of the video to be tracked.
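  • The foreground extraction just described can be illustrated with a short sketch. The sketch below is a minimal, hypothetical implementation using OpenCV's absolute difference and a fixed threshold; the threshold value, the grayscale conversion, and the function layout are illustrative assumptions rather than the embodiment's prescribed implementation.

    import cv2

    def foreground_mask(frame, background, threshold=30):
        # Per-pixel absolute difference between the current frame and the
        # learned static background (the "target value" of each region).
        diff = cv2.absdiff(frame, background)
        gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
        # Regions whose difference is >= the preset threshold are motion
        # regions; they form the foreground area (white pixels, as in FIG. 5).
        _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
        return mask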
  • Step 306 Acquire a pedestrian to be tracked.
  • the object extraction module 211 shown in this embodiment detects a foreground area of any image frame of the to-be-tracked video to obtain the pedestrian to be tracked.
  • the specific number of the pedestrians to be tracked detected by the object extraction module 211 is not limited in this embodiment.
  • Step 307 Obtain an upper body detection frame of the pedestrian to be tracked.
  • the object extraction module 211 shown in this embodiment first determines the target image frame.
  • the target image frame shown in this embodiment is an image frame in which the pedestrian to be tracked appears.
  • The object extraction module 211 may determine that an image frame in which the pedestrian to be tracked appears is the target image frame; that is, the target image frame is an image frame of the to-be-tracked pedestrian within the detection period.
  • If the pedestrian to be tracked appears in consecutive image frames of the to-be-tracked video, the object extraction module 211 may determine that the last image frame in which the pedestrian to be tracked appears is the target image frame, or the object extraction module 211 may determine that a random one of those consecutive image frames is the target image frame; this is not limited in this embodiment.
  • Likewise, if the pedestrian to be tracked appears in image frames at intervals, the object extraction module 211 may determine that the last such image frame, or a random such image frame, is the target image frame; this is not limited in this embodiment.
  • The above description of how to determine the target image frame is an optional example and is not limited, as long as the target image frame contains the pedestrian to be tracked.
  • the object extraction module 211 may acquire the upper body detection frame within the target image frame.
  • the object extraction module 211 shown in this embodiment may be configured with a first detector, and the first detector is configured to detect the upper body detection frame.
  • the first detector of the object extraction module 211 acquires the upper body detection frame in a foreground area of the target image frame.
  • The object extraction module 211 shown in this embodiment can detect the pedestrian to be tracked in the foreground area of the target image frame; that is, in the process of detecting the pedestrian to be tracked, the object extraction module 211 does not need to detect the background area, so that the time required for pedestrian detection is greatly reduced while the accuracy of pedestrian detection is improved.
  • the following describes how the object extraction module 211 acquires the upper body detection frame of the pedestrian to be tracked in the foreground area of the target image frame:
  • the object extraction module 211 shown in this embodiment acquires an upper body detection frame of a pedestrian to be tracked that appears in the to-be-tracked video during the detection period.
  • When the pedestrian to be tracked is detected, the object extraction module 211 shown in this embodiment can obtain the upper body detection frame, written here as Det_ub = [L_ub, T_ub, R_ub, B_ub], where L_ub and T_ub are the abscissa and ordinate of the upper left corner of the upper body detection frame, and R_ub and B_ub are the abscissa and ordinate of its lower right corner.
  • Taking the target image frame determined by the object extraction module 211 as an example, the pedestrians to be tracked in the target image frame can be detected by the above method to obtain the upper body detection frame of each pedestrian to be tracked.
  • However, the object extraction module 211 shown in this embodiment cannot obtain the upper body detection frame of the pedestrian 601.
  • The pedestrian 602 and the pedestrian 603 are both clearly displayed in the target image frame, so the object extraction module 211 can acquire the upper body detection frame of the pedestrian 602 and the upper body detection frame of the pedestrian 603.
  • the object extraction module 211 shown in this embodiment cannot acquire the upper body detection frame of each pedestrian located in the area 604.
  • The object extraction module 211 shown in this embodiment detects only the upper body detection frames of the pedestrians to be tracked that are displayed in the target image frame.
  • A pedestrian to be tracked may be a complete pedestrian displayed in the target image frame, that is, both the upper body and the lower body of the pedestrian to be tracked are completely displayed in the target image frame.
  • Alternatively, a pedestrian to be tracked may be a pedestrian whose displayed area in the target image frame is greater than or equal to a threshold preset by the object extraction module 211. That is, if the displayed area of the pedestrian in the target image frame is greater than or equal to the preset threshold, the pedestrian to be tracked is clearly displayed in the target image frame; if the displayed area of the pedestrian in the target image frame is smaller than the preset threshold, the object extraction module 211 cannot detect the pedestrian to be tracked.
  • Step 308 Acquire a lower body scanning area according to the upper body detection frame.
  • After acquiring the upper body detection frame, the object extraction module 211 may acquire the lower body scanning area of the pedestrian to be tracked according to the upper body detection frame of the pedestrian to be tracked.
  • To do so, the object extraction module 211 needs to acquire a first parameter, a second parameter, and a third parameter.
  • The Ratio_default is a preset ratio stored in the storage medium 130 by the object extraction module 211, and the Ratio_default may be set by the object extraction module 211 according to the aspect ratio of a human body detection frame (such as in the background art). For example, if the aspect ratio of the human body detection frame is predetermined to be 3:7, the object extraction module 211 can set the Ratio_default to 3/7 and store it in the storage medium 130, so that in performing the process shown in this step, the object extraction module 211 may retrieve the Ratio_default from the storage medium 130 to calculate the first parameter.
  • Based on these parameters, the object extraction module 211 may determine the lower body scanning area ScanArea = [L_s, T_s, R_s, B_s], where L_s and T_s are the abscissa and ordinate of the upper left corner of the lower body scanning area, and R_s and B_s are the abscissa and ordinate of its lower right corner.
  • The paral1, paral2, and paral3 are preset values; their specific values are not limited in this embodiment. The paral1, paral2, and paral3 may be empirical values, or the operator may set different paral1, paral2, and paral3 to achieve different settings of the lower body scanning area.
  • The imgW is the width of any image frame of the to-be-tracked video within the detection period, and the imgH is the height of any image frame of the to-be-tracked video within the detection period.
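  • The exact expressions for L_s, T_s, R_s, and B_s are given as formulas in the original filing and are not reproduced here. The sketch below is therefore only a hypothetical parameterization that mirrors the stated inputs (the upper body detection frame, Ratio_default, paral1 to paral3, imgW, and imgH); every concrete formula in it is an assumption for illustration.

    def lower_body_scan_area(ub, img_w, img_h, ratio_default=3/7,
                             paral1=0.5, paral2=2.0, paral3=0.5):
        # ub = (L_ub, T_ub, R_ub, B_ub): upper body detection frame.
        L, T, R, B = ub
        w, h = R - L, B - T
        est_full_h = w / ratio_default          # assumed full-body height estimate
        Ls = max(0.0, L - paral1 * w)           # widen to the left (assumed)
        Rs = min(img_w - 1.0, R + paral1 * w)   # widen to the right (assumed)
        Ts = B - paral3 * h                     # start just above the hip line (assumed)
        Bs = min(img_h - 1.0, T + paral2 * est_full_h)  # extend downward, clipped
        return (Ls, Ts, Rs, Bs)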
  • Step 309 Determine whether the lower body detection frame of the pedestrian to be tracked is detected in the lower body scanning area. If yes, execute step 310. If no, execute step 313.
  • the object extraction module 211 shown in this embodiment performs lower body detection in the lower body scanning area to determine whether the lower body detection frame of the pedestrian to be tracked can be detected.
  • Step 310 Acquire the lower body detection frame.
  • The object extraction module 211 may be configured with a lower body detector, which detects the lower body within the lower body scanning area to obtain the lower body detection frame Det_lb = [L_lb, T_lb, R_lb, B_lb], where L_lb and T_lb are the abscissa and ordinate of the upper left corner of the lower body detection frame, and R_lb and B_lb are the abscissa and ordinate of its lower right corner.
  • Step 311 Acquire the whole body frame of the detection period.
  • The object extraction module 211 acquires the detection period full body frame according to the upper body detection frame and the lower body detection frame. Specifically, the object extraction module 211 shown in this embodiment combines the upper body detection frame and the lower body detection frame to form the detection period full body frame.
  • Step 312 Obtain a first ratio and a second ratio.
  • The object extraction module 211 shown in this embodiment may determine a first ratio of the detection period full body frame, where the first ratio, Ratio1, is the ratio of the width of the detection period full body frame to the height of the detection period full body frame.
  • The object extraction module 211 shown in this embodiment also determines a second ratio of the detection period full body frame, where the second ratio, Ratio2, is the ratio of the height of the upper body detection frame to the height of the detection period full body frame.
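  • Under the assumption that "combining" the two detections means taking their minimal enclosing box (an assumption consistent with the later case analysis of which detection supplies the left edge of the full body frame), steps 311 and 312 can be sketched as follows; the (L, T, R, B) box representation matches the coordinate conventions above.

    def detection_period_full_body(ub, lb):
        # ub, lb = (L, T, R, B): upper and lower body detection frames.
        L = min(ub[0], lb[0])    # left edge from whichever box reaches further left
        T = ub[1]                # top edge from the upper body (the head)
        R = max(ub[2], lb[2])    # right edge from whichever box reaches further right
        B = lb[3]                # bottom edge from the lower body (the feet)
        fb = (L, T, R, B)
        ratio1 = (R - L) / (B - T)            # first ratio: width / height
        ratio2 = (ub[3] - ub[1]) / (B - T)    # second ratio: upper-body height / height
        return fb, ratio1, ratio2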
  • Step 313 Obtain a third ratio.
  • If the lower body detection frame is not detected, the third ratio is obtained, where the third ratio is the preset ratio of the width of the detection period full body frame to the height of the detection period full body frame.
  • Step 314 Acquire the whole body frame of the detection period.
  • If the lower body detection frame is not acquired in the lower body scanning region, the object extraction module 211 acquires the detection period full body frame from the upper body detection frame: it determines the abscissa of the upper left corner, the ordinate of the upper left corner, the abscissa of the lower right corner, and the ordinate of the lower right corner of the detection period full body frame, and thereby determines the detection period full body frame.
  • Step 315 Determine a fourth ratio of the whole body frame of the detection period.
  • The object extraction module 211 shown in this embodiment determines the fourth ratio, which is the ratio of the height of the upper body detection frame to the height of the detection period full body frame; step 316 shown in this embodiment is then performed.
  • Step 316 Determine an upper body tracking frame.
  • The object extraction module 211 initializes the detection period full body frame acquired in the detection period T1 as the tracking target, so that the object extraction module 211 can track the pedestrian to be tracked as the tracking target in the tracking period T2.
  • The number of pedestrians to be tracked determined by the above steps may be at least one. If there are multiple pedestrians to be tracked, each of them needs to be used as a tracking target separately; for example, pedestrian A is set as a tracking target and tracked by performing the subsequent steps, and pedestrian B is likewise set as a tracking target and tracked by performing the subsequent steps. That is, each pedestrian to be tracked in the to-be-tracked video needs to be set as a tracking target to perform the subsequent steps for tracking.
  • In the process of tracking the pedestrians to be tracked, the object extraction module 211 first determines the upper body detection frame and then performs sampling centered on the upper body detection frame; that is, a plurality of particles are scattered around the upper body detection frame, and the upper body tracking frame is determined among the plurality of particles.
  • If the object extraction module 211 determines the upper body detection frame in the N1-th frame within the detection period T1 of the to-be-tracked video, the object extraction module 211 tracks the pedestrian to be tracked in the N2-th frame within the tracking period T2 of the to-be-tracked video, where the N2-th frame is any frame within the tracking period T2.
  • Because the pedestrian to be tracked is moving, the position of the pedestrian to be tracked in the N1-th frame and in the N2-th frame are generally different. To implement tracking of the pedestrian to be tracked, the object extraction module 211 needs to scatter a plurality of particles around the upper body detection frame of the pedestrian to be tracked.
  • A fifth ratio of any one of the plurality of particles is the same as a sixth ratio of the upper body detection frame, where the fifth ratio is the ratio of the width of that particle to the height of that particle, and the sixth ratio is the ratio of the width of the upper body detection frame to the height of the upper body detection frame.
  • That is, any particle scattered around the upper body detection frame by the object extraction module 211 is a rectangular frame having the same width-to-height ratio as the upper body detection frame.
  • the object extraction module 211 determines an upper body tracking frame among the plurality of particles.
  • the object extraction module 211 determines, among the plurality of particles, that the particle most similar to the upper body detection frame is the upper body tracking frame.
  • The upper body tracking frame is Track_ub = [L_ub^t, T_ub^t, R_ub^t, B_ub^t], where L_ub^t and T_ub^t are the abscissa and ordinate of the upper left corner of the upper body tracking frame, and R_ub^t and B_ub^t are the abscissa and ordinate of its lower right corner.
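  • The particle sampling described above can be sketched as follows. The Gaussian position and scale perturbations and the similarity callback are illustrative assumptions; the embodiment only requires that every particle keep the upper body detection frame's aspect ratio and that the most similar particle be chosen as the upper body tracking frame.

    import random

    def scatter_particles(box, num=200, pos_std=8.0, scale_std=0.05):
        # box = (L, T, R, B). Each particle keeps the box's width:height
        # ratio; only its center position and overall scale vary.
        L, T, R, B = box
        w, h = R - L, B - T
        cx, cy = (L + R) / 2.0, (T + B) / 2.0
        particles = []
        for _ in range(num):
            s = 1.0 + random.gauss(0.0, scale_std)  # one scale factor => same aspect ratio
            nw, nh = w * s, h * s
            ncx = cx + random.gauss(0.0, pos_std)
            ncy = cy + random.gauss(0.0, pos_std)
            particles.append((ncx - nw / 2, ncy - nh / 2,
                              ncx + nw / 2, ncy + nh / 2))
        return particles

    def upper_body_tracking_frame(particles, similarity):
        # `similarity` is a placeholder for an appearance-matching score
        # (e.g. a color-histogram comparison); it is not specified here.
        return max(particles, key=similarity)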
  • Step 317 Obtain a tracking frame of the pedestrian to be tracked.
  • the object extraction module 211 acquires a tracking period full body frame corresponding to the upper body tracking frame according to the detection period whole body frame.
  • the tracking period body frame is used to track the pedestrian to be tracked.
  • After the object extraction module 211 obtains the detection period full body frame and the upper body tracking frame, as shown in FIG. 7, the object extraction module 211 determines whether the abscissa of the upper left corner of the upper body detection frame 701 is equal to the abscissa of the upper left corner of the detection period full body frame 702.
  • If the object extraction module 211 determines that they are equal, as shown in FIG. 7(a), the object extraction module 211 determines the abscissa of the upper left corner, the ordinate of the upper left corner, the abscissa of the lower right corner, and the ordinate of the lower right corner of the tracking period full body frame, and the object extraction module 211 shown in this embodiment may thereby determine the tracking period full body frame.
  • Alternatively, after the object extraction module 211 acquires the detection period full body frame and the upper body tracking frame, as shown in FIG. 7, the object extraction module 211 determines whether the abscissa of the upper left corner of the detection period full body frame 702 is equal to the abscissa of the upper left corner of the lower body detection frame 703.
  • If the abscissa of the upper left corner of the detection period full body frame 702 is equal to the abscissa of the upper left corner of the lower body detection frame 703, the object extraction module 211 determines the abscissa of the upper left corner, the ordinate of the upper left corner, the abscissa of the lower right corner, and the ordinate of the lower right corner of the tracking period full body frame, and thereby determines the tracking period full body frame.
  • Tracking of the pedestrian to be tracked in the to-be-tracked video within the tracking period T2 can then be implemented through the tracking period full body frame.
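  • Because the coordinate formulas for the tracking period full body frame are given as equations in the original filing, the sketch below is only a plausible reconstruction: it assumes the full-body height follows from the upper body tracking frame's height and Ratio2, the full-body width from that height and Ratio1, and the left edge from whichever detection (upper or lower body) supplied the left edge of the detection period full body frame.

    def tracking_period_full_body(ub_track, ratio1, ratio2, left_from_upper=True):
        # ub_track = (L, T, R, B): upper body tracking frame.
        L, T, R, B = ub_track
        full_h = (B - T) / ratio2     # assumed: upper-body height / Ratio2
        full_w = full_h * ratio1      # assumed: width recovered via Ratio1
        # Case of FIG. 7(a): the full frame's left edge follows the upper body;
        # otherwise the left edge is taken to come from the lower body side.
        Lf = L if left_from_upper else R - full_w
        return (Lf, T, Lf + full_w, T + full_h)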
  • In an application scenario, the object extraction module 211 shown in this embodiment acquires the upper body detection frame 801 of the pedestrian to be tracked, as shown in FIG. 8, within the detection period T1; for the specific acquisition process, refer to the above steps, and details are not described again in this application scenario.
  • The object extraction module 211 also acquires the lower body detection frame 802 of the pedestrian to be tracked, as shown in FIG. 8, within the detection period T1; for the specific process of acquiring the lower body detection frame 802, refer to the above embodiment, and details are not described again.
  • The object extraction module 211 then acquires the detection period full body frame 803 shown in FIG. 8 within the detection period T1; for the specific process of acquiring the detection period full body frame 803, refer to the above embodiment, and details are not described again.
  • Accordingly, the ratio Ratio1 of the width of the detection period full body frame 803 to the height of the detection period full body frame 803, and the ratio Ratio2 of the height of the upper body detection frame 801 to the height of the detection period full body frame 803, can be obtained.
  • Because the detection period full body frame 803 acquired by the object extraction module 211 is obtained by combining the upper body detection frame 801 and the lower body detection frame 802 of the pedestrian to be tracked, the aspect ratio of the detection period full body frame 803 acquired by the object extraction module 211 is variable. Therefore, even if the pedestrian to be tracked appears in an abnormal posture within the detection period, the object extraction module 211 can still obtain the detection period full body frame 803 of the pedestrian to be tracked by combining the separately acquired upper body detection frame and lower body detection frame.
  • Based on the variable aspect ratio of the detection period full body frame 803 in this embodiment, the object extraction module 211 can accurately capture changes in the posture of the pedestrian to be tracked, so an accurate detection period full body frame 803 can still be obtained regardless of how the posture of the pedestrian to be tracked changes.
  • The object extraction module 211 can then obtain the upper body tracking frame 804 and the tracking period full body frame 805 in the tracking period T2; for the specific acquisition process, refer to the above steps, and details are not described again in this embodiment.
  • The object extraction module 211 shown in this embodiment can pass Ratio1 and Ratio2 into the tracking period T2, and based on these variable ratios a more accurate tracking period full body frame 805 is obtained, so that within the tracking period T2, even if the posture of the pedestrian to be tracked changes, accurate tracking of the pedestrian to be tracked can still be achieved.
  • Steps 304 to 317 described in this embodiment may be performed multiple times, thereby tracking the pedestrian to be tracked more accurately. For example, the object extraction module 211 may repeatedly execute the tracking period T2 at subsequent times; this embodiment does not limit the number of times the tracking period T2 is executed.
  • When the object extraction module 211 performs the tracking period T2 multiple times, the object extraction module 211 may update the specific values of Ratio1 and Ratio2 multiple times according to the detection results, so that a more accurate tracking period full body frame is obtained in the tracking period T2, thereby realizing accurate tracking of pedestrians.
  • In this embodiment, the detection and tracking of the pedestrian to be tracked are performed in the foreground area of any image frame of the to-be-tracked video; that is, both the detection process and the tracking process shown in this embodiment are performed on the foreground area of the image. This greatly reduces the number of image windows that need to be processed, that is, reduces the search space in which the electronic device searches for the pedestrian to be tracked, thereby reducing the time required for tracking the pedestrian to be tracked and increasing the efficiency with which the electronic device tracks the pedestrian.
  • FIG. 9 is a flow chart of steps of an embodiment of a pedestrian inquiry method according to the present invention.
  • FIG. 10 is a schematic diagram of execution steps of an embodiment of a pedestrian inquiry method according to the present invention.
  • The description of the execution subject of the pedestrian inquiry method shown in this embodiment is an optional example and is not limited; that is, the execution subject of each step shown in this embodiment may be any one of the modules of the processor 122 shown in FIG. 2, or may be a module not shown in FIG. 2. This is not limited in this embodiment, as long as the electronic device can execute the pedestrian inquiry method shown in this embodiment.
  • Step 901 Acquire a video to be tracked.
  • For the specific implementation process of step 901 shown in this embodiment, refer to step 301 shown in FIG. 3; the specific implementation process is not described again in this embodiment.
  • Step 902 Detect and track the pedestrian to be tracked in the to-be-tracked video to obtain a pedestrian sequence.
  • The object extraction module 211 is used to detect and track the pedestrians to be tracked in the to-be-tracked video; for details, refer to steps 302 to 317 shown in the foregoing embodiment, and details are not described again in this embodiment.
  • After the above steps, the object extraction module 211 of this embodiment obtains a plurality of pedestrians to be tracked and summarizes them to form a pedestrian sequence.
  • The pedestrian sequence acquired by the object extraction module 211 includes a plurality of sub-sequences. Any one of the plurality of sub-sequences is a target sub-sequence, and the target sub-sequence corresponds to a target pedestrian to be tracked; the target pedestrian to be tracked corresponds to one of the plurality of pedestrians to be tracked determined by the above steps.
  • the target subsequence shown in this embodiment includes a plurality of image frames, and any one of the plurality of image frames includes the target to be tracked pedestrian.
  • Any image frame included in the target subsequence displays the tracking period full body frame, shown in the above steps, that corresponds to the target pedestrian to be tracked.
  • In summary, the pedestrian sequence shown in this embodiment includes multiple sub-sequences, any one of which includes multiple image frames, and the image frames included in any sub-sequence display the tracking period full body frame corresponding to that sub-sequence's pedestrian to be tracked.
  • The embodiment of the present invention is exemplified by the case in which the electronic device does not include a camera.
  • The electronic device shown in this embodiment can communicate with the camera cluster 105, wherein the camera cluster 105 includes multiple cameras capable of capturing the to-be-tracked video, and the electronic device is capable of receiving the to-be-tracked video sent by the cameras.
  • The object extraction module 211 may create different sub-sequences 1001 for different target pedestrians to be tracked, each sub-sequence 1001 including a plurality of image frames of the corresponding target pedestrian to be tracked.
  • Step 903 Send the pedestrian sequence to the feature extraction module.
  • the object extraction module 211 sends the pedestrian sequence to the feature extraction module 212.
  • Step 904 Acquire a feature of the pedestrian sequence.
  • the feature extraction module 212 shown in this embodiment takes the pedestrian sequence as an input and performs feature extraction on the pedestrian sequence.
  • the feature extraction module 212 may analyze the pedestrian sequence to check whether each pixel in any image frame included in the pedestrian sequence represents a feature, thereby extracting features of the pedestrian sequence.
  • the pedestrian sequence is characterized by a feature set of all the target to-be-tracked pedestrians included in the pedestrian sequence.
  • For example, the feature extraction module 212 may perform feature extraction on the image frames of pedestrian A to obtain the feature set of the target pedestrian to be tracked corresponding to pedestrian A, and perform feature extraction on the image frames of pedestrian B to obtain the feature set of the target pedestrian to be tracked corresponding to pedestrian B, until the features of each pedestrian in the pedestrian sequence are extracted.
  • the feature set 1002 created by the feature extraction module 212 includes a target identification ID corresponding to a target pedestrian to be tracked, and a plurality of image features corresponding to the target to-be-tracked pedestrian.
  • the feature set 1002 corresponding to the target pedestrian A to be tracked includes a target identification ID corresponding to the target pedestrian A to be tracked, and a plurality of images corresponding to the target pedestrian A to be tracked. feature.
  • the feature extraction module 212 shown in this embodiment can create a correspondence between different target to-be-tracked pedestrians and different target identification IDs, and a corresponding relationship between different target identification IDs and multiple image features.
  • Step 905 Send the feature of the pedestrian sequence to the index building module.
  • the feature extraction module 212 shown in this embodiment can send the acquired features of the pedestrian sequence to the index construction module 213.
  • Step 906 Create an index list.
  • After receiving the features of the pedestrian sequence, the index construction module 213 of this embodiment establishes the index list. The correspondences included in the index list are the correspondence between different target pedestrians to be tracked and different target identification IDs, and the correspondence between different target identification IDs and multiple image features. Through the index list, the index construction module 213 shown in this embodiment can also associate different target identification IDs with information such as the time and place at which the corresponding target pedestrian to be tracked appears in the to-be-tracked video.
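  • A minimal sketch of such an index list follows; the field names ("features", "appearances") and the dictionary layout are illustrative assumptions, chosen only to show the correspondences named above (target ID to image features, and target ID to time/place of appearance).

    from collections import defaultdict

    index_list = defaultdict(lambda: {"features": [], "appearances": []})

    def add_to_index(target_id, feature_vec, timestamp, camera_id):
        entry = index_list[target_id]
        entry["features"].append(feature_vec)                # many image features per ID
        entry["appearances"].append((timestamp, camera_id))  # time/place in the video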
  • Step 907 Store the index list to a storage medium.
  • the index construction module 213 shown in this embodiment stores the index list to the storage medium 130 after the index list is created.
  • Through steps 901 to 907 shown in this embodiment, different pedestrians can be classified in a large amount of to-be-tracked video, which serves as a basis for subsequent tracking target queries. The following steps can be performed when a query for a tracking target is needed.
  • Step 908 Receive a tracking target.
  • In this embodiment, the purpose of searching by image can be realized; that is, when a query is performed, an image in which the tracking target appears can be input to the feature extraction module 221. For example, as shown in FIG. 10, the image 1003 in which the tracking target appears may be input to the feature extraction module 221.
  • Step 909 Perform feature extraction on the tracking target.
  • The feature extraction module 221 of this embodiment can analyze the image in which the tracking target appears to obtain the features of the tracking target; multiple features corresponding to the tracking target can be obtained by using the method shown in this embodiment.
  • Step 910 Fuse the different features of the tracking target.
  • The feature fusion module 222 can fuse the different features of the tracking target to obtain a merged feature; in this embodiment, the merged feature corresponds to the tracking target.
  • Step 911 Send the merged feature to the index and query module.
  • Step 912 Query the tracking target.
  • The index and query module 223 shown in this embodiment queries the tracking target based on the merged feature corresponding to the tracking target.
  • Specifically, the index and query module 223 matches the merged feature with the index list stored in the storage medium 130 to find the target identification ID corresponding to the merged feature. According to the index list, the index and query module 223 can then obtain information such as the time and place at which the pedestrian corresponding to that target identification ID appears in the to-be-tracked video; in this embodiment, the pedestrian corresponding to the target identification ID is the tracking target.
  • In the process of searching for the tracking target, the tracking target can thus be quickly and accurately located in a large amount of to-be-tracked video, so that information such as the time and location of the tracking target in the to-be-tracked video can be quickly obtained.
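  • The matching step can be sketched as follows. Cosine similarity over the stored feature vectors is an illustrative choice (the embodiment does not fix the matching metric), and the index layout is the hypothetical one sketched after step 906.

    import numpy as np

    def query_tracking_target(merged_feature, index_list):
        def cos(a, b):
            return float(np.dot(a, b) /
                         (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
        best_id, best_score = None, -1.0
        for target_id, entry in index_list.items():
            # Score a target ID by its best-matching stored image feature.
            score = max(cos(merged_feature, f) for f in entry["features"])
            if score > best_score:
                best_id, best_score = target_id, score
        # The appearance records give the time/place of the tracking target.
        return best_id, index_list[best_id]["appearances"]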
  • The application scenario of the method shown in this embodiment is not limited. For example, it can be used for search-by-image in a safe city to quickly acquire information related to a tracking target, and it can also be applied to pedestrian trajectory generation and analysis, people counting, pedestrian warning in vehicle-assisted driving, and the like. In general, as long as intelligent analysis needs to be performed on a video that includes pedestrians, the embodiment of the present invention can be used to perform pedestrian detection and tracking to extract information such as the positions and trajectories of pedestrians.
  • FIG. 1 illustrates the specific structure of the electronic device from the perspective of physical hardware.
  • The following describes the structure of the electronic device from the perspective of the procedure for executing the pedestrian tracking method shown in the above embodiment, as shown in FIG. 11.
  • the pedestrian tracking method shown in the above embodiment can be performed by the electronic device shown in this embodiment.
  • the electronic device includes:
  • a first determining unit 1101 configured to determine a detection period and a tracking period of the video to be tracked
  • a fifth acquiring unit 1102 configured to acquire a target image frame sequence of the to-be-tracked video, the target image frame sequence includes one or more consecutive image frames, and the target image frame sequence is located before the detection period;
  • a sixth acquiring unit 1103, configured to acquire a background area of the to-be-tracked video according to the target image frame sequence
  • a seventh acquiring unit 1104 configured to subtract any image frame of the to-be-tracked video from the background area to obtain a foreground area of any image frame of the to-be-tracked video in the detecting period;
  • the eighth obtaining unit 1105 is configured to detect a foreground area of any image frame of the to-be-tracked video to obtain the to-be-tracked pedestrian.
  • The fifth acquiring unit 1102 and the eighth acquiring unit 1105 shown in this embodiment are optional units; in a specific application, the electronic device may not include the foregoing optional units.
  • a first acquiring unit 1106, configured to acquire, within the detection period, an upper body detection frame of a pedestrian to be tracked that appears in the to-be-tracked video, the upper body detection frame being Det_ub = [L_ub, T_ub, R_ub, B_ub], where L_ub and T_ub are the abscissa and ordinate of the upper left corner of the upper body detection frame, and R_ub and B_ub are the abscissa and ordinate of its lower right corner;
  • the first acquiring unit 1106 is specifically configured to: determine a target image frame, where the target image frame is an image frame in which the pedestrian to be tracked appears, and acquire the upper body in a foreground area of the target image frame. Detection box.
  • a second obtaining unit 1107 configured to acquire a whole body frame of the detection period of the pedestrian to be tracked according to the upper body detection frame
  • The second acquiring unit 1107 is configured to acquire the lower body scanning area according to the upper body detection frame, and, if lower body detection performed in the lower body scanning area acquires a lower body detection frame, to acquire the detection period full body frame according to the upper body detection frame and the lower body detection frame.
  • As above, the upper body detection frame is Det_ub = [L_ub, T_ub, R_ub, B_ub], where L_ub and T_ub are the abscissa and ordinate of the upper left corner of the upper body detection frame, and R_ub and B_ub are the abscissa and ordinate of its lower right corner.
  • The second acquiring unit 1107 is specifically configured to: when acquiring the lower body scanning area according to the upper body detection frame, determine a first parameter, where the Ratio_default used by the first parameter is a preset ratio; determine a second parameter; determine a third parameter; and determine the lower body scanning area ScanArea = [L_s, T_s, R_s, B_s] according to the first parameter, the second parameter, and the third parameter.
  • The paral1, paral2, and paral3 are preset values, the imgW is the width of any image frame of the to-be-tracked video within the detection period, and the imgH is the height of any image frame of the to-be-tracked video within the detection period.
  • The lower body detection frame is Det_lb = [L_lb, T_lb, R_lb, B_lb], where L_lb and T_lb are the abscissa and ordinate of the upper left corner of the lower body detection frame, and R_lb and B_lb are the abscissa and ordinate of its lower right corner.
  • The second acquiring unit 1107 is specifically configured to, when acquiring the detection period full body frame of the pedestrian to be tracked according to the upper body detection frame and the lower body detection frame, determine the abscissa of the upper left corner, the ordinate of the upper left corner, the abscissa of the lower right corner, and the ordinate of the lower right corner of the detection period full body frame, and thereby determine the detection period full body frame.
  • The second acquiring unit 1107 is further specifically configured to, when acquiring the detection period full body frame of the pedestrian to be tracked according to the upper body detection frame, if lower body detection performed in the lower body scanning area does not acquire the lower body detection frame, determine the abscissa of the upper left corner, the ordinate of the upper left corner, the abscissa of the lower right corner, and the ordinate of the lower right corner of the detection period full body frame from the upper body detection frame, and thereby determine the detection period full body frame.
  • a third obtaining unit 1108, configured to acquire, in the tracking period, an upper body tracking frame of the to-be-tracked pedestrian that appears in the to-be-tracked video;
  • The third acquiring unit is configured to scatter a plurality of particles centered on the upper body detection frame, where the ratio of the width of any one of the plurality of particles to the height of that particle is the same as the ratio of the width of the upper body detection frame to the height of the upper body detection frame, and to determine the upper body tracking frame, where the upper body tracking frame is the particle among the plurality of particles that is most similar to the upper body detection frame.
  • The fourth acquiring unit 1109 is configured to acquire, according to the detection period full body frame, the tracking period full body frame corresponding to the upper body tracking frame, where the tracking period full body frame is used to track the pedestrian to be tracked.
  • The fourth acquiring unit 1109 is specifically configured to acquire the preset ratio Ratio1 of the width of the detection period full body frame to the height of the detection period full body frame, determine the ratio Ratio2 of the height of the upper body detection frame to the height of the detection period full body frame, and determine the tracking period full body frame according to Ratio1 and Ratio2.
  • Alternatively, the fourth acquiring unit 1109 is specifically configured to determine the ratio Ratio1 of the width of the detection period full body frame to the height of the detection period full body frame, determine the ratio Ratio2 of the height of the upper body detection frame to the height of the detection period full body frame, and determine the tracking period full body frame according to Ratio1 and Ratio2.
  • The upper body tracking frame is Track_ub = [L_ub^t, T_ub^t, R_ub^t, B_ub^t], where L_ub^t and T_ub^t are the abscissa and ordinate of the upper left corner of the upper body tracking frame, and R_ub^t and B_ub^t are the abscissa and ordinate of its lower right corner.
  • When determining the tracking period full body frame according to Ratio1 and Ratio2, the fourth acquiring unit 1109 determines the abscissa of the upper left corner of the tracking period full body frame: if the abscissa of the upper left corner of the detection period full body frame is equal to the abscissa of the upper left corner of the upper body detection frame, the fourth acquiring unit 1109 determines the abscissa of the upper left corner, the ordinate of the upper left corner, the abscissa of the lower right corner, and the ordinate of the lower right corner of the tracking period full body frame accordingly, and thereby determines the tracking period full body frame.
  • Alternatively, if the abscissa of the upper left corner of the detection period full body frame is equal to the abscissa of the upper left corner of the lower body detection frame, the fourth acquiring unit 1109 determines the abscissa of the upper left corner, the ordinate of the upper left corner, the abscissa of the lower right corner, and the ordinate of the lower right corner of the tracking period full body frame accordingly, and thereby determines the tracking period full body frame.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • The division of units is only a logical function division; in actual implementation there may be another division manner. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processor, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • The technical solution of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes a number of instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.


Abstract

Embodiments of the present invention provide a pedestrian tracking method and an electronic device. The method includes: within a detection period, acquiring an upper body detection frame of a pedestrian to be tracked that appears in a to-be-tracked video; acquiring a detection period full body frame of the pedestrian to be tracked according to the upper body detection frame; and, within the tracking period, acquiring, according to the detection period full body frame, a tracking period full body frame corresponding to an upper body tracking frame. It can be seen that the pedestrian to be tracked can be tracked through the tracking period full body frame. Because the aspect ratio of the detection period full body frame is variable, even if the pedestrian to be tracked appears in an abnormal posture within the detection period, the method shown in this embodiment can still obtain an accurate tracking period full body frame of the pedestrian to be tracked, so that accurate tracking of the pedestrian to be tracked can still be achieved when the pedestrian appears in an abnormal posture.

Description

Pedestrian tracking method and electronic device
This application claims priority to Chinese Patent Application No. 201710208767.7, filed with the Chinese Patent Office on March 31, 2017 and entitled "Pedestrian Tracking Method and Electronic Device", which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a pedestrian tracking method and an electronic device.
Background
Against the background of building safe cities, intelligent video analysis systems are receiving more and more attention. An intelligent video analysis system needs to analyze pedestrians in massive video data automatically and intelligently, for example, computing pedestrian motion trajectories, detecting abnormal pedestrian intrusion into restricted areas, automatically detecting pedestrians on the road and reminding the driver to avoid them, and helping the police find criminal suspects through searching by image; this can greatly improve work efficiency and reduce labor costs.
To automatically extract pedestrians from massive video data, pedestrian detection and tracking algorithms are required. Pedestrian detection means that, given an input image, a detection algorithm automatically finds the pedestrians in the image and gives each pedestrian's position in the form of a rectangular box, called the pedestrian's detection frame. Because a pedestrian is moving in a video, a pedestrian tracking algorithm is needed to track the pedestrian and obtain the pedestrian's position in every frame of the video; this position is also given in the form of a rectangular box, called the pedestrian's tracking frame.
The defects of the prior art are as follows. 1. The detection frame is not accurate enough: because the aspect ratio of a pedestrian's detection frame is fixed and the whole body is detected, when a pedestrian appears in some abnormal posture, for example with the legs spread particularly wide so that the aspect ratio becomes larger, a detection frame with a fixed aspect ratio will be inaccurate. 2. The detection frame and the tracking frame cannot capture changes in a pedestrian's posture while walking: because the pedestrian is moving in the video, the pedestrian's posture may change considerably while walking, and in the video images this change appears as a change in the aspect ratio of the pedestrian's minimal enclosing rectangle. Detection frames and tracking frames based on a fixed aspect ratio cannot capture these posture changes during walking.
Summary
The present invention provides a pedestrian tracking method and an electronic device that can still achieve accurate tracking no matter how the posture of the pedestrian to be tracked changes.
A first aspect of the embodiments of the present invention provides a pedestrian tracking method, including:
Step A: Determine a detection period and a tracking period of a to-be-tracked video.
Optionally, the detection period is included in the tracking period, and the duration of the detection period is less than the duration of the tracking period.
Optionally, the detection period shown in this embodiment may not be included in the tracking period; the detection period is before the tracking period, and the duration of the detection period is less than the duration of the tracking period.
Step B: Acquire an upper body detection frame of a pedestrian to be tracked.
Specifically, within the detection period, acquire the upper body detection frame of the pedestrian to be tracked that appears in the to-be-tracked video.
More specifically, a target image frame is first determined, where the target image frame is an image frame in which the pedestrian to be tracked appears.
When the target image frame is determined, the upper body detection frame can be acquired within the target image frame.
Step C: Acquire a detection period full body frame of the pedestrian to be tracked.
Specifically, acquire the detection period full body frame of the pedestrian to be tracked according to the upper body detection frame.
Step D: Acquire an upper body tracking frame of the pedestrian to be tracked.
Specifically, within the tracking period, acquire the upper body tracking frame of the pedestrian to be tracked that appears in the to-be-tracked video.
In this embodiment, the detection period full body frame acquired within the detection period is initialized as a tracking target, so that the pedestrian to be tracked, as the tracking target, can be tracked within the tracking period.
Step E: Acquire a tracking period full body frame.
Specifically, acquire, according to the detection period full body frame, the tracking period full body frame corresponding to the upper body tracking frame.
The tracking period full body frame is used to track the pedestrian to be tracked.
With the method shown in this embodiment, the acquired detection period full body frame is obtained according to the upper body detection frame of the pedestrian to be tracked, and the aspect ratio of the detection period full body frame is variable. Therefore, even if the pedestrian to be tracked appears in an abnormal posture within the detection period, the method shown in this embodiment can still obtain an accurate tracking period full body frame of the pedestrian to be tracked, so that accurate tracking of the pedestrian to be tracked can still be achieved when the pedestrian appears in an abnormal posture.
With reference to the first aspect of the embodiments of the present invention, in a first implementation of the first aspect of the embodiments of the present invention, the following steps are further performed before Step C:
Step C01: Acquire a lower body scanning area.
In this embodiment, after the upper body detection frame of the pedestrian to be tracked is acquired, the lower body scanning area of the pedestrian to be tracked can be acquired according to the upper body detection frame of the pedestrian to be tracked.
Step C02: Acquire the detection period full body frame.
Specifically, if lower body detection is performed in the lower body scanning area and a lower body detection frame is acquired, the detection period full body frame is acquired according to the upper body detection frame and the lower body detection frame.
The detection period full body frame acquired by the method shown in this embodiment is obtained by combining the upper body detection frame and the lower body detection frame of the pedestrian to be tracked. It can be seen that the aspect ratio of the acquired detection period full body frame is variable. Therefore, even if the pedestrian to be tracked appears in an abnormal posture within the detection period, for example a posture with the legs spread particularly wide, so that the proportion between the upper body and the lower body of the pedestrian changes, an accurate detection period full body frame of the pedestrian to be tracked can still be obtained by combining the separately acquired upper body detection frame and lower body detection frame, thereby achieving accurate tracking of the pedestrian to be tracked.
With reference to the first implementation of the first aspect of the embodiments of the present invention, in a second implementation of the first aspect of the embodiments of the present invention,
the upper body detection frame is Det_ub = [L_ub, T_ub, R_ub, B_ub], where L_ub is the abscissa of the upper left corner of the upper body detection frame, T_ub is the ordinate of the upper left corner of the upper body detection frame, R_ub is the abscissa of the lower right corner of the upper body detection frame, and B_ub is the ordinate of the lower right corner of the upper body detection frame.
Step C01 then specifically includes:
Step C011: Determine a first parameter, where the first parameter is computed from the upper body detection frame and Ratio_default, and Ratio_default is a preset ratio.
Optionally, Ratio_default in this embodiment is pre-stored, and Ratio_default may be set in advance according to the aspect ratio of a human body detection frame. For example, if the aspect ratio of the human body detection frame is determined in advance to be 3:7, Ratio_default may be set to 3/7 and stored, so that in performing this step, Ratio_default can be retrieved to calculate the first parameter.
Step C012: Determine a second parameter.
Step C013: Determine a third parameter.
Step C014: Determine the lower body scanning area.
Specifically, the lower body scanning area may be determined according to the first parameter, the second parameter, and the third parameter.
It can be seen that once the first parameter, the second parameter, and the third parameter are acquired, the lower body scanning area can be determined, so that the lower body detection frame of the pedestrian to be tracked is detected within the acquired lower body scanning area. This improves the accuracy and efficiency of acquiring the lower body detection frame of the pedestrian to be tracked, and improves the efficiency of tracking the pedestrian to be tracked.
With reference to the second implementation of the first aspect of the embodiments of the present invention, in a third implementation of the first aspect of the embodiments of the present invention,
Step C014 is specifically performed as: determining the lower body scanning area according to the first parameter, the second parameter, and the third parameter, where the lower body scanning area is ScanArea = [L_s, T_s, R_s, B_s].
Specifically, L_s is the abscissa of the upper left corner of the lower body scanning area, T_s is the ordinate of the upper left corner of the lower body scanning area, R_s is the abscissa of the lower right corner of the lower body scanning area, and B_s is the ordinate of the lower right corner of the lower body scanning area.
More specifically, L_s, T_s, R_s, and B_s are computed from the first parameter, the second parameter, and the third parameter together with the preset values paral1, paral2, and paral3, as well as imgW and imgH.
The values paral1, paral2, and paral3 are preset values; they may be empirical values, or an operator may set different paral1, paral2, and paral3 to achieve different settings of the lower body scanning area.
imgW is the width of any image frame of the to-be-tracked video within the detection period, and imgH is the height of any image frame of the to-be-tracked video within the detection period.
With the method shown in this embodiment, the lower body detection frame of the pedestrian to be tracked can be detected within the acquired lower body scanning area, which improves the accuracy and efficiency of acquiring the lower body detection frame of the pedestrian to be tracked and improves the efficiency of tracking the pedestrian to be tracked. Moreover, in the acquisition process, different settings of the lower body scanning area can be achieved by setting the parameters (paral1, paral2, and paral3) differently, which makes the method shown in this embodiment widely applicable: in different application scenarios, different localizations of the lower body detection frame can be achieved according to different parameter settings, improving the accuracy of detecting the pedestrian to be tracked.
With reference to the method according to any one of the first to third implementations of the first aspect of the embodiments of the present invention, in a fourth implementation of the first aspect of the embodiments of the present invention,
the lower body detection frame is Det_lb = [L_lb, T_lb, R_lb, B_lb], where L_lb is the abscissa of the upper left corner of the lower body detection frame, T_lb is the ordinate of the upper left corner of the lower body detection frame, R_lb is the abscissa of the lower right corner of the lower body detection frame, and B_lb is the ordinate of the lower right corner of the lower body detection frame.
Step C includes:
Step C11: Determine the abscissa of the upper left corner of the detection period full body frame; specifically, L_fb = min(L_ub, L_lb).
Step C12: Determine the ordinate of the upper left corner of the detection period full body frame; specifically, T_fb = T_ub.
Step C13: Determine the abscissa of the lower right corner of the detection period full body frame; specifically, R_fb = max(R_ub, R_lb).
Step C14: Determine the ordinate of the lower right corner of the detection period full body frame; specifically, B_fb = B_lb.
Step C15: Determine the detection period full body frame; specifically, the detection period full body frame is FB_det = [L_fb, T_fb, R_fb, B_fb].
It can be seen that, with the method shown in this embodiment, the acquired detection period full body frame is obtained by combining the upper body detection frame and the lower body detection frame of the pedestrian to be tracked. Even when the pedestrian to be tracked appears in an abnormal posture, for example with the legs spread particularly wide so that the aspect ratio becomes larger, this embodiment can detect the upper body and the lower body of the pedestrian separately and acquire the upper body detection frame and the lower body detection frame respectively. That is, depending on the posture of the pedestrian to be tracked, the upper body detection frame and the lower body detection frame within the detection period full body frame have different proportions, so that an accurate detection period full body frame can be obtained. Based on the variable proportion between the upper body detection frame and the lower body detection frame, changes in the posture of the pedestrian to be tracked while walking can be accurately captured, effectively avoiding situations in which the pedestrian cannot be tracked because of differences in posture.
With reference to the fourth implementation of the first aspect of the embodiments of the present invention, in a fifth implementation of the first aspect of the embodiments of the present invention, the following further needs to be performed after Step C:
Step D01: Determine the ratio of the width of the detection period full body frame to the height of the detection period full body frame; specifically, Ratio1 = (R_fb - L_fb) / (B_fb - T_fb).
Step D02: Determine the ratio of the height of the upper body detection frame to the height of the detection period full body frame; specifically, Ratio2 = (B_ub - T_ub) / (B_fb - T_fb).
Step D03: Determine the tracking period full body frame; specifically, the tracking period full body frame is determined according to Ratio1 and Ratio2.
With the method shown in this embodiment, the tracking period full body frame can be determined based on the ratio of the width of the detection period full body frame to its height and on the ratio of the height of the upper body detection frame to the height of the detection period full body frame. Because the detection period full body frame can accurately capture changes in the posture of the pedestrian to be tracked, the tracking period full body frame acquired according to the detection period full body frame can accurately capture changes in the posture of the pedestrian while walking. This improves the accuracy of tracking the pedestrian through the tracking period full body frame and effectively avoids situations in which the pedestrian cannot be tracked because of differences in posture.
With reference to the method according to the second implementation or the third implementation of the first aspect of the embodiments of the present invention, in a sixth implementation of the first aspect of the embodiments of the present invention, the following further needs to be performed after Step C01:
Step C21: Determine the abscissa of the upper left corner of the detection period full body frame. Specifically, if lower body detection is performed in the lower body scanning area but the lower body detection frame is not acquired, the abscissa of the upper left corner of the detection period full body frame is determined from the upper body detection frame.
Step C22: Determine the ordinate of the upper left corner of the detection period full body frame.
Step C23: Determine the abscissa of the lower right corner of the detection period full body frame.
Step C24: Determine the ordinate of the lower right corner of the detection period full body frame.
Step C25: Determine the detection period full body frame FB_det = [L_fb, T_fb, R_fb, B_fb].
It can be seen that, with the method shown in this embodiment, even if the lower body detection frame is not acquired within the lower body scanning area, the lower body detection frame can be computed from the upper body detection frame, so that the detection period full body frame can still be acquired even when the lower body detection frame is not detected. This effectively ensures that the pedestrian to be tracked is tracked, avoids situations in which the pedestrian cannot be tracked because the lower body cannot be detected, and accurately captures changes in the pedestrian's posture while walking, effectively avoiding situations in which the pedestrian cannot be tracked because of differences in posture.
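A minimal sketch of this fallback follows. The original filing gives the exact formulas for Steps C21 to C24 as equations, so the concrete rule below (keep the upper body frame's left, top, and right edges and extend the bottom according to a preset upper-body-to-full-body height ratio) and the default value are assumptions for illustration only.

    def full_body_from_upper_only(ub, ratio2_default=0.5):
        # ub = (L_ub, T_ub, R_ub, B_ub): upper body detection frame.
        L, T, R, B = ub
        full_h = (B - T) / ratio2_default   # assumed full-body height estimate
        # Assumed: left/top/right edges inherited from the upper body frame.
        return (L, T, R, T + full_h)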
With reference to the sixth implementation of the first aspect of the embodiments of the present invention, in a seventh implementation of the first aspect of the embodiments of the present invention, the method further includes:
Step C31: Acquire the preset ratio Ratio1 of the width of the detection period full body frame to the height of the detection period full body frame.
Step C32: Determine the ratio of the height of the upper body detection frame to the height of the detection period full body frame; specifically, Ratio2 = (B_ub - T_ub) / (B_fb - T_fb).
Step C33: Determine the tracking period full body frame according to Ratio1 and Ratio2.
It can be seen that, with the method shown in this embodiment, the tracking period full body frame can be acquired even if the lower body detection frame is not acquired within the lower body scanning area, which effectively ensures that the pedestrian to be tracked is tracked.
With reference to the method according to the fifth implementation or the seventh implementation of the first aspect of the embodiments of the present invention, in an eighth implementation of the first aspect of the embodiments of the present invention,
the upper body tracking frame is Track_ub = [L_ub^t, T_ub^t, R_ub^t, B_ub^t], where L_ub^t is the abscissa of the upper left corner of the upper body tracking frame, T_ub^t is the ordinate of the upper left corner of the upper body tracking frame, R_ub^t is the abscissa of the lower right corner of the upper body tracking frame, and B_ub^t is the ordinate of the lower right corner of the upper body tracking frame.
Step C33 specifically includes:
Step C331: Determine the abscissa of the upper left corner of the tracking period full body frame. Specifically, if the abscissa of the upper left corner of the detection period full body frame is equal to the abscissa of the upper left corner of the upper body detection frame, the abscissa of the upper left corner of the tracking period full body frame, L_fb^t, is determined from the upper body tracking frame.
Step C332: Determine the ordinate of the upper left corner of the tracking period full body frame.
Step C333: Determine the abscissa of the lower right corner of the tracking period full body frame.
Step C334: Determine the ordinate of the lower right corner of the tracking period full body frame, according to Ratio1 and Ratio2.
Step C335: Determine the tracking period full body frame; specifically, the tracking period full body frame is FB_track = [L_fb^t, T_fb^t, R_fb^t, B_fb^t].
With the method shown in this embodiment, when the abscissa of the upper left corner of the detection period full body frame is equal to the abscissa of the upper left corner of the upper body detection frame, the tracking period full body frame can be calculated, so that even if the posture of the pedestrian to be tracked changes greatly, the tracking period full body frame can still be acquired. This avoids situations in which the pedestrian to be tracked cannot be tracked and improves the accuracy of tracking the pedestrian to be tracked.
With reference to any one of the first aspect to the eighth implementation manner of the first aspect, in a ninth implementation manner of the first aspect, step D specifically includes:
Step D11: scattering a plurality of particles centered on the upper-body detection box. Specifically, the ratio of the width of any one of the plurality of particles to the height of that particle is the same as the ratio of the width of the upper-body detection box to the height of the upper-body detection box.
If the upper-body detection box is determined within the detection period of the to-be-tracked video, the pedestrian is tracked within the tracking period of the video. Because the pedestrian moves in the video, the pedestrian's position within the detection period differs from that within the tracking period; to track the pedestrian, a plurality of particles need to be scattered around the pedestrian's upper-body detection box.
Step D12: determining the upper-body tracking box. Specifically, the upper-body tracking box is the particle, among the plurality of particles, most similar to the upper-body detection box.
With the method shown in this embodiment, a plurality of particles are scattered centered on the upper-body detection box, so an accurate upper-body tracking box is obtained within the tracking period; obtaining the upper-body tracking box from the upper-body detection box matches the pedestrian's different postures, achieving accurate tracking of the to-be-tracked pedestrian.
With reference to the eighth implementation manner of the first aspect, in a tenth implementation manner of the first aspect, step E specifically includes:
Step E11: determining the horizontal coordinate of the upper-left corner of the tracking-period full-body box. Specifically, if L_det^full = L_det^down, the horizontal coordinate L_track^full of the upper-left corner of the tracking-period full-body box is determined by the formula given as image appb-000055 in the original.
Step E12: determining the vertical coordinate T_track^full of the upper-left corner of the tracking-period full-body box (image appb-000056).
Step E13: determining the horizontal coordinate R_track^full of the lower-right corner of the tracking-period full-body box (image appb-000057).
Step E14: determining the vertical coordinate B_track^full of the lower-right corner of the tracking-period full-body box (image appb-000058); more specifically, by the formula given as image appb-000059.
Step E15: determining the tracking-period full-body box Track^full (image appb-000060).
With the method shown in this embodiment, when the horizontal coordinate L_det^full of the upper-left corner of the detection-period full-body box equals the horizontal coordinate L_det^down of the upper-left corner of the lower-body detection box, the tracking-period full-body box can be calculated, so that even if the posture of the to-be-tracked pedestrian changes greatly, the tracking-period full-body box is still obtained, which avoids the situation in which the pedestrian cannot be tracked and improves tracking accuracy.
With reference to any one of the first aspect to the tenth implementation manner of the first aspect, in an eleventh implementation manner of the first aspect, the method further includes:
Step E21: obtaining a target image frame sequence of the to-be-tracked video; the target image frame sequence includes one or more consecutive image frames and precedes the detection period.
Step E22: obtaining a background area of the to-be-tracked video according to the target image frame sequence. Specifically, in any image frame of the target image frame sequence, stationary objects are obtained through a static background model, and the stationary objects are determined to be the background area of the to-be-tracked video.
Step E23: obtaining a foreground area of any image frame of the to-be-tracked video. Specifically, within the detection period, the background area is subtracted from any image frame of the video to obtain the foreground area of that frame. More specifically, once the background area is obtained, the difference between any area of any image frame and the background area is computed to obtain a target value, so each different area of any image frame corresponds to one target value. If the target value is greater than or equal to a preset threshold, the area of the image frame corresponding to that target value is a motion area. When a motion area is detected, the motion area is determined to be the foreground area of the image frame.
Step E24: obtaining the to-be-tracked pedestrian. Specifically, the foreground area of any image frame of the to-be-tracked video is detected to obtain the to-be-tracked pedestrian.
With reference to the eleventh implementation manner of the first aspect, in a twelfth implementation manner of the first aspect, step B includes:
Step B11: determining a target image frame. Specifically, the target image frame is an image frame in which the to-be-tracked pedestrian appears.
Step B12: obtaining the upper-body detection box within the foreground area of the target image frame.
As can be seen, with the method shown in this embodiment, detection and tracking of the to-be-tracked pedestrian are performed on the foreground area of any image frame of the video; that is, both the detection process and the tracking process are executed on the image foreground. This greatly reduces the number of image windows to be processed, i.e., reduces the search space for the pedestrian, which shortens the time needed for tracking and improves tracking efficiency.
A second aspect of the embodiments of the present invention provides an electronic device, including:
a first determining unit, configured to determine a detection period and a tracking period of a to-be-tracked video; the first determining unit performs step A of the first aspect, and for the specific execution process, refer to the first aspect (details are not repeated);
a first obtaining unit, configured to obtain, within the detection period, an upper-body detection box of a to-be-tracked pedestrian appearing in the to-be-tracked video; the first obtaining unit performs step B of the first aspect;
a second obtaining unit, configured to obtain a detection-period full-body box of the to-be-tracked pedestrian according to the upper-body detection box; the second obtaining unit performs step C of the first aspect;
a third obtaining unit, configured to obtain, within the tracking period, an upper-body tracking box of the to-be-tracked pedestrian appearing in the to-be-tracked video; the third obtaining unit performs step D of the first aspect; and
a fourth obtaining unit, configured to obtain, according to the detection-period full-body box, a tracking-period full-body box corresponding to the upper-body tracking box, the tracking-period full-body box being used to track the to-be-tracked pedestrian; the fourth obtaining unit performs step E of the first aspect.
With the electronic device shown in this embodiment, the detection-period full-body box is obtained from the upper-body detection box of the pedestrian, and its width-to-height ratio is variable; therefore, even if the pedestrian appears in an unusual posture within the detection period, an accurate tracking-period full-body box is still obtained, so the pedestrian is tracked accurately even when appearing in an unusual posture.
With reference to the second aspect, in a first implementation manner of the second aspect,
the second obtaining unit is specifically configured to obtain a lower-body scan area according to the upper-body detection box and, if a lower-body detection box is obtained by performing lower-body detection within the lower-body scan area, obtain the detection-period full-body box according to the upper-body detection box and the lower-body detection box. The second obtaining unit performs steps C01 and C02 of the first aspect; details are not repeated.
With the electronic device shown in this embodiment, the detection-period full-body box is obtained by combining the upper-body and lower-body detection boxes, so its width-to-height ratio is variable. Even if the pedestrian appears in an unusual posture within the detection period (for example, with the legs spread particularly wide, changing the proportion of the upper body to the lower body), an accurate detection-period full-body box is still obtained by combining the separately obtained upper-body and lower-body detection boxes, achieving accurate tracking of the pedestrian.
With reference to the first implementation manner of the second aspect, in a second implementation manner of the second aspect,
the upper-body detection box is Det^up = [L_det^up, T_det^up, R_det^up, B_det^up], where L_det^up is the horizontal coordinate of the upper-left corner of the upper-body detection box, T_det^up is the vertical coordinate of its upper-left corner, R_det^up is the horizontal coordinate of its lower-right corner, and B_det^up is the vertical coordinate of its lower-right corner.
When obtaining the lower-body scan area according to the upper-body detection box, the second obtaining unit is specifically configured to determine a first parameter (formula given as image appb-000068 in the original), where Ratio_default is a preset ratio, determine a second parameter (image appb-000069), determine a third parameter (image appb-000070), and determine the lower-body scan area according to the first parameter, the second parameter and the third parameter. The second obtaining unit performs steps C011 to C014 of the first aspect; details are not repeated.
As can be seen, once the first, second and third parameters are obtained, the electronic device of this embodiment determines the lower-body scan area, so that lower-body detection for the to-be-tracked pedestrian is performed only within the obtained scan area, which improves the accuracy and efficiency of obtaining the lower-body detection box and the efficiency of tracking.
With reference to the second implementation manner of the second aspect, in a third implementation manner of the second aspect, when obtaining the lower-body scan area according to the upper-body detection box, the second obtaining unit is specifically configured to determine the lower-body scan area ScanArea = [L_s, T_s, R_s, B_s] according to the first, second and third parameters, where L_s is the horizontal coordinate of the upper-left corner of the lower-body scan area, T_s is the vertical coordinate of its upper-left corner, R_s is the horizontal coordinate of its lower-right corner, and B_s is the vertical coordinate of its lower-right corner; the coordinates are computed by formulas given as images appb-000071 to appb-000076 in the original; paral1, paral2 and paral3 are preset values, imgW is the width of any image frame of the video within the detection period, and imgH is the height of any such image frame. The second obtaining unit performs step C014 of the first aspect; details are not repeated.
With the electronic device shown in this embodiment, lower-body detection is performed only within the obtained lower-body scan area, which improves the accuracy and efficiency of obtaining the lower-body detection box and the efficiency of tracking; moreover, different settings of the parameters (paral1, paral2 and paral3) configure different scan areas, so the method is widely applicable and, under different parameter settings in different application scenarios, the lower-body detection box can be located, which improves detection accuracy.
With reference to any one of the first to the third implementation manners of the second aspect, in a fourth implementation manner of the second aspect,
the lower-body detection box is Det^down = [L_det^down, T_det^down, R_det^down, B_det^down], where L_det^down is the horizontal coordinate of the upper-left corner of the lower-body detection box, T_det^down is the vertical coordinate of its upper-left corner, R_det^down is the horizontal coordinate of its lower-right corner, and B_det^down is the vertical coordinate of its lower-right corner.
When obtaining the detection-period full-body box of the to-be-tracked pedestrian according to the upper-body detection box, the second obtaining unit is specifically configured to determine the upper-left horizontal coordinate L_det^full of the detection-period full-body box (formula given as image appb-000082 in the original), the upper-left vertical coordinate T_det^full (image appb-000083), the lower-right horizontal coordinate R_det^full (image appb-000084) and the lower-right vertical coordinate B_det^full (image appb-000085), and determine the detection-period full-body box Det^full = [L_det^full, T_det^full, R_det^full, B_det^full] (image appb-000086). The second obtaining unit performs steps C11 to C15 of the first aspect; details are not repeated.
As can be seen, with the electronic device shown in this embodiment, the detection-period full-body box is obtained by combining the pedestrian's upper-body detection box and lower-body detection box, so that even when the pedestrian appears in an unusual posture (for example, with the legs spread particularly wide, which enlarges the width-to-height ratio), the upper body and the lower body are detected separately, the proportions of the upper-body and lower-body detection boxes within the full-body box vary with the pedestrian's posture, and an accurate detection-period full-body box is obtained. Based on this variable proportion, posture changes while the pedestrian walks are captured accurately, which effectively avoids the situation in which the pedestrian cannot be tracked because of a posture change.
With reference to the fourth implementation manner of the second aspect, in a fifth implementation manner of the second aspect, the fourth obtaining unit is specifically configured to determine the ratio Ratio_WH^full of the width of the detection-period full-body box to its height, determine the ratio Ratio_up^full of the height of the upper-body detection box to the height of the detection-period full-body box, and determine the tracking-period full-body box according to Ratio_WH^full and Ratio_up^full. The fourth obtaining unit performs steps D01 to D03 of the first aspect; details are not repeated.
With the electronic device shown in this embodiment, the tracking-period full-body box is determined from the width-to-height ratio of the detection-period full-body box and from the ratio of the upper-body height to the full-body height; because the detection-period full-body box accurately captures posture changes of the pedestrian, the tracking-period full-body box derived from it accurately captures posture changes while the pedestrian walks, which improves the accuracy of tracking through the tracking-period full-body box and effectively avoids losing the pedestrian because of a posture change.
With reference to the second or the third implementation manner of the second aspect, in a sixth implementation manner of the second aspect, when obtaining the detection-period full-body box of the to-be-tracked pedestrian according to the upper-body detection box, the second obtaining unit is specifically configured to, if no lower-body detection box is obtained by performing lower-body detection within the lower-body scan area, determine the upper-left horizontal coordinate L_det^full of the detection-period full-body box (formula given as image appb-000091 in the original), the upper-left vertical coordinate T_det^full (image appb-000092), the lower-right horizontal coordinate R_det^full (image appb-000093) and the lower-right vertical coordinate B_det^full (image appb-000094), and determine the detection-period full-body box Det^full (image appb-000095). The second obtaining unit performs steps C21 to C24 of the first aspect; details are not repeated.
As can be seen, with the electronic device shown in this embodiment, even if no lower-body detection box is obtained within the lower-body scan area, the lower body can be calculated from the upper-body detection box, so the detection-period full-body box is still obtained. This effectively guarantees that the pedestrian can be tracked, avoids the situation in which tracking fails because the pedestrian's lower body cannot be detected, accurately captures posture changes while the pedestrian walks, and effectively avoids losing the pedestrian because of a posture change.
With reference to the sixth implementation manner of the second aspect, in a seventh implementation manner of the second aspect, the fourth obtaining unit is specifically configured to obtain the preset ratio Ratio_WH^full of the width of the detection-period full-body box to its height, determine the ratio Ratio_up^full of the height of the upper-body detection box to the height of the detection-period full-body box, and determine the tracking-period full-body box according to Ratio_WH^full and Ratio_up^full. The fourth obtaining unit performs steps C31 to C33 of the first aspect; details are not repeated.
As can be seen, with the electronic device shown in this embodiment, the tracking-period full-body box is obtained even if no lower-body detection box is obtained within the lower-body scan area, which effectively guarantees that the to-be-tracked pedestrian is tracked.
With reference to the fifth or the seventh implementation manner of the second aspect, in an eighth implementation manner of the second aspect,
the upper-body tracking box is Track^up = [L_track^up, T_track^up, R_track^up, B_track^up], where L_track^up is the horizontal coordinate of the upper-left corner of the upper-body tracking box, T_track^up is the vertical coordinate of its upper-left corner, R_track^up is the horizontal coordinate of its lower-right corner, and B_track^up is the vertical coordinate of its lower-right corner.
When determining the tracking-period full-body box according to Ratio_WH^full and Ratio_up^full, the fourth obtaining unit is specifically configured to determine the upper-left horizontal coordinate of the tracking-period full-body box, where, if L_det^full = L_det^up, the upper-left horizontal coordinate L_track^full, the upper-left vertical coordinate T_track^full, the lower-right horizontal coordinate R_track^full and the lower-right vertical coordinate B_track^full of the tracking-period full-body box are computed by the formulas given as images appb-000108 to appb-000112 in the original, and determine the tracking-period full-body box Track^full (image appb-000113). The fourth obtaining unit performs steps C331 to C335 of the first aspect; details are not repeated.
With the electronic device shown in this embodiment, when the upper-left horizontal coordinate L_det^full of the detection-period full-body box equals the upper-left horizontal coordinate L_det^up of the upper-body detection box, the tracking-period full-body box can be calculated, so that even if the pedestrian's posture changes greatly, the tracking-period full-body box is still obtained, which avoids losing the to-be-tracked pedestrian and improves tracking accuracy.
With reference to any one of the second aspect to the eighth implementation manner of the second aspect, in a ninth implementation manner of the second aspect, the third obtaining unit is specifically configured to scatter a plurality of particles centered on the upper-body detection box, where the ratio of the width of any one of the plurality of particles to its height is the same as the ratio of the width of the upper-body detection box to its height, and determine the upper-body tracking box, the upper-body tracking box being the particle, among the plurality of particles, most similar to the upper-body detection box. The third obtaining unit performs steps D11 and D12 of the first aspect; details are not repeated.
With the electronic device shown in this embodiment, a plurality of particles are scattered centered on the upper-body detection box, so an accurate upper-body tracking box is obtained within the tracking period; obtaining the upper-body tracking box from the upper-body detection box matches the pedestrian's different postures, achieving accurate tracking of the to-be-tracked pedestrian.
With reference to the eighth implementation manner of the second aspect, in a tenth implementation manner of the second aspect, the fourth obtaining unit is specifically configured to determine the upper-left horizontal coordinate of the tracking-period full-body box, where, if L_det^full = L_det^down, the upper-left horizontal coordinate L_track^full, the upper-left vertical coordinate T_track^full, the lower-right horizontal coordinate R_track^full and the lower-right vertical coordinate B_track^full of the tracking-period full-body box are computed by the formulas given as images appb-000117 to appb-000121 in the original, and determine the tracking-period full-body box Track^full (image appb-000122). The fourth obtaining unit performs steps E11 to E15 of the first aspect; details are not repeated.
With the electronic device shown in this embodiment, when the upper-left horizontal coordinate L_det^full of the detection-period full-body box equals the upper-left horizontal coordinate L_det^down of the lower-body detection box, the tracking-period full-body box can be calculated, so that even if the pedestrian's posture changes greatly, the tracking-period full-body box is still obtained, which avoids losing the to-be-tracked pedestrian and improves tracking accuracy.
With reference to any one of the second aspect to the tenth implementation manner of the second aspect, in an eleventh implementation manner of the second aspect, the electronic device further includes:
a fifth obtaining unit, configured to obtain a target image frame sequence of the to-be-tracked video, where the target image frame sequence includes one or more consecutive image frames and precedes the detection period; the fifth obtaining unit performs step E21 of the first aspect;
a sixth obtaining unit, configured to obtain a background area of the to-be-tracked video according to the target image frame sequence; the sixth obtaining unit performs step E22 of the first aspect;
a seventh obtaining unit, configured to subtract, within the detection period, the background area from any image frame of the to-be-tracked video to obtain the foreground area of that image frame; the seventh obtaining unit performs step E23 of the first aspect; and
an eighth obtaining unit, configured to detect the foreground area of any image frame of the to-be-tracked video to obtain the to-be-tracked pedestrian; the eighth obtaining unit performs step E24 of the first aspect.
With reference to the eleventh implementation manner of the second aspect, in a twelfth implementation manner of the second aspect, the first obtaining unit is specifically configured to determine a target image frame, the target image frame being an image frame in which the to-be-tracked pedestrian appears, and obtain the upper-body detection box within the foreground area of the target image frame; the first obtaining unit performs steps B11 and B12 of the first aspect.
As can be seen, with the electronic device shown in this embodiment, detection and tracking of the to-be-tracked pedestrian are performed on the foreground area of any image frame of the video; that is, both the detection process and the tracking process are executed on the image foreground, which greatly reduces the number of image windows to be processed, i.e., the search space for the pedestrian, thereby shortening the time needed for tracking and improving tracking efficiency.
The embodiments of the present invention provide a pedestrian tracking method and an electronic device. The method can obtain, within a detection period, the upper-body detection box of a to-be-tracked pedestrian appearing in a to-be-tracked video, obtain the pedestrian's detection-period full-body box according to the upper-body detection box, and obtain, within the tracking period, according to the detection-period full-body box, the tracking-period full-body box corresponding to the upper-body tracking box. As can be seen, the pedestrian can be tracked through the tracking-period full-body box. Because the width-to-height ratio of the detection-period full-body box is variable, even if the pedestrian appears in an unusual posture within the detection period, the method of this embodiment still obtains an accurate tracking-period full-body box, so accurate tracking of the to-be-tracked pedestrian is achieved even when the pedestrian appears in an unusual posture.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 is a schematic structural diagram of an embodiment of an electronic device provided by the present invention;
FIG. 2 is a schematic structural diagram of an embodiment of a processor provided by the present invention;
FIG. 3 is a flowchart of steps of an embodiment of a pedestrian tracking method provided by the present invention;
FIG. 4 is a schematic application diagram of an embodiment of the pedestrian tracking method provided by the present invention;
FIG. 5 is a schematic application diagram of another embodiment of the pedestrian tracking method provided by the present invention;
FIG. 6 is a schematic application diagram of another embodiment of the pedestrian tracking method provided by the present invention;
FIG. 7 is a schematic application diagram of another embodiment of the pedestrian tracking method provided by the present invention;
FIG. 8 is a schematic application diagram of another embodiment of the pedestrian tracking method provided by the present invention;
FIG. 9 is a flowchart of steps of an embodiment of a pedestrian query method provided by the present invention;
FIG. 10 is a schematic diagram of execution steps of an embodiment of the pedestrian query method provided by the present invention;
FIG. 11 is a schematic structural diagram of another embodiment of the electronic device provided by the present invention.
DETAILED DESCRIPTION OF EMBODIMENTS
The embodiments of the present invention provide a pedestrian tracking method. For a better understanding of the pedestrian tracking method shown in the embodiments of the present invention, the specific structure of an electronic device capable of implementing the method is first described in detail below.
The specific structure of the electronic device of this embodiment is described with reference to FIG. 1, which is a schematic structural diagram of an embodiment of an electronic device provided by the present invention.
The electronic device 100 may vary considerably with configuration or performance and may include one or more processors 122. The processor 122 is not limited in this embodiment, as long as it has computing and image-processing capabilities sufficient to implement the pedestrian tracking method of this embodiment; optionally, the processor 122 may be a central processing unit (CPU).
The electronic device 100 further includes one or more storage media 130 (for example, one or more mass storage devices) for storing an application program 142 or data 144. The storage medium 130 may be transient or persistent storage. The program stored in the storage medium 130 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the electronic device. Furthermore, the processor 122 may be configured to communicate with the storage medium 130 and execute, on the electronic device 100, the series of instruction operations in the storage medium 130.
The electronic device 100 may also include one or more power supplies 126, one or more input/output interfaces 158, and/or one or more operating systems 141, for example Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM and the like.
In the embodiments of the present invention, the electronic device may be any device having image-processing and computing capabilities, including but not limited to a server, a video camera, a mobile computer, a tablet computer and the like.
If the electronic device of this embodiment executes the pedestrian tracking method of this embodiment, the input/output interface 158 can be used to receive massive surveillance video and can display the detection process, the pedestrian tracking results and the like; the processor 122 executes the pedestrian detection and pedestrian tracking algorithms; and the storage medium 130 stores the operating system and application programs and can save intermediate results of the pedestrian tracking process. As can be seen, in executing the method of this embodiment, the electronic device can find, in massive surveillance video, the target pedestrian to be tracked and give information such as the time and place at which the target pedestrian appears in the surveillance video.
The specific structure of the processor 122 used to implement the pedestrian tracking method of this embodiment is described in detail below with reference to FIG. 2. Specifically, the processor 122 includes a metadata extraction unit 21 and a query unit 22. More specifically, the metadata extraction unit 21 includes an object extraction module 211, a feature extraction module 212 and an index construction module 213, and the query unit 22 includes a feature extraction module 221, a feature fusion module 222 and an index and query module 223.
In this embodiment, the processor 122 can execute the program stored in the storage medium 130 to implement the function of any module in any unit of the processor 122 shown in FIG. 2.
Based on the electronic device shown in FIG. 1 and FIG. 2, the specific execution procedure of the pedestrian tracking method of this embodiment is described in detail below with reference to FIG. 3, where FIG. 3 is a flowchart of steps of an embodiment of the pedestrian tracking method provided by the present invention.
It should first be made clear that the execution body of the pedestrian tracking method of this embodiment is the electronic device, specifically one or more modules of the processor 122, such as the object extraction module 211.
The pedestrian tracking method of this embodiment includes:
Step 301: Obtain a to-be-tracked video.
Specifically, the object extraction module 211 included in the processor 122 obtains the to-be-tracked video.
Optionally, if the electronic device does not include a camera (for example, the electronic device is a server), the electronic device communicates with a plurality of cameras through the input/output interface 158. The cameras photograph to-be-tracked pedestrians to generate to-be-tracked videos. Correspondingly, the electronic device receives, through the input/output interface 158, the to-be-tracked video sent by a camera, and the object extraction module 211 of the processor 122 then obtains the to-be-tracked video received by the input/output interface 158.
Optionally, if the electronic device includes a camera (for example, the electronic device is a video camera), the object extraction module 211 of the processor 122 obtains the to-be-tracked video shot by the camera of the electronic device.
In practical applications, the to-be-tracked video of this embodiment is generally massive video footage.
It should be made clear that the manner of obtaining the to-be-tracked video in this embodiment is an optional example and is not limited, as long as the object extraction module 211 can obtain the to-be-tracked video used for pedestrian tracking.
Step 302: Obtain a target image frame sequence.
Specifically, the object extraction module 211 obtains the target image frame sequence. More specifically, after obtaining the to-be-tracked video, the object extraction module 211 determines the target image frame sequence in the to-be-tracked video.
The target image frame sequence is the first M image frames of the to-be-tracked video. The specific value of M is not limited in this embodiment, as long as M is a positive integer greater than 1. The target image frame sequence includes one or more consecutive image frames.
Step 303: Obtain a background area of the to-be-tracked video.
Specifically, the object extraction module 211 learns the target image frame sequence of the to-be-tracked video to obtain the background area of the video; for the background area of this embodiment, see FIG. 4.
Optionally, the object extraction module 211 may obtain the background area as follows: in any image frame of the target image frame sequence, stationary objects are obtained through a static background model, and the stationary objects are determined to be the background area of the to-be-tracked video.
It should be made clear that this description of obtaining the background area is an optional example and is not limited; for example, the object extraction module 211 may also use a frame-difference method, an optical-flow method or the like, as long as it can obtain the background area. It should also be made clear that step 303 of this embodiment is an optional step.
Step 304: Determine a detection period and a tracking period of the to-be-tracked video.
Specifically, the object extraction module 211 determines the detection period T1 and the tracking period T2.
Optionally, the detection period T1 of this embodiment is included in the tracking period T2, and the duration of the detection period T1 is shorter than that of the tracking period T2. For example, the duration of the tracking period T2 may be 10 minutes, the duration of the detection period T1 is 2 seconds, and the first two seconds of the 10-minute tracking period T2 form the detection period T1.
Optionally, the detection period T1 may instead not be included in the tracking period T2: the detection period T1 precedes the tracking period T2, and the duration of the detection period T1 is shorter than that of the tracking period T2. For example, the duration of the detection period T1 is 2 seconds, the duration of the tracking period T2 may be 10 minutes, and after the detection period T1 is executed, the tracking period T2 continues to be executed.
It should be made clear that these descriptions of the durations of the detection period T1 and the tracking period T2 are optional examples and are not limited. This embodiment uses as an example the case in which the detection period T1 is included in the tracking period T2.
More specifically, the start frame of the detection period T1 of this embodiment is the t-th frame of the to-be-tracked video, and t is greater than M; as can be seen, the target image frame sequence of this embodiment precedes the detection period T1.
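To illustrate the nesting of the periods described above (the detection period T1 occupying the start of each tracking period T2, beginning at frame t), a minimal scheduling sketch follows; the frame rate and the function name are hypothetical, not taken from the original.

```python
FPS = 25                     # assumed frame rate
DETECT_LEN = 2 * FPS         # detection period T1: 2 seconds of frames
TRACK_LEN = 10 * 60 * FPS    # tracking period T2: 10 minutes of frames

def is_detection_frame(frame_idx, t):
    """Return True while a frame falls inside the detection period T1,
    which occupies the first DETECT_LEN frames of every tracking
    period T2; the first detection period starts at frame t."""
    offset = frame_idx - t
    return offset >= 0 and offset % TRACK_LEN < DETECT_LEN
```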
Step 305: Obtain a foreground area of any image frame of the to-be-tracked video.
Specifically, within the detection period, the object extraction module 211 subtracts the background area from any image frame of the to-be-tracked video to obtain the foreground area of that image frame. FIG. 5 shows an obtained foreground area of an image frame of the to-be-tracked video; specifically, the white pixels in FIG. 5 are the foreground area.
Specifically, having obtained the background area, the object extraction module 211 computes the difference between any area of any image frame and the background area to obtain a target value; as can be seen, each different area of any image frame corresponds to one target value. If the target value is greater than or equal to a preset threshold, the area of the image frame corresponding to that target value is a motion area. The preset threshold is set in advance, and its magnitude is not limited in this embodiment, as long as the motion area of any image frame can be determined according to it. When a motion area is detected, the motion area is determined to be the foreground area of the image frame.
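Step 305 is classic background subtraction: per-pixel differences against the learned background are thresholded to find the motion (foreground) area. A minimal NumPy sketch follows; the grayscale input and the threshold value of 30 are assumptions.

```python
import numpy as np

def foreground_mask(frame_gray, background_gray, threshold=30):
    """Subtract the learned background from an image frame; pixels whose
    absolute difference is greater than or equal to the preset threshold
    form the foreground (motion) area of this frame."""
    diff = np.abs(frame_gray.astype(np.int16) - background_gray.astype(np.int16))
    return diff >= threshold  # boolean mask of the foreground area
```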
Step 306: Obtain the to-be-tracked pedestrian.
The object extraction module 211 detects the foreground area of any image frame of the to-be-tracked video to obtain the to-be-tracked pedestrian. The specific number of to-be-tracked pedestrians detected by the object extraction module 211 is not limited in this embodiment.
Step 307: Obtain an upper-body detection box of the to-be-tracked pedestrian.
To obtain the upper-body detection box of this embodiment, the object extraction module 211 first determines a target image frame. Specifically, the target image frame is an image frame in which the to-be-tracked pedestrian appears.
Optionally, if the to-be-tracked pedestrian appears in any single image frame of the to-be-tracked video within the detection period T1, the object extraction module 211 determines that image frame to be the target image frame; that is, the target image frame is the image frame of the video in which the pedestrian appears within the detection period.
Optionally, if the to-be-tracked pedestrian appears in consecutive image frames within the detection period T1, the object extraction module 211 may determine the last of those consecutive frames in which the pedestrian appears to be the target image frame, or a randomly chosen one of them; this is not limited in this embodiment.
Optionally, if the to-be-tracked pedestrian appears at intervals in image frames within the detection period T1, the object extraction module 211 may determine the last of those frames in which the pedestrian appears to be the target image frame, or a randomly chosen one of them; this is not limited in this embodiment.
It should be made clear that the above descriptions of how to determine the target image frame are optional examples and are not limited, as long as the to-be-tracked pedestrian appears in the target image frame.
Having determined the target image frame, the object extraction module 211 obtains the upper-body detection box in the target image frame. The object extraction module 211 may be configured with a first detector used to detect the upper-body detection box. Specifically, the first detector of the object extraction module 211 obtains the upper-body detection box within the foreground area of the target image frame.
The object extraction module 211 detects the to-be-tracked pedestrian within the foreground area of the target image frame; that is, it does not need to examine the background area during detection, which improves detection accuracy while greatly reducing the time needed for detection.
The following describes how the object extraction module 211 obtains the upper-body detection box of the to-be-tracked pedestrian in the foreground area of the target image frame.
Specifically, within the detection period, the object extraction module 211 obtains the upper-body detection box of the to-be-tracked pedestrian appearing in the to-be-tracked video. The upper-body detection box is Det^up = [L_det^up, T_det^up, R_det^up, B_det^up], where L_det^up is the horizontal coordinate of the upper-left corner of the upper-body detection box, T_det^up is the vertical coordinate of its upper-left corner, R_det^up is the horizontal coordinate of its lower-right corner, and B_det^up is the vertical coordinate of its lower-right corner. When detecting the to-be-tracked pedestrian, the object extraction module 211 obtains L_det^up, T_det^up, R_det^up and B_det^up.
Taking the target image frame determined by the object extraction module 211 shown in FIG. 6 as an example, the above method detects the to-be-tracked pedestrians appearing in the target image frame to obtain the upper-body detection box of each of them.
In the detection process shown in FIG. 6: pedestrian 601 appears at the edge of the target image frame and the pedestrian's full body is not shown completely, so the object extraction module 211 cannot obtain an upper-body detection box for pedestrian 601. Pedestrians 602 and 603 both appear clearly and completely in the target image frame, so the object extraction module 211 obtains the upper-body detection boxes of pedestrian 602 and pedestrian 603. The pedestrians in area 604 of the target image frame are not shown clearly enough, so the object extraction module 211 cannot obtain upper-body detection boxes for the pedestrians in area 604.
As can be seen, the object extraction module 211 detects the upper-body detection box only for a to-be-tracked pedestrian shown in the target image frame. Specifically, the to-be-tracked pedestrian is a pedestrian shown completely in the target image frame, that is, both the upper body and the lower body of the pedestrian are shown completely. More specifically, the to-be-tracked pedestrian is a pedestrian whose area shown in the target image frame is greater than or equal to a threshold preset by the object extraction module 211: if the pedestrian's area shown in the target image frame is greater than or equal to the preset threshold, the pedestrian is shown clearly in the target image frame; if the pedestrian's area in the target image frame is smaller than the preset threshold, the object extraction module 211 cannot detect the pedestrian.
Step 308: Obtain a lower-body scan area according to the upper-body detection box.
In this embodiment, after the object extraction module 211 obtains the upper-body detection box of the to-be-tracked pedestrian, it obtains the pedestrian's lower-body scan area according to the upper-body detection box. To obtain the lower-body scan area, the object extraction module 211 needs to obtain a first parameter, a second parameter and a third parameter.
The first parameter is computed from a preset ratio Ratio_default (the formula is given as image appb-000134 in the original). Optionally, Ratio_default is stored in advance by the object extraction module 211 in the storage medium 130 and may be set in advance according to the width-to-height ratio of a human-body detection box (as described in the Background). For example, if the width-to-height ratio of the human-body detection box is determined in advance to be 3:7, the object extraction module 211 may set Ratio_default to 3/7 and store it in the storage medium 130, so that while executing this step the object extraction module 211 can retrieve Ratio_default from the storage medium 130 to calculate the first parameter.
The second parameter and the third parameter are likewise computed from the upper-body detection box (formulas given as images appb-000136 to appb-000138 in the original).
Once the object extraction module 211 has obtained the first parameter, the second parameter and the third parameter, it determines the lower-body scan area ScanArea = [L_s, T_s, R_s, B_s], where L_s is the horizontal coordinate of the upper-left corner of the lower-body scan area, T_s is the vertical coordinate of its upper-left corner, R_s is the horizontal coordinate of its lower-right corner, and B_s is the vertical coordinate of its lower-right corner. Specifically, L_s, T_s, R_s and B_s are computed by the formulas given as images appb-000139 to appb-000144 in the original.
More specifically, paral1, paral2 and paral3 are preset values. Their specific values are not limited in this embodiment; they may be empirical values, or an operator may set different paral1, paral2 and paral3 to configure different lower-body scan areas. imgW is the width of any image frame of the to-be-tracked video within the detection period, and imgH is the height of any such image frame.
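The exact scan-area formulas are preserved only as images, so the sketch below is a hypothetical reconstruction that matches the constraints recoverable from the text: the area lies below the upper-body box, its extent is controlled by the preset parameters paral1, paral2 and paral3, and it is clipped to the frame size imgW x imgH. The particular formulas and default parameter values are assumptions.

```python
def lower_body_scan_area(det_up, img_w, img_h,
                         paral1=0.5, paral2=2.0, paral3=0.3):
    """Hypothetical reconstruction of ScanArea = [L_s, T_s, R_s, B_s]:
    a region under the upper-body box [L, T, R, B], expanded by the
    preset parameters and clipped to the image frame."""
    l_up, t_up, r_up, b_up = det_up
    w_up, h_up = r_up - l_up, b_up - t_up
    l_s = max(0.0, l_up - paral3 * w_up)          # widen to the left
    r_s = min(img_w - 1.0, r_up + paral3 * w_up)  # widen to the right
    t_s = b_up - paral1 * h_up                    # start inside the upper-body box
    b_s = min(img_h - 1.0, b_up + paral2 * h_up)  # extend downward
    return [l_s, t_s, r_s, b_s]
```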
Step 309: Determine whether a lower-body detection box of the to-be-tracked pedestrian is detected within the lower-body scan area; if yes, perform step 310; if no, perform step 313.
Specifically, the object extraction module 211 performs lower-body detection within the lower-body scan area to determine whether the lower-body detection box of the to-be-tracked pedestrian can be detected.
Step 310: Obtain the lower-body detection box.
Specifically, the object extraction module 211 may be provided with a lower-body detector. More specifically, when the lower-body detector of the object extraction module 211 performs lower-body detection within the lower-body scan area and determines that the lower-body detection box of the to-be-tracked pedestrian can be obtained, it determines the lower-body detection box Det^down = [L_det^down, T_det^down, R_det^down, B_det^down], where L_det^down is the horizontal coordinate of the upper-left corner of the lower-body detection box, T_det^down is the vertical coordinate of its upper-left corner, R_det^down is the horizontal coordinate of its lower-right corner, and B_det^down is the vertical coordinate of its lower-right corner.
Step 311: Obtain the detection-period full-body box.
Specifically, the object extraction module 211 obtains the detection-period full-body box according to the upper-body detection box and the lower-body detection box. More specifically, the object extraction module 211 combines the upper-body detection box and the lower-body detection box to form the detection-period full-body box Det^full = [L_det^full, T_det^full, R_det^full, B_det^full], whose upper-left horizontal coordinate L_det^full, upper-left vertical coordinate T_det^full, lower-right horizontal coordinate R_det^full and lower-right vertical coordinate B_det^full are computed by the formulas given as images appb-000151 to appb-000154 in the original.
Step 312: Obtain a first ratio and a second ratio.
Specifically, after obtaining the detection-period full-body box, the object extraction module 211 determines the first ratio of the detection-period full-body box, where the first ratio Ratio_WH^full is the ratio of the width of the detection-period full-body box to its height. The object extraction module 211 also determines the second ratio of the detection-period full-body box; specifically, the second ratio Ratio_up^full is the ratio of the height of the upper-body detection box to the height of the detection-period full-body box.
Step 313: Obtain a third ratio.
In this embodiment, if the object extraction module 211 performs lower-body detection within the lower-body scan area and does not obtain the lower-body detection box, it obtains a third ratio, the third ratio being a preset ratio of the width of the detection-period full-body box to its height.
Step 314: Obtain the detection-period full-body box.
In this embodiment, when the object extraction module 211 performs lower-body detection within the lower-body scan area and does not obtain the lower-body detection box, it obtains the detection-period full-body box Det^full, whose upper-left horizontal coordinate L_det^full, upper-left vertical coordinate T_det^full, lower-right horizontal coordinate R_det^full and lower-right vertical coordinate B_det^full are computed from the upper-body detection box and the preset ratio (formulas given as images appb-000159 to appb-000162 in the original).
Step 315: Determine a fourth ratio of the detection-period full-body box.
The object extraction module 211 determines the fourth ratio to be the ratio of the height of the upper-body detection box to the height of the detection-period full-body box.
After step 312 or step 315 is performed, step 316 of this embodiment continues to be performed.
Step 316: Determine an upper-body tracking box.
In this embodiment, the object extraction module 211 initializes the detection-period full-body box obtained within the detection period T1 as the tracking target, so that within the tracking period T2 it can track the to-be-tracked pedestrian serving as the tracking target.
It should be made clear that the number of to-be-tracked pedestrians determined through the above steps may be at least one; if there are multiple to-be-tracked pedestrians, each of them needs to be taken as the tracking target separately for tracking. For example, if the above steps determine the to-be-tracked pedestrians to be pedestrian A, pedestrian B, pedestrian C and pedestrian D, then pedestrian A needs to be set as the tracking target for the subsequent tracking steps, pedestrian B needs to be set as the tracking target for the subsequent tracking steps, and so on; that is, every to-be-tracked pedestrian in the to-be-tracked video is set as a tracking target for the subsequent tracking steps.
In tracking the to-be-tracked pedestrian, the object extraction module 211 first determines the upper-body detection box and then samples with a normal distribution centered on the upper-body detection box, that is, scatters a plurality of particles around the upper-body detection box and determines the upper-body tracking box among the plurality of particles.
For better understanding, the following description uses an application scenario. Suppose the object extraction module 211 determines the upper-body detection box in frame N1 within the detection period T1 of the to-be-tracked video and tracks the pedestrian in frame N2 within the tracking period T2, where frame N2 is any frame within the tracking period T2. Because the pedestrian moves in the video, the pedestrian's position in frame N1 differs from that in frame N2; to track the pedestrian, the object extraction module 211 needs to scatter a plurality of particles around the pedestrian's upper-body detection box.
Specifically, a fifth ratio of any one of the plurality of particles is the same as a sixth ratio of the upper-body detection box, where the fifth ratio is the ratio of the width of any particle to the height of that particle and the sixth ratio is the ratio of the width of the upper-body detection box to the height of the upper-body detection box. As can be seen, with the method shown in this embodiment, any particle scattered by the object extraction module 211 around the upper-body detection box is a rectangular box having the same width-to-height ratio as the upper-body detection box.
The object extraction module 211 determines the upper-body tracking box among the plurality of particles. Specifically, the object extraction module 211 judges that the particle most similar to the upper-body detection box among the plurality of particles is the upper-body tracking box Track^up = [L_track^up, T_track^up, R_track^up, B_track^up], where L_track^up is the horizontal coordinate of the upper-left corner of the upper-body tracking box, T_track^up is the vertical coordinate of its upper-left corner, R_track^up is the horizontal coordinate of its lower-right corner, and B_track^up is the vertical coordinate of its lower-right corner.
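Step 316 is a particle-filter style search: candidate boxes are sampled from a normal distribution centered on the upper-body detection box, each keeping its width-to-height ratio, and the candidate most similar to the detection box wins. The sketch below assumes a caller-supplied appearance descriptor (hist_fn) compared by Euclidean distance; the spread parameters are also assumptions.

```python
import numpy as np

def sample_particles(det_up, num_particles=100, sigma=10.0, rng=None):
    """Scatter candidate boxes around the upper-body detection box
    [L, T, R, B] using a normal distribution centered on it; every
    particle keeps the detection box's width-to-height ratio."""
    rng = rng or np.random.default_rng()
    l, t, r, b = det_up
    w, h = r - l, b - t
    particles = []
    for _ in range(num_particles):
        dx, dy = rng.normal(0.0, sigma, size=2)   # shift of the box center
        scale = rng.normal(1.0, 0.05)             # size change, ratio preserved
        nw, nh = w * scale, h * scale
        cx, cy = (l + r) / 2 + dx, (t + b) / 2 + dy
        particles.append([cx - nw / 2, cy - nh / 2, cx + nw / 2, cy + nh / 2])
    return particles

def most_similar_particle(frame, det_feature, particles, hist_fn):
    """The upper-body tracking box is the particle most similar to the
    detection box; hist_fn is an assumed appearance descriptor."""
    def score(p):
        l, t, r, b = (int(v) for v in p)
        return -np.linalg.norm(hist_fn(frame[t:b, l:r]) - det_feature)
    return max(particles, key=score)
```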
Step 317: Obtain a tracking-period full-body box of the to-be-tracked pedestrian.
In this embodiment, the object extraction module 211 obtains, according to the detection-period full-body box, the tracking-period full-body box corresponding to the upper-body tracking box; the tracking-period full-body box is used to track the to-be-tracked pedestrian. The following details how the object extraction module 211 obtains the tracking-period full-body box.
After obtaining the detection-period full-body box and the upper-body tracking box, as shown in FIG. 7, the object extraction module 211 judges whether the horizontal coordinate L_det^up of the upper-left corner of the upper-body detection box 701 equals the horizontal coordinate L_det^full of the upper-left corner of the detection-period full-body box 702.
If, as shown in FIG. 7(a), the object extraction module 211 judges that L_det^full = L_det^up, the object extraction module 211 determines the upper-left horizontal coordinate L_track^full, the upper-left vertical coordinate T_track^full, the lower-right horizontal coordinate R_track^full and the lower-right vertical coordinate B_track^full of the tracking-period full-body box by the formulas given as images appb-000172 to appb-000177 in the original; for the specific description of the quantities involved, see the above steps, and details are not repeated in this step. The object extraction module 211 thus determines the tracking-period full-body box Track^full = [L_track^full, T_track^full, R_track^full, B_track^full].
After obtaining the detection-period full-body box and the upper-body tracking box, as shown in FIG. 7, the object extraction module 211 also judges whether the horizontal coordinate L_det^full of the upper-left corner of the detection-period full-body box 702 equals the horizontal coordinate L_det^down of the upper-left corner of the lower-body detection box 703.
If, as shown in FIG. 7(b), the horizontal coordinate L_det^full of the upper-left corner of the detection-period full-body box 702 equals the horizontal coordinate L_det^down of the upper-left corner of the lower-body detection box 703, the object extraction module 211 determines the upper-left horizontal coordinate L_track^full, the upper-left vertical coordinate T_track^full, the lower-right horizontal coordinate R_track^full and the lower-right vertical coordinate B_track^full of the tracking-period full-body box by the formulas given as images appb-000184 to appb-000188 in the original, and thus determines the tracking-period full-body box Track^full; for the specific description of the quantities involved, see the above steps, and details are not repeated in this step.
With the method shown in this embodiment, within the tracking period T2, the to-be-tracked pedestrian is tracked in the to-be-tracked video through the tracking-period full-body box.
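The assignment formulas of step 317 exist only as images in the original. What is recoverable is that the ratios Ratio_WH^full and Ratio_up^full are carried from the detection period into the tracking period and applied to the upper-body tracking box. The sketch below is one hypothetical reconstruction under that reading; anchoring the scaled box on the tracked center is an assumption.

```python
def tracking_full_body_box(track_up, ratio_up_full, ratio_wh_full):
    """Hypothetical reconstruction: scale the upper-body tracking box
    [L, T, R, B] into a tracking-period full-body box using the
    detection-period ratios ratio_up_full = H_up / H_full and
    ratio_wh_full = W_full / H_full."""
    l, t, r, b = track_up
    h_full = (b - t) / ratio_up_full   # full height from the upper-body height
    w_full = h_full * ratio_wh_full    # full width from the aspect ratio
    cx = (l + r) / 2                   # keep the tracked horizontal center
    return [cx - w_full / 2, t, cx + w_full / 2, t + h_full]
```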
For a better understanding of the method of the embodiments of the present invention, the beneficial effects of the pedestrian tracking method of this embodiment are detailed below with reference to the application scenario shown in FIG. 8.
Within the detection period T1, the object extraction module 211 obtains the upper-body detection box 801 of the to-be-tracked pedestrian shown in FIG. 8; for the specific process of obtaining the upper-body detection box 801, see the above steps, and details are not repeated in this application scenario. Within the detection period T1, the object extraction module 211 obtains the lower-body detection box 802 shown in FIG. 8; for the specific process, see the above embodiments. Within the detection period T1, the object extraction module 211 obtains the detection-period full-body box 803 shown in FIG. 8; for the specific process, see the above embodiments.
After the detection-period full-body box 803 is obtained, the ratio Ratio_WH^full of the width of the detection-period full-body box 803 to its height can be obtained, as can the ratio Ratio_up^full of the height of the upper-body detection box 801 to the height of the detection-period full-body box 803.
In the process of obtaining the detection-period full-body box 803 shown in this embodiment, the box obtained by the object extraction module 211 is the combination of the pedestrian's upper-body detection box 801 and lower-body detection box 802. As can be seen, the width-to-height ratio of the detection-period full-body box 803 obtained by the object extraction module 211 is variable; therefore, even when the to-be-tracked pedestrian appears in an unusual posture within the detection period T1 (for example, with the legs spread particularly wide, changing the proportion of the pedestrian's upper body to lower body), the object extraction module 211 still obtains an accurate detection-period full-body box 803 by combining the separately obtained upper-body and lower-body detection boxes.
Based on the variable width-to-height ratio of the detection-period full-body box 803 of this embodiment, the object extraction module 211 accurately captures changes in the pedestrian's posture, so the detection-period full-body box 803 accurately captures posture changes; as can be seen, no matter how the pedestrian's posture changes, an accurate detection-period full-body box 803 is still obtained.
Within the tracking period T2, the object extraction module 211 can obtain the upper-body tracking box 804 and the tracking-period full-body box 805; for the specific process, see the above steps, and details are not repeated in this embodiment.
The object extraction module 211 of this embodiment carries Ratio_WH^full and Ratio_up^full over into the tracking period T2 and thus obtains a more accurate tracking-period full-body box 805 based on the variable Ratio_WH^full and Ratio_up^full, so that within the tracking period T2, even if the pedestrian's posture changes, accurate tracking of the pedestrian is still achieved.
In the specific process of tracking a pedestrian, step 304 to step 317 of this embodiment may be executed multiple times to achieve more accurate tracking of the to-be-tracked pedestrian. For example, after the object extraction module 211 finishes one tracking period T2, it may repeatedly execute the tracking period T2 in subsequent time; the number of executions of the tracking period T2 is not limited in this embodiment. Because the object extraction module 211 of this embodiment may execute the tracking period T2 multiple times, it may update the specific values of Ratio_WH^full and Ratio_up^full multiple times according to the detection results, so as to obtain a more accurate tracking-period full-body box within the tracking period T2 and thereby track the pedestrian accurately.
With the method shown in this embodiment, detection and tracking of the to-be-tracked pedestrian can be performed on the foreground area of any image frame of the to-be-tracked video; that is, both the detection process and the tracking process are executed on the image foreground. This greatly reduces the number of image windows to be processed, i.e., reduces the electronic device's search space for the pedestrian, which shortens the time needed for tracking and improves the electronic device's tracking efficiency.
Based on the electronic device shown in FIG. 1 and FIG. 2, the pedestrian query method provided by this embodiment is described in detail below with reference to FIG. 9 and FIG. 10, where FIG. 9 is a flowchart of steps of an embodiment of the pedestrian query method provided by the present invention, and FIG. 10 is a schematic diagram of execution steps of an embodiment of the pedestrian query method provided by the present invention.
It should first be made clear that the descriptions of the execution bodies of the pedestrian query method in this embodiment are optional examples and are not limited; that is, the execution body of each step may be any module of the processor 122 shown in FIG. 2, or a module not shown in FIG. 2, as long as the electronic device can execute the pedestrian query method of this embodiment.
Step 901: Obtain a to-be-tracked video.
For the specific execution process of step 901, see step 301 shown in FIG. 3; details are not repeated in this embodiment.
Step 902: Detect and track the to-be-tracked pedestrians in the to-be-tracked video to obtain a pedestrian sequence.
The object extraction module 211 detects and tracks the to-be-tracked pedestrians in the to-be-tracked video; for the specific execution process, see steps 302 to 317 of the above embodiment, and details are not repeated. Specifically, if the object extraction module 211 obtains multiple to-be-tracked pedestrians through the above steps, it aggregates them after the above steps to form the pedestrian sequence.
The pedestrian sequence obtained by the object extraction module 211 includes multiple subsequences. Any one of the subsequences is a target subsequence, which corresponds to a target to-be-tracked pedestrian, the target to-be-tracked pedestrian being one of the multiple to-be-tracked pedestrians determined in the above steps. The target subsequence of this embodiment includes multiple image frames, any one of which includes the target to-be-tracked pedestrian; any image frame included in the target subsequence carries the tracking-period full-body box, described in the above steps, corresponding to the target to-be-tracked pedestrian.
As can be seen, the pedestrian sequence of this embodiment includes multiple subsequences, any one of the subsequences includes multiple image frames, and the image frames included in any subsequence show the tracking-period full-body box of the to-be-tracked pedestrian corresponding to that subsequence.
As shown in FIG. 10, this embodiment uses as an example an electronic device that does not include a camera. The electronic device can communicate with a camera cluster 105, which includes multiple cameras; each camera can photograph to-be-tracked pedestrians to generate a to-be-tracked video, and the electronic device can receive the to-be-tracked video sent by a camera. The object extraction module 211 may create different subsequences 1001 for different target to-be-tracked pedestrians; the subsequence 1001 includes the multiple image frames, corresponding to the target to-be-tracked pedestrian, in which the target to-be-tracked pedestrian appears.
Step 903: Send the pedestrian sequence to the feature extraction module.
In this embodiment, the object extraction module 211 sends the pedestrian sequence to the feature extraction module 212.
Step 904: Obtain features of the pedestrian sequence.
The feature extraction module 212 of this embodiment takes the pedestrian sequence as input and performs feature extraction on it. Specifically, the feature extraction module 212 may analyze the pedestrian sequence to check whether each pixel in any image frame included in the pedestrian sequence represents a feature, thereby extracting the features of the pedestrian sequence. Specifically, the features of the pedestrian sequence are the feature sets of all the target to-be-tracked pedestrians included in the pedestrian sequence.
For example, if the pedestrian sequence includes five to-be-tracked pedestrians A, B, C, D and E, the feature extraction module 212 may perform feature extraction on pedestrian A's image frames to obtain the feature set of the target to-be-tracked pedestrian corresponding to pedestrian A, perform feature extraction on pedestrian B's image frames to obtain the feature set of the target to-be-tracked pedestrian corresponding to pedestrian B, and so on until the features of every pedestrian in the pedestrian sequence have been extracted.
As shown in FIG. 10, the feature set 1002 created by the feature extraction module 212 includes the target identifier ID corresponding to a target to-be-tracked pedestrian and the multiple image features corresponding to that pedestrian. Taking target to-be-tracked pedestrian A as an example, the feature set 1002 corresponding to pedestrian A includes the target identifier ID corresponding to pedestrian A and the multiple image features corresponding to pedestrian A.
As can be seen, the feature extraction module 212 of this embodiment can create correspondences between different target to-be-tracked pedestrians and different target identifier IDs, and between different target identifier IDs and multiple image features.
Step 905: Send the features of the pedestrian sequence to the index construction module.
The feature extraction module 212 of this embodiment can send the obtained features of the pedestrian sequence to the index construction module 213.
Step 906: Establish an index list.
After receiving the features of the pedestrian sequence, the index construction module 213 of this embodiment establishes the index list. The correspondences included in the index list are those between different target to-be-tracked pedestrians and different target identifier IDs, and between different target identifier IDs and multiple image features; moreover, through the index list the index construction module 213 can also create, for each different target identifier ID, any information such as the time and place at which the corresponding target to-be-tracked pedestrian appears in the to-be-tracked video.
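In code, the index list of step 906 can be pictured as a mapping from each target identifier ID to that pedestrian's image features and appearance records. A minimal sketch follows; the field names are illustrative only and not taken from the original.

```python
from dataclasses import dataclass, field

@dataclass
class PedestrianIndexEntry:
    """One index-list entry: a target identifier ID together with the
    pedestrian's image features and the times/places of appearance."""
    target_id: int
    features: list = field(default_factory=list)   # image features of this pedestrian
    sightings: list = field(default_factory=list)  # (time, place) records

# The index list maps target ID -> entry and is persisted to the
# storage medium 130.
index_list: dict[int, PedestrianIndexEntry] = {}
```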
Step 907: Store the index list in the storage medium.
After creating the index list, the index construction module 213 of this embodiment stores the index list in the storage medium 130.
Through steps 901 to 907 of this embodiment, different pedestrians in massive to-be-tracked videos can be classified, laying the groundwork for subsequent queries for a tracking target. When a tracking target needs to be queried, the subsequent steps can be executed.
Step 908: Receive a tracking target.
This embodiment enables search-by-image: when a query is made, an image in which the tracking target appears can be input to the feature extraction module 221. Taking FIG. 10 as an example, to query a tracking target, the image 1003 in which the tracking target appears is input to the feature extraction module 221.
Step 909: Perform feature extraction on the tracking target.
Specifically, the feature extraction module 221 of this embodiment can analyze the image in which the tracking target appears to obtain the features of the tracking target; through the method of this embodiment, multiple features corresponding to the tracking target can be obtained.
Step 910: Fuse the different features of the tracking target.
In this embodiment, the feature fusion module 222 can fuse the different features of the tracking target. As shown in FIG. 10, the feature fusion module 222 fuses the different features of the tracking target to obtain a fused feature; as can be seen, the fused feature of this embodiment corresponds to the tracking target.
Step 911: Send the fused feature to the index and query module.
Step 912: Query the tracking target.
The index and query module 223 of this embodiment queries the tracking target based on the fused feature corresponding to the tracking target. Specifically, the index and query module 223 matches the fused feature against the index list stored in the storage medium 130 to find the target identifier ID corresponding to the fused feature, so that the index and query module 223 can obtain, according to the index list, any information such as the time and place at which the pedestrian corresponding to that target identifier ID appears in the to-be-tracked video; in this embodiment, the pedestrian corresponding to the target identifier ID is the tracking target.
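Step 912 amounts to matching the fused query feature against the stored index list and reading off the appearances of the best-matching target ID. A sketch using the PedestrianIndexEntry structure assumed above, with cosine similarity as an assumed matching rule:

```python
import numpy as np

def query_tracking_target(fused_feature, index_list):
    """Return the target ID whose stored features best match the fused
    query feature, plus where and when that pedestrian appears.
    Cosine similarity is an assumed choice of matching rule."""
    def cosine(a, b):
        return float(np.dot(a, b) /
                     (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    best_id, best_score = None, -1.0
    for target_id, entry in index_list.items():
        if not entry.features:
            continue  # skip IDs without stored features
        score = max(cosine(fused_feature, np.asarray(f)) for f in entry.features)
        if score > best_score:
            best_id, best_score = target_id, score
    sightings = index_list[best_id].sightings if best_id is not None else []
    return best_id, sightings
```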
As can be seen, with the method shown in this embodiment, it is only necessary to receive an image in which a tracking target appears to obtain, from massive videos, information such as the times and places at which the tracking target appears in those videos.
For the beneficial effects obtained by detecting and tracking the to-be-tracked pedestrian with the method of this embodiment, see the above embodiments; details are not repeated here.
With the method shown in this embodiment, in the process of searching for a tracking target, the tracking target can be located quickly and accurately in massive to-be-tracked videos, so that information such as the times and places at which the tracking target appears in those videos is obtained quickly.
The application scenarios of the method of this embodiment are not limited. For example, it can be used for search-by-image in a safe-city system to quickly obtain information related to a tracking target, and it can also be applied to trajectory generation and analysis for persons, people counting, pedestrian warning in driver assistance, and the like. In short, whenever video containing pedestrians needs to be analyzed intelligently, the embodiments of the present invention can be used for pedestrian detection and tracking to extract information such as the pedestrian's position and trajectory.
FIG. 1 describes the specific structure of the electronic device from the perspective of physical hardware. The structure of the electronic device is described below with reference to FIG. 11 from the perspective of the procedure for executing the pedestrian tracking method of the above embodiment, so that the electronic device of this embodiment can execute the pedestrian tracking method shown above.
The electronic device includes:
a first determining unit 1101, configured to determine a detection period and a tracking period of a to-be-tracked video;
a fifth obtaining unit 1102, configured to obtain a target image frame sequence of the to-be-tracked video, where the target image frame sequence includes one or more consecutive image frames and precedes the detection period;
a sixth obtaining unit 1103, configured to obtain a background area of the to-be-tracked video according to the target image frame sequence;
a seventh obtaining unit 1104, configured to subtract, within the detection period, the background area from any image frame of the to-be-tracked video to obtain the foreground area of that image frame; and
an eighth obtaining unit 1105, configured to detect the foreground area of any image frame of the to-be-tracked video to obtain the to-be-tracked pedestrian.
Optionally, the fifth obtaining unit 1102 to the eighth obtaining unit 1105 of this embodiment are optional units; in specific applications, the electronic device may not include the fifth obtaining unit 1102 to the eighth obtaining unit 1105.
a first obtaining unit 1106, configured to obtain, within the detection period, the upper-body detection box of the to-be-tracked pedestrian appearing in the to-be-tracked video, where the upper-body detection box is Det^up = [L_det^up, T_det^up, R_det^up, B_det^up], L_det^up being the horizontal coordinate of the upper-left corner of the upper-body detection box, T_det^up the vertical coordinate of its upper-left corner, R_det^up the horizontal coordinate of its lower-right corner, and B_det^up the vertical coordinate of its lower-right corner.
Optionally, the first obtaining unit 1106 is specifically configured to determine a target image frame, the target image frame being an image frame in which the to-be-tracked pedestrian appears, and obtain the upper-body detection box within the foreground area of the target image frame.
a second obtaining unit 1107, configured to obtain the detection-period full-body box of the to-be-tracked pedestrian according to the upper-body detection box.
Optionally, the second obtaining unit 1107 is specifically configured to obtain a lower-body scan area according to the upper-body detection box and, if a lower-body detection box is obtained by performing lower-body detection within the lower-body scan area, obtain the detection-period full-body box according to the upper-body detection box and the lower-body detection box.
Optionally, when obtaining the lower-body scan area according to the upper-body detection box, the second obtaining unit 1107 is specifically configured to determine a first parameter (formula given as image appb-000210 in the original), where Ratio_default is a preset ratio, determine a second parameter (image appb-000211), determine a third parameter (image appb-000212), and determine the lower-body scan area according to the first parameter, the second parameter and the third parameter.
Optionally, when obtaining the lower-body scan area according to the upper-body detection box, the second obtaining unit 1107 is specifically configured to determine the lower-body scan area ScanArea = [L_s, T_s, R_s, B_s] according to the first, second and third parameters, where L_s is the horizontal coordinate of the upper-left corner of the lower-body scan area, T_s the vertical coordinate of its upper-left corner, R_s the horizontal coordinate of its lower-right corner, and B_s the vertical coordinate of its lower-right corner, the coordinates being computed by the formulas given as images appb-000213 to appb-000218 in the original; paral1, paral2 and paral3 are preset values, imgW is the width of any image frame of the to-be-tracked video within the detection period, and imgH is the height of any such image frame.
The lower-body detection box is Det^down = [L_det^down, T_det^down, R_det^down, B_det^down], where L_det^down is the horizontal coordinate of the upper-left corner of the lower-body detection box, T_det^down the vertical coordinate of its upper-left corner, R_det^down the horizontal coordinate of its lower-right corner, and B_det^down the vertical coordinate of its lower-right corner.
Optionally, when obtaining the detection-period full-body box of the to-be-tracked pedestrian according to the upper-body detection box, the second obtaining unit 1107 is specifically configured to determine the upper-left horizontal coordinate L_det^full of the detection-period full-body box (formula given as image appb-000224 in the original), the upper-left vertical coordinate T_det^full (image appb-000225), the lower-right horizontal coordinate R_det^full (image appb-000226) and the lower-right vertical coordinate B_det^full (image appb-000227), and determine the detection-period full-body box Det^full (image appb-000228).
Optionally, when obtaining the detection-period full-body box of the to-be-tracked pedestrian according to the upper-body detection box, the second obtaining unit 1107 is specifically configured to, if no lower-body detection box is obtained by performing lower-body detection within the lower-body scan area, determine the upper-left horizontal coordinate L_det^full of the detection-period full-body box (image appb-000229), the upper-left vertical coordinate T_det^full (image appb-000230), the lower-right horizontal coordinate R_det^full (image appb-000231) and the lower-right vertical coordinate B_det^full (image appb-000232), and determine the detection-period full-body box Det^full (images appb-000233 and appb-000234).
a third obtaining unit 1108, configured to obtain, within the tracking period, the upper-body tracking box of the to-be-tracked pedestrian appearing in the to-be-tracked video.
Optionally, the third obtaining unit is specifically configured to scatter a plurality of particles centered on the upper-body detection box, where the ratio of the width of any one of the plurality of particles to its height is the same as the ratio of the width of the upper-body detection box to its height, and determine the upper-body tracking box, the upper-body tracking box being the particle, among the plurality of particles, most similar to the upper-body detection box.
a fourth obtaining unit 1109, configured to obtain, according to the detection-period full-body box, the tracking-period full-body box corresponding to the upper-body tracking box, the tracking-period full-body box being used to track the to-be-tracked pedestrian.
Optionally, the fourth obtaining unit 1109 is specifically configured to obtain the preset ratio Ratio_WH^full of the width of the detection-period full-body box to its height, determine the ratio Ratio_up^full of the height of the upper-body detection box to the height of the detection-period full-body box, and determine the tracking-period full-body box according to Ratio_WH^full and Ratio_up^full.
Optionally, the fourth obtaining unit 1109 is specifically configured to determine the ratio Ratio_WH^full of the width of the detection-period full-body box to its height, determine the ratio Ratio_up^full of the height of the upper-body detection box to the height of the detection-period full-body box, and determine the tracking-period full-body box according to Ratio_WH^full and Ratio_up^full.
The upper-body tracking box is Track^up = [L_track^up, T_track^up, R_track^up, B_track^up], where L_track^up is the horizontal coordinate of the upper-left corner of the upper-body tracking box, T_track^up the vertical coordinate of its upper-left corner, R_track^up the horizontal coordinate of its lower-right corner, and B_track^up the vertical coordinate of its lower-right corner.
Optionally, when determining the tracking-period full-body box according to Ratio_WH^full and Ratio_up^full, the fourth obtaining unit 1109 is specifically configured to determine the upper-left horizontal coordinate of the tracking-period full-body box, where, if L_det^full = L_det^up, the upper-left horizontal coordinate L_track^full, the upper-left vertical coordinate T_track^full, the lower-right horizontal coordinate R_track^full and the lower-right vertical coordinate B_track^full of the tracking-period full-body box are computed by the formulas given as images appb-000251 to appb-000255 in the original, and determine the tracking-period full-body box Track^full (image appb-000256).
Optionally, the fourth obtaining unit 1109 is specifically configured to determine the upper-left horizontal coordinate of the tracking-period full-body box, where, if L_det^full = L_det^down, the upper-left horizontal coordinate L_track^full, the upper-left vertical coordinate T_track^full, the lower-right horizontal coordinate R_track^full and the lower-right vertical coordinate B_track^full of the tracking-period full-body box are computed by the formulas given as images appb-000258 to appb-000263 in the original, and determine the tracking-period full-body box Track^full (image appb-000264).
For the specific execution process by which the electronic device of this embodiment executes the pedestrian tracking method, see the above embodiments; details are not repeated here. For the specific description of the beneficial effects obtained by the electronic device of this embodiment in executing the pedestrian tracking method, see the above embodiments; details are not repeated here.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for example, the division into units is merely a logical function division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processor, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above embodiments are merely intended to describe the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some technical features thereof, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (26)

  1. A pedestrian tracking method, comprising:
    determining a detection period and a tracking period of a to-be-tracked video;
    obtaining, within the detection period, an upper-body detection box of a to-be-tracked pedestrian appearing in the to-be-tracked video;
    obtaining a detection-period full-body box of the to-be-tracked pedestrian according to the upper-body detection box;
    obtaining, within the tracking period, an upper-body tracking box of the to-be-tracked pedestrian appearing in the to-be-tracked video; and
    obtaining, according to the detection-period full-body box, a tracking-period full-body box corresponding to the upper-body tracking box, wherein the tracking-period full-body box is used to track the to-be-tracked pedestrian.
  2. The method according to claim 1, wherein before the obtaining a detection-period full-body box of the to-be-tracked pedestrian according to the upper-body detection box, the method further comprises:
    obtaining a lower-body scan area according to the upper-body detection box; and
    if a lower-body detection box is obtained by performing lower-body detection within the lower-body scan area, obtaining the detection-period full-body box according to the upper-body detection box and the lower-body detection box.
  3. The method according to claim 2, wherein the upper-body detection box Det^up = [L_det^up, T_det^up, R_det^up, B_det^up], wherein L_det^up is the horizontal coordinate of the upper-left corner of the upper-body detection box, T_det^up is the vertical coordinate of the upper-left corner of the upper-body detection box, R_det^up is the horizontal coordinate of the lower-right corner of the upper-body detection box, and B_det^up is the vertical coordinate of the lower-right corner of the upper-body detection box; and
    the obtaining a lower-body scan area according to the upper-body detection box comprises:
    determining a first parameter (formula given as image appb-100007 in the original publication), wherein Ratio_default is a preset ratio;
    determining a second parameter (image appb-100008);
    determining a third parameter (image appb-100009); and
    determining the lower-body scan area according to the first parameter, the second parameter and the third parameter.
  4. The method according to claim 3, wherein the obtaining a lower-body scan area according to the upper-body detection box comprises:
    determining the lower-body scan area according to the first parameter, the second parameter and the third parameter, wherein the lower-body scan area ScanArea = [L_s, T_s, R_s, B_s], L_s is the horizontal coordinate of the upper-left corner of the lower-body scan area, T_s is the vertical coordinate of the upper-left corner of the lower-body scan area, R_s is the horizontal coordinate of the lower-right corner of the lower-body scan area, and B_s is the vertical coordinate of the lower-right corner of the lower-body scan area;
    wherein L_s, T_s, R_s and B_s are computed by the formulas given as images appb-100010 to appb-100015 in the original publication; paral1, paral2 and paral3 are preset values, imgW is the width of any image frame of the to-be-tracked video within the detection period, and imgH is the height of any image frame of the to-be-tracked video within the detection period.
  5. The method according to any one of claims 2 to 4, wherein the lower-body detection box Det^down = [L_det^down, T_det^down, R_det^down, B_det^down], wherein L_det^down is the horizontal coordinate of the upper-left corner of the lower-body detection box, T_det^down is the vertical coordinate of the upper-left corner of the lower-body detection box, R_det^down is the horizontal coordinate of the lower-right corner of the lower-body detection box, and B_det^down is the vertical coordinate of the lower-right corner of the lower-body detection box; and the obtaining the detection-period full-body box according to the upper-body detection box and the lower-body detection box comprises:
    determining the horizontal coordinate L_det^full of the upper-left corner of the detection-period full-body box (formula given as image appb-100022);
    determining the vertical coordinate T_det^full of the upper-left corner of the detection-period full-body box (image appb-100024);
    determining the horizontal coordinate R_det^full of the lower-right corner of the detection-period full-body box (image appb-100025);
    determining the vertical coordinate B_det^full of the lower-right corner of the detection-period full-body box (image appb-100026); and
    determining the detection-period full-body box Det^full (image appb-100027).
  6. The method according to claim 5, wherein after the determining the detection-period full-body box, the method further comprises:
    determining the ratio Ratio_WH^full of the width of the detection-period full-body box to the height of the detection-period full-body box;
    determining the ratio Ratio_up^full of the height of the upper-body detection box to the height of the detection-period full-body box; and
    determining the tracking-period full-body box according to Ratio_WH^full and Ratio_up^full.
  7. The method according to claim 3 or 4, wherein after the obtaining a lower-body scan area according to the upper-body detection box, the method further comprises:
    if no lower-body detection box is obtained by performing lower-body detection within the lower-body scan area, determining the horizontal coordinate L_det^full of the upper-left corner of the detection-period full-body box (formula given as image appb-100032 in the original publication);
    determining the vertical coordinate T_det^full of the upper-left corner of the detection-period full-body box (image appb-100033);
    determining the horizontal coordinate R_det^full of the lower-right corner of the detection-period full-body box (image appb-100034);
    determining the vertical coordinate B_det^full of the lower-right corner of the detection-period full-body box (image appb-100035); and
    determining the detection-period full-body box Det^full (image appb-100036).
  8. The method according to claim 7, wherein after the obtaining a lower-body scan area according to the upper-body detection box, the method further comprises:
    obtaining a preset ratio Ratio_WH^full of the width of the detection-period full-body box to the height of the detection-period full-body box; and
    after the determining the detection-period full-body box, the method further comprises:
    determining the ratio Ratio_up^full of the height of the upper-body detection box to the height of the detection-period full-body box; and
    determining the tracking-period full-body box according to Ratio_WH^full and Ratio_up^full.
  9. The method according to claim 6 or 8, wherein the upper-body tracking box Track^up = [L_track^up, T_track^up, R_track^up, B_track^up], wherein L_track^up is the horizontal coordinate of the upper-left corner of the upper-body tracking box, T_track^up is the vertical coordinate of the upper-left corner of the upper-body tracking box, R_track^up is the horizontal coordinate of the lower-right corner of the upper-body tracking box, and B_track^up is the vertical coordinate of the lower-right corner of the upper-body tracking box; and
    the determining the tracking-period full-body box according to Ratio_WH^full and Ratio_up^full comprises:
    determining the horizontal coordinate of the upper-left corner of the tracking-period full-body box, wherein, if L_det^full = L_det^up, the horizontal coordinate L_track^full of the upper-left corner of the tracking-period full-body box is computed by the formula given as image appb-100050;
    determining the vertical coordinate T_track^full of the upper-left corner of the tracking-period full-body box (image appb-100051);
    determining the horizontal coordinate R_track^full of the lower-right corner of the tracking-period full-body box (image appb-100052);
    determining the vertical coordinate B_track^full of the lower-right corner of the tracking-period full-body box (images appb-100053 to appb-100055); and
    determining the tracking-period full-body box Track^full (image appb-100056).
  10. The method according to any one of claims 1 to 9, wherein the obtaining, within the tracking period, an upper-body tracking box of the to-be-tracked pedestrian appearing in the to-be-tracked video comprises:
    scattering a plurality of particles centered on the upper-body detection box, wherein the ratio of the width of any one of the plurality of particles to the height of that particle is the same as the ratio of the width of the upper-body detection box to the height of the upper-body detection box; and
    determining the upper-body tracking box, wherein the upper-body tracking box is the particle, among the plurality of particles, most similar to the upper-body detection box.
  11. The method according to claim 9, wherein the obtaining, according to the detection-period full-body box, a tracking-period full-body box corresponding to the upper-body tracking box comprises:
    determining the horizontal coordinate of the upper-left corner of the tracking-period full-body box, wherein, if L_det^full = L_det^down, the horizontal coordinate L_track^full of the upper-left corner of the tracking-period full-body box is computed by the formula given as image appb-100058;
    determining the vertical coordinate T_track^full of the upper-left corner of the tracking-period full-body box (image appb-100059);
    determining the horizontal coordinate R_track^full of the lower-right corner of the tracking-period full-body box (image appb-100060);
    determining the vertical coordinate B_track^full of the lower-right corner of the tracking-period full-body box (images appb-100061 to appb-100063); and
    determining the tracking-period full-body box Track^full (image appb-100064).
  12. The method according to any one of claims 1 to 11, wherein the method further comprises:
    obtaining a target image frame sequence of the to-be-tracked video, wherein the target image frame sequence comprises one or more consecutive image frames and precedes the detection period;
    obtaining a background area of the to-be-tracked video according to the target image frame sequence;
    within the detection period, subtracting the background area from any image frame of the to-be-tracked video to obtain the foreground area of that image frame; and
    detecting the foreground area of any image frame of the to-be-tracked video to obtain the to-be-tracked pedestrian.
  13. The method according to claim 12, wherein the obtaining an upper-body detection box of a to-be-tracked pedestrian appearing in the to-be-tracked video comprises:
    determining a target image frame, wherein the target image frame is an image frame in which the to-be-tracked pedestrian appears; and
    obtaining the upper-body detection box within the foreground area of the target image frame.
  14. An electronic device, comprising:
    a first determining unit, configured to determine a detection period and a tracking period of a to-be-tracked video;
    a first obtaining unit, configured to obtain, within the detection period, an upper-body detection box of a to-be-tracked pedestrian appearing in the to-be-tracked video;
    a second obtaining unit, configured to obtain a detection-period full-body box of the to-be-tracked pedestrian according to the upper-body detection box;
    a third obtaining unit, configured to obtain, within the tracking period, an upper-body tracking box of the to-be-tracked pedestrian appearing in the to-be-tracked video; and
    a fourth obtaining unit, configured to obtain, according to the detection-period full-body box, a tracking-period full-body box corresponding to the upper-body tracking box, wherein the tracking-period full-body box is used to track the to-be-tracked pedestrian.
  15. The electronic device according to claim 14, wherein the second obtaining unit is specifically configured to obtain a lower-body scan area according to the upper-body detection box and, if a lower-body detection box is obtained by performing lower-body detection within the lower-body scan area, obtain the detection-period full-body box according to the upper-body detection box and the lower-body detection box.
  16. The electronic device according to claim 15, wherein the upper-body detection box Det^up = [L_det^up, T_det^up, R_det^up, B_det^up], wherein L_det^up is the horizontal coordinate of the upper-left corner of the upper-body detection box, T_det^up is the vertical coordinate of the upper-left corner of the upper-body detection box, R_det^up is the horizontal coordinate of the lower-right corner of the upper-body detection box, and B_det^up is the vertical coordinate of the lower-right corner of the upper-body detection box; and when obtaining the lower-body scan area according to the upper-body detection box, the second obtaining unit is specifically configured to determine a first parameter (formula given as image appb-100071 in the original publication), wherein Ratio_default is a preset ratio, determine a second parameter (images appb-100073 and appb-100074), determine a third parameter (image appb-100075), and determine the lower-body scan area according to the first parameter, the second parameter and the third parameter.
  17. The electronic device according to claim 16, wherein when obtaining the lower-body scan area according to the upper-body detection box, the second obtaining unit is specifically configured to determine the lower-body scan area according to the first parameter, the second parameter and the third parameter, wherein the lower-body scan area ScanArea = [L_s, T_s, R_s, B_s], L_s is the horizontal coordinate of the upper-left corner of the lower-body scan area, T_s is the vertical coordinate of the upper-left corner of the lower-body scan area, R_s is the horizontal coordinate of the lower-right corner of the lower-body scan area, and B_s is the vertical coordinate of the lower-right corner of the lower-body scan area; wherein L_s, T_s, R_s and B_s are computed by the formulas given as images appb-100076 to appb-100081; paral1, paral2 and paral3 are preset values, imgW is the width of any image frame of the to-be-tracked video within the detection period, and imgH is the height of any image frame of the to-be-tracked video within the detection period.
  18. The electronic device according to any one of claims 15 to 17, wherein the lower-body detection box Det^down = [L_det^down, T_det^down, R_det^down, B_det^down], wherein L_det^down is the horizontal coordinate of the upper-left corner of the lower-body detection box, T_det^down is the vertical coordinate of the upper-left corner of the lower-body detection box, R_det^down is the horizontal coordinate of the lower-right corner of the lower-body detection box, and B_det^down is the vertical coordinate of the lower-right corner of the lower-body detection box; and
    when obtaining the detection-period full-body box of the to-be-tracked pedestrian according to the upper-body detection box, the second obtaining unit is specifically configured to determine the horizontal coordinate L_det^full of the upper-left corner of the detection-period full-body box (image appb-100087), determine the vertical coordinate T_det^full of the upper-left corner of the detection-period full-body box (image appb-100088), determine the horizontal coordinate R_det^full of the lower-right corner of the detection-period full-body box (image appb-100089), determine the vertical coordinate B_det^full of the lower-right corner of the detection-period full-body box (image appb-100090), and determine the detection-period full-body box Det^full (images appb-100091 and appb-100092).
  19. The electronic device according to claim 18, wherein the fourth obtaining unit is specifically configured to determine the ratio Ratio_WH^full of the width of the detection-period full-body box to the height of the detection-period full-body box, determine the ratio Ratio_up^full of the height of the upper-body detection box to the height of the detection-period full-body box, and determine the tracking-period full-body box according to Ratio_WH^full and Ratio_up^full.
  20. The electronic device according to claim 16 or 17, wherein when obtaining the detection-period full-body box of the to-be-tracked pedestrian according to the upper-body detection box, the second obtaining unit is specifically configured to, if no lower-body detection box is obtained by performing lower-body detection within the lower-body scan area, determine the horizontal coordinate L_det^full of the upper-left corner of the detection-period full-body box (image appb-100097), determine the vertical coordinate T_det^full of the upper-left corner of the detection-period full-body box (image appb-100098), determine the horizontal coordinate R_det^full of the lower-right corner of the detection-period full-body box (image appb-100099), determine the vertical coordinate B_det^full of the lower-right corner of the detection-period full-body box (image appb-100100), and determine the detection-period full-body box Det^full (image appb-100101).
  21. The electronic device according to claim 20, wherein the fourth obtaining unit is specifically configured to obtain a preset ratio Ratio_WH^full of the width of the detection-period full-body box to the height of the detection-period full-body box, determine the ratio Ratio_up^full of the height of the upper-body detection box to the height of the detection-period full-body box, and determine the tracking-period full-body box according to Ratio_WH^full and Ratio_up^full.
  22. The electronic device according to claim 19 or 21, wherein the upper-body tracking box Track^up = [L_track^up, T_track^up, R_track^up, B_track^up], wherein L_track^up is the horizontal coordinate of the upper-left corner of the upper-body tracking box, T_track^up is the vertical coordinate of the upper-left corner of the upper-body tracking box, R_track^up is the horizontal coordinate of the lower-right corner of the upper-body tracking box, and B_track^up is the vertical coordinate of the lower-right corner of the upper-body tracking box; and
    when determining the tracking-period full-body box according to Ratio_WH^full and Ratio_up^full, the fourth obtaining unit is specifically configured to determine the horizontal coordinate of the upper-left corner of the tracking-period full-body box, wherein, if L_det^full = L_det^up, the horizontal coordinate L_track^full of the upper-left corner of the tracking-period full-body box is computed by the formula given as image appb-100115, determine the vertical coordinate T_track^full of the upper-left corner of the tracking-period full-body box (image appb-100116), determine the horizontal coordinate R_track^full of the lower-right corner of the tracking-period full-body box (image appb-100117), determine the vertical coordinate B_track^full of the lower-right corner of the tracking-period full-body box (images appb-100118 and appb-100119), and determine the tracking-period full-body box Track^full (image appb-100120).
  23. The electronic device according to any one of claims 14 to 22, wherein the third obtaining unit is specifically configured to scatter a plurality of particles centered on the upper-body detection box, wherein the ratio of the width of any one of the plurality of particles to the height of that particle is the same as the ratio of the width of the upper-body detection box to the height of the upper-body detection box, and determine the upper-body tracking box, the upper-body tracking box being the particle, among the plurality of particles, most similar to the upper-body detection box.
  24. The electronic device according to claim 22, wherein the fourth obtaining unit is specifically configured to determine the horizontal coordinate of the upper-left corner of the tracking-period full-body box, wherein, if L_det^full = L_det^down, the horizontal coordinate L_track^full of the upper-left corner of the tracking-period full-body box is computed by the formula given as image appb-100122, determine the vertical coordinate T_track^full of the upper-left corner of the tracking-period full-body box (image appb-100123), determine the horizontal coordinate R_track^full of the lower-right corner of the tracking-period full-body box (image appb-100124), determine the vertical coordinate B_track^full of the lower-right corner of the tracking-period full-body box (images appb-100125 and appb-100126), and determine the tracking-period full-body box Track^full (image appb-100127).
  25. The electronic device according to any one of claims 14 to 24, wherein the electronic device further comprises:
    a fifth obtaining unit, configured to obtain a target image frame sequence of the to-be-tracked video, wherein the target image frame sequence comprises one or more consecutive image frames and precedes the detection period;
    a sixth obtaining unit, configured to obtain a background area of the to-be-tracked video according to the target image frame sequence;
    a seventh obtaining unit, configured to subtract, within the detection period, the background area from any image frame of the to-be-tracked video to obtain the foreground area of that image frame; and
    an eighth obtaining unit, configured to detect the foreground area of any image frame of the to-be-tracked video to obtain the to-be-tracked pedestrian.
  26. The electronic device according to claim 25, wherein the first obtaining unit is specifically configured to determine a target image frame, the target image frame being an image frame in which the to-be-tracked pedestrian appears, and obtain the upper-body detection box within the foreground area of the target image frame.