WO2023072290A1 - Tracking target occlusion determination method and apparatus, device and storage medium - Google Patents

Tracking target occlusion determination method and apparatus, device and storage medium

Info

Publication number
WO2023072290A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
target
template
block
similarity
Prior art date
Application number
PCT/CN2022/128719
Other languages
French (fr)
Chinese (zh)
Inventor
米俊桦
吴强
刘长杰
孔祥
Original Assignee
中移(成都)信息通信科技有限公司
中国移动通信集团有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中移(成都)信息通信科技有限公司, 中国移动通信集团有限公司 filed Critical 中移(成都)信息通信科技有限公司
Publication of WO2023072290A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Definitions

  • the present disclosure relates to the field of computer vision, and in particular to a tracking target occlusion determination method, device, equipment and storage medium.
  • Target tracking technology is an important technology in the field of computer vision, and it has been widely used in scenarios such as smart transportation, smart security, and military weapons.
  • in target tracking technology, how to achieve accurate target tracking when the target is occluded is extremely important, and accurately judging whether the target is occluded is the basis of target tracking technology.
  • when it is determined that the target is occluded, the size of the tracking frame needs to be adjusted in time. Therefore, the accuracy of the tracking target occlusion determination result affects the adjustment efficiency of the tracking frame, and in turn the tracking effect.
  • the existing tracking target occlusion determination methods have problems such as low accuracy and poor robustness.
  • Embodiments of the present disclosure expect to provide a tracking target occlusion determination method, apparatus, device, and storage medium, which can improve the accuracy and robustness of the occlusion determination method.
  • an embodiment of the present disclosure provides a method for determining occlusion of a tracking target, including:
  • determining a template frame from the video, the template frame including a template image of a tracking target; acquiring first image information of the template image; tracking the tracking target in the current frame of the video, and determining a target image; acquiring second image information of the target image; determining an occlusion determination value corresponding to the current frame based on the first image information, the second image information, and an occlusion determination function, wherein the occlusion determination function includes: a first similarity function between the target image and the template image, a second similarity function between the target image and a background image, and a change degree function between the size of the target image and the size of the template image; and when the occlusion determination value is greater than or equal to a first preset threshold, determining that the tracking target is occluded.
  • the first image information includes: the color autocorrelation histogram of the template image;
  • the second image information includes: the color autocorrelation histogram of the target image; wherein the color autocorrelation histogram includes: under a preset distance threshold, the ratio of the number of pixels of each color level in the entire image to the total number of pixels in the entire image.
  • determining the occlusion determination value corresponding to the current frame based on the first image information, the second image information and the occlusion determination function includes: substituting the first image information and the second image information into the first similarity function to obtain a first similarity; substituting the second image information into the second similarity function to obtain a second similarity; substituting the target image size and the template image size into the change degree function to determine a change degree value; taking the difference between the first similarity and the second similarity to obtain a similarity difference; if the similarity difference is greater than or equal to a second preset threshold, using the similarity difference as the occlusion determination value; and if the similarity difference is smaller than the second preset threshold, using the product of the similarity difference and the change degree value as the occlusion determination value.
  • the first similarity function is:
  • when the preset distance threshold in the target image is d, the ratio of the number of pixels with color level i to the total number of pixels in the target image;
  • when the preset distance threshold in the template image is d, the ratio of the number of pixels with color level i to the total number of pixels in the template image;
  • D is a set of preset distance thresholds;
  • q is the number of the color grades.
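  • The first similarity function itself appears only as a formula image in the published application. As an illustration only, the sketch below assumes an intersection-style comparison of the two color autocorrelation histograms; the function name, the intersection form and the normalization are assumptions, not the patent's formula.

```python
import numpy as np

def first_similarity(hist_target, hist_template):
    """Hedged sketch of a first-similarity computation.

    hist_target, hist_template: arrays of shape (len(D), q); entry [k, i] is
    the ratio, under distance threshold D[k], of pixels with color level i to
    the total number of pixels in the respective image.
    Returns a value in [0, 1]; 1 means the histograms coincide.
    """
    a = np.asarray(hist_target, dtype=float)
    b = np.asarray(hist_template, dtype=float)
    # Histogram intersection summed over all distance thresholds d in D and
    # all color levels i in {0, ..., q-1}, averaged over the |D| thresholds.
    return float(np.minimum(a, b).sum() / a.shape[0])
```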
  • the first image information further includes: a template image size
  • the second image information further includes: a target image size
  • substituting the target image size and the template image size into the change degree function to determine the change degree value includes: calculating, based on the change degree function, the size change rate of the target image size compared with the template image size; if the size change rate is greater than or equal to a third preset threshold, determining the change degree value as 1; if the size change rate is less than the third preset threshold, determining the change degree value as 0.
  • the second image information further includes: a color autocorrelation histogram of at least one image block in the target image, a color autocorrelation histogram of a background block of the image block;
  • the second similarity function is:
  • when the preset distance threshold in the jth image block is d, the ratio of the number of pixels with color level i to the total number of pixels in the jth image block; when the preset distance threshold in the jth background block is d, the ratio of the number of pixels with color level i to the total number of pixels in the jth background block; D is the set of preset distance thresholds; q is the number of color levels; u is the number of image blocks.
  • the method further includes: longitudinally dividing the target image to obtain a 1st image block and a 2nd image block, wherein the 1st image block is located in the upper part of the 2nd image block; transversely dividing the target image to obtain a 3rd image block and a 4th image block, wherein the 3rd image block is located in the left part of the 4th image block; wherein the at least one image block includes the 1st image block, the 2nd image block, the 3rd image block and the 4th image block; in the current frame, selecting, as the 1st background block, a region above the 1st image block that is adjacent to the side of the 1st image block and has the same size; in the current frame, selecting, as the 2nd background block, a region below the 2nd image block that is adjacent to the side of the 2nd image block and has the same size; in the current frame, selecting, as the 3rd background block, a region to the left of the 3rd image block that is adjacent to the side of the 3rd image block and has the same size; and, in the current frame, selecting, as the 4th background block, a region to the right of the 4th image block that is adjacent to the side of the 4th image block and has the same size.
  • the method further includes: determining, based on the position information of the target image and the template image, the intersection area of the target image and the template image and the area of the template image; calculating the ratio of the intersection area to the area of the template image; and determining a target occlusion degree value based on the ratio.
  • the method further includes: when the occlusion determination function value of the current frame is smaller than the first preset threshold, using the current frame as a new template frame; and, based on the new template frame, determining the occlusion determination function value of the next frame.
  • an embodiment of the present disclosure further provides a tracking target occlusion determination device, including: a processing module configured to determine a template frame from a video, the template frame including a template image of the tracking target; an acquisition module configured to acquire first image information of the template image; the processing module being further configured to track the tracking target in the current frame of the video and determine a target image; the acquisition module being further configured to acquire second image information of the target image; the processing module being further configured to determine an occlusion determination value corresponding to the current frame based on the first image information, the second image information and an occlusion determination function, wherein the occlusion determination function includes: a first similarity function between the target image and the template image, a second similarity function between the target image and the background image, and a change degree function between the size of the target image and the size of the template image; and the processing module being further configured to determine that the tracking target is occluded when the occlusion determination value is greater than or equal to a first preset threshold.
  • an embodiment of the present disclosure provides a tracking target occlusion determination device, including: a processor and a memory configured to store a computer program that can run on the processor, wherein the processor is configured to run the computer program to execute the steps of the tracking target occlusion determination method described in the first aspect.
  • an embodiment of the present disclosure provides a computer storage medium on which a computer program is stored, wherein, when the computer program is executed by a processor, the steps of the tracking target occlusion determination method described in the first aspect are implemented.
  • in the technical solution of the embodiments of the present disclosure, an occlusion determination function is set that combines the similarity function between the template image and the target image, the similarity function between the target image and the background image, and the change degree function between the size of the target image and the size of the template image;
  • when judging the occlusion situation, this function combines multiple influencing factors, that is, the first similarity between the target image and the template image, the change degree value, and the second similarity between the target image and the background image, to determine and judge the occlusion situation, which improves the accuracy and robustness of the occlusion determination method.
  • FIG. 1 is a schematic flowchart of a first flow chart of a method for determining occlusion of a tracking target in an embodiment of the present disclosure
  • FIG. 2 is a first schematic diagram of a target image in a current frame in an embodiment of the present disclosure
  • FIG. 3 is a second schematic diagram of a target image in a current frame in an embodiment of the present disclosure
  • FIG. 4 is a schematic flowchart of a method for determining an occlusion judgment value corresponding to a current frame in an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of an image block and a background block of a target image in an embodiment of the present disclosure
  • FIG. 6 is a second schematic flowchart of a method for determining occlusion of a tracking target in an embodiment of the present disclosure
  • FIG. 7 is a schematic flowchart of a third flowchart of a method for determining occlusion of a tracking target in an embodiment of the present disclosure
  • FIG. 8 is a schematic diagram of the composition and structure of a tracking target occlusion determination device in an embodiment of the present disclosure
  • FIG. 9 is a schematic diagram of the composition and structure of a tracking target occlusion determination device in an embodiment of the present disclosure.
  • FIG. 1 is a first schematic flowchart of a tracking target occlusion determination method in an embodiment of the present disclosure. As shown in FIG. 1, the tracking target occlusion determination method may specifically include:
  • Step 101 determining a template frame from the video, and the template frame includes a template image of the tracking target;
  • the video is the video captured by the video capture device
  • the tracking target is the target object to be tracked in the video, such as a certain vehicle or a certain person in the video.
  • the template frame is a frame with a template function in the video, which is used for comparison with the current frame of the video, and based on the comparison result, it is judged whether the tracking target in the current frame is blocked.
  • the template frame may be an initial frame of the video, and may also be an image frame in the video where the tracking target is not occluded.
  • the image within the tracking frame in the template frame is used as the template image.
  • the method further includes: using the first frame of the video as a template frame; selecting a region in the first frame that contains the tracking target as a template image.
  • Step 102 Acquiring first image information of the template image
  • the first image information includes template image related information.
  • the first image information includes: a color autocorrelation histogram of the template image.
  • the color autocorrelation histogram includes: under a preset distance threshold, the ratio of the number of pixels of each color level in the entire image to the total number of pixels in the entire image.
  • the color autocorrelation histogram not only contains the quantity distribution information of different levels of color in the entire image, but also contains the spatial distribution information of the same level of color, and it is beneficial to obtain more accurate image similarity by using it for similarity calculation .
  • Step 103 track the tracking target of the current frame in the video, and determine the target image
  • the current frame is an image frame in the video that needs to track the tracking target and determine the occlusion of the tracking target.
  • Tracking the tracking target of the current frame in the video can be achieved by a tracking algorithm.
  • the tracking algorithm includes, but is not limited to, single-target tracking algorithms, such as the Siamese Region Proposal Network (SiamRPN) target tracking algorithm, the Discriminative Scale Space Tracker (DSST) target tracking algorithm, and the like.
  • the target image is the image area containing the tracking target in the current frame.
  • the image within the tracking frame in the current frame may be used as the target image.
  • FIG. 2 is a first schematic diagram of a target image in a current frame in an embodiment of the present disclosure
  • FIG. 3 is a second schematic diagram of a target image in a current frame in an embodiment of the present disclosure.
  • the vehicle image inside the box is the target image in the current frame.
  • Step 104 acquiring second image information of the target image
  • the second image information includes target image related information.
  • the second image information includes: the color autocorrelation histogram of the target image; wherein, the color autocorrelation histogram includes: under a preset distance threshold, each The ratio of the number of pixels of the color level to the total number of pixels of the entire image.
  • Step 105 Determine an occlusion determination value corresponding to the current frame based on the first image information, the second image information, and the occlusion determination function;
  • the occlusion determination function includes: a first similarity function between the target image and the template image, a second similarity function between the target image and the background image, a size of the target image and a size of the template image The degree of change function between.
  • the target image is compared with the template image to determine the similarity between the two (namely the first similarity); the smaller the first similarity, the greater the probability that the tracking target is occluded.
  • the occlusion situation can be determined by the similarity between the target image and the template image. Incorporating the first similarity into the occlusion judgment function can improve the accuracy of the occlusion judgment value, and further improve the accuracy of the occlusion judgment result.
  • the background image is a partial image area around the target image in the current frame. If the similarity between the background image and the target image (that is, the second similarity) is greater, it indicates that the background image is more similar to the target image, indicating that the probability of the tracking target being blocked is greater. Therefore, the occlusion situation can be determined by comparing the similarity between the target image and the background image. Integrating the second similarity into the occlusion determination function can improve the accuracy of the occlusion determination value, and further improve the accuracy of the occlusion determination result.
  • current, more advanced tracking algorithms can handle partial occlusion of the tracking target, so not all occlusion situations need to be dealt with.
  • the car on the left is the tracking target.
  • the current excellent target tracking algorithms all have good scale adaptive capabilities, that is, when the target is occluded, the tracking algorithm will adjust the size of the tracking frame in time.
  • in FIG. 3, the car on the left is occluded, and the size of the tracking frame (the target image size) changes greatly. Therefore, the scale change of the tracking frame is correlated with target occlusion, and the robustness of the occlusion determination method can be improved by incorporating the change degree function between the target image size and the template image size into the occlusion determination function.
  • the occlusion judgment function is jointly determined by the first similarity function between the target image and the template image, the second similarity function between the target image and the background image, and the change degree function between the size of the target image and the size of the template image.
  • multiple influencing factors for judging the occlusion situation, that is, the first similarity between the target image and the template image, the change degree value, and the second similarity between the target image and the background image, are combined to determine and judge the occlusion situation, which improves the accuracy and robustness of the occlusion determination method.
  • Step 106 When the occlusion determination value is greater than or equal to a first preset threshold, determine that the tracking target is occluded.
  • the first preset threshold may be a constant, and may be determined according to actual requirements.
  • the method further includes: when the occlusion determination function value of the current frame is smaller than the first preset threshold, using the current frame as a new template frame; and, based on the new template frame, determining the occlusion determination function value of the next frame.
  • when the occlusion determination function value of the current frame is smaller than the first preset threshold, it is determined that the tracking target is not occluded.
  • the current frame in which the target is not occluded is used as the template frame, and the occlusion determination function value of the next frame is then calculated, so as to realize occlusion determination of the tracking target throughout the whole video.
  • the subject of execution of steps 101 to 106 may be a processor of the tracking target occlusion determination device.
  • the tracking target occlusion determination method in the present disclosure can also be coupled to a specific target tracking algorithm, which has good scalability.
  • by setting an occlusion determination function that combines the similarity function between the template image and the target image, the similarity function between the target image and the background image, and the change degree function between the size of the target image and the size of the template image, the factors that can be used to judge the occlusion situation, that is, the first similarity, the second similarity and the change degree value, are combined, and the occlusion situation is determined and judged from the combined result, which improves the accuracy and robustness of the occlusion determination method.
  • the method for determining the color autocorrelation histogram of the target image includes:
  • Step 201 Convert the target image from the RGB color space to the HSV color space, and extract the H (hue) component of the HSV color space;
  • Step 202 Non-uniformly quantize the H color component of the target image into q intervals according to the following formula, where each interval corresponds to a color level.
  • exemplarily, the value of q here is 14.
  • the H (hue) component in the HSV color space represents color information; its value ranges from 0° to 360° and corresponds to the wavelength of each color in the visible spectrum, so the H-component range corresponding to each color is non-uniformly distributed. This method first non-uniformly quantizes the H component of the target image into 7 intervals according to the seven colors red, orange, yellow, green, cyan, blue and purple, and then divides each quantization interval into two smaller intervals, so that the H component is finally non-uniformly quantized into 14 intervals. Exemplarily, if the H component of pixel i in the target image lies between 0° and 23°, the color level of the pixel is 0; if the H component of pixel i in the target image lies between 330° and 360°, the color level of the pixel is 1.
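  • A minimal sketch of steps 201-202 is given below, assuming OpenCV for the color-space conversion. Apart from the 0°-23° (level 0) and 330°-360° (level 1) bands stated above, the interval boundaries and level numbering are illustrative placeholders, since the quantization formula itself is not reproduced in this text.

```python
import cv2
import numpy as np

# 14 non-uniform hue bands (degrees) -> color levels. Only the first two rows
# follow the example in the text; the remaining boundaries are placeholders.
H_INTERVALS = [
    (0, 23, 0), (330, 360, 1),
    (23, 45, 2), (45, 70, 3), (70, 95, 4), (95, 120, 5),
    (120, 150, 6), (150, 180, 7), (180, 210, 8), (210, 240, 9),
    (240, 270, 10), (270, 290, 11), (290, 310, 12), (310, 330, 13),
]

def quantize_hue(image_bgr):
    """Convert a BGR image to HSV, take the H (hue) component, and map it to
    q = 14 non-uniform color levels (steps 201 and 202)."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hue_deg = hsv[:, :, 0].astype(np.float32) * 2.0  # OpenCV stores hue as 0..179
    levels = np.zeros(hue_deg.shape, dtype=np.int32)
    for low, high, level in H_INTERVALS:
        upper = (hue_deg < high) if high < 360 else (hue_deg <= high)
        levels[(hue_deg >= low) & upper] = level
    return levels
```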
  • Step 203 Calculate the color autocorrelation histogram of the target image in the tth frame (current frame) according to the following formula, and normalize the histogram:
  • in the formula, max(·) is used to calculate the distance between pixel points p1 and p2, and D is a set of preset distance values d.
  • in the same manner, the color autocorrelation histogram of the template image and the color autocorrelation histograms of other images can be obtained.
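  • The formula of step 203 is likewise shown only as an image. The sketch below follows the textual definition: for each preset distance threshold d in D it records, per color level i, the fraction of pixels of level i that have at least one same-level pixel at Chebyshev (max-metric) distance d, normalized by the total pixel count. The "at least one same-level neighbour" criterion, the default D, and the normalization are assumptions.

```python
import numpy as np

def color_autocorrelation_histogram(levels, distance_set=(1, 3, 5, 7), q=14):
    """Hedged sketch of the color autocorrelation histogram of step 203.

    levels: 2-D array of per-pixel color levels (e.g. output of quantize_hue).
    distance_set: preset distance thresholds D (the values are assumptions).
    Returns an array of shape (len(distance_set), q).
    """
    h, w = levels.shape
    total = float(h * w)
    hist = np.zeros((len(distance_set), q), dtype=np.float64)
    for k, d in enumerate(distance_set):
        for y in range(h):
            for x in range(w):
                i = levels[y, x]
                # Pixels on the square ring at Chebyshev distance d; image
                # borders are handled approximately by clamping the window.
                y0, y1 = max(0, y - d), min(h - 1, y + d)
                x0, x1 = max(0, x - d), min(w - 1, x + d)
                ring = np.concatenate((
                    levels[y0, x0:x1 + 1], levels[y1, x0:x1 + 1],
                    levels[y0 + 1:y1, x0], levels[y0 + 1:y1, x1],
                ))
                if np.any(ring == i):
                    hist[k, i] += 1.0
    return hist / total  # normalization as in step 203 (one possible choice)
```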
  • FIG. 4 is a schematic flowchart of a method for determining an occlusion determination value corresponding to a current frame in an embodiment of the present disclosure.
  • the determining the occlusion determination value corresponding to the current frame based on the first image information, the second image information and the occlusion determination function includes:
  • Step 401 Substituting the first image information and the second image information into the first similarity function to obtain a first similarity
  • the first similarity function is:
  • when the preset distance threshold in the target image is d, the ratio of the number of pixels with color level i to the total number of pixels in the target image;
  • when the preset distance threshold in the template image is d, the ratio of the number of pixels with color level i to the total number of pixels in the template image;
  • D is a set of preset distance thresholds;
  • q is the number of the color grades.
  • Step 402 Substituting the second image information into the second similarity function to obtain a second similarity
  • the second image information further includes: a color autocorrelation histogram of at least one image block in the target image, a color autocorrelation histogram of a background block of the image block;
  • the second similarity function is:
  • when the preset distance threshold in the jth image block is d, the ratio of the number of pixels with color level i to the total number of pixels in the jth image block; when the preset distance threshold in the jth background block is d, the ratio of the number of pixels with color level i to the total number of pixels in the jth background block; D is the set of preset distance thresholds; q is the number of color levels; u is the number of image blocks. Exemplarily, the value of u may be 4.
  • At least one image block is a plurality of image blocks obtained by dividing the target image into blocks.
  • the method further includes: longitudinally dividing the target image to obtain a 1st image block and a 2nd image block, wherein the 1st image block is located in the upper part of the 2nd image block; and transversely dividing the target image to obtain a 3rd image block and a 4th image block, wherein the 3rd image block is located in the left part of the 4th image block;
  • the at least one image block includes the 1st image block, the 2nd image block, the 3rd image block and the 4th image block;
  • in the current frame, a region above the 1st image block that is adjacent to the side of the 1st image block and has the same size is selected as the 1st background block; a region below the 2nd image block that is adjacent to the side of the 2nd image block and has the same size is selected as the 2nd background block; a region to the left of the 3rd image block that is adjacent to the side of the 3rd image block and has the same size is selected as the 3rd background block; and a region to the right of the 4th image block that is adjacent to the side of the 4th image block and has the same size is selected as the 4th background block.
  • when dividing the target image into blocks, unequal division may also be performed.
  • exemplarily, the unequal division may be: taking the left two-thirds of the target image as an image block; taking the right two-thirds of the target image as an image block; taking the upper two-thirds of the target image as an image block; and taking the lower two-thirds of the target image as an image block.
  • the following takes equal division as an example to describe in detail.
  • FIG. 5 is a schematic diagram of an image block and a background block of a target image in an embodiment of the present disclosure.
  • o1 is the upper half image of the target image, which is equivalent to the above-mentioned first image block
  • o2 is the lower half image of the target image, which is equivalent to the above-mentioned second image block
  • o3 is the left half image of the target image, which is equivalent to the above-mentioned third image block
  • o4 is the right half image of the target image, which is equivalent to the above-mentioned fourth image block.
  • b1, b2, b3, and b4 correspond to the above-mentioned first background block, second background block, third background block, and fourth background block, respectively.
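  • As one possible reading of the block scheme of FIG. 5, the sketch below crops o1-o4 and b1-b4 from the current frame and forms a second similarity by averaging block-to-background similarities; the averaging and the reuse of the same histogram comparison as the first similarity are assumptions, since the exact second similarity formula appears only as an image.

```python
def split_blocks_and_backgrounds(frame, box):
    """Crop the image blocks o1..o4 and background blocks b1..b4 of FIG. 5.

    frame: current frame as an H x W x C array; box: tracking frame (x, y, w, h)
    with (x, y) the top-left corner. Background blocks are the same-sized
    regions adjacent to the corresponding outer side of the tracking frame
    (they are clipped silently at the frame border in this sketch).
    """
    x, y, w, h = box
    target = frame[y:y + h, x:x + w]
    o1, o2 = target[: h // 2], target[h // 2:]         # upper / lower halves
    o3, o4 = target[:, : w // 2], target[:, w // 2:]   # left / right halves
    b1 = frame[max(0, y - h // 2):y, x:x + w]          # above o1
    b2 = frame[y + h:y + h + (h - h // 2), x:x + w]    # below o2
    b3 = frame[y:y + h, max(0, x - w // 2):x]          # left of o3
    b4 = frame[y:y + h, x + w:x + w + (w - w // 2)]    # right of o4
    return [o1, o2, o3, o4], [b1, b2, b3, b4]

def second_similarity(blocks, backgrounds, hist_fn, similarity_fn):
    """Hedged sketch: average similarity between each image block and its
    background block (u = 4), using the same histogram comparison as the
    first similarity."""
    scores = [similarity_fn(hist_fn(o), hist_fn(b))
              for o, b in zip(blocks, backgrounds)]
    return sum(scores) / len(scores)
```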
  • Step 403 Substituting the size of the target image and the size of the template image into the degree of change function to determine a degree of change value
  • the first image information further includes: a template image size
  • the second image information further includes: a target image size
  • the step of substituting the target image size and the template image size into the change degree function to determine the change degree value includes: calculating, based on the change degree function, the size change rate of the target image size compared with the template image size; if the size change rate is greater than or equal to a third preset threshold, determining that the change degree value is 1; if the size change rate is less than the third preset threshold, determining that the change degree value is 0.
  • the size of the tracking frame in the current frame may be used as the target image size
  • the size of the tracking frame in the template frame may be used as the size of the template image.
  • the size of the target image includes: the width and height of the tracking frame in the current frame
  • the size of the template image includes: the width and height of the tracking frame in the template frame.
  • the change degree function can be represented by the following formula.
  • w t and h t represent the width and height of the tracking frame in the tth frame (current frame), respectively
  • w T and h T represent the width and height of the tracking frame in the template frame, respectively
  • the threshold appearing in the formula is the third preset threshold; exemplarily, the value of the third preset threshold may be 0.7.
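  • The change degree formula is not reproduced in this text; a sketch consistent with the surrounding description (size change rate of the current tracking frame relative to the template tracking frame, thresholded at the third preset threshold, exemplarily 0.7) is given below. Using the relative area change as the "size change rate" is an assumption.

```python
def change_degree(w_t, h_t, w_T, h_T, third_threshold=0.7):
    """Hedged sketch of the change degree function.

    w_t, h_t: width and height of the tracking frame in the t-th (current) frame.
    w_T, h_T: width and height of the tracking frame in the template frame.
    Returns 1 if the size change rate reaches the third preset threshold,
    otherwise 0; the area-based rate is an assumption.
    """
    size_change_rate = abs(w_t * h_t - w_T * h_T) / float(w_T * h_T)
    return 1 if size_change_rate >= third_threshold else 0
```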
  • Step 404 making a difference between the first similarity and the second similarity to obtain a similarity difference
  • Step 405 If the similarity difference is greater than or equal to a second preset threshold, use the similarity difference as the occlusion determination value; if the similarity difference is smaller than the second preset threshold, use the product of the similarity difference and the change degree value as the occlusion determination value.
  • the method of determining the occlusion judgment value can be expressed by the following formula:
  • in the formula, the similarity difference is such that the greater the similarity between the target image and the target template, and the smaller the similarity between the target image and the background image, the greater its value;
  • the formula gives the occlusion determination function;
  • the second preset threshold appears in the formula and, exemplarily, may take a value of 0.3; when the similarity difference is less than the second preset threshold, the occlusion determination value is the product of the change degree value and the similarity difference.
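  • Combining steps 401-405 as described in the text gives the following sketch of the occlusion determination value; the second preset threshold of 0.3 follows the example above, while the first preset threshold used in the occlusion decision is left as a caller-supplied parameter.

```python
def occlusion_value(first_sim, second_sim, change_deg, second_threshold=0.3):
    """Steps 404-405: take the difference between the first and second
    similarities; use it directly when it is at least the second preset
    threshold, otherwise multiply it by the change degree value."""
    similarity_difference = first_sim - second_sim
    if similarity_difference >= second_threshold:
        return similarity_difference
    return similarity_difference * change_deg

def is_occluded(value, first_threshold):
    """Step 106: the tracking target is judged occluded when the occlusion
    determination value is greater than or equal to the first preset threshold."""
    return value >= first_threshold
```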
  • current excellent target tracking algorithms have good scale-adaptive capabilities, that is, when the target is occluded, the tracking algorithm adjusts the size of the tracking frame in time; at this time, judging whether the target is occluded using only the similarity of the color distribution is not accurate enough, whereas the color autocorrelation histogram contains not only the quantity distribution information of different-level colors in the whole image but also the spatial distribution information of same-level colors.
  • by determining the first similarity and the second similarity based on the color autocorrelation histogram of the image, a more accurate image similarity can be obtained, thereby improving the accuracy of the tracking target occlusion determination method; by integrating the change degree value between the target image size and the template image size into the determination of the occlusion determination value, the robustness of the occlusion determination method can be improved.
  • FIG. 6 is a second schematic flow chart of a method for determining occlusion of a tracking target in an embodiment of the present disclosure.
  • the tracking target occlusion determination method may specifically include:
  • Step 601 Determine a template frame from the video, and the template frame includes a template image of the tracking target;
  • the method further includes: using the first frame of the video as a template frame; selecting a region in the first frame that contains the tracking target as a template image.
  • Step 602 Obtain first image information of the template image
  • the first image information includes: the color autocorrelation histogram of the template image and the size of the template image.
  • the color autocorrelation histogram includes: under a preset distance threshold, the ratio of the number of pixels of each color level in the entire image to the total number of pixels in the entire image.
  • Step 603 Track the tracking target in the current frame of the video, and determine the target image
  • tracking the tracking target in the current frame of the video may be implemented by a tracking algorithm.
  • the tracking algorithm includes, but is not limited to, single-target tracking algorithms such as the SiamRPN target tracking algorithm, the DSST target tracking algorithm, and the like.
  • the target image is the image area containing the tracking target in the current frame.
  • the target image is the image area within the tracking frame.
  • Step 604 Obtain second image information of the target image
  • the second image information includes: the color autocorrelation histogram of the target image, the size of the target image, the color autocorrelation histogram of at least one image block in the target image, and the color autocorrelation histogram of the background block of the image block.
  • Step 605 Determine an occlusion determination value corresponding to the current frame based on the first image information, the second image information, and the occlusion determination function;
  • the occlusion determination function includes: a first similarity function between the target image and the template image, a second similarity function between the target image and the background image, a size of the target image and a size of the template image The degree of change function between.
  • determining the occlusion determination value corresponding to the current frame includes:
  • Step 606 When the occlusion determination value is greater than or equal to a first preset threshold, determine that the tracking target is occluded;
  • Step 607 Based on the position information of the target image and the template image, determine the intersection area of the target image and the template image and the area of the template image;
  • Step 608 Calculate the ratio of the intersection area to the area of the template image
  • Step 609 Determine a target occlusion degree value based on the ratio.
  • the calculation of the occlusion degree value can be expressed by the following formulas:
  • w0 = min(xT + hT, xt + ht) − max(xT, xt);
  • h0 = min(yT + wT, yt + wt) − max(yT, yt);
  • w0·h0 represents the area of the intersection of the current-frame tracking frame and the template-frame tracking frame; wT·hT represents the area of the template-frame tracking frame; wT and hT represent the width and height of the template-frame tracking frame; the ratio of the intersection area to the template tracking-frame area represents the degree of occlusion, and its value range is [0, 1]; the larger the value, the higher the degree of occlusion of the target.
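  • Following the w0/h0 formulas above, steps 607-609 can be sketched as below; the clamping of negative overlaps to zero is an added safeguard, and x is paired with w and y with h (the source text swaps them, which is noted in the comment).

```python
def occlusion_degree(template_box, target_box):
    """Occlusion degree in [0, 1]: intersection area of the template-frame and
    current-frame tracking frames divided by the template tracking-frame area.

    Boxes are (x, y, w, h) with (x, y) the top-left corner. Note: the source
    formulas pair x with h and y with w; the conventional pairing is used here.
    """
    x_T, y_T, w_T, h_T = template_box
    x_t, y_t, w_t, h_t = target_box
    w0 = max(0.0, min(x_T + w_T, x_t + w_t) - max(x_T, x_t))
    h0 = max(0.0, min(y_T + h_T, y_t + h_t) - max(y_T, y_t))
    return (w0 * h0) / float(w_T * h_T)
```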
  • the subject of execution of steps 601 to 609 may be a processor of the tracking target occlusion determination device.
  • by setting an occlusion determination function that combines the similarity function between the template image and the target image, the similarity function between the target image and the background image, and the change degree function between the size of the target image and the size of the template image, the accuracy and robustness of the occlusion determination method are improved.
  • FIG. 7 is a schematic flowchart of a third method for determining occlusion of a tracking target in an embodiment of the present disclosure.
  • the method for judging the occlusion of the tracking target may specifically include:
  • Step 701 Obtain the tth frame
  • Step 702 Determine whether t is equal to 1; if so, execute step 703; if not, execute step 704;
  • Step 703 Frame the target and determine the template frame and template image
  • the first frame is used as the template frame, and the tracking frame containing the tracking target is selected, and the image in the frame is used as the template image;
  • Step 704 Obtain the size and color autocorrelation histogram of the template image
  • the size of the template image is the size of the tracking frame
  • the color autocorrelation histogram is the color autocorrelation histogram of the template image.
  • the template image may be a preset image area containing a tracking target in a preset frame;
  • Step 705 Determine the target image in the tth frame based on the tracking algorithm
  • Step 706 Obtain the size of the target image, the color autocorrelation histogram of each image block and the color autocorrelation histogram of each background block;
  • the size of the target image is the size of the tracking box in the current frame.
  • Step 707 Calculating the first similarity, second similarity, and change degree values
  • the first similarity is the similarity between the target image and the template image
  • the second similarity is the similarity between the target image and the background image
  • the change degree value is the change degree value between the target image size and the template image size.
  • Step 708 Calculate the occlusion judgment value
  • Step 709 Determine whether the target is occluded; if yes, execute steps 711 and 712; if not, execute step 710;
  • Step 710 use the tth frame as a template frame
  • Step 712 Calculate the intersection area of the intersecting part between the target image and the template image and the area of the template image;
  • Step 713 Determine the target occlusion degree value.
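  • To tie the FIG. 7 flow together, a hedged end-to-end sketch is given below. It reuses the helper sketches introduced earlier (quantize_hue, color_autocorrelation_histogram, first_similarity, split_blocks_and_backgrounds, second_similarity, change_degree, occlusion_value, is_occluded, occlusion_degree); the track function stands in for any single-target tracker such as SiamRPN or DSST and is not defined here, and the first preset threshold value is an assumption.

```python
def crop(frame, box):
    """Helper (assumption): crop the tracking-frame region (x, y, w, h) from a frame."""
    x, y, w, h = box
    return frame[y:y + h, x:x + w]

def run_occlusion_judgement(frames, init_box, track, first_threshold=0.5):
    """Hedged sketch of the per-frame loop of FIG. 7 (steps 701-713)."""
    results = []
    template_frame, template_box = None, None
    hist = lambda img: color_autocorrelation_histogram(quantize_hue(img))
    for t, frame in enumerate(frames):
        if t == 0:                                            # steps 702-703
            template_frame, template_box = frame, init_box
            results.append((init_box, False, 0.0))
            continue
        box = track(frame, template_box)                      # step 705
        s1 = first_similarity(hist(crop(frame, box)),         # step 707
                              hist(crop(template_frame, template_box)))
        blocks, backgrounds = split_blocks_and_backgrounds(frame, box)
        s2 = second_similarity(blocks, backgrounds, hist, first_similarity)
        cd = change_degree(box[2], box[3], template_box[2], template_box[3])
        value = occlusion_value(s1, s2, cd)                   # step 708
        occluded = is_occluded(value, first_threshold)        # step 709
        if occluded:                                          # steps 712-713
            degree = occlusion_degree(template_box, box)
        else:                                                 # step 710
            template_frame, template_box = frame, box
            degree = 0.0
        results.append((box, occluded, degree))
    return results
```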
  • by setting an occlusion determination function that combines the similarity function between the template image and the target image, the similarity function between the target image and the background image, and the change degree function between the size of the target image and the size of the template image, the accuracy and robustness of the occlusion determination method are improved;
  • the degree of occlusion is calculated from the area of the intersecting part, which can more accurately describe the occlusion of the tracking target;
  • the first similarity and the second similarity are determined based on the color autocorrelation histogram of the image, which is beneficial to obtaining a more accurate image similarity and further improves the accuracy of the tracking target occlusion determination method.
  • Fig. 8 is a schematic diagram of the composition and structure of a tracking target occlusion determination device in an embodiment of the present disclosure, showing a tracking target occlusion determination device 80, the tracking target occlusion determination device 80 specifically includes:
  • the processing module 801 is configured to determine a template frame from the video, and the template frame includes a template image of the tracking target;
  • An acquiring module 802 configured to acquire first image information of the template image
  • the processing module 801 is further configured to track the tracking target in the current frame of the video, and determine a target image
  • the obtaining module 802 is further configured to obtain second image information of the target image
  • the processing module 801 is further configured to determine an occlusion determination value corresponding to the current frame based on the first image information, the second image information, and an occlusion determination function; wherein the occlusion determination function includes: the target image A first similarity function with the template image, a second similarity function between the target image and the background image, a change degree function between the size of the target image and the size of the template image;
  • the processing module 801 is further configured to determine that the tracking target is blocked when the occlusion determination value is greater than or equal to a first preset threshold.
  • the first image information includes: the color autocorrelation histogram of the template image; the second image information includes: the color autocorrelation histogram of the target image; wherein the color autocorrelation histogram includes: under a preset distance threshold, the ratio of the number of pixels of each color level in the entire image to the total number of pixels in the entire image.
  • the processing module 801 is configured to substitute the first image information and the second image information into the first similarity function to obtain the first similarity; substitute the second image information into the second similarity function to obtain the second similarity; substitute the target image size and the template image size into the change degree function to determine the change degree value; take the difference between the first similarity and the second similarity to obtain a similarity difference; if the similarity difference is greater than or equal to the second preset threshold, use the similarity difference as the occlusion determination value; and if the similarity difference is less than the second preset threshold, use the product of the similarity difference and the change degree value as the occlusion determination value.
  • the first similarity function is:
  • when the preset distance threshold in the target image is d, the ratio of the number of pixels with color level i to the total number of pixels in the target image;
  • when the preset distance threshold in the template image is d, the ratio of the number of pixels with color level i to the total number of pixels in the template image;
  • D is a set of preset distance thresholds;
  • q is the number of the color grades.
  • the first image information further includes: template image size
  • the second image information further includes: target image size
  • the processing module 801 is configured to calculate, based on the change degree function, the size change rate of the target image size compared with the template image size; if the size change rate is greater than or equal to a third preset threshold, determine that the change degree value is 1; and if the size change rate is less than the third preset threshold, determine that the change degree value is 0.
  • the second image information further includes: a color autocorrelation histogram of at least one image block in the target image, a color autocorrelation histogram of a background block of the image block;
  • the second similarity function is:
  • when the preset distance threshold in the jth image block is d, the ratio of the number of pixels with color level i to the total number of pixels in the jth image block; when the preset distance threshold in the jth background block is d, the ratio of the number of pixels with color level i to the total number of pixels in the jth background block; D is the set of preset distance thresholds; q is the number of color levels; u is the number of image blocks.
  • the processing module 801 is further configured to longitudinally divide the target image to obtain a 1st image block and a 2nd image block, wherein the 1st image block is located in the upper part of the 2nd image block; transversely divide the target image to obtain a 3rd image block and a 4th image block, wherein the 3rd image block is located in the left part of the 4th image block; wherein the at least one image block includes the 1st image block, the 2nd image block, the 3rd image block and the 4th image block; in the current frame, a region above the 1st image block that is adjacent to the side of the 1st image block and has the same size is selected as the 1st background block; a region below the 2nd image block that is adjacent to the side of the 2nd image block and has the same size is selected as the 2nd background block; a region to the left of the 3rd image block that is adjacent to the side of the 3rd image block and has the same size is selected as the 3rd background block; and a region to the right of the 4th image block that is adjacent to the side of the 4th image block and has the same size is selected as the 4th background block.
  • the processing module 801 is further configured to determine, based on the position information of the target image and the template image, the intersection area of the target image and the template image and the area of the template image; calculate the ratio of the intersection area to the area of the template image; and determine a target occlusion degree value based on the ratio.
  • the processing module 801 is further configured to use the current frame as a new template frame when the occlusion determination function value of the current frame is smaller than the first preset threshold; and, based on the new template frame, determine the occlusion determination function value of the next frame.
  • the embodiment of the present disclosure also provides another tracking target occlusion determination device,
  • FIG. 9 is a schematic diagram of the composition and structure of a tracking target occlusion determination device in an embodiment of the present disclosure.
  • the device 90 includes: a processor 901 and a memory 902 configured to store a computer program that can run on the processor; wherein the processor 901 is configured to run the computer program to execute the steps of the methods in the foregoing embodiments.
  • various components in the tracking target occlusion determination device are coupled together through a bus system 903.
  • the bus system 903 is used to realize connection and communication between these components.
  • the bus system 903 also includes a power bus, a control bus and a status signal bus.
  • the various buses are labeled bus system 903 in FIG. 9 for clarity of illustration.
  • the above-mentioned processor may be at least one of an application specific integrated circuit (ASIC), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a controller, a microcontroller, and a microprocessor.
  • the memory can be a volatile memory, such as a random-access memory (RAM); or a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); or a combination of the above types of memory, and it provides instructions and data to the processor.
  • an embodiment of the present disclosure further provides a computer-readable storage medium, such as a memory including a computer program, and the computer program can be executed by a processor of the tracking target occlusion determination device to complete the steps of the foregoing method.
  • although the terms first, second, third, etc. may be used in the present disclosure to describe various information, the information should not be limited to these terms; these terms are only used to distinguish information of the same type from one another and are not necessarily used to describe a specific order or sequence.
  • first information may also be called second information, and similarly, second information may also be called first information.
  • the units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed to multiple network units; Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist as a single unit, or two or more units may be integrated into one unit; the above-mentioned integrated unit can be realized in the form of hardware or in the form of hardware plus a software functional unit.
  • the present disclosure provides a tracking target occlusion determination method, apparatus, device, and storage medium, including: acquiring first image information of a template image and second image information of a target image, respectively; determining an occlusion determination value based on the first image information, the second image information and an occlusion determination function, wherein the occlusion determination function includes: a first similarity function between the target image and the template image, a second similarity function between the target image and a background image, and a change degree function between the size of the target image and the size of the template image; and determining the occlusion situation based on the occlusion determination value.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure disclose a tracking target occlusion determination method and apparatus, a device and a storage medium. The method comprises: acquiring first and second image information of a template image and a target image, respectively; determining an occlusion determination value on the basis of the first and second image information and an occlusion determination function, the occlusion determination function comprising a first similarity function between the target image and the template image, a second similarity function between the target image and a background image, and a change degree function between the size of the target image and the size of the template image; and determining an occlusion situation on the basis of the occlusion determination value.

Description

A tracking target occlusion determination method, apparatus, device and storage medium

Cross-Reference to Related Applications

This application is based on, and claims priority to, the Chinese patent application with application number 202111274073.6, filed on October 29, 2021 and entitled "Tracking target occlusion determination method, apparatus, device and storage medium", the entire contents of which are incorporated herein by reference.

Technical Field

The present disclosure relates to the field of computer vision, and in particular to a tracking target occlusion determination method, apparatus, device and storage medium.

Background

Target tracking technology is an important technology in the field of computer vision, and it has been widely used in scenarios such as smart transportation, smart security, and military weapons. In target tracking technology, how to achieve accurate target tracking when the target is occluded is extremely important, and accurately judging whether the target is occluded is the basis of target tracking technology. When it is determined that the target is occluded, the size of the tracking frame needs to be adjusted in time. Therefore, the accuracy of the tracking target occlusion determination result affects the adjustment efficiency of the tracking frame, and in turn the tracking effect. However, the existing tracking target occlusion determination methods have problems such as low accuracy and poor robustness.

Summary

Embodiments of the present disclosure expect to provide a tracking target occlusion determination method, apparatus, device, and storage medium, which can improve the accuracy and robustness of the occlusion determination method.

The technical solutions of the embodiments of the present disclosure are implemented as follows:

In a first aspect, an embodiment of the present disclosure provides a tracking target occlusion determination method, including:

determining a template frame from a video, the template frame including a template image of a tracking target; acquiring first image information of the template image; tracking the tracking target in a current frame of the video, and determining a target image; acquiring second image information of the target image; determining an occlusion determination value corresponding to the current frame based on the first image information, the second image information, and an occlusion determination function, wherein the occlusion determination function includes: a first similarity function between the target image and the template image, a second similarity function between the target image and a background image, and a change degree function between the size of the target image and the size of the template image; and when the occlusion determination value is greater than or equal to a first preset threshold, determining that the tracking target is occluded.

In some embodiments of the present disclosure, the first image information includes: a color autocorrelation histogram of the template image; the second image information includes: a color autocorrelation histogram of the target image; wherein the color autocorrelation histogram includes: under a preset distance threshold, the ratio of the number of pixels of each color level in the entire image to the total number of pixels in the entire image.

In some embodiments of the present disclosure, determining the occlusion determination value corresponding to the current frame based on the first image information, the second image information and the occlusion determination function includes: substituting the first image information and the second image information into the first similarity function to obtain a first similarity; substituting the second image information into the second similarity function to obtain a second similarity; substituting the target image size and the template image size into the change degree function to determine a change degree value; taking the difference between the first similarity and the second similarity to obtain a similarity difference; if the similarity difference is greater than or equal to a second preset threshold, using the similarity difference as the occlusion determination value; and if the similarity difference is smaller than the second preset threshold, using the product of the similarity difference and the change degree value as the occlusion determination value.
In some embodiments of the present disclosure, the first similarity function is a function of the color autocorrelation histograms of the target image and of the template image (the expression itself is reproduced as image PCTCN2022128719-appb-000001 in the original publication), in which H_d^t(i) denotes, for preset distance threshold d, the proportion of pixels of color level i in the whole target image relative to the total number of pixels of the target image; H_d^T(i) denotes the corresponding proportion for the template image; D is the set of preset distance thresholds; and q is the number of color levels.
本公开的一些实施例中,所述第一图像信息还包括:模板图像尺寸,所述第二图像信息还包括:目标图像尺寸;所述将所述目标图像尺寸与所述模板图像尺寸代入所述变化程度函数,确定变化程度值,包括:基于所述变化程度函数计算所述目标图像尺寸与所述模板图像尺寸相比的尺寸变化率;若所述尺寸变化率大于或等于第三预设阈值,确定所述变化程度值为1;若所述尺寸变化率小于所述第三预设阈值,确定所述变化程度值为0。In some embodiments of the present disclosure, the first image information further includes: a template image size, and the second image information further includes: a target image size; the substituting the target image size and the template image size into the The change degree function, determining the change degree value includes: calculating the size change rate of the target image size compared with the template image size based on the change degree function; if the size change rate is greater than or equal to a third preset Threshold, determine the value of the change degree as 1; if the size change rate is less than the third preset threshold, determine the value of the change degree as 0.
本公开的一些实施例中,所述第二图像信息还包括:所述目标图像中至少一个图像块的颜色自相关直方图,所述图像块的背景块的颜色自相关直方图;In some embodiments of the present disclosure, the second image information further includes: a color autocorrelation histogram of at least one image block in the target image, a color autocorrelation histogram of a background block of the image block;
The second similarity function is a function of the color autocorrelation histograms of the image blocks of the target image and of their background blocks (the expression itself is reproduced as image PCTCN2022128719-appb-000004 in the original publication), in which H_d^{oj}(i) denotes, for preset distance threshold d, the proportion of pixels of color level i in the j-th image block relative to the total number of pixels of that block; H_d^{bj}(i) denotes the corresponding proportion for the j-th background block; D is the set of preset distance thresholds; q is the number of color levels; and u is the number of image blocks.
本公开的一些实施例中,所述方法还包括:对所述目标图像进行纵向划分,得到第1图像块和第2图像块;其中,所述第1图像块位于所述第2图像块的上部;对所述目标图像进行横向划分,得到第3图像块和第4图像块;其中,所述第3图像块位于所述第4图像块的左部;其中,所述至少一个图像块包括第1图像块、第2图像块、第3图像块和第4图像块;在所述当前帧中,选取位于所述第1图像块上方与所述第1图像块边相邻, 且尺寸相同的区域作为第1背景块;在所述当前帧中,选取位于所述第2图像块下方与所述第2图像块边相邻,且尺寸相同的区域作为第2背景块;在所述当前帧中,选取位于所述第3图像块左方与所述第3图像块边相邻,且尺寸相同的区域作为第3背景块;在所述当前帧中,选取位于所述第4图像块右方与所述第4图像块边相邻,且尺寸相同的区域作为第4背景块。In some embodiments of the present disclosure, the method further includes: longitudinally dividing the target image to obtain a first image block and a second image block; wherein, the first image block is located at the edge of the second image block Upper part; the target image is horizontally divided to obtain a third image block and a fourth image block; wherein the third image block is located on the left of the fourth image block; wherein the at least one image block includes The 1st image block, the 2nd image block, the 3rd image block and the 4th image block; in the current frame, select the side above the 1st image block adjacent to the side of the 1st image block, and have the same size In the current frame, select an area located below the second image block adjacent to the side of the second image block and having the same size as the second background block; in the current frame In the frame, select an area located on the left side of the third image block adjacent to the side of the third image block and has the same size as the third background block; in the current frame, select an area located on the fourth image block The area on the right adjacent to the side of the fourth image block with the same size is used as the fourth background block.
本公开的一些实施例中,确定所述跟踪目标被遮挡之后,所述方法还包括:基于所述目标图像与所述模板图像的位置信息,确定所述目标图像与所述模板图像相交部分的相交面积,以及所述模板图像的面积;计算所述相交面积与所述模板图像的面积的比值;基于所述比值确定目标遮挡程度值。In some embodiments of the present disclosure, after it is determined that the tracking target is blocked, the method further includes: determining the intersection of the target image and the template image based on the position information of the target image and the template image The intersecting area, and the area of the template image; calculating the ratio of the intersecting area to the area of the template image; determining a target occlusion degree value based on the ratio.
本公开的一些实施例中,所述方法还包括:所述当前帧的遮挡判定函数值小于所述第一预设阈值时,将所述当前帧作为新的模板帧;基于所述新的模板帧,确定下一帧的遮挡判定函数值。In some embodiments of the present disclosure, the method further includes: when the occlusion determination function value of the current frame is smaller than the first preset threshold, using the current frame as a new template frame; based on the new template frame, determine the value of the occlusion decision function for the next frame.
第二方面,本公开实施例还提供一种跟踪目标遮挡判定装置,包括:处理模块,配置为从视频中确定模板帧,以及所述模板帧中包含跟踪目标的模板图像;获取模块,配置为获取所述模板图像的第一图像信息;所述处理模块,还配置为对所述视频中当前帧的所述跟踪目标进行跟踪,确定目标图像;所述获取模块,还配置为获取所述目标图像的第二图像信息;所述处理模块,还配置为基于所述第一图像信息、第二图像信息和遮挡判定函数,确定所述当前帧对应的遮挡判定值;其中,所述遮挡判定函数包括:所述目标图像与所述模板图像的第一相似度函数,所述目标图像与背景图像的第二相似度函数,所述目标图像尺寸与所述模板图像尺寸之间的变化程度函数;所述处理模块,还配置为所述遮挡判定值大于或者等于第一预设阈值时,确定所述跟踪目标被遮挡。In a second aspect, an embodiment of the present disclosure further provides a tracking target occlusion determination device, including: a processing module configured to determine a template frame from a video, and the template frame includes a template image of the tracking target; an acquisition module configured to Acquire the first image information of the template image; the processing module is also configured to track the tracking target in the current frame of the video, and determine the target image; the acquisition module is also configured to acquire the target The second image information of the image; the processing module is further configured to determine the occlusion determination value corresponding to the current frame based on the first image information, the second image information and the occlusion determination function; wherein, the occlusion determination function Including: a first similarity function between the target image and the template image, a second similarity function between the target image and the background image, and a change degree function between the size of the target image and the size of the template image; The processing module is further configured to determine that the tracking target is blocked when the occlusion determination value is greater than or equal to a first preset threshold.
第三方面,本公开实施例提供一种跟踪目标遮挡判定设备,包括:处 理器和配置为存储能够在处理器上运行的计算机程序的存储器,其中,所述处理器配置为运行所述计算机程序时,执行如第一方面所述的跟踪目标遮挡判定方法的步骤。In a third aspect, an embodiment of the present disclosure provides a tracking target occlusion determination device, including: a processor and a memory configured to store a computer program that can run on the processor, wherein the processor is configured to run the computer program , execute the steps of the method for determining the occlusion of the tracking target as described in the first aspect.
第四方面,本公开实施例提供一种计算机存储介质,其上存储有计算机程序,其中,该计算机程序被处理器执行时实现如第一方面所述的跟踪目标遮挡判定的步骤。In a fourth aspect, an embodiment of the present disclosure provides a computer storage medium on which a computer program is stored, wherein when the computer program is executed by a processor, the step of determining the occlusion of the tracking target as described in the first aspect is implemented.
本公开实施例的技术方案,通过设置融合了模板图像与目标图像的相似度函数、目标图像与背景图像的相似度函数及目标图像尺寸与所述模板图像尺寸之间的变化程度函数的遮挡判定函数,在进行遮挡情况判定时,将用于判断遮挡情况的多个影响因素结合起来,即将目标图像与模板图像的第一相似度和变化程度值,以及目标图像与背景图像的第二相似度相结合,来确定判断遮挡情况,提高遮挡判定方法的准确性和鲁棒性。The technical solution of the embodiment of the present disclosure, by setting the occlusion judgment that combines the similarity function between the template image and the target image, the similarity function between the target image and the background image, and the change degree function between the size of the target image and the size of the template image The function, when judging the occlusion situation, combines multiple influencing factors for judging the occlusion situation, that is, the first similarity and change degree value between the target image and the template image, and the second similarity between the target image and the background image Combined to determine and judge the occlusion situation, improve the accuracy and robustness of the occlusion judgment method.
Description of Drawings
FIG. 1 is a first schematic flowchart of a tracking target occlusion determination method in an embodiment of the present disclosure;
FIG. 2 is a first schematic diagram of a target image in a current frame in an embodiment of the present disclosure;
FIG. 3 is a second schematic diagram of a target image in a current frame in an embodiment of the present disclosure;
FIG. 4 is a schematic flowchart of a method for determining an occlusion determination value corresponding to a current frame in an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of the image blocks and background blocks of a target image in an embodiment of the present disclosure;
FIG. 6 is a second schematic flowchart of a tracking target occlusion determination method in an embodiment of the present disclosure;
FIG. 7 is a third schematic flowchart of a tracking target occlusion determination method in an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of the composition and structure of a tracking target occlusion determination apparatus in an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of the composition and structure of a tracking target occlusion determination device in an embodiment of the present disclosure.
Detailed Description
为了能够更加详尽地了解本公开实施例的特点与技术内容,下面结合附图对本公开实施例的实现进行详细阐述,所附附图仅供参考说明之用,并非用来限定本公开实施例。In order to understand the characteristics and technical content of the embodiments of the present disclosure in more detail, the implementation of the embodiments of the present disclosure will be described in detail below in conjunction with the accompanying drawings. The attached drawings are only for reference and description, and are not intended to limit the embodiments of the present disclosure.
FIG. 1 is a first schematic flowchart of the tracking target occlusion determination method in an embodiment of the present disclosure. As shown in FIG. 1, the tracking target occlusion determination method may specifically include:
步骤101:从视频中确定模板帧,以及所述模板帧中包含跟踪目标的模板图像;Step 101: determining a template frame from the video, and the template frame includes a template image of the tracking target;
这里,视频是视频采集装置采集到的视频,跟踪目标为视频中需要进行跟踪的目标对象,如视频中某一辆车或某一个人等。模板帧为视频中具有模板作用的帧,用于与视频的当前帧进行对比,基于对比结果判断当前帧中的跟踪目标是否被遮挡。示例性的,在一些实施例中,模板帧可以为视频的初始帧、还可以为视频中跟踪目标未被遮挡的图像帧。Here, the video is the video captured by the video capture device, and the tracking target is the target object to be tracked in the video, such as a certain vehicle or a certain person in the video. The template frame is a frame with a template function in the video, which is used for comparison with the current frame of the video, and based on the comparison result, it is judged whether the tracking target in the current frame is blocked. Exemplarily, in some embodiments, the template frame may be an initial frame of the video, and may also be an image frame in the video where the tracking target is not occluded.
示例性的,在实际应用中,将模板帧中跟踪框内的图像作为模板图像。示例性的,在一些实施例中,所述方法还包括:将视频的第一帧作为模板帧;框选出第一帧中包含跟踪目标的区域作为模板图像。Exemplarily, in an actual application, the image within the tracking frame in the template frame is used as the template image. Exemplarily, in some embodiments, the method further includes: using the first frame of the video as a template frame; selecting a region in the first frame that contains the tracking target as a template image.
步骤102:获取所述模板图像的第一图像信息;Step 102: Acquiring first image information of the template image;
第一图像信息包括模板图像相关信息。示例性的,在一些实施例中,所述第一图像信息包括:所述模板图像的颜色自相关直方图。其中,所述颜色自相关直方图包括:预设距离阈值下,整个图像中各个颜色等级的像素个数占所述整个图像像素总数的比例。The first image information includes template image related information. Exemplarily, in some embodiments, the first image information includes: a color autocorrelation histogram of the template image. Wherein, the color autocorrelation histogram includes: under a preset distance threshold, the ratio of the number of pixels of each color level in the entire image to the total number of pixels in the entire image.
需要说明的是,颜色自相关直方图不仅包含了整个图像中不同等级颜色的数量分布信息,还包含相同等级颜色的空间分布信息,将其用于相似度计算有利于获得更准确的图像相似度。It should be noted that the color autocorrelation histogram not only contains the quantity distribution information of different levels of color in the entire image, but also contains the spatial distribution information of the same level of color, and it is beneficial to obtain more accurate image similarity by using it for similarity calculation .
步骤103:对所述视频中当前帧的所述跟踪目标进行跟踪,确定目标图 像;Step 103: track the tracking target of the current frame in the video, and determine the target image;
Here, the current frame is an image frame of the video in which the tracking target needs to be tracked and the occlusion of the tracking target needs to be determined. Tracking the tracking target in the current frame of the video can be implemented by a tracking algorithm; exemplarily, the tracking algorithm includes, but is not limited to, single-target tracking algorithms such as the Siamese Region Proposal Network (SiamRPN) target tracking algorithm and the Discriminative Scale Space Tracker (DSST) target tracking algorithm.
这里,目标图像为当前帧中包含跟踪目标的图像区域。示例性的,在实际应用中,可以将当前帧中跟踪框内的图像作为目标图像。示例性的,图2为本公开实施例中当前帧中目标图像的第一示意图,图3为本公开实施例中当前帧中目标图像的第二示意图。如图2和图3所示,方框内的车辆图像即为当前帧中的目标图像。Here, the target image is the image area containing the tracking target in the current frame. Exemplarily, in practical applications, the image within the tracking frame in the current frame may be used as the target image. Exemplarily, FIG. 2 is a first schematic diagram of a target image in a current frame in an embodiment of the present disclosure, and FIG. 3 is a second schematic diagram of a target image in a current frame in an embodiment of the present disclosure. As shown in Figure 2 and Figure 3, the vehicle image inside the box is the target image in the current frame.
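As an illustrative aside (not part of the disclosed method), the target image of the current frame can be produced by any off-the-shelf scale-adaptive single-target tracker. The sketch below assumes OpenCV's tracking API with the CSRT tracker is available; the tracker choice and function names are assumptions, not the algorithms named above.

```python
import cv2

def track_target(video_path, init_bbox):
    """Yield (frame, bbox) for each frame of the video; bbox is (x, y, w, h)."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return
    # Any scale-adaptive single-object tracker could be substituted here.
    tracker = cv2.TrackerCSRT_create()
    tracker.init(frame, init_bbox)
    yield frame, init_bbox
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, bbox = tracker.update(frame)
        if found:
            yield frame, tuple(int(v) for v in bbox)
    cap.release()
```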
步骤104:获取所述目标图像的第二图像信息;Step 104: acquiring second image information of the target image;
第二图像信息包括目标图像相关信息。示例性的,在一些实施例中,所述第二图像信息包括:所述目标图像的颜色自相关直方图;其中,所述颜色自相关直方图包括:预设距离阈值下,整个图像中各个颜色等级的像素个数占所述整个图像像素总数的比例。The second image information includes target image related information. Exemplarily, in some embodiments, the second image information includes: the color autocorrelation histogram of the target image; wherein, the color autocorrelation histogram includes: under a preset distance threshold, each The ratio of the number of pixels of the color level to the total number of pixels of the entire image.
步骤105:基于所述第一图像信息、第二图像信息和遮挡判定函数,确定所述当前帧对应的遮挡判定值;Step 105: Determine an occlusion determination value corresponding to the current frame based on the first image information, the second image information, and the occlusion determination function;
其中,所述遮挡判定函数包括:所述目标图像与所述模板图像的第一相似度函数,所述目标图像与背景图像的第二相似度函数,所述目标图像尺寸与所述模板图像尺寸之间的变化程度函数。Wherein, the occlusion determination function includes: a first similarity function between the target image and the template image, a second similarity function between the target image and the background image, a size of the target image and a size of the template image The degree of change function between.
这里,由于模板图像为视频中跟踪目标未被遮挡时的图像,通过目标图像与模板图像进行对比并确定两者之间的相似度(即第一相似度),若两者之间的相似度较大,表明目标图像与模板图像越接近,即目标图像中跟 踪目标越接近完整的跟踪目标。因此,可以通过目标图像与模板图像的相似度来确定遮挡情况。将该第一相似度融入遮挡判定函数,可以提高遮挡判定值的准确性,进而提高遮挡判定结果的准确性。Here, since the template image is the image when the tracking target in the video is not occluded, the target image is compared with the template image to determine the similarity between the two (namely the first similarity), if the similarity between the two The larger the value, the closer the target image is to the template image, that is, the closer the tracking target in the target image is to the complete tracking target. Therefore, the occlusion situation can be determined by the similarity between the target image and the template image. Incorporating the first similarity into the occlusion judgment function can improve the accuracy of the occlusion judgment value, and further improve the accuracy of the occlusion judgment result.
这里,背景图像为当前帧中,目标图像周围的部分图像区域。若背景图像与目标图像的相似度(即第二相似度)越大,则表明背景图像与目标图像越相似,表明跟踪目标被遮挡的概率越大。因此,可以对比目标图像与背景图像的相似度来确定遮挡情况。将该第二相似度融入遮挡判定函数,可以提高遮挡判定值的准确性,进而提高遮挡判定结果的准确性。Here, the background image is a partial image area around the target image in the current frame. If the similarity between the background image and the target image (that is, the second similarity) is greater, it indicates that the background image is more similar to the target image, indicating that the probability of the tracking target being blocked is greater. Therefore, the occlusion situation can be determined by comparing the similarity between the target image and the background image. Integrating the second similarity into the occlusion determination function can improve the accuracy of the occlusion determination value, and further improve the accuracy of the occlusion determination result.
这里,将遮挡判定函数中引入该变化程度函数的原因主要有两点:一方面,当前较为先进的跟踪算法可以解决跟踪目标部分遮挡的问题,故并非对所有出现遮挡的情况都需要做出处理。如图2所示的目标图像的第一示意图中,左边的汽车为跟踪目标,虽然存在部分遮挡,但这种情况无需对该遮挡情况进行处理,即可以判定为未被遮挡;另一方面,当前优秀的目标跟踪算法都具备较好的尺度自适应能力,即目标出现遮挡时,跟踪算法会及时调整跟踪框的尺寸,如图3所示的目标图像的第二示意图中,左边的汽车为跟踪目标,当跟踪目标被遮挡时,跟踪框尺寸(目标图像尺寸)发生较大变化。因此,跟踪框的尺度变化情况与目标遮挡情况具有相关性,通过将目标图像尺寸与模板图像尺寸之间的变化程度函数融入遮挡判定函数,可以提高遮挡判定方法的鲁棒性。Here, there are two main reasons for introducing the change degree function into the occlusion determination function: on the one hand, the current more advanced tracking algorithm can solve the problem of partial occlusion of the tracking target, so not all occlusion situations need to be dealt with . In the first schematic diagram of the target image shown in Figure 2, the car on the left is the tracking target. Although there is a partial occlusion, it can be determined that it is not occluded without processing the occlusion situation; on the other hand, The current excellent target tracking algorithms all have good scale adaptive capabilities, that is, when the target is occluded, the tracking algorithm will adjust the size of the tracking frame in time. In the second schematic diagram of the target image shown in Figure 3, the car on the left is When tracking the target, when the tracking target is occluded, the size of the tracking frame (target image size) changes greatly. Therefore, the scale change of the tracking frame is correlated with the target occlusion, and the robustness of the occlusion determination method can be improved by incorporating the change degree function between the target image size and the template image size into the occlusion determination function.
综上,通过由目标图像与模板图像的第一相似度函数、目标图像与背景图像的第二相似度函数及目标图像尺寸与模板图像尺寸之间的变化程度函数共同确定遮挡判定函数,在进行遮挡情况判定时,将用于判断遮挡情况的多个影响因素结合起来,即将目标图像与模板图像的第一相似度和变化程度值,以及目标图像与背景图像的第二相似度相结合,来确定判断遮挡情况,提高遮挡判定方法的准确性和鲁棒性。In summary, the occlusion judgment function is jointly determined by the first similarity function between the target image and the template image, the second similarity function between the target image and the background image, and the change degree function between the size of the target image and the size of the template image. When judging the occlusion situation, multiple influencing factors for judging the occlusion situation are combined, that is, the first similarity and change degree value between the target image and the template image, and the second similarity between the target image and the background image are combined to obtain Determine and judge the occlusion situation, and improve the accuracy and robustness of the occlusion judgment method.
步骤106:所述遮挡判定值大于或者等于第一预设阈值时,确定所述跟踪目标被遮挡。Step 106: When the occlusion determination value is greater than or equal to a first preset threshold, determine that the tracking target is occluded.
这里,第一预设阈值可以为常量,可以根据实际需求进行确定。Here, the first preset threshold may be constant, and may be determined according to actual requirements.
示例性的,在一些实施例中,所述方法还包括:所述当前帧的遮挡判定函数值小于所述第一预设阈值时,将所述当前帧作为新的模板帧;基于所述新的模板帧,确定下一帧的遮挡判定函数值。Exemplarily, in some embodiments, the method further includes: when the occlusion determination function value of the current frame is smaller than the first preset threshold, using the current frame as a new template frame; based on the new template frame to determine the value of the occlusion decision function for the next frame.
这里,当前帧的遮挡判定函数值小于所述第一预设阈值时,确定该跟踪目标没有被遮挡。将目标没有被遮挡的当前帧作为模板帧,进行下一帧的遮挡判定函数值,进而实现整个视频中跟踪目标的遮挡判定。Here, when the occlusion determination function value of the current frame is smaller than the first preset threshold, it is determined that the tracking target is not occluded. The current frame where the target is not occluded is used as the template frame, and the occlusion judgment function value of the next frame is carried out, so as to realize the occlusion judgment of the tracking target in the whole video.
这里,步骤101至步骤106的执行主体可以为跟踪目标遮挡判定设备的处理器。示例性的,本公开中的跟踪目标遮挡判定方法还可以耦合于具体的目标跟踪算法中,具有较好的拓展性。Here, the subject of execution of steps 101 to 106 may be a processor of the tracking target occlusion determination device. Exemplarily, the tracking target occlusion determination method in the present disclosure can also be coupled to a specific target tracking algorithm, which has good scalability.
本公开的技术方案,通过设置融合了模板图像与目标图像的相似度函数、目标图像与背景图像的相似度函数及目标图像尺寸与所述模板图像尺寸之间的变化程度函数的遮挡判定函数,在进行遮挡情况判定时,将可以用于判断遮挡情况的因素结合起来,即将第一相似度、第二相似度和变化程度值结合起来,由结合结果来确定判断遮挡情况,提高遮挡判定方法的准确性和鲁棒性。In the technical solution of the present disclosure, by setting an occlusion judgment function that combines the similarity function between the template image and the target image, the similarity function between the target image and the background image, and the change degree function between the size of the target image and the size of the template image, When judging the occlusion situation, the factors that can be used to judge the occlusion situation are combined, that is, the first similarity, the second similarity and the change degree value are combined, and the occlusion situation is determined and judged by the combined result, which improves the occlusion judgment method. accuracy and robustness.
为了对颜色自相关直方图进行说明,以目标图像的颜色自相关直方图的确定过程为例,进行进一步的举例说明。In order to illustrate the color autocorrelation histogram, the process of determining the color autocorrelation histogram of the target image is taken as an example for further illustration.
示例性的,在实际应用中,确定目标图像的颜色自相关直方图的方法包括:Exemplarily, in practical applications, the method for determining the color autocorrelation histogram of the target image includes:
步骤201:将目标图像由RGB颜色空间转换为HSV颜色空间,提取HSV颜色空间的H(色调)分量;Step 201: convert the target image into HSV color space by RGB color space, and extract the H (hue) component of HSV color space;
Step 202: Non-uniformly quantize the H color component of the target image into q intervals, where each interval corresponds to one color level (the quantization formula is reproduced as image PCTCN2022128719-appb-000007 in the original publication). Exemplarily, q takes the value 14.
The H (hue) component of the HSV color space represents color information and takes values in the range 0° to 360°, corresponding to the wavelengths of the various colors in the visible spectrum, so the H-component intervals corresponding to the different colors are non-uniformly distributed. The method first non-uniformly quantizes the H component of the target image into 7 intervals according to the seven colors red, orange, yellow, green, cyan, blue and purple, and then divides each quantization interval into two sub-intervals, so that the H component is finally non-uniformly quantized into 14 intervals. Exemplarily, if the H component of pixel i in the target image lies in 0°–23°, the color level of that pixel is 0; if the H component of pixel i lies in 330°–360°, the color level of that pixel is 1.
Step 203: Compute the color autocorrelation histogram of the target image in the t-th frame (the current frame) according to the following formula, and normalize the histogram:
H(r, d) = Num({(p_1, p_2) | p_1 ∈ I_r, p_2 ∈ I_r, r ∈ I_R, |p_1 − p_2| = d})
where p_1 = (x_1, y_1) and p_2 = (x_2, y_2) are pixels of the image I (the target image), I_R = (0, 1, …, 13) is the set of color levels determined in step 202, |p_1 − p_2| = max(|x_1 − x_2|, |y_1 − y_2|) is the distance between pixels p_1 and p_2, D is the set of preset distance values d, and Num(·) counts the number of pixel pairs satisfying the condition. For each preset distance value, a color autocorrelation histogram with 14 bins is obtained. Exemplarily, the preset distance value d may be 1, 3 or 5, i.e. d ∈ D = (1, 3, 5).
Using the same method as steps 201–203 above, the color autocorrelation histogram of the template image, as well as the color autocorrelation histograms of other images, can be obtained.
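The following is a minimal Python sketch of steps 201 to 203. The exact quantization boundaries and the per-distance normalization are only shown as images in the original publication, so the hue interval table and the normalization used here are assumptions for illustration; only the overall structure (non-uniform H quantization into 14 levels, pair counting at Chebyshev distances d ∈ D = (1, 3, 5), normalization) follows the described procedure.

```python
import cv2
import numpy as np

# Assumed non-uniform hue boundaries in degrees; the patent's exact table is not reproduced.
HUE_EDGES = [0, 23, 45, 68, 90, 113, 135, 158, 180, 203, 225, 248, 270, 330, 360]

def quantize_hue(image_bgr):
    """Steps 201-202: convert to HSV and map the H channel to q = 14 color levels."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hue_deg = hsv[:, :, 0].astype(np.float32) * 2.0   # OpenCV stores H as 0..179
    return np.digitize(hue_deg, HUE_EDGES[1:-1])      # levels 0..13

def color_autocorrelogram(image_bgr, distances=(1, 3, 5), q=14):
    """Step 203: for each distance d, the fraction of same-level pixel pairs at
    Chebyshev distance d, normalized per distance (one q-bin histogram per d)."""
    levels = quantize_hue(image_bgr)
    h, w = levels.shape
    hists = {}
    for d in distances:
        counts = np.zeros(q, dtype=np.float64)
        # Offsets on the square ring at Chebyshev distance d.
        offsets = [(dy, dx) for dy in range(-d, d + 1) for dx in range(-d, d + 1)
                   if max(abs(dy), abs(dx)) == d]
        for dy, dx in offsets:
            y0, y1 = max(0, -dy), min(h, h - dy)
            x0, x1 = max(0, -dx), min(w, w - dx)
            a = levels[y0:y1, x0:x1]
            b = levels[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
            same = a[a == b]                           # levels where both pixels match
            counts += np.bincount(same, minlength=q)
        total = counts.sum()
        hists[d] = counts / total if total > 0 else counts
    return hists
```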
在上述实施例的基础上,对步骤105基于所述第一图像信息、第二图像信息和遮挡判定函数,确定所述当前帧对应的遮挡判定值的方法进行进一步的举例说明。示例性的,图4为本公开实施例中当前帧对应的遮挡判定值确定方法的流程示意图。On the basis of the above-mentioned embodiments, the method of determining the occlusion determination value corresponding to the current frame in step 105 based on the first image information, the second image information and the occlusion determination function is further illustrated with an example. Exemplarily, FIG. 4 is a schematic flowchart of a method for determining an occlusion determination value corresponding to a current frame in an embodiment of the present disclosure.
示例性的,在一些实施例中,所述基于所述第一图像信息、第二图像信息和遮挡判定函数,确定所述当前帧对应的遮挡判定值,包括:Exemplarily, in some embodiments, the determining the occlusion determination value corresponding to the current frame based on the first image information, the second image information and the occlusion determination function includes:
步骤401:将所述第一图像信息和第二图像信息代入所述第一相似度函数,得到第一相似度;Step 401: Substituting the first image information and the second image information into the first similarity function to obtain a first similarity;
Exemplarily, in some embodiments, the first similarity function is a function of the color autocorrelation histograms of the target image and of the template image (the expression is reproduced as image PCTCN2022128719-appb-000010 in the original publication), in which H_d^t(i) denotes, for preset distance threshold d, the proportion of pixels of color level i in the whole target image relative to the total number of pixels of the target image; H_d^T(i) denotes the corresponding proportion for the template image; D is the set of preset distance thresholds; and q is the number of color levels.
步骤402:将所述第二图像信息代入所述第二相似度函数,得到第二相似度;Step 402: Substituting the second image information into the second similarity function to obtain a second similarity;
示例性的,在一些实施例中,所述第二图像信息还包括:所述目标图像中至少一个图像块的颜色自相关直方图,所述图像块的背景块的颜色自相关直方图;Exemplarily, in some embodiments, the second image information further includes: a color autocorrelation histogram of at least one image block in the target image, a color autocorrelation histogram of a background block of the image block;
The second similarity function is a function of the color autocorrelation histograms of the image blocks of the target image and of their background blocks (the expression is reproduced as image PCTCN2022128719-appb-000013 in the original publication), in which H_d^{oj}(i) denotes, for preset distance threshold d, the proportion of pixels of color level i in the j-th image block relative to the total number of pixels of that block; H_d^{bj}(i) denotes the corresponding proportion for the j-th background block; D is the set of preset distance thresholds; q is the number of color levels; and u is the number of image blocks. Exemplarily, the value of u may be 4. (A hedged code sketch of this block-based similarity is given after the block-division description below.)
这里,至少一个图像块为对目标图像进行分块,得到的多个图像块。Here, at least one image block is a plurality of image blocks obtained by dividing the target image into blocks.
示例性的,在一些实施例中,所述方法还包括:对所述目标图像进行纵向划分,得到第1图像块和第2图像块;其中,所述第1图像块位于所述第2图像块的上部;对所述目标图像进行横向划分,得到第3图像块和第4图像块;其中,所述第3图像块位于所述第4图像块的左部;其中,所述至少一个图像块包括第1图像块、第2图像块、第3图像块和第4图像块;Exemplarily, in some embodiments, the method further includes: longitudinally dividing the target image to obtain a first image block and a second image block; wherein, the first image block is located in the second image The upper part of the block; the target image is horizontally divided to obtain the third image block and the fourth image block; wherein the third image block is located at the left part of the fourth image block; wherein the at least one image The blocks include a first image block, a second image block, a third image block and a fourth image block;
在所述当前帧中,选取位于所述第1图像块上方与所述第1图像块边相邻,且尺寸相同的区域作为第1背景块;在所述当前帧中,选取位于所述第2图像块下方与所述第2图像块边相邻,且尺寸相同的区域作为第2背景块;在所述当前帧中,选取位于所述第3图像块左方与所述第3图像块边相邻,且尺寸相同的区域作为第3背景块;在所述当前帧中,选取位于所述第4图像块右方与所述第4图像块边相邻,且尺寸相同的区域作为第4背景块。In the current frame, select an area above the first image block that is adjacent to the side of the first image block and has the same size as the first background block; in the current frame, select an area located on the first image block The area below the 2 image block adjacent to the side of the 2nd image block and having the same size is used as the 2nd background block; in the current frame, select the area located on the left side of the 3rd image block and The area with adjacent sides and the same size is used as the third background block; in the current frame, select the area on the right side of the fourth image block that is adjacent to the side of the fourth image block and has the same size as the first background block 4 background blocks.
在对目标图像进行分块划分时,可以是进行不等分。如不等分可以是:划分目标图像左部的三分之二作为一个图像块;划分目标图像右部的三分之二作为一个图像块;划分目标图像上部的三分之二作为一个图像块,划分目标图像下部的三分之二作为一个图像块。下面以等分划分为例,进行详细说明。When dividing the target image into blocks, unequal division may be performed. For example, the unequal division can be: dividing the left two-thirds of the target image as an image block; dividing the right two-thirds of the target image as an image block; dividing the upper two-thirds of the target image as an image block , divide the lower two-thirds of the target image as an image block. The following takes equal division as an example to describe in detail.
图5为本公开实施例中目标图像的图像块及背景块的示意图。图5中,o1为目标图像的上半部分图像,相当于上述第1图像块;o2目标图像的下半部分图像,相当于上述第2图像块;o3为目标图像的左半部分图像,相 当于上述第3图像块;o4为目标图像的右半部分图像,相当于上述第4图像块。b1、b2、b3、b4分别相当于上述第1背景块、第2背景块、第3背景块、第4背景块。FIG. 5 is a schematic diagram of an image block and a background block of a target image in an embodiment of the present disclosure. In Fig. 5, o1 is the upper half image of the target image, which is equivalent to the above-mentioned first image block; o2 is the lower half image of the target image, which is equivalent to the above-mentioned second image block; o3 is the left half image of the target image, which is equivalent to In the above-mentioned third image block; o4 is the right half image of the target image, which is equivalent to the above-mentioned fourth image block. b1, b2, b3, and b4 correspond to the above-mentioned first background block, second background block, third background block, and fourth background block, respectively.
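For illustration, the block division described above and shown in FIG. 5, together with the block-based second similarity, can be sketched as follows. Bounding boxes are assumed to be (x, y, w, h) in pixel coordinates, the intersection-style similarity is the same assumption as above, and color_autocorrelogram and first_similarity refer to the earlier sketches.

```python
def crop(frame, box):
    """Crop an (x, y, w, h) region from a frame, clipped to the image bounds."""
    H, W = frame.shape[:2]
    x, y, w, h = box
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(W, x + w), min(H, y + h)
    return frame[y0:y1, x0:x1]

def split_blocks_and_backgrounds(bbox):
    """Return the four (image block, background block) box pairs of FIG. 5.

    o1/o2: upper and lower halves; o3/o4: left and right halves.
    b1..b4: equally sized regions adjacent to o1..o4 (above, below, left, right);
    background blocks near the frame border are clipped later by crop()."""
    x, y, w, h = bbox
    half_w, half_h = w // 2, h // 2
    o1 = (x, y, w, half_h)
    o2 = (x, y + half_h, w, h - half_h)
    o3 = (x, y, half_w, h)
    o4 = (x + half_w, y, w - half_w, h)
    b1 = (x, y - half_h, w, half_h)        # above o1
    b2 = (x, y + h, w, h - half_h)         # below o2
    b3 = (x - half_w, y, half_w, h)        # left of o3
    b4 = (x + w, y, w - half_w, h)         # right of o4
    return [(o1, b1), (o2, b2), (o3, b3), (o4, b4)]

def second_similarity(frame, bbox, distances=(1, 3, 5)):
    """Assumed second similarity: average, over the u = 4 block pairs, of the
    intersection-style similarity between each image block and its background block."""
    sims = []
    for block_box, bg_box in split_blocks_and_backgrounds(bbox):
        block_hists = color_autocorrelogram(crop(frame, block_box), distances)
        bg_hists = color_autocorrelogram(crop(frame, bg_box), distances)
        sims.append(first_similarity(block_hists, bg_hists))
    return sum(sims) / len(sims)
```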
步骤403:将所述目标图像尺寸与所述模板图像尺寸代入所述变化程度函数,确定变化程度值;Step 403: Substituting the size of the target image and the size of the template image into the degree of change function to determine a degree of change value;
示例性的,在一些实施例中,所述第一图像信息还包括:模板图像尺寸,所述第二图像信息还包括:目标图像尺寸。Exemplarily, in some embodiments, the first image information further includes: a template image size, and the second image information further includes: a target image size.
所述将所述目标图像尺寸与所述模板图像尺寸代入所述变化程度函数,确定变化程度值,包括:基于所述变化程度函数计算所述目标图像尺寸与所述模板图像尺寸相比的尺寸变化率;若所述尺寸变化率大于或等于第三预设阈值,确定所述变化程度值为1;若所述尺寸变化率小于所述第三预设阈值,确定所述变化程度值为0。The step of substituting the target image size and the template image size into the change degree function to determine the change degree value includes: calculating the size of the target image size compared with the template image size based on the change degree function Rate of change; if the rate of change in size is greater than or equal to a third preset threshold, determine that the value of the degree of change is 1; if the rate of change in size is less than the third preset threshold, determine that the value of the degree of change is 0 .
示例性的,在实际应用中,可以将当前帧中跟踪框尺寸作为目标图像尺寸,将模板帧中跟踪框的尺寸作为模板图像尺寸。示例性的,目标图像尺寸包括:当前帧中跟踪框的宽和高;模板图像尺寸包括:模板帧中跟踪框的宽和高。Exemplarily, in practical applications, the size of the tracking frame in the current frame may be used as the target image size, and the size of the tracking frame in the template frame may be used as the size of the template image. Exemplarily, the size of the target image includes: the width and height of the tracking frame in the current frame; the size of the template image includes: the width and height of the tracking frame in the template frame.
Exemplarily, the change degree function can be expressed by two formulas (reproduced as images PCTCN2022128719-appb-000016 and PCTCN2022128719-appb-000017 in the original publication): the first computes the size change rate from w_t and h_t, the width and height of the tracking frame in the t-th frame (the current frame), and w_T and h_T, the width and height of the tracking frame in the template frame; the second compares the size change rate with μ, the third preset threshold, and outputs the change degree value. Exemplarily, μ may take the value 0.7.
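The two change degree formulas are only reproduced as images, so the sketch below assumes the size change rate is the relative change of the tracking-frame area between the current frame and the template frame, thresholded by μ as described; this change-rate definition is an assumption made for illustration.

```python
def change_degree(target_size, template_size, mu=0.7):
    """Change degree value: 1 if the size change rate is >= mu (the third preset
    threshold), otherwise 0. target_size = (w_t, h_t) is the current tracking frame,
    template_size = (w_T, h_T) is the template tracking frame; the relative-area
    change rate used here is an assumption."""
    w_t, h_t = target_size
    w_T, h_T = template_size
    rate = abs(w_t * h_t - w_T * h_T) / float(w_T * h_T)
    return 1 if rate >= mu else 0
```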
Step 404: Subtract the second similarity from the first similarity to obtain a similarity difference;
Step 405: If the similarity difference is greater than or equal to a second preset threshold, take the similarity difference as the occlusion determination value; if the similarity difference is smaller than the second preset threshold, take the product of the similarity difference and the change degree value as the occlusion determination value.
Here, the method of determining the occlusion determination value can be expressed as follows (the formula is reproduced as image PCTCN2022128719-appb-000018 in the original publication):
f = φ, if φ ≥ η;  f = φ · s, if φ < η
where φ denotes the similarity difference, i.e. the greater the similarity between the target image and the target template and the smaller the similarity between the target image and the background image, the larger the value of φ; f denotes the occlusion determination value given by the occlusion determination function; s denotes the change degree value; and η is the second preset threshold. Exemplarily, η may take the value 0.3; when φ is smaller than the second preset threshold, the occlusion determination value is the product of the change degree value and φ.
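The piecewise combination above translates directly into code; the symbol names are those introduced in this description, and η = 0.3 is the exemplary value mentioned in the text.

```python
def occlusion_value(first_sim, second_sim, degree_value, eta=0.3):
    """Steps 404-405: phi is the similarity difference; below the second preset
    threshold eta it is scaled by the change degree value."""
    phi = first_sim - second_sim
    if phi >= eta:
        return phi
    return phi * degree_value
```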
当前优秀的目标跟踪算法都具备较好的尺度自适应能力,即目标出现遮挡时,跟踪算法会及时调整跟踪框的尺寸,此时仅利用颜色分布的相似度来判断目标是否被遮挡是不够准确的。由于颜色自相关直方图不仅包含了整个图像中不同等级颜色的数量分布信息,还包含相同等级颜色的空间分布信息。因此,通过基于图像的颜色自相关直方图来确定第一相似度、第二相似度,有利于获得更准确的图像相似度,进而提高跟踪目标遮挡判定方法的准确性;通过将目标图像尺寸与模板图像尺寸之间的变化程度值融入遮挡判定值的确定过程,可以提高遮挡判定方法的鲁棒性。The current excellent target tracking algorithms have good scale adaptive capabilities, that is, when the target is occluded, the tracking algorithm will adjust the size of the tracking frame in time. At this time, it is not accurate enough to judge whether the target is occluded by only using the similarity of the color distribution. of. Because the color autocorrelation histogram not only contains the quantity distribution information of different grade colors in the whole image, but also contains the spatial distribution information of the same grade color. Therefore, by determining the first similarity and the second similarity based on the color autocorrelation histogram of the image, it is beneficial to obtain a more accurate image similarity, thereby improving the accuracy of the tracking target occlusion determination method; by combining the target image size with The variation degree value between template image sizes is integrated into the determination process of the occlusion judgment value, which can improve the robustness of the occlusion judgment method.
为了能更加体现本公开的目的,在本公开上实施例的基础上,进行进一步的举例说明,图6为本公开实施例中跟踪目标遮挡判定方法的第二流程示意图。如图6所示,跟踪目标遮挡判定方法具体可以包括:In order to better reflect the purpose of the present disclosure, on the basis of the embodiments of the present disclosure, further illustrations are made. FIG. 6 is a second schematic flow chart of a method for determining occlusion of a tracking target in an embodiment of the present disclosure. As shown in Figure 6, the tracking target occlusion determination method may specifically include:
步骤601:从视频中确定模板帧,以及所述模板帧中包含跟踪目标的模板图像;Step 601: Determine a template frame from the video, and the template frame includes a template image of the tracking target;
示例性的,在一些实施例中,所述方法还包括:将视频的第一帧作为模板帧;框选出第一帧中包含跟踪目标的区域作为模板图像。Exemplarily, in some embodiments, the method further includes: using the first frame of the video as a template frame; selecting a region in the first frame that contains the tracking target as a template image.
步骤602:获取所述模板图像的第一图像信息;Step 602: Obtain first image information of the template image;
其中,第一图像信息包括:模板图像的颜色自相关直方图和模板图像 尺寸。其中,所述颜色自相关直方图包括:预设距离阈值下,整个图像中各个颜色等级的像素个数占所述整个图像像素总数的比例。Wherein, the first image information includes: the color autocorrelation histogram of the template image and the size of the template image. Wherein, the color autocorrelation histogram includes: under a preset distance threshold, the ratio of the number of pixels of each color level in the entire image to the total number of pixels in the entire image.
步骤603:对所述视频中当前帧的所述跟踪目标进行跟踪,确定目标图像;Step 603: Track the tracking target in the current frame of the video, and determine the target image;
这里,对所述视频中当前帧的所述跟踪目标进行跟踪,可以通过跟踪算法来实现,示例性的,跟踪算法包括但不限于单目标跟踪算法SiamRPN,DSST目标跟踪算法等。这里,目标图像为当前帧中包含跟踪目标的图像区域。示例性的,在实际应用中,目标图像为跟踪框内的图像区域。Here, tracking the tracking target in the current frame of the video may be implemented by a tracking algorithm. Exemplarily, the tracking algorithm includes but not limited to a single target tracking algorithm SiamRPN, DSST target tracking algorithm, and the like. Here, the target image is the image area containing the tracking target in the current frame. Exemplarily, in practical applications, the target image is the image area within the tracking frame.
步骤604:获取所述目标图像的第二图像信息;Step 604: Obtain second image information of the target image;
这里,第二图像信息包括:所述目标图像的颜色自相关直方图、目标图像尺寸,目标图像中至少一个图像块的颜色自相关直方图,图像块的背景块的颜色自相关直方图。Here, the second image information includes: the color autocorrelation histogram of the target image, the size of the target image, the color autocorrelation histogram of at least one image block in the target image, and the color autocorrelation histogram of the background block of the image block.
步骤605:基于所述第一图像信息、第二图像信息和遮挡判定函数,确定所述当前帧对应的遮挡判定值;Step 605: Determine an occlusion determination value corresponding to the current frame based on the first image information, the second image information, and the occlusion determination function;
其中,所述遮挡判定函数包括:所述目标图像与所述模板图像的第一相似度函数,所述目标图像与背景图像的第二相似度函数,所述目标图像尺寸与所述模板图像尺寸之间的变化程度函数。Wherein, the occlusion determination function includes: a first similarity function between the target image and the template image, a second similarity function between the target image and the background image, a size of the target image and a size of the template image The degree of change function between.
具体的,基于所述第一图像信息、第二图像信息和遮挡判定函数,确定所述当前帧对应的遮挡判定值,包括:Specifically, based on the first image information, the second image information, and the occlusion determination function, determining the occlusion determination value corresponding to the current frame includes:
将所述第一图像信息和第二图像信息代入所述第一相似度函数,得到第一相似度;将所述第二图像信息代入所述第二相似度函数,得到第二相似度;将所述目标图像尺寸与所述模板图像尺寸代入所述变化程度函数,确定变化程度值;若所述相似度差值大于或等于第二预设阈值,则将所述相似度差值作为所述遮挡判定值;若所述相似度差值小于所述第二预设阈值,则将所述差值相似度与所述变化程度值的乘积作为所述遮挡判定值。Substituting the first image information and the second image information into the first similarity function to obtain a first similarity; substituting the second image information into the second similarity function to obtain a second similarity; Substituting the size of the target image and the size of the template image into the degree of change function to determine the degree of change; if the difference in similarity is greater than or equal to a second preset threshold, use the difference in similarity as the An occlusion determination value; if the similarity difference is smaller than the second preset threshold, the product of the difference similarity and the change degree value is used as the occlusion determination value.
步骤606:所述遮挡判定值大于或者等于第一预设阈值时,确定所述跟踪目标被遮挡;Step 606: When the occlusion determination value is greater than or equal to a first preset threshold, determine that the tracking target is occluded;
步骤607:基于所述目标图像与所述模板图像的位置信息,确定所述目标图像与所述模板图像相交部分的相交面积,以及所述模板图像的面积;Step 607: Based on the position information of the target image and the template image, determine the intersection area of the target image and the template image and the area of the template image;
步骤608:计算所述相交面积与所述模板图像的面积的比值;Step 608: Calculate the ratio of the intersection area to the area of the template image;
步骤609:基于所述比值确定目标遮挡程度值。Step 609: Determine a target occlusion degree value based on the ratio.
示例性的,在一些实施例中,遮挡程度值的计算方法可以通过以下公式表示:Exemplarily, in some embodiments, the calculation method of the occlusion degree value can be expressed by the following formula:
ρ = (w_0 · h_0) / (w_T · h_T)
w_0 = min(x_T + h_T, x_t + h_t) − max(x_T, x_t)
h_0 = min(y_T + w_T, y_t + w_t) − max(y_T, y_t)
where w_0 · h_0 denotes the area of the intersection of the current-frame tracking frame and the template-frame tracking frame, and w_T · h_T denotes the area of the template-frame tracking frame; w_T and h_T denote the width and height of the template-frame tracking frame, respectively; ρ denotes the occlusion degree and takes values in [0, 1], where a larger value indicates that the target is occluded to a higher degree.
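A sketch of steps 607 to 609 follows; tracking frames are assumed to be given as (x, y, w, h) with the usual top-left-corner convention, which is an assumption of this sketch.

```python
def occlusion_degree(template_box, target_box):
    """Occlusion degree rho: intersection area of the two tracking frames divided by
    the area of the template tracking frame, clamped to [0, 1]."""
    x_T, y_T, w_T, h_T = template_box
    x_t, y_t, w_t, h_t = target_box
    w_0 = min(x_T + w_T, x_t + w_t) - max(x_T, x_t)
    h_0 = min(y_T + h_T, y_t + h_t) - max(y_T, y_t)
    if w_0 <= 0 or h_0 <= 0:
        return 0.0
    return min(1.0, (w_0 * h_0) / float(w_T * h_T))
```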
这里,步骤601至步骤609的执行主体可以为跟踪目标遮挡判定设备的处理器。Here, the subject of execution of steps 601 to 609 may be a processor of the tracking target occlusion determination device.
本公开的技术方案,通过设置融合了模板图像与目标图像的相似度函数、目标图像与背景图像的相似度函数及目标图像尺寸与所述模板图像尺寸之间的变化程度函数的遮挡判定函数,可以在进行遮挡情况判定时,将用于判断遮挡情况的多个影响因素结合起来,即将目标图像与模板图像的第一相似度和变化程度值,以及目标图像与背景图像的第二相似度相结合,来确定判断遮挡情况,提高遮挡判定方法的准确性和鲁棒性。In the technical solution of the present disclosure, by setting an occlusion judgment function that combines the similarity function between the template image and the target image, the similarity function between the target image and the background image, and the change degree function between the size of the target image and the size of the template image, When judging the occlusion situation, multiple influencing factors for judging the occlusion situation can be combined, that is, the first similarity and change degree value between the target image and the template image, and the second similarity between the target image and the background image. Combined, to determine and judge the occlusion situation, improve the accuracy and robustness of the occlusion judgment method.
为了能更加体现本公开的目的,在本公开上实施例的基础上,进行进一步的举例说明,图7为本公开实施例中跟踪目标遮挡判定方法的第三流程示意图。如图7所示,跟踪目标遮挡判定方法具体可以包括:In order to better reflect the purpose of the present disclosure, on the basis of the embodiments of the present disclosure, a further example is given. FIG. 7 is a schematic flowchart of a third method for determining occlusion of a tracking target in an embodiment of the present disclosure. As shown in Figure 7, the method for judging the occlusion of the tracking target may specifically include:
步骤701:获取第t帧;Step 701: Obtain the tth frame;
步骤702:判断t是否等于1;若是,则执行步骤703;若否,执行步骤704;Step 702: Determine whether t is equal to 1; if so, execute step 703; if not, execute step 704;
步骤703:框选目标并确定模板帧及模板图像;Step 703: Frame the target and determine the template frame and template image;
具体的,将第1帧作为模板帧,并框选出包含跟踪目标的跟踪框,将框内图像作为模板图像;Specifically, the first frame is used as the template frame, and the tracking frame containing the tracking target is selected, and the image in the frame is used as the template image;
步骤704:获取模板图像的尺寸及颜色自相关直方图;Step 704: Obtain the size and color autocorrelation histogram of the template image;
具体的,模板图像的尺寸为跟踪框的尺寸,颜色自相关直方图为模板图像的颜色自相关直方图。这里,模板图像可以为预设某一帧中包含跟踪目标的预设图像区域;Specifically, the size of the template image is the size of the tracking frame, and the color autocorrelation histogram is the color autocorrelation histogram of the template image. Here, the template image may be a preset image area containing a tracking target in a preset frame;
步骤705:基于跟踪算法确定第t帧中的目标图像;Step 705: Determine the target image in the tth frame based on the tracking algorithm;
步骤706:获取目标图像的尺寸、图像块的颜色自相关图和背景块的颜色自相关直方图;Step 706: Obtain the size of the target image, the color autocorrelation diagram of the image block and the color autocorrelation histogram of the background block;
这里,目标图像的尺寸即为当前帧中的跟踪框尺寸。Here, the size of the target image is the size of the tracking box in the current frame.
步骤707:计算第一相似度、第二相似度、变化程度值;Step 707: Calculating the first similarity, second similarity, and change degree values;
这里,第一相似度为目标图像与模板图像的相似度,第二相似度为目标图像与背景图像的相似度,变化程度值为目标图像尺寸与所述模板图像尺寸之间的变化程度值。Here, the first similarity is the similarity between the target image and the template image, the second similarity is the similarity between the target image and the background image, and the change degree value is the change degree value between the target image size and the template image size.
步骤708:计算遮挡判定值;Step 708: Calculate the occlusion judgment value;
步骤709:是否被遮挡;若是,执行步骤711及712;若否,执行步骤710;Step 709: Whether it is blocked; if yes, execute steps 711 and 712; if not, execute step 710;
步骤710:将第t帧作为模板帧;Step 710: use the tth frame as a template frame;
步骤711:令t=t+1,并返回执行步骤704;Step 711: set t=t+1, and return to step 704;
步骤712:计算目标图像与所述模板图像相交部分的相交面积及模板图像的面积;Step 712: Calculate the intersection area of the intersecting part between the target image and the template image and the area of the template image;
步骤713:确定目标遮挡程度值。Step 713: Determine the target occlusion degree value.
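For orientation, the flow of steps 701 to 713 can be summarized with the hedged helpers sketched earlier. This block is not self-contained: it reuses those sketches, and the first preset threshold value tau is an assumption since the text does not state it.

```python
def judge_sequence(tracked, template_frame, template_box, tau=0.5):
    """Outline of steps 701-713. `tracked` is an iterable of (frame, tracking box)
    pairs, e.g. produced by the track_target sketch above."""
    template_hists = color_autocorrelogram(crop(template_frame, template_box))
    results = []
    for frame, box in tracked:
        target = crop(frame, box)
        target_hists = color_autocorrelogram(target)
        f1 = first_similarity(target_hists, template_hists)        # step 707
        f2 = second_similarity(frame, box)
        s = change_degree((box[2], box[3]), (template_box[2], template_box[3]))
        value = occlusion_value(f1, f2, s)                          # step 708
        if value >= tau:                                            # step 709: occluded
            rho = occlusion_degree(template_box, box)               # steps 712-713
            results.append((box, True, rho))
        else:                                                       # step 710: new template
            template_box, template_hists = box, target_hists
            results.append((box, False, 0.0))
    return results
```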
本公开的技术方案,通过设置融合了模板图像与目标图像的相似度函数、目标图像与背景图像的相似度函数及目标图像尺寸与所述模板图像尺寸之间的变化程度函数的遮挡判定函数,可以在进行遮挡情况判定时,将可以用于判断遮挡情况的因素结合起来,由结合结果来确定判断遮挡情况,提高遮挡判定方法的准确性和鲁棒性;通过基于目标跟踪框与目标模板框的交集计算遮挡程度,能够更精确地描述跟踪目标的遮挡情况;通过基于图像的颜色自相关直方图来确定第一相似度、第二相似度,有利于获得更准确的图像相似度,进而提高跟踪目标遮挡判定方法的准确性。In the technical solution of the present disclosure, by setting an occlusion judgment function that combines the similarity function between the template image and the target image, the similarity function between the target image and the background image, and the change degree function between the size of the target image and the size of the template image, When judging the occlusion situation, the factors that can be used to judge the occlusion situation can be combined, and the occlusion situation can be determined and judged by the combined results, so as to improve the accuracy and robustness of the occlusion judgment method; The degree of occlusion is calculated by the intersection of the intersection, which can more accurately describe the occlusion of the tracking target; the first similarity and the second similarity are determined by the color autocorrelation histogram based on the image, which is beneficial to obtain a more accurate image similarity, and then improve Accuracy of tracking object occlusion determination method.
图8为本公开实施例中跟踪目标遮挡判定装置的组成结构示意图,展示了一种跟踪目标遮挡判定装置80,该跟踪目标遮挡判定装置80具体包括:Fig. 8 is a schematic diagram of the composition and structure of a tracking target occlusion determination device in an embodiment of the present disclosure, showing a tracking target occlusion determination device 80, the tracking target occlusion determination device 80 specifically includes:
处理模块801,配置为从视频中确定模板帧,以及所述模板帧中包含跟踪目标的模板图像;The processing module 801 is configured to determine a template frame from the video, and the template frame includes a template image of the tracking target;
获取模块802,配置为获取所述模板图像的第一图像信息;An acquiring module 802, configured to acquire first image information of the template image;
所述处理模块801,还配置为对所述视频中当前帧的所述跟踪目标进行跟踪,确定目标图像;The processing module 801 is further configured to track the tracking target in the current frame of the video, and determine a target image;
所述获取模块802,还配置为获取所述目标图像的第二图像信息;The obtaining module 802 is further configured to obtain second image information of the target image;
所述处理模块801,还配置为基于所述第一图像信息、第二图像信息和遮挡判定函数,确定所述当前帧对应的遮挡判定值;其中,所述遮挡判定函数包括:所述目标图像与所述模板图像的第一相似度函数,所述目标图像与背景图像的第二相似度函数,所述目标图像尺寸与所述模板图像尺寸之间的变化程度函数;The processing module 801 is further configured to determine an occlusion determination value corresponding to the current frame based on the first image information, the second image information, and an occlusion determination function; wherein the occlusion determination function includes: the target image A first similarity function with the template image, a second similarity function between the target image and the background image, a change degree function between the size of the target image and the size of the template image;
所述处理模块801,还配置为所述遮挡判定值大于或者等于第一预设阈值时,确定所述跟踪目标被遮挡。The processing module 801 is further configured to determine that the tracking target is blocked when the occlusion determination value is greater than or equal to a first preset threshold.
在一些实施例中,所述第一图像信息包括:所述模板图像的颜色自相 关直方图;所述第二图像信息包括:所述目标图像的颜色自相关直方图;其中,所述颜色自相关直方图包括:预设距离阈值下,整个图像中各个颜色等级的像素个数占所述整个图像像素总数的比例。In some embodiments, the first image information includes: the color autocorrelation histogram of the template image; the second image information includes: the color autocorrelation histogram of the target image; wherein, the color autocorrelation histogram The correlation histogram includes: the ratio of the number of pixels of each color level in the entire image to the total number of pixels in the entire image under a preset distance threshold.
在一些实施例中,所述处理模块801,配置为将所述第一图像信息和第二图像信息代入所述第一相似度函数,得到第一相似度;将所述第二图像信息代入所述第二相似度函数,得到第二相似度;将所述目标图像尺寸与所述模板图像尺寸代入所述变化程度函数,确定变化程度值;对所述第一相似度与所述第二相似度作差,得到相似度差值;若所述相似度差值大于或等于第二预设阈值,则将所述相似度差值作为所述遮挡判定值;若所述相似度差值小于所述第二预设阈值,则将所述差值相似度与所述变化程度值的乘积作为所述遮挡判定值。In some embodiments, the processing module 801 is configured to substitute the first image information and the second image information into the first similarity function to obtain the first similarity; and substitute the second image information into the The second similarity function is used to obtain the second similarity; the target image size and the template image size are substituted into the change degree function to determine the change degree value; for the first similarity and the second similarity If the similarity difference is greater than or equal to the second preset threshold, then use the similarity difference as the occlusion judgment value; if the similarity difference is less than the set If the second preset threshold is used, the product of the difference similarity and the change degree value is used as the occlusion determination value.
In some embodiments, the first similarity function is a function of the color autocorrelation histograms of the target image and of the template image (the expression is reproduced as image PCTCN2022128719-appb-000022 in the original publication), in which H_d^t(i) denotes, for preset distance threshold d, the proportion of pixels of color level i in the whole target image relative to the total number of pixels of the target image; H_d^T(i) denotes the corresponding proportion for the template image; D is the set of preset distance thresholds; and q is the number of color levels.
在一些实施例中,所述第一图像信息还包括:模板图像尺寸,所述第二图像信息还包括:目标图像尺寸;所述处理模块801,配置为基于所述变化程度函数计算所述目标图像尺寸与所述模板图像尺寸相比的尺寸变化率;若所述尺寸变化率大于或等于第三预设阈值,确定所述变化程度值为1;若所述尺寸变化率小于所述第三预设阈值,确定所述变化程度值为0。In some embodiments, the first image information further includes: template image size, and the second image information further includes: target image size; the processing module 801 is configured to calculate the target based on the change degree function The size change rate of the image size compared with the size of the template image; if the size change rate is greater than or equal to a third preset threshold, determine that the change degree value is 1; if the size change rate is less than the third preset threshold A preset threshold is used to determine that the value of the degree of change is 0.
在一些实施例中,所述第二图像信息还包括:所述目标图像中至少一个图像块的颜色自相关直方图,所述图像块的背景块的颜色自相关直方图;In some embodiments, the second image information further includes: a color autocorrelation histogram of at least one image block in the target image, a color autocorrelation histogram of a background block of the image block;
The second similarity function is a function of the color autocorrelation histograms of the image blocks of the target image and of their background blocks (the expression is reproduced as image PCTCN2022128719-appb-000025 in the original publication), in which H_d^{oj}(i) denotes, for preset distance threshold d, the proportion of pixels of color level i in the j-th image block relative to the total number of pixels of that block; H_d^{bj}(i) denotes the corresponding proportion for the j-th background block; D is the set of preset distance thresholds; q is the number of color levels; and u is the number of image blocks.
In some embodiments, the processing module 801 is further configured to divide the target image vertically to obtain a first image block and a second image block, where the first image block is located above the second image block; and divide the target image horizontally to obtain a third image block and a fourth image block, where the third image block is located to the left of the fourth image block; the at least one image block includes the first, second, third, and fourth image blocks. In the current frame, a region above the first image block that is adjacent to the edge of the first image block and of the same size is selected as the first background block; a region below the second image block that is adjacent to the edge of the second image block and of the same size is selected as the second background block; a region to the left of the third image block that is adjacent to the edge of the third image block and of the same size is selected as the third background block; and a region to the right of the fourth image block that is adjacent to the edge of the fourth image block and of the same size is selected as the fourth background block.
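The block and background-block layout described above can be sketched as follows. The frame/box representation (x, y, w, h), the clamping of background regions to the frame borders, and the averaging over blocks in second_similarity are illustrative assumptions; the helpers color_autocorrelogram and first_similarity are the assumed utilities from the earlier sketch, not part of the disclosure.

```python
import numpy as np
# assumes color_autocorrelogram() and first_similarity() from the earlier sketch

def split_blocks_and_backgrounds(frame, box):
    """Cut the target box into 4 image blocks (upper/lower, left/right halves)
    and pick 4 equally sized, edge-adjacent background blocks from the frame.

    frame: full current-frame image as an H x W (x C) array
    box: (x, y, w, h) of the target image in the frame (illustrative format)
    """
    x, y, w, h = box
    target = frame[y:y + h, x:x + w]
    blocks = [
        target[: h // 2, :],    # block 1: upper half
        target[h // 2:, :],     # block 2: lower half
        target[:, : w // 2],    # block 3: left half
        target[:, w // 2:],     # block 4: right half
    ]
    H, W = frame.shape[:2]
    backgrounds = [
        frame[max(y - h // 2, 0):y, x:x + w],            # above block 1
        frame[y + h:min(y + h + h // 2, H), x:x + w],    # below block 2
        frame[y:y + h, max(x - w // 2, 0):x],            # left of block 3
        frame[y:y + h, x + w:min(x + w + w // 2, W)],    # right of block 4
    ]
    return blocks, backgrounds

def second_similarity(blocks, backgrounds):
    """Average the per-block comparison between each image block and its
    background block (an assumed stand-in for the disclosed second
    similarity function)."""
    sims = [first_similarity(color_autocorrelogram(b), color_autocorrelogram(g))
            for b, g in zip(blocks, backgrounds)]
    return float(np.mean(sims))
```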
In some embodiments, the processing module 801 is further configured to determine, based on position information of the target image and the template image, the area of the intersection between the target image and the template image as well as the area of the template image; calculate the ratio of the intersection area to the area of the template image; and determine a target occlusion degree value based on the ratio.
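A sketch of this occlusion degree computation from box geometry follows; the (x, y, w, h) box format and the function name are assumptions for illustration.

```python
def occlusion_degree(target_box, template_box):
    """Ratio of the intersection area between the target and template boxes
    to the area of the template box (per the embodiment above).

    Boxes are (x, y, w, h); this format is an illustrative assumption.
    """
    tx, ty, tw, th = target_box
    mx, my, mw, mh = template_box
    ix = max(0, min(tx + tw, mx + mw) - max(tx, mx))   # intersection width
    iy = max(0, min(ty + th, my + mh) - max(ty, my))   # intersection height
    template_area = mw * mh
    return (ix * iy) / template_area if template_area > 0 else 0.0
```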
In some embodiments, the processing module 801 is further configured to, when the occlusion determination function value of the current frame is smaller than the first preset threshold, take the current frame as a new template frame, and determine the occlusion determination function value of the next frame based on the new template frame.
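The template-update rule can be sketched as a small loop over frames. The callable occlusion_value, which stands in for the full occlusion determination function applied to a template/target pair, is an assumed parameter used only to keep the sketch self-contained.

```python
def track_with_template_update(frames, occlusion_value, first_threshold):
    """Walk through the video frames, computing an occlusion determination
    value per frame and refreshing the template frame whenever the value
    falls below the first preset threshold (per the embodiment above).

    occlusion_value(template_frame, current_frame) -> float is an assumed
    callable wrapping the occlusion determination function.
    """
    template = frames[0]                         # initial template frame
    results = []
    for frame in frames[1:]:
        value = occlusion_value(template, frame)
        occluded = value >= first_threshold
        results.append(occluded)
        if not occluded:                         # value below threshold: refresh template
            template = frame
    return results
```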
Based on the hardware implementation of each unit in the above tracking target occlusion determination apparatus, an embodiment of the present disclosure further provides another tracking target occlusion determination device.
FIG. 9 is a schematic diagram of the composition and structure of a tracking target occlusion determination device in an embodiment of the present disclosure. As shown in FIG. 9, the device 90 includes a processor 901 and a memory 902 configured to store a computer program executable on the processor, where the processor 901 is configured to execute the steps of the methods in the foregoing embodiments when running the computer program.
In practical applications, as shown in FIG. 9, the components in the tracking target occlusion determination device are coupled together through a bus system 903. It can be understood that the bus system 903 is used to implement connection and communication between these components. In addition to a data bus, the bus system 903 also includes a power bus, a control bus, and a status signal bus. However, for clarity of illustration, the various buses are all labeled as the bus system 903 in FIG. 9.
In practical applications, the above processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field-Programmable Gate Array (FPGA), a controller, a microcontroller, and a microprocessor. It can be understood that, for different devices, the electronic component used to implement the above processor function may also be another component, which is not specifically limited in the embodiments of the present disclosure.
The above memory may be a volatile memory, such as a Random-Access Memory (RAM); or a non-volatile memory, such as a Read-Only Memory (ROM), a flash memory, a Hard Disk Drive (HDD), or a Solid-State Drive (SSD); or a combination of the above types of memory, and it provides instructions and data to the processor.
In an exemplary embodiment, an embodiment of the present disclosure further provides a computer-readable storage medium, for example, a memory including a computer program, where the computer program can be executed by a processor to complete the steps of the foregoing method.
It should be understood that the terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. As used in the present disclosure and the appended claims, the singular forms "a", "the", and "said" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items. The expressions "has", "may have", "includes", "comprises", "may include", and "may comprise" used in the present disclosure may indicate the presence of corresponding features (for example, elements such as numerical values, functions, operations, or components), but do not exclude the presence of additional features.
It should be understood that although the terms first, second, third, and so on may be used in the present disclosure to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another and do not necessarily describe a particular order or sequence. For example, without departing from the scope of the present disclosure, first information may also be referred to as second information, and similarly, second information may also be referred to as first information.
The technical solutions described in the embodiments of the present disclosure may be combined arbitrarily as long as there is no conflict.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed methods, apparatuses, and devices may be implemented in other ways. The embodiments described above are merely illustrative. For example, the division of units is only a logical functional division, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present disclosure may all be integrated into one processing unit, or each unit may serve separately as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The above descriptions are only specific implementations of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed in the present disclosure, and such changes or substitutions should all fall within the protection scope of the present disclosure.
Industrial Applicability
The present disclosure provides a tracking target occlusion determination method, apparatus, device, and storage medium. The method includes: acquiring first image information of a template image and second image information of a target image, respectively; determining an occlusion determination value based on the first image information, the second image information, and an occlusion determination function, where the occlusion determination function includes a first similarity function between the target image and the template image, a second similarity function between the target image and a background image, and a change degree function between the target image size and the template image size; and determining the occlusion situation based on the occlusion determination value. In this way, by constructing an occlusion determination function that fuses the two similarity functions and the change degree function, multiple factors that influence the occlusion judgment are combined when determining the occlusion situation: the first similarity between the target image and the template image and the change degree value are combined with the second similarity between the target image and the background image, which improves the accuracy and robustness of the occlusion determination method.

Claims (12)

  1. A tracking target occlusion determination method, the method comprising:
    determining a template frame from a video, the template frame containing a template image of a tracking target;
    acquiring first image information of the template image;
    tracking the tracking target in a current frame of the video to determine a target image;
    acquiring second image information of the target image;
    determining an occlusion determination value corresponding to the current frame based on the first image information, the second image information, and an occlusion determination function, wherein the occlusion determination function comprises: a first similarity function between the target image and the template image, a second similarity function between the target image and a background image, and a change degree function between the target image size and the template image size;
    when the occlusion determination value is greater than or equal to a first preset threshold, determining that the tracking target is occluded.
  2. The method according to claim 1, wherein the first image information comprises a color autocorrelation histogram of the template image, and the second image information comprises a color autocorrelation histogram of the target image; wherein the color autocorrelation histogram comprises, for a preset distance threshold, the proportion of pixels of each color level in the entire image relative to the total number of pixels in the entire image.
  3. The method according to claim 2, wherein the determining an occlusion determination value corresponding to the current frame based on the first image information, the second image information, and the occlusion determination function comprises:
    substituting the first image information and the second image information into the first similarity function to obtain a first similarity;
    substituting the second image information into the second similarity function to obtain a second similarity;
    substituting the target image size and the template image size into the change degree function to determine a change degree value;
    taking the difference between the first similarity and the second similarity to obtain a similarity difference;
    if the similarity difference is greater than or equal to a second preset threshold, using the similarity difference as the occlusion determination value;
    if the similarity difference is smaller than the second preset threshold, using the product of the similarity difference and the change degree value as the occlusion determination value.
  4. The method according to claim 3, wherein the first similarity function is:
    (Formula shown in Figure PCTCN2022128719-appb-100001)
    where the quantity shown in Figure PCTCN2022128719-appb-100002 is, for a preset distance threshold d, the proportion of pixels of color level i in the target image relative to the total number of pixels in the target image; the quantity shown in Figure PCTCN2022128719-appb-100003 is, for a preset distance threshold d, the proportion of pixels of color level i in the template image relative to the total number of pixels in the template image; D is the set of preset distance thresholds; and q is the number of color levels.
  5. The method according to claim 3, wherein the first image information further comprises a template image size, and the second image information further comprises a target image size;
    the substituting the target image size and the template image size into the change degree function to determine the change degree value comprises:
    calculating, based on the change degree function, a size change rate of the target image size relative to the template image size;
    if the size change rate is greater than or equal to a third preset threshold, determining that the change degree value is 1;
    if the size change rate is smaller than the third preset threshold, determining that the change degree value is 0.
  6. The method according to claim 3, wherein the second image information further comprises a color autocorrelation histogram of at least one image block in the target image, and a color autocorrelation histogram of the background block of each such image block;
    the second similarity function is:
    (Formula shown in Figure PCTCN2022128719-appb-100004)
    where the quantity shown in Figure PCTCN2022128719-appb-100005 is, for a preset distance threshold d, the proportion of pixels of color level i in the j-th image block relative to the total number of pixels in the j-th image block; the quantity shown in Figure PCTCN2022128719-appb-100006 is, for a preset distance threshold d, the proportion of pixels of color level i in the j-th background block relative to the total number of pixels in the j-th background block; D is the set of preset distance thresholds; q is the number of color levels; and u is the number of image blocks.
  7. The method according to claim 6, wherein the method further comprises:
    dividing the target image vertically to obtain a first image block and a second image block, wherein the first image block is located above the second image block;
    dividing the target image horizontally to obtain a third image block and a fourth image block, wherein the third image block is located to the left of the fourth image block, and the at least one image block comprises the first image block, the second image block, the third image block, and the fourth image block;
    in the current frame, selecting a region above the first image block that is adjacent to the edge of the first image block and of the same size as the first background block;
    in the current frame, selecting a region below the second image block that is adjacent to the edge of the second image block and of the same size as the second background block;
    in the current frame, selecting a region to the left of the third image block that is adjacent to the edge of the third image block and of the same size as the third background block;
    in the current frame, selecting a region to the right of the fourth image block that is adjacent to the edge of the fourth image block and of the same size as the fourth background block.
  8. The method according to claim 1, wherein after the determining that the tracking target is occluded, the method further comprises:
    determining, based on position information of the target image and the template image, the area of the intersection between the target image and the template image, and the area of the template image;
    calculating the ratio of the intersection area to the area of the template image;
    determining a target occlusion degree value based on the ratio.
  9. The method according to claim 1, wherein the method further comprises:
    when the occlusion determination function value of the current frame is smaller than the first preset threshold, taking the current frame as a new template frame;
    determining the occlusion determination function value of the next frame based on the new template frame.
  10. A tracking target occlusion determination apparatus, the apparatus comprising:
    a processing module configured to determine a template frame from a video, the template frame containing a template image of a tracking target;
    an acquisition module configured to acquire first image information of the template image;
    the processing module being further configured to track the tracking target in a current frame of the video to determine a target image;
    the acquisition module being further configured to acquire second image information of the target image;
    the processing module being further configured to determine an occlusion determination value corresponding to the current frame based on the first image information, the second image information, and an occlusion determination function, wherein the occlusion determination function comprises: a first similarity function between the target image and the template image, a second similarity function between the target image and a background image, and a change degree function between the target image size and the template image size;
    the processing module being further configured to determine that the tracking target is occluded when the occlusion determination value is greater than or equal to a first preset threshold.
  11. A tracking target occlusion determination device, wherein the device comprises: a processor and a memory configured to store a computer program executable on the processor,
    wherein the processor is configured to execute the steps of the method according to any one of claims 1-9 when running the computer program.
  12. A computer-readable storage medium having a computer program stored thereon, wherein, when the computer program is executed by a processor, the steps of the method according to any one of claims 1-9 are implemented.
PCT/CN2022/128719 2021-10-29 2022-10-31 Tracking target occlusion determination method and apparatus, device and storage medium WO2023072290A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111274073.6 2021-10-29
CN202111274073.6A CN116091536A (en) 2021-10-29 2021-10-29 Tracking target shielding judging method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2023072290A1 true WO2023072290A1 (en) 2023-05-04

Family

ID=86159085

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/128719 WO2023072290A1 (en) 2021-10-29 2022-10-31 Tracking target occlusion determination method and apparatus, device and storage medium

Country Status (2)

Country Link
CN (1) CN116091536A (en)
WO (1) WO2023072290A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117876970A (en) * 2024-03-11 2024-04-12 青岛三诚众合智能设备科技有限公司 Workshop intelligent management method and system based on image processing and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104820996A (en) * 2015-05-11 2015-08-05 河海大学常州校区 Target tracking method based on self-adaptive blocks of video
JP2018139086A (en) * 2017-02-24 2018-09-06 三菱電機株式会社 Correlation tracking device, correlation tracking method and correlation tracking program
CN108920997A (en) * 2018-04-10 2018-11-30 国网浙江省电力有限公司信息通信分公司 Judge that non-rigid targets whether there is the tracking blocked based on profile
CN109398533A (en) * 2018-11-22 2019-03-01 华南理工大学 A kind of mobile platform and the method for mobile platform tracking for a long time
CN110689555A (en) * 2019-10-12 2020-01-14 四川航天神坤科技有限公司 KCF tracking target loss detection method and system based on foreground detection

Also Published As

Publication number Publication date
CN116091536A (en) 2023-05-09

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22886165

Country of ref document: EP

Kind code of ref document: A1