CN112800811B - Color block tracking method and device and terminal equipment - Google Patents


Info

Publication number
CN112800811B
CN112800811B (application CN201911107802.1A)
Authority
CN
China
Prior art keywords
color block
frame image
target object
color
area
Legal status
Active
Application number
CN201911107802.1A
Other languages
Chinese (zh)
Other versions
CN112800811A (en
Inventor
邝嘉隆
熊友军
Current Assignee
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Application filed by Ubtech Robotics Corp
Priority to CN201911107802.1A
Publication of CN112800811A
Application granted
Publication of CN112800811B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/56: Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a color block tracking method, a color block tracking device, and a terminal device, applicable to the technical field of visual recognition. The method includes: sampling a video; performing color block positioning of a target object on the sampled first frame image to obtain a first candidate frame area where the target object's color block is located in the first frame image; selecting a first active area containing the first candidate frame area from the first frame image; determining a second active area corresponding to the first active area in the sampled second frame image; and performing color block positioning of the target object in the second active area to obtain a second candidate frame area of the target object's color block in the second frame image, and selecting a third active area containing the second candidate frame area from the second frame image. The embodiments of the application can improve the efficiency and accuracy of tracking the target object's color block.

Description

Color block tracking method and device and terminal equipment
Technical Field
The application belongs to the technical field of visual recognition, and in particular relates to a color block tracking method, a color block tracking device, and a terminal device.
Background
Color block tracking refers to performing color block recognition on a target object present in video frame images and locating, in each frame image, the candidate frame area where the target object's color block lies, so as to continuously track the position of the target object's color block.
Related color block tracking methods directly scan every frame image in full and identify the target object's color block among all scanned color blocks. Although this achieves continuous tracking of the target object's color block, on the one hand the global scanning workload is large and inefficient and places an extremely heavy load on the processing device; on the other hand, because interfering objects may be present in each captured frame image, every identification is exposed to considerable interference and tracking failure is very likely. Known color block tracking methods therefore have low efficiency, and their accuracy is difficult to guarantee.
Disclosure of Invention
In view of this, the embodiments of the present application provide a color block tracking method, a color block tracking device, and a terminal device, which address the problem of low color block tracking efficiency and accuracy.
A first aspect of the embodiments of the present application provides a color block tracking method, including:
sampling the video;
performing color block positioning of a target object according to a first frame image obtained by sampling to obtain a first candidate frame region where a color block of the target object is located in the first frame image;
selecting a first active region including the first candidate frame region from the first frame image;
determining a second active region corresponding to the first active region from a second frame image obtained by sampling, wherein the second frame image is adjacent to the first frame image in sampling time, the sampling time of the second frame image is later than that of the first frame image, and the coordinates of the second active region in the second frame image are the same as those of the first active region in the first frame image;
and positioning the color block of the target object in the second active area to obtain a second candidate frame area of the color block of the target object in the second frame image, and selecting a third active area containing the second candidate frame area in the second frame image.
In a first possible implementation manner of the first aspect, the performing color block positioning of the target object according to the sampled first frame image to obtain a first candidate frame area where the target object's color block is located in the first frame image includes:
taking the (N-1)th video frame obtained by sampling as a third frame image, taking the Nth video frame obtained by sampling as the first frame image, obtaining a fourth active area in the third frame image, and determining a fifth active area corresponding to the fourth active area from the first frame image, wherein N is an integer greater than 1, and the fourth active area comprises a third candidate frame area where the target object's color block is located in the third frame image;
and positioning the color block of the target object in the fifth active area to obtain the first candidate frame area of the target object's color block in the first frame image.
In a second possible implementation manner of the first aspect, the performing color block positioning of the target object according to the sampled first frame image to obtain a first candidate frame area where the target object's color block is located in the first frame image includes:
taking the 1st video frame obtained by sampling as the first frame image, performing global color block scanning on the first frame image, and identifying the first candidate frame area where the target object's color block is located in the first frame image.
In a third possible implementation manner of the first aspect, the selecting, from the first frame image, a first active area containing the first candidate frame area includes:
acquiring a first length and a first width, enlarging the first candidate frame area based on the first length and the first width, and taking the enlarged image area as the first active area.
With reference to the first through third possible implementation manners, as a fourth possible implementation manner of the first aspect, the performing color block positioning of the target object in the second active area to obtain a second candidate frame area of the target object's color block in the second frame image includes:
performing color screening on all color blocks in the second active area to obtain one or more color blocks to be detected having the same color as the target object's color block, and obtaining a fourth candidate frame area where each color block to be detected is located in the second frame image;
performing color block shape recognition on each fourth candidate frame area to obtain a corresponding color block shape;
and acquiring the color block shape of the target object's color block, matching it against the shape of each color block to be detected, identifying a successfully matched color block to be detected as the target object's color block, and identifying the fourth candidate frame area corresponding to the successfully matched color block to be detected as the second candidate frame area corresponding to the target object's color block.
On the basis of the fourth possible implementation manner, as a fifth possible implementation manner of the first aspect, the performing color block shape recognition on each fourth candidate frame area includes:
drawing four rectangles of a first size at the four corners of the fourth candidate frame area, taking the four corners as base points, to obtain four corresponding rectangular image areas;
detecting the overlapping partial images between each of the four rectangular image areas and the color block to be detected in the fourth candidate frame area, to obtain M corresponding overlapping partial images, wherein M ∈ {1, 2, 3, 4};
and identifying the color block pattern of the color block to be detected in the fourth candidate frame area according to the M overlapping partial images.
On the basis of the fifth possible implementation manner, as a sixth possible implementation manner of the first aspect, the identifying, according to the M overlapping partial images, the color block pattern of the color block to be detected in the fourth candidate frame area includes:
if M = 4, the differences between the numbers of pixels contained in the 4 overlapping partial images are smaller than a first threshold, and the shapes of the 4 overlapping partial images are all rectangular, determining that the color block pattern of the color block to be detected in the fourth candidate frame area is rectangular;
if M = 4, the differences between the numbers of pixels contained in the 4 overlapping partial images are smaller than the first threshold, and the shapes of the 4 overlapping partial images are all sectors with an included angle of 90 degrees, determining that the color block pattern of the color block to be detected in the fourth candidate frame area is circular.
A second aspect of the embodiments of the present application provides a color block tracking device, including:
the sampling module is used for sampling the video;
the first positioning module is used for positioning the color block of the target object according to the first frame image obtained by sampling to obtain a first candidate frame area where the color block of the target object is located in the first frame image;
an active region selection module, configured to select a first active region including the first candidate frame region from the first frame image;
the active region searching module is used for determining a second active region corresponding to the first active region from a second frame image obtained by sampling, wherein the second frame image is adjacent to the first frame image in sampling time, the sampling time of the second frame image is later than that of the first frame image, and the coordinates of the second active region in the second frame image are the same as those of the first active region in the first frame image;
and the second positioning module is used for positioning the color block of the target object in the second active area, obtaining a second candidate frame area of the color block of the target object in the second frame image, and selecting a third active area containing the second candidate frame area in the second frame image.
A third aspect of the embodiments of the present application provides a terminal device, the terminal device including a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the computer program, implements the steps of the color block tracking method according to any one of the first aspect.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the color block tracking method according to any one of the first aspect.
A fifth aspect of the embodiments of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to perform the color block tracking method according to any one of the first aspect.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: during color block tracking, the candidate frame area where the target object's color block lies is located in the earlier-sampled frame image, and at the same time the area range within which the target object may move is marked; when the next sampled frame image is processed, color block positioning of the target object is performed directly within the previously marked active area, and the next possible movement range is marked in turn. Locating the target object's color block in each frame image therefore only requires identifying and positioning within the single image area where the target object may have moved, which greatly reduces the workload of each identification of the target object's color block and improves identification efficiency, and at the same time greatly reduces the interference of interfering objects in the frame image on identification and positioning, so that both the efficiency and the accuracy of tracking the target object's color block are improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic flowchart of an implementation of a color block tracking method according to the first embodiment of the present application;
Fig. 2 is a schematic flowchart of an implementation of a color block tracking method according to the second embodiment of the present application;
Fig. 3 is a schematic flowchart of an implementation of a color block tracking method according to the third embodiment of the present application;
Fig. 4A is a schematic flowchart of an implementation of a color block tracking method according to the fourth embodiment of the present application;
Fig. 4B is a schematic diagram of a second active area provided by the fourth embodiment of the present application;
Fig. 4C is a schematic diagram of rectangle drawing for a second active area according to the fourth embodiment of the present application;
Fig. 4D is a schematic diagram of rectangle drawing for a second active area according to the fourth embodiment of the present application;
Fig. 4E is a schematic diagram of rectangle drawing for a second active area according to the fourth embodiment of the present application;
Fig. 4F is a schematic diagram of rectangle drawing for a second active area according to the fourth embodiment of the present application;
Fig. 5A is a schematic flowchart of an implementation of a color block tracking method according to the fifth embodiment of the present application;
Fig. 5B is a schematic diagram of a second active area provided by the fifth embodiment of the present application;
Fig. 5C is a schematic diagram of rectangle drawing for a second active area according to the fifth embodiment of the present application;
Fig. 6 is a schematic structural diagram of a color block tracking device according to the sixth embodiment of the present application;
Fig. 7 is a schematic diagram of a terminal device according to the seventh embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to illustrate the technical solution of the present application, specific embodiments are described below.
To facilitate understanding of the present application, the embodiments are first briefly introduced. Color block tracking is widely applied in scenes such as material sorting, trademark identification, image processing, product quality inspection, and vehicle identification, and can achieve effective tracking of a target object's color block. However, related color block tracking methods all perform a direct global color block scan on every frame image, for example the find_blobs color block tracking function built into the OpenMV machine vision module. On the one hand, global scanning carries a large processing workload and is inefficient, placing a very heavy load on the processing device, especially on processing devices with relatively tight computing resources such as single-chip microcontrollers; on the other hand, because interfering objects may be present in each captured frame image, every identification is exposed to considerable interference and tracking failure is very likely. Known color block tracking methods therefore have low efficiency, and their accuracy is difficult to guarantee.
To improve the efficiency and accuracy of color block tracking, the embodiments of the present application begin positioning the target object's color block as soon as continuous sampling of the video starts. For any two adjacently sampled frame images, while locating the candidate frame area where the target object's color block lies in the earlier-sampled frame image, the method also marks the area range within which the target object may move, on the consideration that this range is very limited within a short sampling interval. When the later-sampled frame image is processed, color block positioning of the target object is performed directly within the previously marked active area, and the next possible movement range is marked at the same time. In this way, locating the target object's color block in each frame image only requires identifying and positioning within the single image area where the target object may have moved, which greatly reduces the workload of each identification of the target object's color block, and at the same time greatly reduces the interference of interfering objects in the frame image, improving both the efficiency and the accuracy of tracking the target object.
The terms that may be involved in the embodiments of the present application are described as follows:
the candidate frame region refers to an image region selected by the candidate frame in which the color patch is located.
The active area refers to an image area in which a target object color block may be located in a frame image of a next sample, and because the sampling frequency of the video frame image is higher during color block tracking, the interval time between two adjacent sample frame images is extremely short, after knowing the candidate frame area in which the target object color block is located in the previous frame image, the corresponding active area of the target object color block in the next sample video frame can be estimated approximately, and then the image area corresponding to the active area can be determined.
Meanwhile, the execution main body of the color block tracking method in the embodiment of the application is a terminal device with a certain data processing function, wherein a certain data processing capability refers to the capability of performing color block shape recognition and target object color block recognition and positioning processing in the embodiment of the application, the specific device type of the execution main body is not limited, and the execution main body can be set by technicians according to actual requirements, including but not limited to terminal devices with weaker processing capability such as a singlechip or some terminal devices with stronger processing capability such as a computer.
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance. It will also be understood that, although the terms "first," "second," etc. may be used herein in some embodiments of the application to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first frame image may be named a second frame image, and similarly, a second frame image may be named a first frame image without departing from the scope of the various described embodiments. The first frame image and the second frame image are both video frames, but they are not the same video frame.
The process of color block tracking according to the embodiments of the present application is described in detail below.
Fig. 1 shows a flowchart of an implementation of the color block tracking method according to the first embodiment of the present application, detailed as follows:
s101, sampling video.
After starting to sample the video, the embodiment of the present application may continuously acquire a new video frame until receiving a related instruction for stopping tracking, or stopping sampling when some specific stopping conditions are reached, where the specific stopping conditions may be set by a technician according to requirements of different actual scenes, and are not limited herein, for example, a color block tracking time period is preset, and whether the current time is the end time of the time period is corresponding to the specific stopping conditions. Meanwhile, the embodiment of the application does not limit the specific sampling frequency, and can be set according to practical application requirements, for example, the frequency can be set to be a fixed frequency, for example, 50 pieces/second, or the sampling interval of video frames can be set firstly, for example, each frame of image is sampled, and then the sampling frequency is calculated according to the sampling interval and the frame rate of the video.
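For illustration only, a minimal sketch in plain Python of the second option, deriving the sampling frequency from a chosen frame interval and the video frame rate; the function and parameter names are hypothetical, not from the patent:

```python
def sampling_frequency(frame_rate_fps, sample_every_n_frames):
    # E.g. a 50 fps video sampled every frame yields 50 samples/second;
    # sampled every 2nd frame it yields 25 samples/second.
    return frame_rate_fps / sample_every_n_frames

print(sampling_frequency(50, 1))  # 50.0
print(sampling_frequency(50, 2))  # 25.0
```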
It should be understood that, depending on the actual situation of the execution subject, the way the video is acquired may vary. When the video is captured by the execution subject's own camera, the captured video is acquired and processed in real time directly; when the video is captured by another device and sent to the execution subject, the received video is processed in real time, so the acquisition additionally includes a video receiving process.
As a specific embodiment of triggering the tracking of the target object's color block, the method includes:
if a trigger instruction is detected, starting to sample the video.
A trigger instruction is any instruction capable of triggering the target object color block tracking function; the actual instruction may differ between scenes. For example, when the color block tracking function is called normally, the trigger instruction may be the start instruction of the color block tracking function; in some special scenes it may be another specific instruction, for example, when the color block tracking function is set to run automatically at startup, the startup instruction is the trigger instruction.
Upon detecting a trigger instruction, the embodiment of the present application starts sampling video frames from the video in order to analyze and track the target object's color block.
S102, performing color block positioning of a target object according to the first frame image obtained by sampling to obtain a first candidate frame area where the color block of the target object is located in the first frame image.
While continuously sampling the video in S101, the embodiment of the present application performs color block positioning of the target object on the sampled video frames, specifically:
In the embodiments of the present application, the first frame image may be any sampled video frame except the last, and the second frame image is the video frame sampled immediately after the first frame image; as the choice of first frame image changes, the second frame image may accordingly be any sampled video frame except the first. Because adjacent sampled video frames are processed in the same way, only a single pair of adjacently sampled video frames, the first frame image and the second frame image, is described as an example. It should be understood that, since the first frame image may be any sampled video frame except the last, the combination of the first and second frame images stands for any pair of adjacently sampled video frames, which ensures that the target object's color block can be located in every sampled video frame and the final color block tracking function is realized.
For the earlier-sampled first frame image, the embodiment of the present application locates the target object's color block in order to determine the candidate frame area where the block lies in the first frame image. Since the processing of video frames in the embodiments of the present application is a cyclic process, when the first frame image is not the first sampled video frame, the operation of locating the target object's color block in the first frame image can refer directly to the operations on the second frame image in S104-S105.
When the first frame image is the first sampled video frame, the method for locating the target object's color block in the first frame image is not limited and can be chosen or set by a technician according to actual requirements, including but not limited to global color block scanning, or dividing the video frame into several large areas in advance and then identifying and locating the target object's color block area by area in a preset order.
As one embodiment of the present application, the step of locating the target object's color block in the first frame image includes:
taking the 1st video frame obtained by sampling as the first frame image, performing global color block scanning on the first frame image, and identifying the first candidate frame area where the target object's color block is located in the first frame image.
In the embodiment of the present application, when the first frame image is the first sampled video frame, a global color block scanning method is chosen for the first identification and positioning of the target object's color block, to ensure the accuracy of this first positioning of the target object. Global color block scanning means performing color block scanning over the whole first frame image and identifying the color block corresponding to the target object.
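By way of example, the following is a hedged sketch of this first-frame global scan on an OpenMV module, whose built-in find_blobs function the background section above mentions; the LAB color threshold and the size filters are placeholder values, not from the patent:

```python
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)            # 320 x 240 frames

TARGET_COLOR = (30, 100, 15, 127, 15, 127)   # example LAB range (a red-ish block)

img = sensor.snapshot()                      # the 1st sampled video frame
# No roi argument, so the whole image is scanned (global color block scanning).
blobs = img.find_blobs([TARGET_COLOR], pixels_threshold=50, area_threshold=50)
if blobs:
    # Take the largest matching color block as the target object's color block;
    # its bounding rectangle (x, y, w, h) is the first candidate frame area.
    first_candidate = max(blobs, key=lambda b: b.pixels()).rect()
```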
S103, selecting a first active area containing a first candidate frame area from the first frame image.
As can be seen from the above, once the candidate frame area where the target object's color block lies has been determined, and because the interval between samples is extremely short, the total range within which the target object's color block may move in the next frame image can be estimated. Therefore, after determining the first candidate frame area corresponding to the target object's color block in the first frame image, the embodiment of the present application locates the corresponding first active area based on the first candidate frame area, to facilitate the subsequent processing of the second frame image. The specific method of selecting the active area is not limited and can be chosen or set by a technician according to actual needs, for example, selecting an active area larger than the candidate frame area and centered on it, or analyzing the movement trend of the target object's color block across historical video frames, determining the corresponding movement direction, and selecting the active area along that direction, in which case the candidate frame area need not be at the center of the active area; whatever the selection method, the active area must contain the corresponding candidate frame area.
As a specific implementation manner of selecting the first active area in the first embodiment of the present application, the operation of selecting the first active area includes:
acquiring a first length and a first width, enlarging the first candidate frame area based on the first length and the first width, and taking the enlarged image area as the first active area.
Considering that the movement direction of the target object's color block is difficult to determine, that is, in actual application it is hard to know in which direction the block will move in the next video frame, the embodiment of the present application enlarges the first candidate frame area based on the first length and the first width and takes the enlarged image area as the corresponding first active area, so that the selected active area covers movement of the target object's color block in any direction. The specific values of the first length and the first width are not limited here and can be set by a technician; they may be fixed values or values obtained by scaling the actual size of the first candidate frame area, for example, the first length may be set to three times the length of the first candidate frame area.
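A minimal sketch in plain Python of this enlargement, clamped to the frame borders; the function and parameter names are illustrative, and the choice of three times the candidate size follows the example above:

```python
def expand_region(rect, first_length, first_width, frame_w, frame_h):
    # Grow the candidate frame area by the first length horizontally and the
    # first width vertically on every side, clipped to the frame.
    x, y, w, h = rect
    nx = max(0, x - first_length)
    ny = max(0, y - first_width)
    nw = min(frame_w - nx, w + 2 * first_length)
    nh = min(frame_h - ny, h + 2 * first_width)
    return (nx, ny, nw, nh)

candidate = (100, 80, 40, 30)                 # (x, y, w, h), illustrative values
active = expand_region(candidate, 3 * candidate[2], 3 * candidate[3], 320, 240)
```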
S104, determining a second active area corresponding to the first active area from the second frame image obtained by sampling, wherein the second frame image is adjacent to the first frame image in sampling time, the sampling time of the second frame image is later than that of the first frame image, and the coordinates of the second active area in the second frame image are the same as those of the first active area in the first frame image.
Once the first active area has been selected in the first frame image, positioning of the target object's color block in the next sampled frame image begins. Specifically, the embodiment of the present application finds the second active area corresponding to the first active area in the second frame image, the coordinates of the second active area in the second frame image being the same as those of the first active area in the first frame image. For example, if the first active area is the whole left half of the first frame image, the second active area is the whole left half of the second frame image. Identification and positioning of the target object's color block can then be carried out within the second frame image.
S105, performing color block positioning of the target object on the second active area to obtain a second candidate frame area of the color block of the target object in the second frame image, and selecting a third active area containing the second candidate frame area from the second frame image.
After determining the second active area, the embodiment of the present application performs color block positioning of the target object directly within it to determine the corresponding second candidate frame area. Since the second active area is only a partial image area of the second frame image, the workload is greatly reduced relative to a global color block scan, and efficiency is greatly improved.
While locating the second candidate frame area, the embodiment of the present application further selects the third active area corresponding to the second candidate frame area, for use in processing subsequent video frames, so that every video frame except the first sampled one can refer to the active area of the previously sampled video frame to locate the target object's color block quickly and accurately. For the selection method of the third active area, refer to the description of the selection method of the first active area above, which is not repeated here.
In summary, to improve the efficiency and accuracy of color block tracking, the embodiments of the present application begin positioning the target object's color block as soon as continuous sampling of the video starts. For any two adjacently sampled frame images, while the candidate frame area where the target object's color block lies is located in the earlier-sampled frame image, the limited range within which the object may move during the short sampling interval is marked as well; when the later-sampled frame image is processed, color block positioning of the target object is performed directly within the previously marked active area, and the next movement range is marked in turn. Each frame image thus only requires identification and positioning within the single image area where the target object may have moved, which greatly reduces the per-frame identification workload and the interference from interfering objects in the frame image, improving both the efficiency and the accuracy of tracking the target object's color block, as sketched below.
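The sketch below ties S101-S105 together in plain Python. The per-frame scans are passed in as functions (global_scan(img) returns a candidate rectangle or None; local_scan(img, roi) scans only the active area), and the fallback global rescan on a failed match is the one described after S303 below; all names are illustrative:

```python
def track(frames, global_scan, local_scan, grow, frame_w, frame_h):
    active = None                             # active area carried across frames
    for img in frames:                        # S101: each sampled video frame
        if active is None:
            cand = global_scan(img)           # S102: global scan on the 1st frame
        else:
            cand = local_scan(img, active)    # S104-S105: scan only the active
            if cand is None:                  # area (same coordinates, new frame)
                cand = global_scan(img)       # fallback rescan on match failure
        if cand is not None:                  # S103/S105: grow the candidate
            active = grow(cand, frame_w, frame_h)  # into the next active area
        yield cand                            # per-frame candidate frame area
```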
As a specific implementation manner of positioning the target object's color block in the first frame image in the first embodiment of the present application, as shown in fig. 2, for the case where the first frame image is not the first sampled video frame, the step of positioning the target object's color block in the embodiment of the present application includes:
S201, taking the (N-1)th video frame obtained by sampling as the third frame image, taking the Nth video frame obtained by sampling as the first frame image, obtaining the fourth active area in the third frame image, and determining the fifth active area corresponding to the fourth active area from the first frame image, wherein N is an integer greater than 1 and the fourth active area contains the third candidate frame area where the target object's color block lies in the third frame image.
In the embodiment of the present application, to process the first frame image, the active area corresponding to the target object's color block in the video frame sampled immediately before it (namely the third frame image) is acquired first, so that the corresponding active area in the first frame image can be found and the target object's color block in the first frame image located quickly. The value of N changes with the real-time sampling of the video rather than being fixed: if the first frame image currently being processed is the third captured video frame, N is 3; if it is the tenth, N is 10.
S202, performing color block positioning of the target object in the fifth active area to obtain the first candidate frame area of the target object's color block in the first frame image.
Finding the fifth active area and locating the target object's color block within it are the same as finding the second active area and locating the target object's color block within it in the first embodiment of the present application; for details, refer to the first embodiment, which is not repeated here.
As a specific implementation manner of locating the target object's color block in the second active area in the first embodiment of the present application, consider that in practice the active area is generally not set too small, so as to prevent tracking failure when the target object's color block moves outside it. When the active area is relatively large, however, color blocks of interfering objects are quite likely to enter it, making the identification accuracy of the target object's color block hard to guarantee. Full object recognition could be performed within the active area, but the workload of direct object recognition is relatively large and occupies more processing resources, which would greatly reduce the identification efficiency of the target object's color block.
To ensure both the accuracy and the efficiency of target object color block identification when positioning within the active area, as shown in fig. 3, the step of locating the target object's color block in the second active area in the third embodiment of the present application includes:
S301, performing color screening on all color blocks in the second active area to obtain one or more color blocks to be detected that have the same color as the target object's color block, and obtaining the fourth candidate frame area where each color block to be detected lies in the second frame image.
Because the target object's color block has already been identified in S102, its color is known (alternatively, the user may input the color of the target object's color block directly). The embodiment of the present application can therefore search the second active area for color blocks directly based on the known color of the target object's color block, determining one or more matching color blocks to be detected together with the one or more corresponding fourth candidate frame areas, which realizes a fast coarse screening for the target object's color block.
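A hedged sketch of this coarse screening, again assuming the OpenMV find_blobs API with its roi keyword; the size filters are placeholders:

```python
def screen_by_color(img, active_area, target_color_lab):
    # Return the fourth candidate frame areas: one bounding rectangle per color
    # block inside the active area whose color matches the target object's.
    blobs = img.find_blobs([target_color_lab], roi=active_area,
                           pixels_threshold=50, area_threshold=50)
    return [b.rect() for b in blobs]          # (x, y, w, h) per block to detect
```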
S302, performing color block shape recognition on each fourth candidate frame area to obtain a corresponding color block shape.
Considering that the probability of an interfering object's color block entering the active area is small, and that the probability of such a block having the same shape as the target object's color block is smaller still, the embodiment of the present application identifies the target object's color block by color block shape matching in order to achieve fast and accurate identification; to this end, the shape of the color block in each fourth candidate frame area is recognized.
S303, obtaining the color block shape of the target object's color block, matching it against the shape of each color block to be detected, identifying a successfully matched color block to be detected as the target object's color block, and identifying the fourth candidate frame area corresponding to the successfully matched block as the second candidate frame area of the target object's color block.
The color block shape of the target object's color block may be input in advance by the user or obtained by performing shape recognition on the target object's color block; this is not limited here and can be determined by the actual scene. Meanwhile, the color block shape may be a single shape such as a rectangle, or may comprise several shapes at once so as to cover the different projected shapes a target object presents under different conditions; this too is not specifically limited and can be set by a technician according to actual requirements. For example, since the projection of a cylindrical object is either a rectangle or a circle, the color block shape of such a target object can be set to include both a circle and a rectangle.
Because the color block shape of the target object's color block is known data, once the shape of each color block to be detected has been acquired, the target object's color block can be identified by matching color block shapes, thereby determining the target object's color block contained in the active area and the second candidate frame area corresponding to it, as in the sketch below.
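A minimal sketch of this matching step in plain Python; the shape labels and the two-shape target set follow the cylinder example above and are illustrative:

```python
TARGET_SHAPES = {"rectangle", "circle"}       # e.g. a cylinder's two projections

def match_target(candidates):
    # candidates: list of (rect, shape_label) pairs produced by S302.
    # Returns the second candidate frame area, or None on match failure
    # (which triggers the global rescan described below).
    for rect, shape in candidates:
        if shape in TARGET_SHAPES:
            return rect
    return None
```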
In practice, when the execution subject of the color block tracking method is a terminal device with weaker computing power such as a single-chip microcontroller, a large target object identification workload directly and greatly reduces processing efficiency. To reduce the workload of target object color block identification and improve identification efficiency, the embodiment of the present application chooses color block shape recognition and matching, which involves no complex operations such as image matching or heavy data computation, so the computation required for shape recognition is very small and identification efficiency is greatly improved. Meanwhile, the active area searched is only a partial image area of the video frame, so the search range is very limited; interference sources are therefore generally few and similarly shaped color blocks rarely occur, which safeguards the accuracy of shape-based identification. The embodiment of the present application can thus identify the target object's color block both accurately and efficiently.
As an embodiment of the present application, if the color block shape matching in S303 fails, the movement of the target object's color block has exceeded the corresponding movement range. In this case, to ensure accurate positioning of the target object's color block, the embodiment of the present application performs a global color block scan on the second frame image again.
As a specific implementation manner of the color block shape recognition performed on the color block to be detected in the fourth candidate frame area in the third embodiment, and to further improve the efficiency of shape recognition, as shown in fig. 4A, the shape recognition step in the fourth embodiment of the present application includes:
S401, drawing four rectangles of a first size at the four corners of the fourth candidate frame area, taking the four corners as base points, to obtain four corresponding rectangular image areas.
In the embodiment of the present application, rectangles of the first size are drawn with the four corners of the fourth candidate frame area as base points, and the images of the overlap between the drawn rectangles and the color block to be detected are used to probe the situation at the block's four corners, so that the corresponding color block shape can be recognized. In theory, the larger the first size, the higher the probability that a drawn rectangle intersects the color block to be detected and the larger the overlap area; but overly large rectangles also make the overlapping partial images more similar overall, which reduces shape recognition accuracy. Conversely, smaller drawn rectangles yield more distinctive overlapping partial images and higher recognition accuracy. The specific value of the first size can therefore be determined by a technician according to actual requirements and is not limited here.
The rectangle drawing process is illustrated with fig. 4B. Suppose the area selected inside the rectangular frame in fig. 4B is the fourth candidate frame area, and the black rectangle is the contained color block to be detected; the embodiment of the present application then draws four corresponding rectangles with the four corners of the rectangular frame as base points. Figs. 4C and 4D are both schematic diagrams of the rectangular image areas and the fourth candidate frame area obtained after drawing with the four corners as base points, but in fig. 4C each corner serves as a corner of its drawn rectangle, while in fig. 4D each corner serves as the center of its drawn rectangle. The embodiment of the present application does not limit the specific drawing method: either style may be used, as long as it is ensured that at least one drawn rectangle can intersect the color block to be detected, as sketched below.
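A minimal sketch in plain Python covering both drawing styles described above; size is the first size, and the assumption that the fig. 4C-style probes lie inside the candidate frame area is the sketch's own, not the patent's:

```python
def corner_rects(candidate, size, corner_as_center=False):
    x, y, w, h = candidate
    if corner_as_center:
        # Fig. 4D style: each corner of the candidate frame area is the
        # center of its probe rectangle.
        corners = [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
        return [(cx - size // 2, cy - size // 2, size, size)
                for cx, cy in corners]
    # Fig. 4C style: each corner is itself a corner of its probe rectangle,
    # with the probe placed inside the candidate frame area.
    return [(x, y, size, size),
            (x + w - size, y, size, size),
            (x, y + h - size, size, size),
            (x + w - size, y + h - size, size, size)]
```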
S402, detecting the overlapping partial images between each of the four rectangular image areas and the color block to be detected in the fourth candidate frame area, to obtain M corresponding overlapping partial images, wherein M ∈ {1, 2, 3, 4}.
Referring to figs. 4C, 4D, 4E and 4F, different color block shapes and different drawn rectangle sizes lead to certain differences in the number and shape of the resulting overlapping partial images. After the four rectangular image areas are obtained, the embodiment of the present application therefore obtains the corresponding M overlapping partial images, where the specific value of M depends on the actual drawing result: in figs. 4C, 4D and 4E, M is 4, while in fig. 4F, M is 2.
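A minimal sketch of this detection in plain Python. Membership in the color block to be detected is abstracted as a hypothetical in_block(px, py) test (for example, a lookup in a binary mask produced by the color screening):

```python
def overlap_images(probe_rects, in_block):
    overlaps = []
    for rx, ry, rw, rh in probe_rects:
        pixels = [(px, py)
                  for px in range(rx, rx + rw)
                  for py in range(ry, ry + rh)
                  if in_block(px, py)]
        if pixels:                     # only non-empty overlaps are counted
            overlaps.append(pixels)
    return overlaps                    # M = len(overlaps), with 1 <= M <= 4
```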
S403, identifying the color block pattern of the color block to be detected in the fourth candidate frame area according to the M overlapping partial images.
As figs. 4C, 4D, 4E and 4F show, the number and shape of the overlapping partial images between the color block to be detected and the four rectangular image areas vary with the block's shape: when the color block to be detected is a regular pattern such as a rectangle or a circle, the number of overlapping partial images is generally 4, whereas for irregular patterns such as a heart the number is often hard to determine. Based on these characteristics, the embodiment of the present application can judge the specific shape of the color block to be detected from attributes of the overlapping partial images such as their number, shape, and pixel count. The specific attributes used and the judging method are not limited and can be chosen and set by a technician according to actual needs, including but not limited to recognizing the shape from the number, shape and pixel count together, or from the number and shape alone.
As a specific implementation manner of identifying the color block pattern of the color block to be detected from the M overlapping partial images in the fourth embodiment, as shown in fig. 5A, the pattern identification operation in the fifth embodiment of the present application includes:
S501, if M = 4, the differences between the numbers of pixels contained in the 4 overlapping partial images are smaller than a first threshold, and the shapes of the 4 overlapping partial images are all rectangular, determining that the color block pattern of the color block to be detected in the fourth candidate frame area is rectangular.
Referring to figs. 4C and 4D, when the color block to be detected is rectangular, it intersects all 4 rectangular image areas, i.e. the number of corresponding overlapping images should be 4; at the same time, those overlapping images should themselves be rectangular and should contain almost the same number of pixels (since the projected image is not necessarily a perfect rectangle, slight differences between the overlapping images remain).
Therefore, when M = 4, the differences between the pixel counts of the 4 overlapping partial images are smaller than the first threshold, and the shapes of the 4 overlapping partial images are all rectangular, the embodiment of the present application directly determines that the color block to be detected is rectangular. The specific value of the first threshold can be set by a technician according to actual requirements and is not limited here.
S502, if M = 4, the differences between the numbers of pixels contained in the 4 overlapping partial images are smaller than the first threshold, and the shapes of the 4 overlapping partial images are all sectors with an included angle of 90 degrees, determining that the color block pattern of the color block to be detected in the fourth candidate frame area is circular.
Similarly, referring to fig. 4E, when M = 4, the differences between the pixel counts of the 4 overlapping images are smaller than the first threshold, and the shapes of the 4 overlapping images are all 90-degree sectors, the color block to be detected is circular.
As an embodiment of the present application, referring to fig. 4F, a color block to be detected that is an irregular pattern does not always intersect all 4 rectangular image areas; alternatively, referring to fig. 5B, suppose the color block to be detected is the irregular polygon shown there: after rectangle drawing, fig. 5C is obtained, where M = 4 but the differences between the pixel counts of the 4 overlapping partial images are large. In the embodiment of the present application, the pattern identification operation therefore further includes:
if M < 4, or if M = 4 and the differences between the numbers of pixels contained in the 4 overlapping partial images are greater than or equal to the first threshold, determining that the color block pattern of the color block to be detected in the fourth candidate frame area is an irregular pattern.
Here, an irregular pattern is a pattern that is neither circular nor rectangular.
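The decision rules of S501, S502 and the irregular-pattern rule above can be collected into one function. The sketch assumes each overlapping partial image has been summarized upstream as a pixel count plus a shape label; how that label is produced is not specified here:

```python
def classify(overlaps, first_threshold):
    # overlaps: list of (pixel_count, shape_label) for the M overlap images,
    # where shape_label is e.g. "rectangle" or "sector90" (a 90-degree sector).
    if len(overlaps) < 4:                         # M < 4: irregular pattern
        return "irregular"
    counts = [c for c, _ in overlaps]
    if max(counts) - min(counts) >= first_threshold:
        return "irregular"                        # pixel counts differ too much
    shapes = {s for _, s in overlaps}
    if shapes == {"rectangle"}:
        return "rectangle"                        # S501: four similar rectangles
    if shapes == {"sector90"}:
        return "circle"                           # S502: four similar 90-degree sectors
    return "irregular"
```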
It should be understood that, in the embodiments of the present application, all sampled video frames other than the first are processed in substantially the same way. The third to fifth embodiments of the present application, and the other corresponding embodiments of processing the second frame image, can therefore be applied to any video frame other than the first sampled one; replacing the second frame image with the video frame actually to be processed as the corresponding object also falls within the scope of the present application.
Corresponding to the methods of the above embodiments, fig. 6 shows a structural block diagram of the color block tracking device provided in the embodiment of the present application; for convenience of explanation, only the parts relevant to the embodiment of the present application are shown. The color block tracking device illustrated in fig. 6 may be the execution subject of the color block tracking method provided in the first embodiment.
Referring to fig. 6, the color block tracking device includes:
a sampling module 61, configured to sample the video.
A first positioning module 62, configured to perform color block positioning of the target object according to the sampled first frame image, to obtain a first candidate frame area where the target object's color block is located in the first frame image.
An active region selection module 63, configured to select a first active region including the first candidate frame region in the first frame image.
The active region searching module 64 is configured to determine, from the sampled second frame image, a second active region corresponding to the first active region, where the second frame image is adjacent to the first frame image in sampling time, and the sampling time of the second frame image is later than the sampling time of the first frame image, and coordinates of the second active region in the second frame image are the same as coordinates of the first active region in the first frame image.
A second positioning module 65, configured to perform color block positioning of the target object in the second active area to obtain a second candidate frame area of the target object's color block in the second frame image, and to select a third active area containing the second candidate frame area in the second frame image.
Further, the first positioning module 62 includes:
taking the (N-1)th video frame obtained by sampling as a third frame image, taking the Nth video frame obtained by sampling as the first frame image, obtaining a fourth active area in the third frame image, and determining a fifth active area corresponding to the fourth active area from the first frame image, wherein N is an integer greater than 1, and the fourth active area comprises a third candidate frame area where the target object's color block is located in the third frame image;
and positioning the color block of the target object in the fifth active area to obtain the first candidate frame area of the target object's color block in the first frame image.
Further, the first positioning module 62 further includes:
taking the 1st video frame obtained by sampling as the first frame image, performing global color block scanning on the first frame image, and identifying the first candidate frame area where the target object's color block is located in the first frame image.
Further, the active area selection module 63 includes:
acquiring a first length and a first width, enlarging the first candidate frame region based on the first length and the first width, and taking the enlarged image region as the first active region.
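As a sketch, assuming the candidate box is padded symmetrically by the first length horizontally and the first width vertically (the embodiment states only that the box is enlarged by these amounts) and clamped to the frame:

def expand_box(box, first_length, first_width, frame_w, frame_h):
    x, y, w, h = box                     # candidate frame area (x, y, w, h)
    x0 = max(0, x - first_length)        # pad left and right by the first length
    y0 = max(0, y - first_width)         # pad top and bottom by the first width
    x1 = min(frame_w, x + w + first_length)
    y1 = min(frame_h, y + h + first_width)
    return x0, y0, x1 - x0, y1 - y0      # the active region, clamped to the frame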
Further, the second positioning module 65 includes:
and the color block screening module is used for carrying out color screening on all the color blocks in the second active area to obtain one or more color blocks to be detected, which have the same color as the target object, and obtaining a fourth candidate frame area where each color block to be detected is located in the second frame image.
And the shape recognition module is used for recognizing the color block shape of each fourth candidate frame area to obtain the corresponding color block shape.
And the color block matching module is used for acquiring the color block shape of the target object color block, matching the color block shape of the target object color block with each color block to be detected, identifying the successfully matched color block to be detected as the target object color block, and identifying the fourth candidate frame area corresponding to the successfully matched color block to be detected as the second candidate frame area corresponding to the target object color block.
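A minimal sketch of this screen-then-match step, assuming candidate boxes and masks from a color screening pass and a shape recognizer such as the classifier sketched earlier; taking the first shape match as the target block is an assumption of the sketch, standing in for the embodiment's "successfully matched" block.

def match_target_block(candidate_boxes, candidate_masks, target_shape, classify):
    # Each color-screened block is a block to be detected; its box is a
    # fourth candidate frame area. The first block whose recognized shape
    # equals the target's shape is taken as the target object color block,
    # and its box becomes the second candidate frame area.
    for box, mask in zip(candidate_boxes, candidate_masks):
        if classify(mask) == target_shape:
            return box
    return None                          # target block absent from this frame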
Further, the shape recognition module includes:
and the rectangle drawing module is used for drawing four rectangles with the first size at the four corners of the fourth candidate frame area by taking the four corners of the fourth candidate frame area as base points to obtain four corresponding rectangle image areas.
And the overlapping detection module is used for detecting overlapping part images of the four rectangular image areas and the color blocks to be detected in the fourth candidate frame area respectively to obtain corresponding M overlapping part images, wherein M is [1,2,3,4].
The identification module is used for identifying the color block pattern of the color block to be detected in the fourth candidate frame area according to the M overlapping partial images.
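A sketch of the corner-rectangle construction and overlap detection, assuming the first size is the side length s of a square and the color block is given as a boolean mask cropped to its fourth candidate frame area; counting only non-empty overlaps towards M is how this sketch reads the embodiment.

import numpy as np

def corner_overlaps(block_mask, first_size):
    h, w = block_mask.shape
    s = min(first_size, h, w)            # each square must fit inside the box
    corners = [block_mask[:s, :s], block_mask[:s, w - s:],          # top corners
               block_mask[h - s:, :s], block_mask[h - s:, w - s:]]  # bottom corners
    # Each slice is the overlapping partial image between one corner square
    # and the block to be detected; empty overlaps do not count towards M.
    return [c for c in corners if np.count_nonzero(c) > 0]

# M = len(corner_overlaps(mask, 10)); the list then feeds the identification rules.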
Further, the identification module includes:
If M = 4, the difference between the numbers of pixels contained in the 4 overlapping partial images is smaller than a first threshold, and the shapes of the 4 overlapping partial images are all rectangular, the color block pattern of the color block to be detected in the fourth candidate frame area is determined to be rectangular.
If M = 4, the difference between the numbers of pixels contained in the 4 overlapping partial images is smaller than the first threshold, and the shapes of the 4 overlapping partial images are all fan-shaped with an included angle of 90 degrees, the color block pattern of the color block to be detected in the fourth candidate frame area is determined to be circular.
For the process by which each module of the color block tracking device implements its function, reference may be made to the foregoing embodiments shown in figs. 1 to 5 and to the other related embodiments of the color block tracking method; details are not repeated here.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The color block tracking method provided by the embodiments of the present application can be applied to terminal devices such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks and personal digital assistants (PDA); the embodiments of the present application do not limit the specific type of the terminal device.
For example, the terminal device may be a station (ST) in a WLAN, a cellular telephone, a cordless telephone, a Session Initiation Protocol (SIP) telephone, a wireless local loop (WLL) station, a personal digital assistant (PDA) device, a handheld device with wireless communication capability, a computing device or other processing device connected to a wireless modem, an in-vehicle device, an Internet-of-Vehicles terminal, a computer, a laptop computer, a handheld communication device, a handheld computing device, a satellite radio, a wireless modem card, a television set-top box (STB), customer premises equipment (CPE), and/or other devices for communicating over a wireless system, as well as a terminal in a next-generation communication system, such as a mobile terminal in a 5G network or a mobile terminal in a future evolved public land mobile network (PLMN), etc.
By way of example and not limitation, when the terminal device is a wearable device, the wearable device may be a general term for devices developed by applying wearable technology to the intelligent design of everyday wear, such as glasses, gloves, watches, clothing and shoes. A wearable device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories; it is not merely a hardware device, and can realize powerful functions through software support, data interaction and cloud interaction. In a broad sense, wearable smart devices include full-featured, large-size devices that can realize all or part of their functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus on a specific type of application function and need to be used together with other devices such as smartphones, for example various smart bracelets and smart jewelry for physical sign monitoring.
Fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 7, the terminal device 7 of this embodiment includes: at least one processor 70 (only one shown in fig. 7), a memory 71, said memory 71 having stored therein a computer program 72 executable on said processor 70. The processor 70, when executing the computer program 72, performs the steps of the various color patch tracking method embodiments described above, such as steps 101 through 105 shown in fig. 1. Alternatively, the processor 70, when executing the computer program 72, performs the functions of the modules/units of the apparatus embodiments described above, such as the functions of the modules 61 to 65 shown in fig. 6.
The terminal device 7 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. The terminal device may include, but is not limited to, the processor 70 and the memory 71. It will be appreciated by those skilled in the art that fig. 7 is merely an example of the terminal device 7 and does not constitute a limitation on it; the terminal device may include more or fewer components than illustrated, may combine certain components, or may have different components; for example, it may further include input/output devices, network access devices, a bus, etc.
The processor 70 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 71 may, in some embodiments, be an internal storage unit of the terminal device 7, such as a hard disk or memory of the terminal device 7. The memory 71 may also be an external storage device of the terminal device 7, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the terminal device 7. Further, the memory 71 may include both an internal storage unit and an external storage device of the terminal device 7. The memory 71 is used for storing an operating system, application programs, a boot loader, data and other programs, such as the program code of the computer program, and may also be used for temporarily storing data that has been output or is to be output.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the various method embodiments described above.
Embodiments of the present application further provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform the steps of the various method embodiments described above.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the method of the above embodiment, or may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the computer program may implement the steps of each of the method embodiments described above. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (9)

1. A color block tracking method, comprising:
sampling the video;
performing color block positioning of a target object according to a first frame image obtained by sampling to obtain a first candidate frame region where a color block of the target object is located in the first frame image;
selecting a first active region including the first candidate frame region from the first frame image;
determining a second active region corresponding to the first active region from a second frame image obtained by sampling, wherein the second frame image is adjacent to the first frame image in sampling time, the sampling time of the second frame image is later than that of the first frame image, and the coordinates of the second active region in the second frame image are the same as those of the first active region in the first frame image;
performing color block positioning on the second active region to obtain a second candidate frame region of the target object color block in the second frame image, and selecting a third active region containing the second candidate frame region in the second frame image;
wherein the performing color block positioning on the second active area to obtain a second candidate frame area of the target object color block in the second frame image includes:
Color screening is carried out on all color blocks in the second active area to obtain one or more color blocks to be detected, wherein the color of the one or more color blocks to be detected is the same as that of the target object, and a fourth candidate frame area where each color block to be detected is located in the second frame image is obtained;
performing color block shape recognition on each fourth candidate frame area to obtain a corresponding color block shape;
obtaining the color block shape of the target object color block, matching the color block shape of the target object color block with each color block to be detected, identifying the color block to be detected which is successfully matched as the target object color block, and identifying the fourth candidate frame area corresponding to the color block to be detected which is successfully matched as the second candidate frame area corresponding to the target object color block;
wherein the candidate frame region represents an image region selected by a candidate frame in which the color block is located; the active area represents the image area in which the target object color block may be located in the next sampled frame image.
2. The color block tracking method according to claim 1, wherein the performing color block positioning of the target object according to the first frame image obtained by sampling to obtain a first candidate frame area where the color block of the target object is located in the first frame image includes:
taking the (N-1)th video frame obtained by sampling as a third frame image and the Nth video frame obtained by sampling as the first frame image, obtaining a fourth active area in the third frame image, and determining a fifth active area corresponding to the fourth active area from the first frame image, wherein N is an integer greater than 1, and the fourth active area comprises a third candidate frame area where the target object color block is located in the third frame image;
and positioning the color block of the target object in the fifth active area to obtain the first candidate frame area of the color block of the target object in the first frame image.
3. The color block tracking method according to claim 1, wherein the performing color block positioning of the target object according to the first frame image obtained by sampling to obtain a first candidate frame area where the color block of the target object is located in the first frame image includes:
and taking the 1 st video frame obtained by sampling as the first frame image, carrying out global color block scanning on the first frame image, and identifying the first candidate frame area where the target object color block is located in the first frame image.
4. The color block tracking method according to claim 1, wherein the selecting a first active area including the first candidate frame area in the first frame image includes:
and acquiring a first length and a first width, amplifying the size of the first candidate frame region based on the first length and the first width, and taking the amplified image region as the first active region.
5. The color block tracking method according to claim 1, wherein the performing color block shape recognition for each of the fourth candidate frame areas includes:
drawing four rectangles of a first size at the four corners of the fourth candidate frame area, taking those corners as base points, to obtain four corresponding rectangular image areas;
detecting the overlapping partial images between each of the four rectangular image areas and the color block to be detected in the fourth candidate frame area, to obtain the corresponding M overlapping partial images, wherein M is an integer from 1 to 4;
and identifying the color block pattern of the color block to be detected in the fourth candidate frame area according to the M overlapping partial images.
6. The color block tracking method according to claim 5, wherein the identifying the color block pattern of the color block to be detected in the fourth candidate frame area according to the M overlapping partial images includes:
if M = 4, the difference between the numbers of pixels contained in the 4 overlapping partial images is smaller than a first threshold, and the shapes of the 4 overlapping partial images are all rectangular, determining that the color block pattern of the color block to be detected in the fourth candidate frame area is rectangular;
if M = 4, the difference between the numbers of pixels contained in the 4 overlapping partial images is smaller than the first threshold, and the shapes of the 4 overlapping partial images are all fan-shaped with an included angle of 90 degrees, determining that the color block pattern of the color block to be detected in the fourth candidate frame area is circular.
7. A color block tracking device, comprising:
the sampling module is used for sampling the video;
the first positioning module is used for positioning the color block of the target object according to the first frame image obtained by sampling to obtain a first candidate frame area where the color block of the target object is located in the first frame image;
an active region selection module, configured to select a first active region including the first candidate frame region from the first frame image;
The active region searching module is used for determining a second active region corresponding to the first active region from a second frame image obtained by sampling, wherein the second frame image is adjacent to the first frame image in sampling time, the sampling time of the second frame image is later than that of the first frame image, and the coordinates of the second active region in the second frame image are the same as those of the first active region in the first frame image;
the second positioning module is used for positioning the color block of the target object in the second active area, obtaining a second candidate frame area of the color block of the target object in the second frame image, and selecting a third active area containing the second candidate frame area in the second frame image;
the second positioning module includes:
the color block screening module is used for carrying out color screening on all color blocks in the second active area to obtain one or more color blocks to be detected, which have the same color as the target object, and obtaining a fourth candidate frame area where each color block to be detected is located in the second frame image;
the shape recognition module is used for recognizing the color block shape aiming at each fourth candidate frame area to obtain a corresponding color block shape;
The color block matching module is used for obtaining the color block shape of the target object color block, matching the color block shape of the target object color block with each color block to be detected, identifying the color block to be detected which is successfully matched as the target object color block, and identifying the fourth candidate frame area corresponding to the color block to be detected which is successfully matched as the second candidate frame area corresponding to the target object color block;
wherein the candidate frame region represents an image region selected by a candidate frame in which the color block is located; the active area represents the image area in which the target object color block may be located in the next sampled frame image.
8. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6.
9. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 6.
CN201911107802.1A 2019-11-13 2019-11-13 Color block tracking method and device and terminal equipment Active CN112800811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911107802.1A CN112800811B (en) 2019-11-13 2019-11-13 Color block tracking method and device and terminal equipment


Publications (2)

Publication Number Publication Date
CN112800811A CN112800811A (en) 2021-05-14
CN112800811B (en) 2023-10-13

Family

ID=75803369


Country Status (1)

Country Link
CN (1) CN112800811B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650630A (en) * 2016-11-11 2017-05-10 纳恩博(北京)科技有限公司 Target tracking method and electronic equipment
CN107491714A (en) * 2016-06-13 2017-12-19 深圳光启合众科技有限公司 Intelligent robot and its target object recognition methods and device
CN108446622A (en) * 2018-03-14 2018-08-24 海信集团有限公司 Detecting and tracking method and device, the terminal of target object
CN110147750A (en) * 2019-05-13 2019-08-20 深圳先进技术研究院 A kind of image search method based on acceleration of motion, system and electronic equipment
CN110334635A (en) * 2019-06-28 2019-10-15 Oppo广东移动通信有限公司 Main body method for tracing, device, electronic equipment and computer readable storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant