CN110930437B - Target tracking method and device - Google Patents

Target tracking method and device

Info

Publication number
CN110930437B
Authority
CN
China
Prior art keywords
target
local
camera
position information
local camera
Prior art date
Legal status
Active
Application number
CN201911143569.2A
Other languages
Chinese (zh)
Other versions
CN110930437A (en
Inventor
袁潮 (Yuan Chao)
温建伟 (Wen Jianwei)
方璐 (Fang Lu)
赵月峰 (Zhao Yuefeng)
李广涵 (Li Guanghan)
Current Assignee
Shenzhen Zhuohe Technology Co ltd
Original Assignee
Shenzhen Zhuohe Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Zhuohe Technology Co ltd
Priority claimed from CN201911143569.2A
Publication of CN110930437A
Application granted
Publication of CN110930437B
Status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/292 Multi-camera tracking
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30244 Camera pose

Abstract

The present disclosure relates to a target tracking method and apparatus. The method includes: when tracking a target with a multi-camera array system, determining a first local camera used to track the target; while tracking the target with the first local camera, determining, according to a first local picture acquired by the first local camera, whether the target has entered an acquisition area of a second local camera adjacent to the first local camera, to obtain a determination result; and if the determination result is that the target has entered the acquisition area of the second local camera, tracking the target with the second local camera.

Description

Target tracking method and device
Technical Field
The present disclosure relates to the field of information processing, and in particular, to a method and apparatus for tracking a target.
Background
Moving-object capture and tracking is a research hotspot in computer vision: using a computer to determine the position, size, and complete motion trajectory of a moving object with salient characteristics in a video sequence. In recent years, with the rapid growth of computer processing power and advances in image analysis techniques, real-time capture and tracking of moving objects has become practical. It has important practical value in video surveillance, video compression coding, robot navigation and positioning, intelligent human-machine interaction, virtual reality, and other fields.
The prior art provides a moving-object capture and tracking device comprising a controller and a pan-tilt camera electrically connected to the controller. The device works as follows: the pan-tilt camera captures images of multiple moving targets and transmits them to the controller; the controller detects and tracks the moving targets in the received images, determines which moving target to track, and sends a control instruction to the pan-tilt camera; on receiving the instruction, the pan-tilt camera adjusts its lens direction accordingly and performs tracking shooting of the designated moving target.
In the related art, a target may be tracked using a multi-camera array system comprising a plurality of cameras; because each camera covers only part of the scene, tracking is interrupted when the target moves from the acquisition area of one camera to that of another.
Disclosure of Invention
To overcome the problems in the related art, a target tracking method and apparatus are provided herein.
According to a first aspect herein, there is provided a target tracking method comprising:
when tracking a target with a multi-camera array system, determining a first local camera used to track the target;
while tracking the target with the first local camera, determining, according to a first local picture acquired by the first local camera, whether the target has entered an acquisition area of a second local camera adjacent to the first local camera, to obtain a determination result;
and if the determination result is that the target has entered the acquisition area of the second local camera, tracking the target with the second local camera.
In an exemplary embodiment, before determining, according to the first local picture acquired by the first local camera, whether the target has entered the acquisition area of the second local camera adjacent to the first local camera, the method further includes:
determining the overlap between the acquisition area of the first local camera and the acquisition area of the second local camera, to obtain an overlap region;
and the step of determining, according to the first local picture acquired by the first local camera, whether the target has entered the acquisition area of the second local camera, includes:
acquiring the position information of the target;
determining whether the position information of the target has entered the overlap region;
if the position information of the target has entered the overlap region, determining that the target has entered the acquisition area of the second local camera adjacent to the first local camera; otherwise, determining that it has not.
In an exemplary embodiment, determining the overlap between the acquisition area of the first local camera and the acquisition area of the second local camera to obtain the overlap region includes:
acquiring a first local picture from the first local camera and a second local picture from the second local camera, and stitching each into the global image acquired by the global camera, to obtain first region position information and second region position information in the global image;
and determining the overlap between the first region position information and the second region position information in the global image, to obtain the position information of the overlap region.
In an exemplary embodiment, acquiring the position information of the target includes:
determining the target grid in which the target is located, according to a preset grid-division strategy for the first local picture;
transforming the image within the target grid using the homography matrix corresponding to that grid, and determining the position information of the target grid in the global picture.
Determining whether the position information of the target has entered the overlap region then includes:
determining whether the position of the target grid in the global picture lies within the position information of the overlap region; if it does, determining that the position information of the target has entered the overlap region; otherwise, determining that it has not.
In an exemplary embodiment, tracking the target with the second local camera includes:
determining position information of the target in the first local picture;
converting the target's position in the first local picture, using a preset position-conversion strategy between the first local camera and the global camera, to obtain the target's position in the global image of the global camera;
converting the target's position in the global image, using a preset position-conversion strategy between the second local camera and the global camera, to obtain the target's position in a second local picture of the second local camera;
and tracking the target with the second local camera, taking the target's position in the second local picture as the initial tracking position.
According to another aspect herein, there is provided a target tracking apparatus comprising:
a first determining module, configured to determine, when a target is tracked using a multi-camera array system, a first local camera used for tracking the target;
a judging module, configured to determine, while the target is tracked with the first local camera, whether the target has entered an acquisition area of a second local camera adjacent to the first local camera according to a first local picture acquired by the first local camera, to obtain a determination result;
and a tracking module, configured to track the target with the second local camera if the determination result is that the target has entered the acquisition area of the second local camera.
In an exemplary embodiment, the apparatus further comprises:
a second determining module, configured to determine, before it is determined whether the target has entered the acquisition area of the second local camera adjacent to the first local camera, the overlap between the acquisition area of the first local camera and the acquisition area of the second local camera, to obtain an overlap region;
the judging module comprises:
a first acquisition unit, configured to acquire the position information of the target;
a judging unit, configured to determine whether the position information of the target has entered the overlap region; if so, determine that the target has entered the acquisition area of the second local camera adjacent to the first local camera; otherwise, determine that it has not.
In one exemplary embodiment, the second determining module includes:
a second acquisition unit, configured to acquire a first local picture from the first local camera and a second local picture from the second local camera, and stitch each into the global image acquired by the global camera, to obtain first region position information and second region position information in the global image;
and a processing unit, configured to determine the overlap between the first region position information and the second region position information in the global image, to obtain the position information of the overlap region.
In an exemplary embodiment, the first acquisition unit is configured to determine, according to a preset grid-division strategy for the first local picture, the target grid in which the target is located; transform the image within the target grid using the homography matrix corresponding to that grid; and determine the position information of the target grid in the global picture.
The judging unit is configured to determine whether the position of the target grid in the global picture lies within the position information of the overlap region; if it does, determine that the position information of the target has entered the overlap region; otherwise, determine that it has not.
In one exemplary embodiment, the tracking module includes:
a determining unit, configured to determine position information of the target in the first local picture;
a first calculating unit, configured to convert the target's position in the first local picture, using a preset position-conversion strategy between the first local camera and the global camera, to obtain the target's position in the global image of the global camera;
a second calculating unit, configured to convert the target's position in the global image, using a preset position-conversion strategy between the second local camera and the global camera, to obtain the target's position in a second local picture of the second local camera;
and a tracking unit, configured to track the target with the second local camera, taking the target's position in the second local picture as the initial tracking position.
According to another aspect herein, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed, implements the steps of any of the methods described above.
According to another aspect herein, there is provided a computer device comprising a processor, a memory, and a computer program stored on the memory; the processor implements the steps of any of the methods above when executing the computer program.
When a target is tracked using a multi-camera array system, a first local camera used for tracking the target is determined. While the target is tracked with the first local camera, whether the target has entered an acquisition area of a second local camera adjacent to the first local camera is determined according to a first local picture acquired by the first local camera, yielding a determination result. If the determination result is that the target has entered the acquisition area of the second local camera, the second local camera is used to track the target. The target's moving position can thus be determined, cross-camera target tracking within the multi-camera array system is achieved, and continuity of the tracking process is ensured.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate and explain the disclosure, and do not constitute a limitation on the disclosure. In the drawings:
FIG. 1 is a flow chart illustrating a method of target tracking according to an exemplary embodiment.
Fig. 2 is a schematic diagram illustrating local picture changes in a multi-camera array system, according to an example embodiment.
FIG. 3 is a block diagram of a computer device, according to an example embodiment.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments herein clearer, the technical solutions in the embodiments herein are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments herein; all other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without creative effort fall within the scope of protection herein. Note that, in the absence of conflict, the embodiments herein and the features therein may be combined with one another arbitrarily.
FIG. 1 is a flow chart illustrating a target tracking method according to an exemplary embodiment. As shown in FIG. 1, the method includes:
step 101, when a target is tracked by utilizing a multi-camera array system, determining a first local camera used for tracking the target;
In an exemplary embodiment, the multi-camera array system comprises a plurality of local cameras with the same focal length and a global camera. The local pictures acquired by the local cameras are stitched into the global image acquired by the global camera, yielding image information of a large scene with fine detail. After all local camera frames are stitched into the global frame, the whole frame is displayed on a screen matrix array: a region matching the resolution of each screen in the matrix is cropped from the global picture and shown on the corresponding display.
In an exemplary embodiment, there are differences in rotation, parallax, and the like between the local cameras, so each local picture is divided into a plurality of grids and a separate homography transformation is applied to each grid; that is, each grid has its own homography matrix. The overall transformation from local picture to global picture is therefore an irregular transformation that cannot be represented by a single homography matrix. After this irregular transformation, the local picture is stitched into the global picture.
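The per-grid mapping above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the grid size and the homography parametrisation are assumptions, and each point is simply routed to its cell's matrix.

```python
def apply_homography(h, x, y):
    """Map a point through a 3x3 homography given as nested lists."""
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return ((h[0][0] * x + h[0][1] * y + h[0][2]) / w,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / w)

def local_to_global(x, y, grid_homographies, grid_w, grid_h):
    """Each grid cell of the local picture owns its own homography,
    so the overall local-to-global mapping is piecewise (irregular)."""
    row, col = int(y // grid_h), int(x // grid_w)
    return apply_homography(grid_homographies[row][col], x, y)
```

Because adjacent cells carry different matrices, the stitched picture is not a single perspective warp of the local picture, which is why the text calls the result an irregular transformation.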
Step 102: while the target is tracked with the first local camera, determining, according to a first local picture acquired by the first local camera, whether the target has entered an acquisition area of a second local camera adjacent to the first local camera, to obtain a determination result;
In an exemplary embodiment, the first local camera may acquire images according to a preset image-acquisition timing policy; for example, the timing may be set in units of image frames, such as making the determination for every frame.
In one exemplary embodiment, whether the target has entered the acquisition area of the second local camera could be determined by running target detection directly on the image information acquired by the second local camera. However, this approach can briefly interrupt the target-tracking image and requires monitoring two local cameras simultaneously, so it is inefficient.
The embodiments of the present application therefore provide the following solution:
In an exemplary embodiment, before determining, according to the first local picture acquired by the first local camera, whether the target has entered the acquisition area of the second local camera adjacent to the first local camera, the method further includes:
determining the overlap between the acquisition area of the first local camera and the acquisition area of the second local camera, to obtain an overlap region;
and the step of determining, according to the first local picture acquired by the first local camera, whether the target has entered the acquisition area of the second local camera, includes:
acquiring the position information of the target;
determining whether the position information of the target has entered the overlap region;
if the position information of the target has entered the overlap region, determining that the target has entered the acquisition area of the second local camera adjacent to the first local camera; otherwise, determining that it has not.
In an exemplary embodiment, the image acquired by each local camera is transformed and then stitched into the image acquired by the global camera. Because the first and second local cameras are deployed in adjacent positions, their images occupy adjacent positions once stitched into the global image. To ensure that image information is acquired comprehensively, the acquisition areas of adjacent local cameras partially overlap; in particular, the acquisition areas of the first and second local cameras overlap. If the target enters this overlap region, it has entered the acquisition area of the second local camera; otherwise, it has not.
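The hand-off rule above reduces to a membership test in global-image coordinates. The sketch below assumes the overlap region is an axis-aligned rectangle (x0, y0, x1, y1); the patent does not fix the region representation.

```python
def in_region(point, region):
    """Is a global-image point inside a rectangular region?"""
    x, y = point
    x0, y0, x1, y1 = region
    return x0 <= x <= x1 and y0 <= y <= y1

def camera_for_target(target_pos, overlap_region):
    """Hand-off rule: stay with the first camera until the target's
    global position enters the overlap with the second camera."""
    return "second" if in_region(target_pos, overlap_region) else "first"
```

This avoids running target detection on the second camera's feed at all: only the already-tracked position on the first camera is consulted each frame.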
Step 103: if the determination result is that the target has entered the acquisition area of the second local camera, tracking the target with the second local camera.
In an exemplary embodiment, once the target is detected to have entered the acquisition area of the second local camera, the second local camera is used to track it, so that clearer and more detailed image information of the target can be acquired.
In the method provided by this example of the present disclosure, when a target is tracked using a multi-camera array system, a first local camera used for tracking the target is determined. While the target is tracked with the first local camera, whether the target has entered the acquisition area of a second local camera adjacent to the first local camera is determined according to a first local picture acquired by the first local camera. If the determination result is that the target has entered the acquisition area of the second local camera, the second local camera is used to track the target. The target's moving position can thus be determined, cross-camera target tracking within the multi-camera array system is achieved, and continuity of the tracking process is ensured.
The method provided herein is described below by way of further examples:
In an exemplary embodiment, determining the overlap between the acquisition area of the first local camera and the acquisition area of the second local camera to obtain the overlap region includes:
acquiring a first local picture from the first local camera and a second local picture from the second local camera, and stitching each into the global image acquired by the global camera, to obtain first region position information and second region position information in the global image;
and determining the overlap between the first region position information and the second region position information in the global image, to obtain the position information of the overlap region.
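Given the two stitched regions in the global image, the overlap region is their intersection. The sketch below again assumes axis-aligned rectangles (x0, y0, x1, y1), which is an illustrative simplification of the irregularly transformed regions.

```python
def region_overlap(region_a, region_b):
    """Intersect two stitched regions in the global image;
    returns None when the two cameras' areas do not overlap."""
    x0 = max(region_a[0], region_b[0])
    y0 = max(region_a[1], region_b[1])
    x1 = min(region_a[2], region_b[2])
    y1 = min(region_a[3], region_b[3])
    if x0 >= x1 or y0 >= y1:
        return None
    return (x0, y0, x1, y1)
```

Since camera deployment is fixed, this computation needs to run only once per adjacent camera pair, not per frame.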
Fig. 2 is a schematic diagram illustrating local picture changes in a multi-camera array system, according to an example embodiment. As shown in fig. 2, the local frames of the two local cameras are each divided into a plurality of grids according to the grid-division strategy and are named graph a and graph b from left to right. After the irregular transformation of the local frames, it can be determined that graph a and graph b have a corresponding overlap region in the global frame.
In an exemplary embodiment, acquiring the position information of the target includes:
determining the target grid in which the target is located, according to a preset grid-division strategy for the first local picture;
transforming the image within the target grid using the homography matrix corresponding to that grid, and determining the position information of the target grid in the global picture.
Determining whether the position information of the target has entered the overlap region then includes:
determining whether the position of the target grid in the global picture lies within the position information of the overlap region; if it does, determining that the position information of the target has entered the overlap region; otherwise, determining that it has not.
Taking fig. 2 as an example: the solid black square in graph a represents the target. After the overlap region between the global-frame projections of graphs a and b is determined, whether the target lies in that overlap region is checked. As shown in fig. 2, the target lies in the overlap region, so it is determined that the target has moved from the acquisition area of one local camera into that of the other.
In an exemplary embodiment, tracking the target with the second local camera includes:
determining position information of the target in the first local picture;
converting the target's position in the first local picture, using a preset position-conversion strategy between the first local camera and the global camera, to obtain the target's position in the global image of the global camera;
converting the target's position in the global image, using a preset position-conversion strategy between the second local camera and the global camera, to obtain the target's position in a second local picture of the second local camera;
and tracking the target with the second local camera, taking the target's position in the second local picture as the initial tracking position.
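The three-step hand-off above (first-local to global, then global to second-local) can be sketched as below. For clarity each camera's stitching transform is reduced to a scale-plus-offset pair; this is a hypothetical simplification of the per-grid homographies, not the patent's conversion strategy.

```python
def to_global(point, cam):
    """cam = (scale, offset_x, offset_y): local -> global coordinates."""
    s, ox, oy = cam
    return (point[0] * s + ox, point[1] * s + oy)

def to_local(point, cam):
    """Inverse of to_global: global -> that camera's local coordinates."""
    s, ox, oy = cam
    return ((point[0] - ox) / s, (point[1] - oy) / s)

def handoff(point_in_first, first_cam, second_cam):
    """first-local -> global -> second-local; the result seeds the
    tracker on the second camera as its initial tracking position."""
    return to_local(to_global(point_in_first, first_cam), second_cam)
```

The key design point is that the global image serves as the common coordinate frame, so no direct calibration between the two local cameras is needed.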
Taking fig. 2 as an example: after the target is determined to be in the overlap region between graphs a and b, the target's position in the global image is computed from the position-conversion relation between graph a and the global image; then, using that global position, the target's position in graph b is computed from the position-conversion relation between graph b and the global image.
The following describes the method provided in the examples of the present application:
Step 201: determining, from the multi-camera array system, the first local camera that has acquired image information of the target, and acquiring the target's position information from the image information acquired by the first local camera;
In one exemplary embodiment, the target may be determined in either of the following ways:
Mode 1: in a certain frame of the image information acquired by a local camera, a user's mark on a specific area of the image is received through a human-computer interaction interface, and the target is extracted from the marked area.
For example, an image frame is presented to the user; after it is detected that the user has drawn a selection box around an area, the image information within the selected area is extracted and recognized, yielding the target.
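Mode 1 amounts to cropping the user-marked area out of a frame. In the sketch below the frame is modelled as a 2-D list of pixel values and the selection box as (x, y, w, h); both representations are illustrative assumptions.

```python
def extract_marked_region(frame, box):
    """Cut the user-selected box (x, y, w, h) out of a 2-D pixel grid;
    the crop is then passed to recognition to obtain the target."""
    x, y, w, h = box
    return [row[x:x + w] for row in frame[y:y + h]]
```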
Mode 2: detecting the image information acquired by the local cameras using a preset target-detection strategy to obtain a detection result, and determining the tracked target from the detection result.
The target-detection strategy includes image information of the tracked target. Using this as the detection basis, content matching the tracked target's image information is searched for in the image information acquired by the local cameras, thereby determining which local camera has acquired image information of the tracked target and the target's position within that image information.
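The patent does not specify how the matching in Mode 2 is performed; the sketch below assumes a simple exhaustive sum-of-squared-differences template match on grayscale values, which is one common way to "search for content matching the tracked target's image information."

```python
def match_template(image, template):
    """Exhaustive SSD template match over a 2-D pixel grid;
    returns (x, y) of the best-matching placement, or None."""
    th, tw = len(template), len(template[0])
    best_ssd, best_pos = None, None
    for y in range(len(image) - th + 1):
        for x in range(len(image[0]) - tw + 1):
            ssd = sum((image[y + j][x + i] - template[j][i]) ** 2
                      for j in range(th) for i in range(tw))
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_pos = ssd, (x, y)
    return best_pos
```

Running this per local camera identifies both which camera sees the target and where it sits in that camera's picture.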
Either method yields which camera the tracked target is in and its position there. This is taken as the initial tracking information, and in each subsequent frame the tracking box follows the tracked target.
Step 202: after new image information acquired by the first local camera is obtained, determining the position information of the target in the new image information;
the method comprises the steps of calculating the position information of a target by utilizing a preset target tracking algorithm, specifically determining a search range according to a target tracking frame by utilizing a SiamDW target tracking algorithm based on deep learning, extracting original tracking area images and searching area images by utilizing deep learning features respectively, convolving tracking features on the searching features, and finally obtaining a response diagram of the searching area, wherein the maximum response position is the position of the target tracked by the frame.
Step 203: judging, from the target's position in the new image information, whether the target has entered the acquisition area of a second local camera adjacent to the first local camera;
In an exemplary embodiment, the final display result of the global camera is obtained by stitching the image information acquired by the local cameras after the irregular transformation; therefore, the target's position in the global image can be determined by converting its coordinates into the transformed picture according to the conversion relation.
After the first local camera is identified, the image-processing strategy used to stitch its images into the global camera's image is obtained, and the target's position in the global camera's image information is determined according to that strategy.
It is then judged whether the target's position in the global camera's image information lies within the position information of the overlap region.
If it does, the target is determined to have entered the overlap of the acquisition areas of the first and second local cameras, and step 204 is executed; otherwise, the target has not entered the acquisition area of the second local camera adjacent to the first local camera, and execution continues from step 202;
After the target's position coordinates are obtained, whether "mirror crossing" is needed is computed, where mirror crossing means switching from one local camera to another. After the pictures acquired by the local cameras undergo the irregular transformation, the corresponding regions of adjacent local cameras' pictures overlap. Once the tracked target enters an overlap region, it is judged to have crossed the mirror, and coordinate conversion between the cameras is performed; otherwise, the target is not in an overlap region, this frame needs no mirror crossing, and no processing is required.
Step 204, tracking the target by using a second local camera;
After it is detected that the target needs to cross the mirror, the coordinates of the target after crossing the mirror are determined, and the target continues to be tracked according to the post-crossing coordinates.
The coordinate system of an adjacent camera after the irregular transformation can be set to be translated by one unit, so that coordinates can be converted into the adjacent coordinate system by adding or subtracting one unit.
After the coordinates in the adjacent camera's transformed coordinate system are obtained, they are inversely transformed back to the original local camera coordinate system of the adjacent camera according to that camera's irregular transformation relation, yielding the coordinates of the tracking frame after crossing the mirror. Subsequent tracking coordinates are expressed in the original local camera coordinates of the adjacent camera, and subsequent extraction of the tracking-frame image switches to the image of the adjacent camera.
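Under the assumptions stated above — adjacent transformed coordinate systems differing by a one-unit translation, and the irregular transformation approximated by an invertible 3x3 homography — the handover computation might look like the following sketch. The matrices, shift direction, and function names are hypothetical:

```python
import numpy as np

def to_adjacent_system(xy, shift=(1.0, 0.0)):
    """Translate a point into the adjacent camera's transformed coordinate
    system by subtracting one unit along the (assumed) stitching direction."""
    return (xy[0] - shift[0], xy[1] - shift[1])

def apply_homography(H, xy):
    """Map a 2-D point through a 3x3 homography (homogeneous coordinates)."""
    v = H @ np.array([xy[0], xy[1], 1.0])
    return (v[0] / v[2], v[1] / v[2])

def handover_coords(xy_transformed, H_adjacent):
    """Convert a point from the first camera's transformed system into the
    adjacent camera's ORIGINAL pixel coordinates: translate by one unit,
    then invert the adjacent camera's transformation (here modeled as the
    hypothetical homography H_adjacent)."""
    xy_adj = to_adjacent_system(xy_transformed)
    return apply_homography(np.linalg.inv(H_adjacent), xy_adj)
```

Once `handover_coords` returns the point in the adjacent camera's original pixel frame, the tracker would continue from that position and extract subsequent tracking-frame crops from the adjacent camera's image.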
The above steps are repeated until the target moves out of the whole global picture, at which point the target is judged to be lost.
The method provided by this exemplary embodiment can quickly determine, when target tracking is performed in a multi-camera array system, that the target has moved from one local camera to another, so that the original target can be quickly tracked again on the other camera.
The embodiment of the application provides a target tracking device, which comprises:
a first determination module for determining a first local camera used for tracking a target when the target is tracked by utilizing a multi-camera array system;
the judging module is used for judging whether the target enters an acquisition area of a second local camera adjacent to the first local camera or not according to a first local picture acquired by the first local camera in the process of tracking the target by using the first local camera, so as to obtain a judging result;
and the tracking module is used for tracking the target by using the second local camera if the judgment result shows that the target enters the acquisition area of the second local camera.
In an exemplary embodiment, the apparatus further comprises:
a second determining module, configured to determine, before judging whether the target enters the acquisition region of a second local camera adjacent to the first local camera, the overlapping portion between the acquisition region of the first local camera and the acquisition region of the second local camera, to obtain an overlapping region;
the judging module comprises:
a first acquisition unit configured to acquire position information of the target;
a judging unit configured to judge whether the position information of the target enters the overlapping area; if the position information of the target enters the overlapping area, determining that the target enters an acquisition area of a second local camera adjacent to the first local camera; otherwise, determining that the target does not enter an acquisition region of a second local camera adjacent to the first local camera.
In one exemplary embodiment, the second determining module includes:
the second acquisition unit is used for respectively acquiring the first local picture acquired by the first local camera and the second local picture acquired by the second local camera, and stitching the first local picture and the second local picture into the global image acquired by the global camera to obtain first area position information and second area position information;
and the processing unit is used for determining an overlapping area between the first area position information and the second area position information in the global image and obtaining the position information of the overlapping area.
In an exemplary embodiment, the first acquisition unit is configured to determine, according to a preset grid division strategy of the first local picture information, the target grid where the target is located; convert the image in the target grid by using the homography matrix corresponding to the target grid, and determine the position information of the target grid in the global picture;
the judging unit is used for judging whether the position information of the target grid in the global picture is in the position information of the overlapping area or not, and if the position information of the target grid in the global picture is in the position information of the overlapping area, determining that the position information of the target enters the overlapping area; otherwise, determining that the position information of the target does not enter the overlapping area.
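The grid-based localization performed by the first acquisition unit can be illustrated roughly as follows, assuming a uniform grid over the local picture and one precomputed homography per cell; the grid size, the dictionary storage layout, and the function names are hypothetical:

```python
import numpy as np

def cell_index(xy, cell_w, cell_h):
    """Return the (column, row) mesh cell of the local picture that
    contains the target point, under a uniform grid division."""
    return (int(xy[0] // cell_w), int(xy[1] // cell_h))

def local_to_global(xy, homographies, cell_w, cell_h):
    """Map a local-picture point into the global picture using the
    homography stored for its grid cell (homogeneous-coordinate warp).

    homographies -- hypothetical mapping {cell_index: 3x3 homography}
    """
    H = homographies[cell_index(xy, cell_w, cell_h)]
    v = H @ np.array([xy[0], xy[1], 1.0])
    return (v[0] / v[2], v[1] / v[2])
```

The judging unit would then test whether the resulting global-picture position falls inside the overlap-region position information.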
In one exemplary embodiment of the present invention, the tracking module comprises:
a determining unit configured to determine position information of the target in the first partial picture;
the first calculating unit is used for calculating the position information of the target in the first local picture by utilizing a preset position conversion strategy of the first local camera and the global camera to obtain the position information of the target in the global image of the global camera;
the second calculating unit is used for calculating the position information of the target in the global image of the global camera by utilizing a preset position conversion strategy of the second local camera and the global camera to obtain the position information of the target in a second local picture of the second local camera;
and the tracking unit is used for taking the position information of the target in the second local picture as an initial tracking position and tracking the target by using the second local camera.
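The two position conversion strategies used by the first and second calculating units can be sketched as a chain of homographies, under the assumption that each preset strategy is representable as a 3x3 homography from a local picture into the global image; `H1`, `H2`, and the function names below are hypothetical:

```python
import numpy as np

def warp(H, xy):
    """Map a 2-D point through a 3x3 homography (homogeneous coordinates)."""
    v = H @ np.array([xy[0], xy[1], 1.0])
    return (v[0] / v[2], v[1] / v[2])

def first_local_to_second_local(xy, H1, H2):
    """Chain the preset conversion strategies: the target's position in the
    first local picture becomes the initial tracking position in the second.

    H1 -- hypothetical homography: first local picture  -> global image
    H2 -- hypothetical homography: second local picture -> global image
    """
    xy_global = warp(H1, xy)                   # first local -> global
    return warp(np.linalg.inv(H2), xy_global)  # global -> second local
```

The returned point would be handed to the tracking unit as the initial tracking position in the second local picture.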
The device provided by this example of the invention determines the first local camera used for tracking the target when the target is tracked by using a multi-camera array system; in the process of tracking the target with the first local camera, it judges, according to the first local picture acquired by the first local camera, whether the target enters the acquisition area of a second local camera adjacent to the first local camera, obtaining a judgment result; and if the judgment result is that the target has entered the acquisition area of the second local camera, the second local camera is used to track the target. The moving position of the target can thereby be determined, cross-camera target tracking in the multi-camera array system is achieved, and continuity in the target tracking process is ensured.
A computer readable storage medium having stored thereon a computer program which, when executed, performs the steps of the method of any of the preceding claims.
Fig. 3 is a block diagram of a computer device 300, according to an exemplary embodiment. For example, the computer device 300 may be provided as a server. Referring to fig. 3, the computer device 300 includes a processor 301, the number of which may be set to one or more as required. The computer device 300 further comprises a memory 302 for storing instructions, such as application programs, executable by the processor 301. The number of memories may likewise be set to one or more as required, and they may store one or more application programs. The processor 301 is configured to execute the instructions to perform the above-described method.
It will be apparent to one of ordinary skill in the art that embodiments herein may be provided as a method, apparatus (device), or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, including, but not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
The description herein is with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments herein. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such article or apparatus. Without further limitation, an element defined by the phrase "comprising ..." does not exclude the presence of additional identical elements in an article or apparatus that comprises the element.
While preferred embodiments herein have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all alterations and modifications as fall within the scope herein.
It will be apparent to those skilled in the art that various modifications and variations can be made herein without departing from the spirit and scope of the disclosure. Thus, given that such modifications and variations herein fall within the scope of the claims herein and their equivalents, such modifications and variations are intended to be included herein.

Claims (8)

1. A target tracking method, comprising:
in tracking a target with a multi-camera array system, determining a first local camera used to track the target;
in the process of tracking the target by using the first local camera, judging whether the target enters an acquisition area of a second local camera adjacent to the first local camera according to a first local picture acquired by the first local camera, and obtaining a judgment result;
if the judgment result shows that the target enters the acquisition area of the second local camera, tracking the target by using the second local camera;
the tracking of the target with the second local camera includes:
the multi-camera array system includes the first local camera, the second local camera, and a global camera;
determining position information of the target in the first local picture;
calculating the position information of the target in the first local picture by using a preset position conversion strategy of the first local camera and the global camera to obtain the position information of the target in the global image of the global camera;
calculating the position information of the target in the global image of the global camera by using a preset position conversion strategy of the second local camera and the global camera to obtain the position information of the target in a second local picture of the second local camera;
taking the position information of the target in a second local picture as an initial tracking position, and tracking the target by using the second local camera;
the determining the position information of the target in the first local picture includes: in a certain frame of image in the image information acquired by the first local camera, receiving a mark of a specific area in the image from a user, and extracting a target from the marked area; or alternatively; and detecting the image information acquired by the first local camera through a preset target detection strategy to obtain a detection result, and determining a tracked target according to the detection result.
2. The method according to claim 1, characterized in that:
the method further comprises the steps of judging whether the target enters an acquisition area of a second local camera adjacent to the first local camera according to a first local picture acquired by the first local camera, and before a judgment result is obtained, judging whether the target enters the acquisition area of the second local camera adjacent to the first local camera:
determining the overlapping portion between the acquisition region of the first local camera and the acquisition region of the second local camera to obtain an overlapping region;
the step of judging whether the target enters an acquisition area of a second local camera adjacent to the first local camera according to a first local picture acquired by the first local camera to obtain a judgment result comprises the following steps:
acquiring the position information of the target;
judging whether the position information of the target enters the overlapping area or not;
if the position information of the target enters the overlapping area, determining that the target enters an acquisition area of a second local camera adjacent to the first local camera; otherwise, determining that the target does not enter an acquisition region of a second local camera adjacent to the first local camera.
3. The method of claim 2, wherein the determining a portion of the region overlap between the acquisition region of the first local camera and the acquisition region of the second local camera results in an overlap region, comprising:
respectively acquiring the first local picture acquired by the first local camera and the second local picture acquired by the second local camera, and stitching the first local picture and the second local picture into the global image acquired by the global camera to obtain first area position information and second area position information;
and determining an overlapping area between the first area position information and the second area position information in the global image to obtain the position information of the overlapping area.
4. A method according to claim 3, characterized in that:
the obtaining the position information of the target includes:
determining a target grid where the target is located according to a preset grid division strategy of the first local picture information;
converting an image in a target grid by utilizing a homography matrix corresponding to the target grid, and determining the position information of the target grid in the global picture;
the determining whether the position information of the target enters the overlapping area includes:
judging whether the position information of the target grid in the global picture is in the position information of an overlapping area or not, and if the position information of the target grid in the global picture is in the position information of the overlapping area, determining that the position information of the target enters the overlapping area; otherwise, determining that the position information of the target does not enter the overlapping area.
5. An object tracking device, comprising:
a first determination module for determining a first local camera used for tracking a target when the target is tracked by utilizing a multi-camera array system;
the judging module is used for judging whether the target enters an acquisition area of a second local camera adjacent to the first local camera or not according to a first local picture acquired by the first local camera in the process of tracking the target by using the first local camera, so as to obtain a judging result;
the tracking module is used for tracking the target by using the second local camera if the judgment result shows that the target enters the acquisition area of the second local camera;
the multi-camera array system includes the first local camera, the second local camera, and a global camera;
the tracking module comprises:
a determining unit configured to determine position information of the target in the first partial picture;
the first calculating unit is used for calculating the position information of the target in the first local picture by utilizing a preset position conversion strategy of the first local camera and the global camera to obtain the position information of the target in the global image of the global camera;
the second calculating unit is used for calculating the position information of the target in the global image of the global camera by utilizing a preset position conversion strategy of the second local camera and the global camera to obtain the position information of the target in a second local picture of the second local camera;
the tracking unit is used for taking the position information of the target in the second local picture as an initial tracking position and tracking the target by using the second local camera;
the determining the position information of the target in the first local picture includes: in a certain frame of image in the image information acquired by the first local camera, receiving from a user a mark of a specific area in the image, and extracting the target from the marked area; or detecting the image information acquired by the first local camera through a preset target detection strategy to obtain a detection result, and determining the tracked target according to the detection result.
6. The apparatus of claim 5, wherein the apparatus further comprises:
a second determining module, configured to determine, before judging whether the target enters the acquisition region of a second local camera adjacent to the first local camera, the overlapping portion between the acquisition region of the first local camera and the acquisition region of the second local camera, to obtain an overlapping region;
the judging module comprises:
a first acquisition unit configured to acquire position information of the target;
a judging unit configured to judge whether the position information of the target enters the overlapping area; if the position information of the target enters the overlapping area, determining that the target enters an acquisition area of a second local camera adjacent to the first local camera; otherwise, determining that the target does not enter an acquisition region of a second local camera adjacent to the first local camera.
7. The apparatus of claim 6, wherein the second determining module comprises:
the second acquisition unit is used for respectively acquiring the first local picture acquired by the first local camera and the second local picture acquired by the second local camera, and stitching the first local picture and the second local picture into the global image acquired by the global camera to obtain first area position information and second area position information;
and the processing unit is used for determining an overlapping area between the first area position information and the second area position information in the global image and obtaining the position information of the overlapping area.
8. The apparatus according to claim 7, wherein:
the first acquisition unit is used for determining a target grid where the target is located according to a preset grid division strategy of the first local picture information; converting an image in a target grid by utilizing a homography matrix corresponding to the target grid, and determining the position information of the target grid in the global picture;
the judging unit is used for judging whether the position information of the target grid in the global picture is in the position information of the overlapping area; if the position information of the target grid in the global picture is in the position information of the overlapping area, determining that the position information of the target enters the overlapping area; otherwise, determining that the position information of the target does not enter the overlapping area.
CN201911143569.2A 2019-11-20 2019-11-20 Target tracking method and device Active CN110930437B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911143569.2A CN110930437B (en) 2019-11-20 2019-11-20 Target tracking method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911143569.2A CN110930437B (en) 2019-11-20 2019-11-20 Target tracking method and device

Publications (2)

Publication Number Publication Date
CN110930437A CN110930437A (en) 2020-03-27
CN110930437B true CN110930437B (en) 2023-06-23

Family

ID=69851368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911143569.2A Active CN110930437B (en) 2019-11-20 2019-11-20 Target tracking method and device

Country Status (1)

Country Link
CN (1) CN110930437B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111768433A (en) * 2020-06-30 2020-10-13 杭州海康威视数字技术股份有限公司 Method and device for realizing tracking of moving target and electronic equipment
CN114125267B (en) * 2021-10-19 2024-01-19 上海赛连信息科技有限公司 Intelligent tracking method and device for camera

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101068342A (en) * 2007-06-05 2007-11-07 西安理工大学 Video frequency motion target close-up trace monitoring method based on double-camera head linkage structure
CN101277429A (en) * 2007-03-27 2008-10-01 中国科学院自动化研究所 Method and system for amalgamation process and display of multipath video information when monitoring
CN105120242A (en) * 2015-09-28 2015-12-02 北京伊神华虹系统工程技术有限公司 Intelligent interaction method and device of panoramic camera and high speed dome camera
CN105338248A (en) * 2015-11-20 2016-02-17 成都因纳伟盛科技股份有限公司 Intelligent multi-target active tracking monitoring method and system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101950426B (en) * 2010-09-29 2014-01-01 北京航空航天大学 Vehicle relay tracking method in multi-camera scene
CN102063724A (en) * 2010-11-25 2011-05-18 四川省绵阳西南自动化研究所 Panoramic virtual alert target relay tracking device
CN102176246A (en) * 2011-01-30 2011-09-07 西安理工大学 Camera relay relationship determining method of multi-camera target relay tracking system
CN102156863B (en) * 2011-05-16 2012-11-14 天津大学 Cross-camera tracking method for multiple moving targets
CN102497505A (en) * 2011-12-08 2012-06-13 合肥博微安全电子科技有限公司 Multi-ball machine linkage target tracking method and system based on improved Meanshift algorithm
CN103716594B (en) * 2014-01-08 2017-02-22 深圳英飞拓科技股份有限公司 Panorama splicing linkage method and device based on moving target detecting
CN104125433A (en) * 2014-07-30 2014-10-29 西安冉科信息技术有限公司 Moving object video surveillance method based on multi-PTZ (pan-tilt-zoom)-camera linkage structure
CN104660998B (en) * 2015-02-16 2018-08-07 阔地教育科技有限公司 A kind of relay tracking method and system
CN110276789B (en) * 2018-03-15 2021-10-29 杭州海康威视系统技术有限公司 Target tracking method and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101277429A (en) * 2007-03-27 2008-10-01 中国科学院自动化研究所 Method and system for amalgamation process and display of multipath video information when monitoring
CN101068342A (en) * 2007-06-05 2007-11-07 西安理工大学 Video frequency motion target close-up trace monitoring method based on double-camera head linkage structure
CN105120242A (en) * 2015-09-28 2015-12-02 北京伊神华虹系统工程技术有限公司 Intelligent interaction method and device of panoramic camera and high speed dome camera
CN105338248A (en) * 2015-11-20 2016-02-17 成都因纳伟盛科技股份有限公司 Intelligent multi-target active tracking monitoring method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"A Survey of Multi-Camera Relay Tracking" (《多摄像头接力跟踪综述》); Wu Yudi (吴雨迪); Science and Technology & Innovation (《科技与创新》), No. 6, pp. 82-83 *

Also Published As

Publication number Publication date
CN110930437A (en) 2020-03-27

Similar Documents

Publication Publication Date Title
CN107087107B (en) Image processing apparatus and method based on dual camera
CN106791710B (en) Target detection method and device and electronic equipment
CN111160172B (en) Parking space detection method, device, computer equipment and storage medium
US11887318B2 (en) Object tracking
KR101530255B1 (en) Cctv system having auto tracking function of moving target
CN111242973A (en) Target tracking method and device, electronic equipment and storage medium
CN113079325B (en) Method, apparatus, medium, and device for imaging billions of pixels under dim light conditions
CN108537726B (en) Tracking shooting method and device and unmanned aerial vehicle
KR101645959B1 (en) The Apparatus and Method for Tracking Objects Based on Multiple Overhead Cameras and a Site Map
US20200036895A1 (en) Image processing apparatus, control method thereof, and image capture apparatus
KR101548639B1 (en) Apparatus for tracking the objects in surveillance camera system and method thereof
CN110930437B (en) Target tracking method and device
JP6924064B2 (en) Image processing device and its control method, and image pickup device
CN111105351B (en) Video sequence image splicing method and device
CN109685062A (en) A kind of object detection method, device, equipment and storage medium
CN109543496B (en) Image acquisition method and device, electronic equipment and system
JP5127692B2 (en) Imaging apparatus and tracking method thereof
CN117014716A (en) Target tracking method and electronic equipment
KR20180075506A (en) Information processing apparatus, information processing method, and program
CN105467741A (en) Panoramic shooting method and terminal
CN112488069B (en) Target searching method, device and equipment
CN112116068A (en) Annular image splicing method, equipment and medium
CN111860050A (en) Loop detection method and device based on image frame and vehicle-mounted terminal
CN111860051A (en) Vehicle-based loop detection method and device and vehicle-mounted terminal
CN114979758B (en) Video splicing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211110

Address after: 518000 409, Yuanhua complex building, 51 Liyuan Road, merchants street, Nanshan District, Shenzhen, Guangdong

Applicant after: Shenzhen zhuohe Technology Co.,Ltd.

Address before: No. 2501-1, 25 / F, block D, Tsinghua Tongfang science and technology building, No. 1 courtyard, Wangzhuang Road, Haidian District, Beijing 100083

Applicant before: Beijing Zhuohe Technology Co.,Ltd.

GR01 Patent grant