CN110276789A - Target tracking method and device - Google Patents

Target tracking method and device

Info

Publication number
CN110276789A
Authority
CN
China
Prior art keywords
video camera
video
target
visual field
camera
Prior art date
Legal status
Granted
Application number
CN201810213898.9A
Other languages
Chinese (zh)
Other versions
CN110276789B (en)
Inventor
杨海军
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Hangzhou Hikvision System Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201810213898.9A
Publication of CN110276789A
Application granted
Publication of CN110276789B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/292 - Multi-camera tracking
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; image sequence

Abstract

The invention discloses a target tracking method and device, belonging to the field of surveillance technology. The method includes: while a target is being tracked in the video of a first camera, when the target crosses a trigger line in the video of the first camera, obtaining the trigger position on the trigger line, where the trigger line is the boundary between the main field-of-view region of the first camera and the main field-of-view region of a second camera, and the trigger position is the position where the motion track of the target intersects the trigger line; locating the target in the video of the second camera based on the trigger position; and tracking the target in the video of the second camera. Based on the target's trigger position on the trigger line, the invention locates the target in the video of the second camera and automatically switches to the video of the second camera to continue tracking, so the user does not have to search for the target manually in the video of the second camera in order to locate it, which greatly improves the efficiency of tracking the target.

Description

Target tracking method and device
Technical field
The present invention relates to the field of surveillance technology, and in particular to a target tracking method and device.
Background art
With the development of surveillance technology, cameras can be deployed in various places such as vaults, galleries, and shopping malls. A camera shoots in real time while running, and a target, for example a suspect or a suspected vehicle, can be tracked through the video the camera shoots.
During target tracking, the user can roughly predict where the target will appear, find the camera deployed at that location, and watch its video. After the target enters the camera's field of view and appears in the video, the user can select the target in the video; the camera then determines the target selected by the user and tracks it in the video using a feature-tracking algorithm. Specifically, the gray values of the target can be obtained from the image of the current frame of the video, and the point whose gray values most closely match those of the target is found in the image of the next frame and taken as the target appearing in that frame; the target appearing in the image of each frame can be determined in the same way, thereby tracking the target through the video. When the target leaves the camera's field of view, the target disappears from the video, and the user must rely on personal experience to check the locations the target may enter, find the camera deployed there, and watch that camera's video; when the target is seen again in that camera's footage, the user selects the target in the video once more so as to continue tracking the target in that camera's footage.
In implementing the present invention, the inventor found that the related art has at least the following problems:
After the target leaves the field of view of the current camera, the user has to spend a great deal of time manually selecting the target again in the video of another camera before the target can be located and tracking can continue, so the efficiency of tracking the target is extremely low.
Summary of the invention
Embodiments of the present invention provide a target tracking method and device, which can solve the technical problem in the related art that, after the target leaves the field of view of the current camera, manual user operation is required to locate the target. The technical solution is as follows:
In a first aspect, a target tracking method is provided, the method comprising:
while a target is being tracked in the video of a first camera, when the target crosses a trigger line in the video of the first camera, obtaining the trigger position on the trigger line, where the trigger line is the boundary between the main field-of-view region of the first camera and the main field-of-view region of a second camera, and the trigger position is the position where the motion track of the target intersects the trigger line;
locating the target in the video of the second camera based on the trigger position;
tracking the target in the video of the second camera.
In one possible design, locating the target in the video of the second camera based on the trigger position comprises:
obtaining, based on the video of the first camera, the point coordinates of the trigger position in the main field-of-view region of the first camera as first point coordinates;
obtaining, based on the first point coordinates, the point coordinates of the trigger position in the main field-of-view region of the second camera as second point coordinates, the second point coordinates being the point coordinates at which the target is located in the video of the second camera.
In one possible design, obtaining, based on the first point coordinates, the point coordinates of the trigger position in the main field-of-view region of the second camera as second point coordinates comprises:
obtaining the coordinates of the two endpoints of the trigger line in the main field-of-view region of the first camera as first endpoint coordinates and second endpoint coordinates;
obtaining the coordinates of the two endpoints of the trigger line in the main field-of-view region of the second camera as third endpoint coordinates and fourth endpoint coordinates;
calculating the second point coordinates from the first endpoint coordinates, the second endpoint coordinates, the third endpoint coordinates, the fourth endpoint coordinates, and the first point coordinates.
In one possible design, locating the target in the video of the second camera based on the trigger position comprises:
when the target crosses the trigger line in the video of the first camera, obtaining the time point of the video recording of the first camera as the trigger time;
locating the target in the video of the second camera based on the trigger position and the trigger time.
In one possible design, before obtaining the trigger position on the trigger line, the method further comprises:
obtaining the appearance position of the target according to the video of a candidate camera;
when the appearance position of the target belongs to the main field-of-view region of the candidate camera, taking the candidate camera as the first camera; or,
when the appearance position of the target belongs to an edge field-of-view region of the candidate camera, determining the camera corresponding to the edge field-of-view region as the first camera, the main field-of-view region of the first camera containing the edge field-of-view region, where the edge field-of-view region is the region of the candidate camera's field-of-view region other than its main field-of-view region.
In one possible design, before obtaining the target to be tracked, the method further comprises:
for any two adjacent cameras among multiple cameras, taking the common chord of the field-of-view regions of the two cameras as the common trigger line of the two cameras.
In a second aspect, a target tracking device applied in a terminal is provided, the device comprising:
a crossing module, configured to obtain, while a target is being tracked in the video of a first camera and when the target crosses a trigger line in the video of the first camera, the trigger position on the trigger line, where the trigger line is the boundary between the main field-of-view region of the first camera and the main field-of-view region of a second camera, and the trigger position is the position where the motion track of the target intersects the trigger line;
a locating module, configured to locate the target in the video of the second camera based on the trigger position;
a tracking module, configured to track the target in the video of the second camera.
In one possible design, the locating module is configured to: obtain, based on the video of the first camera, the point coordinates of the trigger position in the main field-of-view region of the first camera as first point coordinates; and obtain, based on the first point coordinates, the point coordinates of the trigger position in the main field-of-view region of the second camera as second point coordinates, the second point coordinates being the point coordinates at which the target is located in the video of the second camera.
In one possible design, the locating module comprises:
an acquisition submodule, configured to obtain the coordinates of the two endpoints of the trigger line in the main field-of-view region of the first camera as first endpoint coordinates and second endpoint coordinates;
the acquisition submodule being further configured to obtain the coordinates of the two endpoints of the trigger line in the main field-of-view region of the second camera as third endpoint coordinates and fourth endpoint coordinates;
a calculation submodule, configured to calculate the second point coordinates from the first endpoint coordinates, the second endpoint coordinates, the third endpoint coordinates, the fourth endpoint coordinates, and the first point coordinates.
In one possible design, the locating module comprises:
an acquisition submodule, configured to obtain, when the target crosses the trigger line in the video of the first camera, the time point of the video recording of the first camera as the trigger time;
a positioning submodule, configured to locate the target in the video of the second camera based on the trigger position and the trigger time.
In one possible design, the device further comprises:
an obtaining module, configured to obtain the appearance position of the target according to the video of a candidate camera;
the locating module being further configured to take the candidate camera as the first camera when the appearance position of the target belongs to the main field-of-view region of the candidate camera; or,
the locating module being further configured to determine, when the appearance position of the target belongs to an edge field-of-view region of the candidate camera, the camera corresponding to the edge field-of-view region as the first camera, the main field-of-view region of the first camera containing the edge field-of-view region, where the edge field-of-view region is the region of the candidate camera's field-of-view region other than its main field-of-view region.
In one possible design, the device further comprises:
a determining module, configured to, for any two adjacent cameras among multiple cameras, take the common chord of the field-of-view regions of the two cameras as the common trigger line of the two cameras.
In a third aspect, a terminal is provided, the terminal comprising a processor and a memory, the memory storing at least one instruction that is loaded and executed by the processor to implement the target tracking method of the first aspect or of any possible design of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, the storage medium storing at least one instruction that is loaded and executed by a processor to implement the target tracking method of the first aspect or of any possible design of the first aspect.
The technical solutions provided by the embodiments of the present invention have the following beneficial effects:
With the method and device provided by the embodiments of the present invention, when the target crosses the trigger line in the video of the first camera and thereby leaves the main field-of-view region of the first camera, the target can be located in the video of the second camera based on the target's trigger position on the trigger line, automatically switching to the video of the second camera to continue tracking the target. The user does not have to search for the target manually in the video of the second camera in order to locate it, which greatly improves the efficiency of tracking the target.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of field-of-view regions, main field-of-view regions, and trigger lines according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of an implementation environment according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of an implementation environment according to an embodiment of the present invention;
Fig. 4 is a flowchart of a target tracking method according to an embodiment of the present invention;
Fig. 5 is a flowchart of a target tracking method according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a target tracking method according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of a target tracking method according to an embodiment of the present invention;
Fig. 8 is a flowchart of tracking a target during video playback according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of a target's motion track from selecting the target to the end of tracking according to an embodiment of the present invention;
Fig. 10 is a structural schematic diagram of a target tracking device according to an embodiment of the present invention;
Fig. 11 is a structural schematic diagram of a terminal according to an embodiment of the present invention.
Specific embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
For ease of understanding, the terms involved in the embodiments of the present invention are first explained:
Field-of-view region: also called the coverage region, the field-of-view region is a region of a certain size around the location where a camera is deployed, i.e., the largest region the camera can capture. A field-of-view region consists of a main field-of-view region and an edge field-of-view region, separated by a trigger line. Referring to Fig. 1, the field-of-view region of camera A is circle 1 and the field-of-view region of camera B is circle 2.
Main field-of-view region: the main field-of-view region is the principal part of a field-of-view region; its area is larger than that of the edge field-of-view region within the same field-of-view region. Referring to Fig. 1, the main field-of-view region of camera A is the region bounded by arc ab, line segment bo, and line segment oa, i.e., the larger region within circle 1. In this embodiment, the target is tracked in the video of the camera whose main field-of-view region currently contains the target. When the target crosses the current camera's trigger line in its video, thereby leaving the current camera's main field-of-view region and entering the main field-of-view region of an adjacent camera, the target can be located in the video of the adjacent camera based on the target's trigger position on the trigger line, so that the system smoothly and automatically switches to the video of the adjacent camera and continues tracking the target there.
Edge field-of-view region: the edge field-of-view region is the part of a field-of-view region other than the main field-of-view region; its area is smaller than that of the main field-of-view region within the same field-of-view region. Referring to Fig. 1, the edge field-of-view region of camera A is the part of circle 1 outside the main field-of-view region, i.e., the smaller region within circle 1.
Trigger line: the boundary between a camera's main field-of-view region and an adjacent camera's main field-of-view region, which can be the common chord of the field-of-view regions of the camera and the adjacent camera; within a camera's field-of-view region, the main field-of-view region and the edge field-of-view region are separated by the trigger line. Taking the current camera as the first camera and the adjacent camera as the second camera, the trigger line is the boundary between the main field-of-view region of the first camera and the main field-of-view region of the second camera. Illustratively, referring to Fig. 1, the trigger lines are the bold line segments: line segment ao is the trigger line of cameras A and C, i.e., the boundary between the main field-of-view region of camera A and that of camera C; line segment bo is the trigger line of cameras A and B, i.e., the boundary between the main field-of-view region of camera A and that of camera B; line segment oe is the trigger line of cameras B and C, i.e., the boundary between the main field-of-view region of camera B and that of camera C.
Fig. 2 is a schematic diagram of an implementation environment according to an embodiment of the present invention. The implementation environment includes a terminal 201 and multiple cameras 202.
A network connection is established between the terminal 201 and the multiple cameras 202, through which they can communicate; the network connection includes a Wi-Fi (Wireless Fidelity) connection, a mobile network connection, and the like. The terminal 201 can be a computer, a mobile phone, a tablet, etc., and each camera 202 can be an IPC (IP Camera, network camera). A camera 202 can send video to the terminal 201 over the network connection, either directly or through a relay device (omitted in Fig. 2), and the terminal 201 can play the video of the camera 202.
Optionally, referring to Fig. 3, the implementation environment can also include a storage device 203; a network connection is established between the terminal 201 and the storage device 203, and between the storage device 203 and the multiple cameras 202. The storage device 203 stores the video sent by the multiple cameras 202 and sends the stored video to the terminal 201 for video playback. The storage device 203 can be a video storage device or a cloud storage system; this embodiment does not limit the physical form of the storage device 203.
In this embodiment, a camera 202 can send video to the terminal 201 directly or indirectly over the network connection. The terminal 201 can receive the video of each camera 202, play it, and perform target tracking in it. When the target crosses the trigger line of any camera 202, the terminal obtains the target's trigger position on the trigger line, automatically locates the target in the video of the adjacent camera 202 based on that trigger position, and automatically switches to the video of the adjacent camera 202 to continue target tracking, thereby continuously tracking the target as it moves through the main field-of-view regions of multiple cameras.
Fig. 4 is a flowchart of a target tracking method according to an embodiment of the present invention. Referring to Fig. 4, the method is applied in a terminal and comprises:
401: while a target is being tracked in the video of a first camera, when the target crosses a trigger line in the video of the first camera, obtain the trigger position on the trigger line, where the trigger line is the boundary between the main field-of-view region of the first camera and the main field-of-view region of a second camera, and the trigger position is the position where the motion track of the target intersects the trigger line.
402: locate the target in the video of the second camera based on the trigger position.
403: track the target in the video of the second camera.
With the method provided by this embodiment of the present invention, when the target crosses the trigger line in the video of the first camera and thereby leaves the main field-of-view region of the first camera, the target can be located in the video of the second camera based on the target's trigger position on the trigger line, automatically switching to the video of the second camera to continue tracking the target. The user does not have to search for the target manually in the video of the second camera in order to locate it, which greatly improves the efficiency of tracking the target.
In one possible design, locating the target in the video of the second camera based on the trigger position comprises:
obtaining, based on the video of the first camera, the point coordinates of the trigger position in the main field-of-view region of the first camera as first point coordinates;
obtaining, based on the first point coordinates, the point coordinates of the trigger position in the main field-of-view region of the second camera as second point coordinates, the second point coordinates being the point coordinates at which the target is located in the video of the second camera.
In one possible design, obtaining, based on the first point coordinates, the point coordinates of the trigger position in the main field-of-view region of the second camera as second point coordinates comprises:
obtaining the coordinates of the two endpoints of the trigger line in the main field-of-view region of the first camera as first endpoint coordinates and second endpoint coordinates;
obtaining the coordinates of the two endpoints of the trigger line in the main field-of-view region of the second camera as third endpoint coordinates and fourth endpoint coordinates;
calculating the second point coordinates from the first endpoint coordinates, the second endpoint coordinates, the third endpoint coordinates, the fourth endpoint coordinates, and the first point coordinates.
In one possible design, locating the target in the video of the second camera based on the trigger position comprises:
when the target crosses the trigger line in the video of the first camera, obtaining the time point of the video recording of the first camera as the trigger time;
locating the target in the video of the second camera based on the trigger position and the trigger time.
In one possible design, before obtaining the trigger position on the trigger line, the method further comprises:
obtaining the appearance position of the target according to the video of a candidate camera;
when the appearance position of the target belongs to the main field-of-view region of the candidate camera, taking the candidate camera as the first camera; or,
when the appearance position of the target belongs to an edge field-of-view region of the candidate camera, determining the camera corresponding to the edge field-of-view region as the first camera, the main field-of-view region of the first camera containing the edge field-of-view region, where the edge field-of-view region is the region of the candidate camera's field-of-view region other than its main field-of-view region.
In one possible design, before obtaining the target to be tracked, the method further comprises:
for any two adjacent cameras among multiple cameras, taking the common chord of the field-of-view regions of the two cameras as the common trigger line of the two cameras.
In the related art, when the target leaves the field-of-view region of the current camera, the user has to find another camera manually and select the target again, making target tracking inefficient. This embodiment designs a way of automatically locating the target based on its trigger position on the trigger line, realizing automatic and accurate continuous tracking: a trigger line is set for the field-of-view region of each camera; when it is detected that the target crosses a trigger line in the video of the current camera, the target's trigger position on the trigger line is obtained; based on the trigger position, the camera corresponding to the main field-of-view region the target enters after crossing the trigger line is determined automatically, along with the target's position in the video of that camera, so that tracking continues automatically in the video of the switched-to camera. By switching between camera videos in real time as the target crosses trigger lines, the target can be tracked continuously as it moves through the field-of-view regions of multiple cameras, which extends the target tracking function and greatly improves its efficiency.
The embodiments of the present invention can be applied in various practical scenarios; the target can be determined by the specific application scenario and can be a person, a vehicle, an animal, or any other object. For example, they can be applied to tracking a bank-robbery suspect or a suspected vehicle, tracking an escaped prisoner, or tracking a shoplifting suspect. Taking tracking a bank-robbery suspect as an example, based on the method of the embodiments of the present invention, multiple cameras can be deployed in the bank; when the suspect enters the main field-of-view region of any camera, the suspect is tracked in the video of that camera, and when the suspect moves into the main field-of-view region of an adjacent camera, the system automatically switches to the video of the adjacent camera to continue tracking the suspect, thereby continuously tracking the suspect as the suspect moves through the field-of-view regions covering the entire bank.
Further, the embodiments of the present invention can be applied both to real-time tracking scenarios and to tracking a target during video playback. In the real-time tracking scenario, the terminal can track the target in the video a camera is currently shooting; when the target crosses a trigger line in the currently shot video, the terminal can automatically switch between the videos of multiple cameras based on the target's trigger position on the current camera's trigger line, thereby continuously tracking, in real time, the target's movement through the main field-of-view regions of multiple cameras. In the scenario of tracking a target during video playback, the terminal can obtain the videos historically shot by multiple cameras; when the target crosses a trigger line in a historically shot video, the terminal can automatically switch between the videos of multiple cameras based on the target's historical trigger position on the camera's trigger line, thereby continuously tracking the target's historical movement through the main field-of-view regions of multiple cameras.
Fig. 5 is a flowchart of a target tracking method according to an embodiment of the present invention. The method is executed by a terminal. Referring to Fig. 5, the method comprises:
501: the terminal determines the trigger lines of multiple cameras.
In implementation, the user can deploy multiple cameras at a target site, each at a different location. Each camera has a field-of-view region it can shoot in real time, so that each camera monitors its own field-of-view region through its video; by monitoring multiple field-of-view regions with multiple cameras, coverage of the whole target site can be achieved. The target site can be an exhibition center, a vault, a bank, etc.
In this embodiment, for each of the multiple cameras, the terminal can determine the camera's trigger line based on the camera's field-of-view region and the field-of-view regions of the camera's adjacent cameras, and thus determine the trigger lines of the multiple cameras, so that later, when tracking a target at the target site, the function of automatically switching to the video of an adjacent camera to continue tracking once the target crosses the current camera's trigger line can be realized.
As for a camera's field-of-view region, the terminal can determine the camera's height and tilt angle and compute the camera's field-of-view region from them using trigonometric relationships. Here, height means the camera's height relative to the reference plane of the deployment site; for example, if the camera is deployed in a warehouse, the reference plane is the floor of the warehouse and the height is the camera's height above that floor. The tilt angle is the angle between the direction the camera points and the vertical, for example 60°. The user can determine the camera's height and tilt angle when deploying the camera and enter them into the terminal, so that the terminal obtains the height and tilt angle entered by the user.
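By way of illustration, the following is a minimal Python sketch of such a height-and-tilt calculation, under the simplifying assumptions that the field-of-view region is modeled as a circle on the reference plane and that the camera's half field-of-view angle is known; the function name and the half-angle parameter are illustrative, not from the original disclosure.

    import math

    def fov_radius(height_m: float, tilt_deg: float, half_fov_deg: float = 30.0) -> float:
        # Far edge of the footprint: where a ray tilted (tilt + half FOV) from
        # the vertical meets the reference plane. Assumes tilt + half FOV < 90.
        far_angle = math.radians(tilt_deg + half_fov_deg)
        return height_m * math.tan(far_angle)

    # e.g. a camera mounted 4 m above the floor, tilted 40 degrees from vertical
    print(round(fov_radius(4.0, 40.0), 2))  # 10.99 (metres)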
As for the detailed process of determining trigger lines: the field-of-view region of each camera can partially overlap the field-of-view region of an adjacent camera, and a line segment can be taken within the overlapping part as the trigger line. Specifically, for any two adjacent cameras among the multiple cameras, the terminal can take the common chord between the field-of-view regions of the two cameras as the common trigger line of the two cameras. Here, "common trigger line" means that, since the trigger line is the common chord of the field-of-view regions of the two cameras, one trigger line can correspond to both cameras at once; in other words, the two cameras share one trigger line.
In addition, after determining a camera's trigger line, the regions on the two sides of the trigger line within the camera's field-of-view region can be obtained; the larger of the two regions is taken as the camera's main field-of-view region, and the smaller as the camera's edge field-of-view region. Illustratively, referring to Fig. 1, the field-of-view region of camera A is circle 1 and the field-of-view region of camera B is circle 2; the common chord bo of circle 1 and circle 2 is the common trigger line of camera A and camera B.
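Illustratively, for two circular field-of-view regions that intersect, the common chord can be computed from the circle centers and radii. The following is a minimal Python sketch of that geometry, assuming circular footprints in a shared ground-plane coordinate system; the function name and the example centers and radii are illustrative assumptions.

    import math

    def common_chord(c1, r1, c2, r2):
        # Endpoints of the common chord of circles (c1, r1) and (c2, r2),
        # or None if the circles do not intersect at two points.
        (x1, y1), (x2, y2) = c1, c2
        d = math.hypot(x2 - x1, y2 - y1)
        if d == 0 or d >= r1 + r2 or d <= abs(r1 - r2):
            return None
        a = (r1 * r1 - r2 * r2 + d * d) / (2 * d)  # c1 -> chord along center line
        h = math.sqrt(r1 * r1 - a * a)             # half the chord length
        mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
        ux, uy = -(y2 - y1) / d, (x2 - x1) / d     # unit normal to center line
        return (mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)

    # e.g. circles 1 and 2 of Fig. 1 with assumed centers and radii
    print(common_chord((0.0, 0.0), 5.0, (6.0, 0.0), 5.0))  # endpoints of chord bo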
502: the terminal obtains the target to be tracked.
Each camera can shoot in real time and send the resulting video to the terminal over the network connection. The terminal can obtain the target to be tracked while playing back a camera's historical video, or while live-playing the video a camera is currently shooting, so as to track the target to be tracked.
The process of obtaining the target to be tracked can specifically include the following steps one and two:
Step one: the terminal determines a candidate camera.
A candidate camera is the camera that provides the target to be tracked. As for the process of determining the candidate camera: the user can determine the place where the target is most likely to appear, take the camera deployed at that place as the candidate camera, and trigger a confirmation operation for the candidate camera on the terminal; when the terminal detects the confirmation operation, it can determine the candidate camera. For example, the terminal can display multiple cameras, the user can click the candidate camera among the multiple cameras, and when the terminal detects the click operation it can take the clicked camera as the candidate camera. Alternatively, the user can enter the identifier of the candidate camera on the terminal, and the terminal can obtain the identifier entered by the user and determine the candidate camera from that identifier; the identifier of the candidate camera can be its index number, name, etc.
It should be noted that, when tracking a target during video playback, the user can determine the time point at which the target is likely to appear and enter that time point on the terminal; the terminal can obtain the time point entered by the user and obtain the candidate camera's video at that time point. When tracking a target in real time, the user does not need to determine a time point; the terminal can directly obtain the candidate camera's video at the current time point.
Step two: the terminal obtains the appearance position of the target according to the video of the candidate camera.
While playing the video of the candidate camera, the terminal can determine the target from the video of the candidate camera and obtain the appearance position of the target according to that video. Specifically, while the video of the candidate camera is playing, the user can trigger a target selection operation in the video, selecting some object in the video as the target to be tracked; the terminal can detect the target selection operation, obtain the target selected by the user, and determine the position of the selected target in the video of the candidate camera as the appearance position of the target.
503: the terminal determines the first camera.
The first camera is the camera whose video is used first when tracking the target. In this embodiment, when the appearance position of the target belongs to the main field-of-view region of the candidate camera, the candidate camera can be taken as the first camera and target tracking starts in the video of the candidate camera. When the appearance position of the target belongs to an edge region of the candidate camera, the camera that is adjacent to the candidate camera and whose main field-of-view region contains that edge region can be taken as the first camera, and target tracking starts after switching to the video of the first camera, ensuring that the target is tracked within a main field-of-view region when tracking starts. The process of determining the first camera can specifically include the following steps one to three:
Step one: the terminal judges whether the appearance position belongs to the main field-of-view region of the candidate camera.
The terminal can determine the coordinate system of the field-of-view region of the candidate camera, obtain the point coordinates of the appearance position in that coordinate system, and judge whether the point coordinates fall within the coordinate range corresponding to the main field-of-view region of the candidate camera. When the point coordinates fall within that coordinate range, it is determined that the appearance position belongs to the main field-of-view region of the candidate camera, and step two is entered. When the point coordinates do not fall within the coordinate range of the main field-of-view region, it is determined that the appearance position does not belong to the main field-of-view region of the candidate camera, and step three is entered.
Step two: when the appearance position belongs to the main field-of-view region of the candidate camera, take the candidate camera as the first camera.
When the appearance position belongs to the main field-of-view region of the candidate camera, the terminal can learn that the target is located in the main field-of-view region of the candidate camera, and directly takes the candidate camera as the first camera, so that target tracking is performed in the video of the candidate camera.
Step three: when the appearance position belongs to an edge field-of-view region of the candidate camera, determine the camera corresponding to that edge field-of-view region as the first camera.
When the appearance position does not belong to the main field-of-view region of the candidate camera but belongs to an edge field-of-view region of the candidate camera, the terminal can learn that the target is not in the main field-of-view region of the candidate camera but is in the main field-of-view region of an adjacent camera of the candidate camera. Based on the edge field-of-view region of the candidate camera, the terminal then determines the camera corresponding to that edge field-of-view region as the first camera; the first camera is an adjacent camera of the candidate camera, and the main field-of-view region of the first camera contains the edge field-of-view region to which the appearance position belongs.
Illustratively, referring to Fig. 6, suppose the candidate camera is camera A. When the target's appearance position in the video of camera A belongs to the main field-of-view region of camera A, no switch is made and tracking starts directly in the video of camera A. When the target's appearance position in the video of camera A belongs to the main field-of-view region of camera B, the system switches to the video of camera B and starts tracking the target there. When the target's appearance position in the video of camera A belongs to the main field-of-view region of camera C, the system switches to the video of camera C and starts tracking the target there.
As for the detailed process of determining the first camera, the terminal can store in advance the correspondence between each camera's edge field-of-view regions and its adjacent cameras; once the edge field-of-view region to which the appearance position belongs has been determined, the terminal can query the correspondence between the candidate camera's edge field-of-view regions and adjacent cameras, obtain the adjacent camera corresponding to that edge field-of-view region, and take it as the first camera.
Optionally, the terminal can store the topological relationship between each camera and its adjacent cameras as a directed graph containing multiple nodes and multiple edges, where one node represents the camera itself, the remaining nodes represent the camera's adjacent cameras, and each edge represents a trigger line: the edge between the camera's node and an adjacent camera's node represents the common trigger line of those two cameras. Then, when the terminal determines the edge field-of-view region to which the appearance position belongs, it can query the directed graph of the candidate camera, determine the edge corresponding to that edge field-of-view region in the directed graph, and take the camera represented by the node that edge leads to as the first camera.
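A minimal Python sketch of such a directed-graph topology is shown below; the class, method, and identifier names are illustrative assumptions rather than identifiers from the original disclosure.

    from dataclasses import dataclass, field

    @dataclass
    class CameraGraph:
        # neighbors[camera][trigger_line] -> camera on the other side of the line
        neighbors: dict = field(default_factory=dict)

        def add_trigger_line(self, cam_a: str, cam_b: str, line_id: str) -> None:
            # a shared trigger line is an edge between the two camera nodes
            self.neighbors.setdefault(cam_a, {})[line_id] = cam_b
            self.neighbors.setdefault(cam_b, {})[line_id] = cam_a

        def camera_across(self, cam: str, line_id: str) -> str:
            # camera whose main field-of-view region lies across line_id from cam
            return self.neighbors[cam][line_id]

    # the layout of Fig. 1: cameras A, B, C sharing trigger lines bo, ao, oe
    graph = CameraGraph()
    graph.add_trigger_line("A", "B", "bo")
    graph.add_trigger_line("A", "C", "ao")
    graph.add_trigger_line("B", "C", "oe")
    print(graph.camera_across("A", "bo"))  # -> B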
It should be noted that, when switching from the video of the candidate camera to the video of the first camera to perform target tracking, the terminal can locate the target in the video of the first camera using a positioning time point. In step 502 above, when the terminal detects a target selection operation while playing the video of the candidate camera, it can obtain the time point currently shown by the candidate camera's video as the positioning time point, and obtain the first camera's video at that positioning time point, so as to track the target in the video at the positioning time point. In addition, when tracking the target in real time, the terminal directly takes the current time point as the positioning time point and obtains the first camera's video at the current time point.
Further, to ensure the accuracy of locating the target in the video of the first camera, the terminal can play the video of the first camera; while watching it, the user can trigger a target selection operation in the video of the first camera, and the terminal can detect the target selection operation and obtain the target selected by the user from the video of the first camera.
504: the terminal tracks the target in the video of the first camera.
As for the detailed process of tracking the target in the video of the first camera: the terminal can track the target in the video using a feature-tracking algorithm, which can be the pyramidal Lucas-Kanade (LK) optical flow method. Specifically, the gray values of the target can be obtained from the image of the current frame of the first camera's video, and the point whose gray values most closely match those of the target is found in the image of the next frame and taken as the target appearing in that frame; the target appearing in the image of each frame can be determined in the same way, thereby tracking the target through the video. In implementation, the calcOpticalFlowPyrLK interface that OpenCV provides for the pyramidal LK optical flow method can be called with the feature values passed in; while the interface runs, the function of tracking the target in the video is realized.
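A minimal Python sketch of such a tracking loop is shown below, using OpenCV's calcOpticalFlowPyrLK interface mentioned above; the video file name and the initially selected point are illustrative assumptions standing in for the user's target selection operation.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("camera_a.mp4")        # hypothetical video source
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    pts = np.array([[[320.0, 240.0]]], dtype=np.float32)  # assumed selected point

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # pyramidal LK: estimate where the tracked point moved in this frame
        pts, status, err = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, pts, None, winSize=(21, 21), maxLevel=3)
        if status[0][0] == 0:
            break  # point could not be followed into this frame
        x, y = pts[0][0]
        # mark the target in the played video, e.g. with a surrounding box
        cv2.rectangle(frame, (int(x) - 20, int(y) - 20),
                      (int(x) + 20, int(y) + 20), (0, 255, 0), 2)
        prev_gray = gray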
Optionally, when tracking the target in the video, the terminal can mark the target in the video in a preset marking style and play the marked video. Marking the target in the video highlights it and makes it convenient for the user to watch the target's movement. The marking style can be adding a box or circle around the target, highlighting the target's edges, etc.
505: when it is detected that the target crosses a trigger line in the video of the first camera, the terminal obtains the trigger position on the trigger line, where the trigger line is the boundary between the main field-of-view region of the first camera and the main field-of-view region of the second camera, and the trigger position is the position where the motion track of the target intersects the trigger line.
While tracking the target in the video of the first camera, the terminal can detect whether the target crosses the trigger line of the first camera and the second camera in the video of the first camera. When it detects that the target has not yet crossed the trigger line, it can learn that the target is still moving within the main field-of-view region of the first camera, and continues tracking the target in the video of the first camera. When it detects that the target has crossed the trigger line of the first camera and the second camera, it can learn that the target has left the main field-of-view region of the first camera and moved into the main field-of-view region of the second camera; it can then obtain the target's trigger position on the trigger line, so as to locate the target in the video of the second camera based on the trigger position. Here, the trigger position is the position where the motion track of the target intersects the trigger line, i.e., the target's location on the trigger line when it crosses the trigger line in the video.
As for the detailed process of detecting that the target crosses the trigger line: the terminal can obtain the target's point coordinates in the video of the first camera in real time and judge whether the coordinates fall within the coordinate range of the main field-of-view region of the first camera. When the point coordinates fall within that coordinate range, it is determined that the target has not yet crossed the trigger line; when the point coordinates do not fall within the coordinate range of the main field-of-view region of the first camera, it is determined that the target has crossed the trigger line.
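A minimal Python sketch of this detection is shown below, under the assumption that the main field-of-view region can be tested as the side of trigger line ad containing a reference point known to lie inside it; the half-plane test is an illustrative choice, since the patent only requires deciding whether the tracked point is still within the main field-of-view region's coordinate range.

    def crossed_trigger_line(track_pt, a, d, inside_ref):
        # True once the tracked point leaves the side of trigger line ad that
        # contains inside_ref, a point inside the first camera's main region.
        def side(p):
            # sign of the cross product: which side of line ad the point is on
            return (d[0] - a[0]) * (p[1] - a[1]) - (d[1] - a[1]) * (p[0] - a[0])
        return side(track_pt) * side(inside_ref) < 0

    # trigger line from (3, 4) to (3, -4); main region lies around the origin
    print(crossed_trigger_line((3.5, 0.0), (3.0, 4.0), (3.0, -4.0), (0.0, 0.0)))  # True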
506: the terminal locates the target in the video of the second camera based on the trigger position.
After obtaining the trigger position, the terminal can locate the target in the video of the second camera based on the trigger position, and after locating the target can continue tracking it in the video of the second camera. During this process, the user does not have to select the target manually in the video of the second camera; the function of locating the target automatically is realized.
As for the process of determining the second camera, the terminal can store in advance the correspondence between the trigger lines of the first camera and the second camera; after the target crosses a trigger line, the terminal queries the correspondence and determines the camera corresponding to the trigger line as the second camera. Optionally, the terminal can query the directed graph of the first camera, determine the node connected by the edge corresponding to the trigger line in the directed graph, and take the adjacent camera represented by that node as the second camera.
The detailed process of locating the target in the video of the second camera can include the following steps one and two:
Step one: based on the video of the first camera, obtain the point coordinates of the trigger position in the main field-of-view region of the first camera as the first point coordinates.
To distinguish them in the description, the point coordinates of the trigger position in the main field-of-view region of the first camera are here called the first point coordinates, and the point coordinates of the trigger position in the main field-of-view region of the second camera are called the second point coordinates. After determining the trigger position, the terminal can determine the coordinate system of the field-of-view region of the first camera and obtain the coordinates of the trigger position in that coordinate system as the first point coordinates.
Step two: based on the first point coordinates, obtain the point coordinates of the trigger position in the main field-of-view region of the second camera as the second point coordinates.
In this embodiment, since the trigger position is shared by the first camera and the second camera, and the trigger line is a line segment shared by the first camera and the second camera, the relative position of the trigger position on the trigger line in the main field-of-view region of the first camera is the same as the relative position of the trigger position on the trigger line in the main field-of-view region of the second camera. Thus, combining the coordinate system of the field-of-view region of the first camera with the coordinate system of the field-of-view region of the second camera, the proportional relationship of the first point coordinates to the two endpoint coordinates of the trigger line in the first camera's coordinate system equals the proportional relationship of the second point coordinates to the two endpoint coordinates of the trigger line in the second camera's coordinate system. Using the invariance of this proportional relationship, the second point coordinates in the second camera's coordinate system can be calculated from the proportional relationship of the first point coordinates to the two endpoint coordinates of the trigger line in the first camera's coordinate system. The detailed process of calculating the second point coordinates can include the following steps (1) to (3).
(1) Obtain the coordinates of the two endpoints of the trigger line in the main field-of-view region of the first camera as the first endpoint coordinates and the second endpoint coordinates.
To distinguish them in the description, the two endpoint coordinates of the trigger line in the main field-of-view region of the first camera, i.e., the starting point and ending point coordinates of the trigger line in the first camera's coordinate system, are here called the first endpoint coordinates and the second endpoint coordinates; the two endpoint coordinates of the trigger line in the main field-of-view region of the second camera, i.e., the starting point and ending point coordinates of the trigger line in the second camera's coordinate system, are called the third endpoint coordinates and the fourth endpoint coordinates.
The terminal can establish the coordinate system of the main field-of-view region of the first camera in advance or in real time, and obtain the two endpoint coordinates of the trigger line in that coordinate system as the first endpoint coordinates and the second endpoint coordinates. For example, suppose the first camera is camera A and the trigger line of camera A is line segment ad; the first endpoint coordinates are the coordinates (Xa, Ya) of point a in the first camera's coordinate system, and the second endpoint coordinates are the coordinates (Xd, Yd) of point d in the first camera's coordinate system.
(2) Obtain the coordinates of the two endpoints of the trigger line in the main field-of-view region of the second camera as the third endpoint coordinates and the fourth endpoint coordinates.
The terminal can establish the coordinate system of the main field-of-view region of the second camera in advance or in real time, and obtain the two endpoint coordinates of the trigger line in that coordinate system as the third endpoint coordinates and the fourth endpoint coordinates. For example, suppose the second camera is camera C and the trigger line of camera C is line segment a'd'; the third endpoint coordinates are the coordinates (Xa', Ya') of point a' in the second camera's coordinate system, and the fourth endpoint coordinates are the coordinates (Xd', Yd') of point d' in the second camera's coordinate system.
(3) Calculate the second point coordinates from the first endpoint coordinates, the second endpoint coordinates, the third endpoint coordinates, the fourth endpoint coordinates, and the first point coordinates.
Illustratively, the terminal can calculate the second point coordinates from the first point coordinates, the first endpoint coordinates, the second endpoint coordinates, the third endpoint coordinates, and the fourth endpoint coordinates using the following formula.
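A formula consistent with the proportional invariance described above (given here as a reconstruction under that assumption) is the linear interpolation

    t = sqrt((Xad - Xa)^2 + (Yad - Ya)^2) / sqrt((Xd - Xa)^2 + (Yd - Ya)^2)
    Xa'd' = Xa' + t * (Xd' - Xa')
    Ya'd' = Ya' + t * (Yd' - Ya')

that is, the trigger position divides segment a'd' in the second camera's coordinate system in the same ratio t in which it divides segment ad in the first camera's coordinate system.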
Here, (Xad, Yad) denotes the first point coordinates, (Xa'd', Ya'd') denotes the second point coordinates, (Xa, Ya) denotes the first endpoint coordinates, (Xd, Yd) denotes the second endpoint coordinates, (Xa', Ya') denotes the third endpoint coordinates, and (Xd', Yd') denotes the fourth endpoint coordinates.
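By way of a minimal Python sketch under the same reconstruction assumption (the function name is illustrative):

    import math

    def map_trigger_position(p1, a, d, a2, d2):
        # p1: trigger position (Xad, Yad) in camera 1's coordinate system
        # a, d: trigger-line endpoints in camera 1's coordinate system
        # a2, d2: the same endpoints in camera 2's coordinate system
        # The trigger position divides the trigger line in the same ratio t
        # in both coordinate systems.
        t = math.hypot(p1[0] - a[0], p1[1] - a[1]) / math.hypot(d[0] - a[0], d[1] - a[1])
        return (a2[0] + t * (d2[0] - a2[0]), a2[1] + t * (d2[1] - a2[1]))

    # a point a quarter of the way along ad maps to a quarter of the way along a'd'
    print(map_trigger_position((1.0, 0.0), (0.0, 0.0), (4.0, 0.0),
                               (10.0, 2.0), (10.0, 10.0)))  # (10.0, 4.0)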
Combining steps one and two above, once the terminal obtains the second point coordinates of the trigger position, it can determine the position corresponding to the second point coordinates in the video of the second camera and take the object at that position as the target, thereby locating the target in the video of the second camera, and then continue tracking the target according to the video of the second camera. Locating the target in the video of the second camera through the trigger position requires no target selection operation from the user: the target can be matched automatically and quickly in the video of the second camera, which greatly improves the efficiency of tracking the target, and the precision of locating the target is high, which greatly improves the accuracy of tracking the target.
Optionally, in the scenario of tracking a target during video playback, the terminal can locate the target in the video of the second camera using the trigger time as well, where the trigger time is the time point at which the target crosses the trigger line in the video of the first camera. Specifically, when the target crosses the trigger line in the video of the first camera, the terminal can obtain the time point of the video recording of the first camera as the trigger time, and locate the target in the video of the second camera based on the trigger position and the trigger time. After determining the trigger time, the terminal can obtain the second camera's video at the trigger time and locate the target in the video at the trigger time based on the trigger position, so as to track the target. In addition, in the real-time tracking scenario, the terminal can directly call the second camera's video at the current time point and locate the target in the video at the current time point based on the trigger position.
507: the terminal tracks the target in the video of the second camera.
After the target is located in the video of the second camera, the feature-tracking algorithm can be used to track the target in the video of the second camera, thereby achieving the effect of automatically and continuously tracking the target as it moves through the main field-of-view regions of the two cameras.
Optionally, for the process of ending the tracking, when the terminal detects that the target has crossed the edge of the field-of-view region of the second video camera (i.e., the outer contour of the field-of-view region), it may end the tracking process automatically. Specifically, the terminal may judge the point coordinate of the target in the video in real time and determine whether that coordinate belongs to the field-of-view region of the video camera; when the coordinate no longer belongs to the field-of-view region, the terminal confirms that the target has crossed the edge of the region and automatically ends the tracking process.
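A minimal version of that membership test, assuming the field-of-view region is available as a polygon in image coordinates, could use OpenCV's point-in-polygon test:

    import numpy as np
    import cv2

    # Sketch of the auto-stop test, under the assumption that the camera's
    # field-of-view region is given as a polygon of (x, y) vertices.
    def inside_field_of_view(point, fov_polygon):
        contour = np.asarray(fov_polygon, dtype=np.float32).reshape(-1, 1, 2)
        # >= 0 means on or inside the outline; negative means the target
        # has crossed the edge and tracking should end.
        return cv2.pointPolygonTest(
            contour, (float(point[0]), float(point[1])), False) >= 0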
Illustratively, referring to Fig. 7, assume the first video camera is video camera A. When the target crosses firing line bo in the video of video camera A, the target is automatically located in the video of video camera B and tracking switches to that video. When the target crosses firing line ao in the video of video camera A, the target is automatically located in the video of video camera C and tracking switches to that video. When the target crosses the edge in the video of video camera A, the tracking process ends in the video of video camera A.
A first point to note is that the related art usually improves front-end devices such as video cameras to realize the target-tracking function, and improving front-end devices is costly. In this embodiment, by contrast, only the processing logic of the terminal needs to be improved to realize target tracking, so the cost is extremely low and expense is saved.
A second point to note is that the related art is usually applicable only to real-time tracking and cannot perform target tracking while playing back video, whereas this embodiment can be applied to target tracking during video playback. Referring to Fig. 8, which shows a flowchart of tracking a target during video playback: when the target crosses the firing line in the video of the current video camera, the target is located in the video of the adjacent video camera according to the trigger position on the firing line and the trigger time, and is then tracked. This realizes the function of automatically and continuously tracking the target during playback, without the user spending a great deal of time manually locating the moments at which the target reappears, and improves the efficiency of tracking.
A third point to note is that this embodiment is described only by taking the process of switching from the video of the first video camera to the video of the second video camera as an example. In implementation, when the target crosses a firing line in the video of the second video camera, the target can be located again and the tracking switched again to yet another video camera's video. For example, assume the firing line the target crosses is the boundary between the main field-of-view region of the second video camera and that of a third video camera: based on the video of the second video camera, the terminal can obtain the trigger position of the target on that firing line and, based on the trigger position, locate the target in the video of the third video camera, so as to continue tracking it there. More generally, whenever the target crosses the current video camera's firing line in the current video camera's video, the terminal can locate the target in the adjacent video camera's video based on the trigger position on the firing line and switch to that video for tracking, thereby continuously tracking the target as it moves through the main field-of-view regions of multiple video cameras, as sketched below.
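A hedged sketch of that chained handoff follows; `track_until_event`, `firing_lines`, and the event fields are hypothetical names, and the control flow is the point:

    # Hedged sketch of the chained handoff across cameras.
    # `map_trigger_position` is the proportional mapping sketched earlier.
    def track_across_cameras(start_camera, start_point, firing_lines):
        camera, point = start_camera, start_point
        while True:
            event = track_until_event(camera, point)       # single-camera tracking until something happens
            if event.kind == "left_field_of_view":
                return                                     # target exited the last camera: tracking ends
            line = firing_lines[(camera, event.neighbor)]  # shared firing line with the adjacent camera
            point = map_trigger_position(event.position,
                                         line.a, line.d, line.a2, line.d2)
            camera = event.neighbor                        # continue tracking in the adjacent camera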
Referring to Fig. 9, which shows a schematic diagram of the target's motion track from the moment the target is chosen to the end of tracking: while the target moves, from the target being chosen until time 1, the terminal tracks the target in the video of video camera A. At time 1 the target crosses a firing line in the video of video camera A, so the terminal locates the target in the video of video camera C based on the target's trigger position on the firing line. From time 1 to time 2, the terminal tracks the target in the video of video camera C. At time 2 the target crosses a firing line in the video of video camera C, so the terminal locates the target in the video of video camera B based on the trigger position on the firing line. From time 2 until tracking ends, the terminal tracks the target in the video of video camera B; when the target crosses the edge of video camera B's field-of-view region, tracking ends.
With the method provided in this embodiment, when the target crosses the firing line in the video of the first video camera and thus leaves the main field-of-view region of the first video camera, the target can be located in the video of the second video camera based on its trigger position on the firing line, so that tracking switches automatically to the video of the second video camera. The user does not need to search the second video camera's video manually to locate the target, which greatly improves the efficiency of tracking.
Fig. 10 shows a target-tracking apparatus provided by an embodiment of the present invention, applied in a terminal. Referring to Fig. 10, the apparatus includes: a crossing module 1001, a locating module 1002, and a tracking module 1003.
The crossing module 1001 is configured to, during tracking of a target in the video of a first video camera, obtain the trigger position on a firing line when the target crosses the firing line in the video of the first video camera, the firing line being the boundary between the main field-of-view region of the first video camera and the main field-of-view region of a second video camera, and the trigger position being the position at which the motion track of the target intersects the firing line;
the locating module 1002 is configured to locate the target in the video of the second video camera based on the trigger position;
the tracking module 1003 is configured to track the target in the video of the second video camera.
In one possible design, the locating module 1002 is configured to: based on the video of the first video camera, obtain the point coordinate of the trigger position in the main field-of-view region of the first video camera as the first point coordinate; and, based on the first point coordinate, obtain the point coordinate of the trigger position in the main field-of-view region of the second video camera as the second point coordinate, which serves as the point coordinate at which the target is located in the video of the second video camera.
In one possible design, the locating module 1002 includes:
an acquisition submodule, configured to obtain the two endpoint coordinates of the firing line in the main field-of-view region of the first video camera as the first endpoint coordinate and the second endpoint coordinate;
the acquisition submodule being further configured to obtain the two endpoint coordinates of the firing line in the main field-of-view region of the second video camera as the third endpoint coordinate and the fourth endpoint coordinate;
a calculation submodule, configured to calculate the second point coordinate according to the first endpoint coordinate, the second endpoint coordinate, the third endpoint coordinate, the fourth endpoint coordinate, and the first point coordinate.
In one possible design, the locating module 1002 includes:
an acquisition submodule, configured to obtain, when the target crosses the firing line in the video of the first video camera, the time point of the first video camera's video record as the trigger time;
a positioning submodule, configured to locate the target in the video of the second video camera based on the trigger position and the trigger time.
In one possible design, the apparatus further includes:
an obtaining module, configured to obtain the appearance position of the target according to the video of a candidate video camera;
the locating module 1002 being further configured to take the candidate video camera as the first video camera when the appearance position of the target belongs to the main field-of-view region of the candidate video camera; or,
the locating module 1002 being further configured to, when the appearance position of the target belongs to the edge field-of-view region of the candidate video camera, determine the video camera corresponding to the edge field-of-view region as the first video camera, the main field-of-view region of the first video camera including the edge field-of-view region, wherein the edge field-of-view region refers to the part of the candidate video camera's field-of-view region other than its main field-of-view region.
In one possible design, the apparatus further includes:
a determining module, configured to, for any two adjacent video cameras among multiple video cameras, take the common chord of the field-of-view regions of the two video cameras as their common firing line (see the geometric sketch below).
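Assuming circular field-of-view regions, for which the "common chord" wording is natural, the chord endpoints of two overlapping circles can be derived as follows (centers and radii in a shared ground-plane coordinate system are assumed inputs):

    import math

    # Sketch of deriving the firing line as the common chord of two
    # adjacent circular field-of-view regions; circularity is an assumption.
    def common_chord(c1, r1, c2, r2):
        (x1, y1), (x2, y2) = c1, c2
        d = math.hypot(x2 - x1, y2 - y1)
        if d == 0 or d >= r1 + r2 or d <= abs(r1 - r2):
            return None  # regions disjoint, nested, or concentric: no chord
        a = (d * d + r1 * r1 - r2 * r2) / (2 * d)
        h = math.sqrt(r1 * r1 - a * a)
        mx = x1 + a * (x2 - x1) / d          # foot of the chord on the center line
        my = y1 + a * (y2 - y1) / d
        ux, uy = -(y2 - y1) / d, (x2 - x1) / d  # unit vector perpendicular to the center line
        return (mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)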
All of the above optional solutions may be combined in any manner to form optional embodiments of the present disclosure, which are not described in detail here.
It should be noted that when the target-tracking apparatus provided by the above embodiment tracks a target, the division into the above functional modules is used only as an example; in practical applications, the above functions may be assigned to different functional modules as required, i.e., the internal structure of the terminal may be divided into different functional modules to complete all or part of the functions described above. In addition, the target-tracking apparatus provided by the above embodiment and the embodiments of the target-tracking method belong to the same concept; for the specific implementation process, refer to the method embodiments, which are not repeated here.
Fig. 11 shows a structural block diagram of a terminal 1100 provided by an exemplary embodiment of the present invention. The terminal 1100 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, or a desktop computer. The terminal 1100 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 1100 includes a processor 1101 and a memory 1102.
The processor 1101 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 1101 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 1101 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1101 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1101 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 1102 may include one or more computer-readable storage media, which may be non-transient. The memory 1102 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transient computer-readable storage medium in the memory 1102 is used to store at least one instruction, the at least one instruction being executed by the processor 1101 to implement the target-tracking method provided by the method embodiments of the present application.
In some embodiments, the terminal 1100 may optionally further include a peripheral device interface 1103 and at least one peripheral device. The processor 1101, the memory 1102, and the peripheral device interface 1103 may be connected by a bus or signal line, and each peripheral device may be connected to the peripheral device interface 1103 by a bus, signal line, or circuit board. Specifically, the peripheral devices include at least one of: a radio-frequency circuit 1104, a touch display screen 1105, a camera 1106, an audio circuit 1107, a positioning component 1108, and a power supply 1109.
The peripheral device interface 1103 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 1101 and the memory 1102. In some embodiments, the processor 1101, the memory 1102, and the peripheral device interface 1103 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1101, the memory 1102, and the peripheral device interface 1103 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The radio-frequency circuit 1104 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio-frequency circuit 1104 communicates with communication networks and other communication devices through electromagnetic signals, converting electric signals into electromagnetic signals for transmission, or converting received electromagnetic signals into electric signals. Optionally, the radio-frequency circuit 1104 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio-frequency circuit 1104 can communicate with other terminals through at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, the generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio-frequency circuit 1104 may also include NFC (Near Field Communication) related circuits, which is not limited by the present application.
The display screen 1105 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1105 is a touch display screen, it also has the ability to acquire touch signals on or above its surface. The touch signal may be input to the processor 1101 as a control signal for processing, in which case the display screen 1105 may also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1105, arranged on the front panel of the terminal 1100; in other embodiments, there may be at least two display screens 1105, arranged on different surfaces of the terminal 1100 or in a folding design; in still other embodiments, the display screen 1105 may be a flexible display screen arranged on a curved or folded surface of the terminal 1100. The display screen 1105 may even be arranged in a non-rectangular irregular shape, i.e., a shaped screen. The display screen 1105 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 1106 is used to acquire images or video. Optionally, the camera assembly 1106 includes a front camera and a rear camera. In general, the front camera is arranged on the front panel of the terminal and the rear camera on its back. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background-blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 1106 may also include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation under different color temperatures.
The audio circuit 1107 may include a microphone and a loudspeaker. The microphone is used to acquire sound waves from the user and the environment and convert them into electric signals, which are input to the processor 1101 for processing or to the radio-frequency circuit 1104 to realize voice communication. For stereo acquisition or noise reduction, there may be multiple microphones, arranged at different parts of the terminal 1100; the microphone may also be an array microphone or an omnidirectional microphone. The loudspeaker is used to convert electric signals from the processor 1101 or the radio-frequency circuit 1104 into sound waves. The loudspeaker may be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker; a piezoelectric ceramic loudspeaker can convert electric signals not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 1107 may also include a headphone jack.
The positioning component 1108 is used to determine the current geographic position of the terminal 1100 to implement navigation or LBS (Location Based Service). The positioning component 1108 may be a positioning component based on the GPS (Global Positioning System) of the United States, the Beidou system of China, or the GLONASS system of Russia.
The power supply 1109 is used to supply power to the components of the terminal 1100. The power supply 1109 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1109 includes a rechargeable battery, the battery may be a wired rechargeable battery, charged through a wired line, or a wireless rechargeable battery, charged through a wireless coil; the rechargeable battery may also support fast-charging technology.
In some embodiments, the terminal 1100 further includes one or more sensors 1110, including but not limited to: an acceleration sensor 1111, a gyroscope sensor 1112, a pressure sensor 1113, a fingerprint sensor 1114, an optical sensor 1115, and a proximity sensor 1116.
The acceleration sensor 1111 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 1100. For example, the acceleration sensor 1111 can be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1101 can control the touch display screen 1105 to display the user interface in landscape or portrait view according to the gravitational-acceleration signal acquired by the acceleration sensor 1111. The acceleration sensor 1111 can also be used to acquire motion data for games or for the user.
The gyroscope sensor 1112 can detect the body direction and rotation angle of the terminal 1100 and can cooperate with the acceleration sensor 1111 to acquire the user's 3D actions on the terminal 1100. From the data acquired by the gyroscope sensor 1112, the processor 1101 can implement functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1113 may be arranged on the side frame of the terminal 1100 and/or the lower layer of the touch display screen 1105. When arranged on the side frame, the pressure sensor 1113 can detect the user's grip signal on the terminal 1100, and the processor 1101 performs left/right-hand recognition or shortcut operations according to the grip signal acquired by the pressure sensor 1113. When arranged on the lower layer of the touch display screen 1105, the pressure sensor 1113 allows the processor 1101 to control operable controls on the UI according to the user's pressure operations on the touch display screen 1105; the operable controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 1114 is used to acquire the user's fingerprint, and the processor 1101 identifies the user's identity from the fingerprint acquired by the fingerprint sensor 1114, or the fingerprint sensor 1114 identifies the user's identity from the acquired fingerprint. When the user's identity is recognized as a trusted identity, the processor 1101 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1114 may be arranged on the front, back, or side of the terminal 1100; when a physical button or a manufacturer logo is provided on the terminal 1100, the fingerprint sensor 1114 may be integrated with the physical button or the manufacturer logo.
The optical sensor 1115 is used to acquire the ambient light intensity. In one embodiment, the processor 1101 can control the display brightness of the touch display screen 1105 according to the ambient light intensity acquired by the optical sensor 1115: when the ambient light intensity is high, the display brightness of the touch display screen 1105 is turned up; when the ambient light intensity is low, it is turned down. In another embodiment, the processor 1101 can also dynamically adjust the shooting parameters of the camera assembly 1106 according to the ambient light intensity acquired by the optical sensor 1115.
The proximity sensor 1116, also called a distance sensor, is generally arranged on the front panel of the terminal 1100 and is used to acquire the distance between the user and the front of the terminal 1100. In one embodiment, when the proximity sensor 1116 detects that the distance between the user and the front of the terminal 1100 is gradually decreasing, the processor 1101 controls the touch display screen 1105 to switch from the bright-screen state to the off-screen state; when the proximity sensor 1116 detects that the distance is gradually increasing, the processor 1101 controls the touch display screen 1105 to switch from the off-screen state to the bright-screen state.
Those skilled in the art will understand that the structure shown in Fig. 11 does not constitute a limitation on the terminal 1100, which may include more or fewer components than illustrated, combine certain components, or adopt a different component arrangement.
In an exemplary embodiment, a computer-readable storage medium is also provided, for example a memory including instructions, the above instructions being executable by the processor of a terminal to complete the target-tracking method in the above embodiments. For example, the computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Those of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (14)

1. A target-tracking method, characterized in that it is applied in a terminal, the method comprising:
during tracking of a target in the video of a first video camera, when the target crosses a firing line in the video of the first video camera, obtaining the trigger position on the firing line, wherein the firing line refers to the boundary between the main field-of-view region of the first video camera and the main field-of-view region of a second video camera, and the trigger position refers to the position at which the motion track of the target intersects the firing line;
locating the target in the video of the second video camera based on the trigger position;
tracking the target in the video of the second video camera.
2. The method according to claim 1, wherein locating the target in the video of the second video camera based on the trigger position comprises:
based on the video of the first video camera, obtaining the point coordinate of the trigger position in the main field-of-view region of the first video camera as the first point coordinate;
based on the first point coordinate, obtaining the point coordinate of the trigger position in the main field-of-view region of the second video camera as the second point coordinate, which serves as the point coordinate at which the target is located in the video of the second video camera.
3. The method according to claim 2, wherein obtaining, based on the first point coordinate, the point coordinate of the trigger position in the main field-of-view region of the second video camera as the second point coordinate comprises:
obtaining the two endpoint coordinates of the firing line in the main field-of-view region of the first video camera as the first endpoint coordinate and the second endpoint coordinate;
obtaining the two endpoint coordinates of the firing line in the main field-of-view region of the second video camera as the third endpoint coordinate and the fourth endpoint coordinate;
calculating the second point coordinate according to the first endpoint coordinate, the second endpoint coordinate, the third endpoint coordinate, the fourth endpoint coordinate, and the first point coordinate.
4. The method according to claim 1, wherein locating the target in the video of the second video camera based on the trigger position comprises:
when the target crosses the firing line in the video of the first video camera, obtaining the time point of the first video camera's video record as the trigger time;
locating the target in the video of the second video camera based on the trigger position and the trigger time.
5. The method according to claim 1, wherein before obtaining the trigger position on the firing line, the method further comprises:
obtaining the appearance position of the target according to the video of a candidate video camera;
when the appearance position of the target belongs to the main field-of-view region of the candidate video camera, taking the candidate video camera as the first video camera; or,
when the appearance position of the target belongs to the edge field-of-view region of the candidate video camera, determining the video camera corresponding to the edge field-of-view region as the first video camera, the main field-of-view region of the first video camera including the edge field-of-view region, wherein the edge field-of-view region refers to the part of the candidate video camera's field-of-view region other than its main field-of-view region.
6. The method according to claim 1, wherein before obtaining the target to be tracked, the method further comprises:
for any two adjacent video cameras among multiple video cameras, taking the common chord of the field-of-view regions of the two video cameras as the common firing line of the two video cameras.
7. A target-tracking apparatus, characterized in that it is applied in a terminal, the apparatus comprising:
a crossing module, configured to, during tracking of a target in the video of a first video camera, obtain the trigger position on a firing line when the target crosses the firing line in the video of the first video camera, wherein the firing line refers to the boundary between the main field-of-view region of the first video camera and the main field-of-view region of a second video camera, and the trigger position refers to the position at which the motion track of the target intersects the firing line;
a locating module, configured to locate the target in the video of the second video camera based on the trigger position;
a tracking module, configured to track the target in the video of the second video camera.
8. The apparatus according to claim 7, wherein the locating module is configured to: based on the video of the first video camera, obtain the point coordinate of the trigger position in the main field-of-view region of the first video camera as the first point coordinate; and, based on the first point coordinate, obtain the point coordinate of the trigger position in the main field-of-view region of the second video camera as the second point coordinate, which serves as the point coordinate at which the target is located in the video of the second video camera.
9. The apparatus according to claim 8, wherein the locating module comprises:
an acquisition submodule, configured to obtain the two endpoint coordinates of the firing line in the main field-of-view region of the first video camera as the first endpoint coordinate and the second endpoint coordinate;
the acquisition submodule being further configured to obtain the two endpoint coordinates of the firing line in the main field-of-view region of the second video camera as the third endpoint coordinate and the fourth endpoint coordinate;
a calculation submodule, configured to calculate the second point coordinate according to the first endpoint coordinate, the second endpoint coordinate, the third endpoint coordinate, the fourth endpoint coordinate, and the first point coordinate.
10. The apparatus according to claim 7, wherein the locating module comprises:
an acquisition submodule, configured to obtain, when the target crosses the firing line in the video of the first video camera, the time point of the first video camera's video record as the trigger time;
a positioning submodule, configured to locate the target in the video of the second video camera based on the trigger position and the trigger time.
11. The apparatus according to claim 7, wherein the apparatus further comprises:
an obtaining module, configured to obtain the appearance position of the target according to the video of a candidate video camera;
the locating module being further configured to take the candidate video camera as the first video camera when the appearance position of the target belongs to the main field-of-view region of the candidate video camera; or,
the locating module being further configured to, when the appearance position of the target belongs to the edge field-of-view region of the candidate video camera, determine the video camera corresponding to the edge field-of-view region as the first video camera, the main field-of-view region of the first video camera including the edge field-of-view region, wherein the edge field-of-view region refers to the part of the candidate video camera's field-of-view region other than its main field-of-view region.
12. The apparatus according to claim 7, wherein the apparatus further comprises:
a determining module, configured to, for any two adjacent video cameras among multiple video cameras, take the common chord of the field-of-view regions of the two video cameras as the common firing line of the two video cameras.
13. A terminal, characterized in that the terminal comprises a processor and a memory, the memory storing at least one instruction, the instruction being loaded and executed by the processor to realize the operations performed by the target-tracking method according to any one of claims 1 to 6.
14. A computer-readable storage medium, characterized in that at least one instruction is stored in the storage medium, the instruction being loaded and executed by a processor to realize the operations performed by the target-tracking method according to any one of claims 1 to 6.
CN201810213898.9A 2018-03-15 2018-03-15 Target tracking method and device Active CN110276789B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810213898.9A CN110276789B (en) 2018-03-15 2018-03-15 Target tracking method and device


Publications (2)

Publication Number Publication Date
CN110276789A true CN110276789A (en) 2019-09-24
CN110276789B CN110276789B (en) 2021-10-29

Family

ID=67958483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810213898.9A Active CN110276789B (en) 2018-03-15 2018-03-15 Target tracking method and device

Country Status (1)

Country Link
CN (1) CN110276789B (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140313343A1 (en) * 2007-11-28 2014-10-23 Flir Systems, Inc. Modular infrared camera systems and methods
CN102074113A (en) * 2010-09-17 2011-05-25 浙江大华技术股份有限公司 License tag recognizing and vehicle speed measuring method based on videos
CN102289822A (en) * 2011-09-09 2011-12-21 南京大学 Method for tracking moving target collaboratively by multiple cameras
CN103607569A (en) * 2013-11-22 2014-02-26 广东威创视讯科技股份有限公司 Method and system for tracking monitored target in process of video monitoring
CN105631418A (en) * 2015-12-24 2016-06-01 浙江宇视科技有限公司 People counting method and device
CN105551264A (en) * 2015-12-25 2016-05-04 中国科学院上海高等研究院 Speed detection method based on license plate characteristic matching
CN106161941A (en) * 2016-07-29 2016-11-23 深圳众思科技有限公司 Dual camera chases after burnt method, device and terminal automatically

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110807804B (en) * 2019-11-04 2023-08-29 腾讯科技(深圳)有限公司 Method, apparatus, device and readable storage medium for target tracking
CN110807804A (en) * 2019-11-04 2020-02-18 腾讯科技(深圳)有限公司 Method, apparatus, device and readable storage medium for target tracking
CN110930437A (en) * 2019-11-20 2020-03-27 北京拙河科技有限公司 Target tracking method and device
CN111091584A (en) * 2019-12-23 2020-05-01 浙江宇视科技有限公司 Target tracking method, device, equipment and storage medium
CN111091584B (en) * 2019-12-23 2024-03-08 浙江宇视科技有限公司 Target tracking method, device, equipment and storage medium
US11869265B2 (en) * 2020-03-06 2024-01-09 Electronics And Telecommunications Research Institute Object tracking system and object tracking method
US20210279455A1 (en) * 2020-03-06 2021-09-09 Electronics And Telecommunications Research Institute Object tracking system and object tracking method
CN111612827A (en) * 2020-05-21 2020-09-01 广州海格通信集团股份有限公司 Target position determining method and device based on multiple cameras and computer equipment
CN111612827B (en) * 2020-05-21 2023-12-15 广州海格通信集团股份有限公司 Target position determining method and device based on multiple cameras and computer equipment
CN111918023B (en) * 2020-06-29 2021-10-22 北京大学 Monitoring target tracking method and device
CN111918023A (en) * 2020-06-29 2020-11-10 北京大学 Monitoring target tracking method and device
CN112843732B (en) * 2020-12-31 2023-01-13 上海米哈游天命科技有限公司 Method and device for shooting image, electronic equipment and storage medium
CN112843732A (en) * 2020-12-31 2021-05-28 上海米哈游天命科技有限公司 Method and device for shooting image, electronic equipment and storage medium
CN116600194A (en) * 2023-05-05 2023-08-15 深圳市门钥匙科技有限公司 Switching control method and system for multiple lenses

Also Published As

Publication number Publication date
CN110276789B (en) 2021-10-29

Similar Documents

Publication Publication Date Title
CN110276789A (en) Method for tracking target and device
US11551726B2 (en) Video synthesis method terminal and computer storage medium
CN110555883A (en) repositioning method and device for camera attitude tracking process and storage medium
CN110148178A (en) Camera localization method, device, terminal and storage medium
CN109947886A (en) Image processing method, device, electronic equipment and storage medium
CN108900858A (en) A kind of method and apparatus for giving virtual present
CN107888968A (en) Player method, device and the computer-readable storage medium of live video
CN113763228B (en) Image processing method, device, electronic equipment and storage medium
CN110166786A (en) Virtual objects transfer method and device
CN108965922A (en) Video cover generation method, device and storage medium
CN109922356A (en) Video recommendation method, device and computer readable storage medium
CN111127509A (en) Target tracking method, device and computer readable storage medium
CN109522863A (en) Ear's critical point detection method, apparatus and storage medium
CN108897597A (en) The method and apparatus of guidance configuration live streaming template
CN108900925A (en) The method and apparatus of live streaming template are set
CN107896337A (en) Information popularization method, apparatus and storage medium
CN110225390A (en) Method, apparatus, terminal and the computer readable storage medium of video preview
CN111031391A (en) Video dubbing method, device, server, terminal and storage medium
CN109189290B (en) Click area identification method and device and computer readable storage medium
CN109547847A (en) Add the method, apparatus and computer readable storage medium of video information
CN111723124B (en) Data collision analysis method and device, electronic equipment and storage medium
CN111611414B (en) Vehicle searching method, device and storage medium
CN114554112B (en) Video recording method, device, terminal and storage medium
CN109299319A (en) Display methods, device, terminal and the storage medium of audio-frequency information
CN110264292A (en) Determine the method, apparatus and storage medium of effective period of time

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant