CN107315992A - Tracking method and device based on an electronic gimbal - Google Patents

Tracking method and device based on an electronic gimbal

Info

Publication number
CN107315992A
CN107315992A (application CN201710323431.5A)
Authority
CN
China
Prior art keywords
image
target
tracking
point
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710323431.5A
Other languages
Chinese (zh)
Inventor
张显志 (Zhang Xianzhi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen AEE Technology Co Ltd
Original Assignee
Shenzhen AEE Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen AEE Technology Co Ltd filed Critical Shenzhen AEE Technology Co Ltd
Priority to CN201710323431.5A priority Critical patent/CN107315992A/en
Publication of CN107315992A publication Critical patent/CN107315992A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/48 - Matching video sequences
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a tracking method and device based on an electronic gimbal. The method includes: determining the original coordinate information of a region selection box in the currently displayed image; predicting, from the original coordinate information and a predicted movement direction, the target area in which the tracking target will be located in the next image to be displayed, and moving the target selection box to that target area; and receiving the image to be displayed captured by the image sensor of the electronic gimbal and finally outputting the target area. When a user shoots with a small portable shooting device, the invention can accurately track and film a fast-moving target in the picture, so that video of better image quality can be captured, meeting the user's needs and improving the user experience.

Description

Tracking method and device based on an electronic gimbal
Technical field
The present invention relates to the field of photography, and more particularly to a tracking method and device based on an electronic gimbal.
Background technology
At present, miniature portable shooting devices, such as portable high-definition cameras or high-resolution camera phones, basically meet the needs of most users.
However, as technology advances rapidly, people's expectations for shooting keep rising. For example, when travelling, watching a sporting event live, or attending a performance, users need the small shooting device they carry to photograph or film a fast-moving target in view, such as an athlete in a track-and-field race or a star performing in a variety show. Because such a target moves too quickly, a portable shooting device usually cannot position it accurately: photographs are likely to show motion blur, and video cannot track the target accurately and in time. In addition, hand shake while holding a portable device degrades stability and therefore the quality of the resulting photos or videos. Large cameras such as rail-mounted track cameras can track a target accurately, but they are usually supported by mechanical components, tend to be expensive, and are inconvenient to carry, which harms the user experience.
The above content is provided only to aid understanding of the technical solution of the present invention and does not constitute an admission that it is prior art.
Summary of the invention
The main object of the present invention is to provide a tracking method and device based on an electronic gimbal, aiming to solve the problem that, when a user shoots with a small portable shooting device, a fast-moving target in the picture cannot be tracked accurately, resulting in unsatisfactory image quality.
To achieve the above object, the present invention provides a tracking method based on an electronic gimbal, the method comprising the following steps:
determining the original coordinate information of a region selection box in the currently displayed image, the original coordinate information being the coordinate information of the region occupied by the tracking target in the currently displayed image;
predicting, from the original coordinate information and a predicted movement direction, the target area in which the tracking target will be located in the next image to be displayed, and moving the target selection box to the target area;
receiving the image to be displayed captured by the image sensor of the electronic gimbal, and outputting the target area.
Preferably, before predicting, from the original coordinate information and the predicted movement direction, the target area in which the tracking target will be located in the next image to be displayed, the method further comprises:
obtaining the feature information of the tracking target in a historical display image, and determining the predicted movement direction of the tracking target from the feature information of the tracking target in the historical display image and the feature information of the tracking target in the currently displayed image.
Preferably, the feature information comprises a feature point set;
correspondingly, obtaining the feature information of the tracking target in the historical display image and determining the predicted movement direction of the tracking target from the feature information of the tracking target in the historical display image and in the currently displayed image specifically comprises:
obtaining a feature point set A of the tracking target in the historical display image;
determining, from the original coordinate information, a feature point set B of the tracking target in the currently displayed image;
connecting the feature point set A with the corresponding feature points in the feature point set B to obtain the lines between corresponding feature points;
performing a weighted vector calculation on the lines between corresponding feature points to determine the predicted movement direction of the tracking target.
Preferably, after receiving the image to be displayed captured by the image sensor of the electronic gimbal and outputting the target area, the method further comprises:
matching the feature points in the feature point set B of the tracking target with the image to be displayed to determine a feature point set C of the tracking target in the image to be displayed;
clustering the feature points in the feature point set C, and taking the cluster center as the centroid of the tracking target in the image to be displayed;
obtaining the center point of the target selection box, and moving the target selection box to the position where its center point coincides with the centroid of the tracking target.
Preferably, before determining the original coordinate information of the region selection box in the currently displayed image, the method further comprises:
preprocessing the currently displayed image to remove noise pixels from the currently displayed image.
In addition, to achieve the above object, the present invention also provides a tracking device based on an electronic gimbal, the device comprising:
a positioning module, configured to determine the original coordinate information of a region selection box in the currently displayed image, the original coordinate information being the coordinate information of the region occupied by the tracking target in the currently displayed image;
a prediction module, configured to predict, from the original coordinate information and a predicted movement direction, the target area in which the tracking target will be located in the next image to be displayed, and to move the target selection box to the target area;
an output module, configured to receive the image to be displayed captured by the image sensor of the electronic gimbal and to output the target area.
Preferably, the device further comprises:
a direction determination module, configured to obtain the feature information of the tracking target in a historical display image and to determine the predicted movement direction of the tracking target from the feature information of the tracking target in the historical display image and in the currently displayed image.
Preferably, the feature information comprises a feature point set;
correspondingly, the direction determination module specifically comprises:
a feature point acquisition unit, configured to obtain a feature point set A of the tracking target in the historical display image, and to determine, from the original coordinate information, a feature point set B of the tracking target in the currently displayed image;
a connection unit, configured to connect the feature point set A with the corresponding feature points in the feature point set B to obtain the lines between corresponding feature points;
a direction determination unit, configured to perform a weighted vector calculation on the lines between corresponding feature points to determine the predicted movement direction of the tracking target.
Preferably, the device further comprises an adjustment module, which specifically comprises:
a matching unit, configured to match the feature points in the feature point set B of the tracking target with the image to be displayed to determine a feature point set C of the tracking target in the image to be displayed;
a clustering unit, configured to cluster the feature points in the feature point set C and take the cluster center as the centroid of the tracking target in the image to be displayed;
an adjustment unit, configured to obtain the center point of the target selection box and move the target selection box to the position where its center point coincides with the centroid of the tracking target.
Preferably, the device further comprises:
a preprocessing module, configured to preprocess the currently displayed image to remove noise pixels from the currently displayed image.
The present invention determines the original coordinate information of a region selection box in the currently displayed image, the original coordinate information being the coordinate information of the region occupied by the tracking target in the currently displayed image; predicts, from the original coordinate information and a predicted movement direction, the target area in which the tracking target will be located in the next image to be displayed, and moves the target selection box to that target area; and receives the image to be displayed captured by the image sensor of the electronic gimbal and finally outputs the target area. In this way, when a user shoots with a small portable shooting device, a fast-moving target in the picture can be accurately tracked and filmed, video of better image quality can be captured, the user's needs are met, and the user experience is improved.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a first embodiment of the tracking method based on an electronic gimbal according to the present invention;
Fig. 2 is a schematic diagram of the positioning and movement of the simulated target selection box optionally used in the embodiments of the present invention;
Fig. 3 is a schematic flowchart of a second embodiment of the tracking method based on an electronic gimbal according to the present invention;
Fig. 4 is a schematic flowchart of a third embodiment of the tracking method based on an electronic gimbal according to the present invention;
Fig. 5 is a schematic flowchart of a fourth embodiment of the tracking method based on an electronic gimbal according to the present invention;
Fig. 6 is a functional block diagram of a first embodiment of the tracking device based on an electronic gimbal according to the present invention;
Fig. 7 is a functional block diagram of a second embodiment of the tracking device based on an electronic gimbal according to the present invention;
Fig. 8 is a functional block diagram of a third embodiment of the tracking device based on an electronic gimbal according to the present invention;
Fig. 9 is a functional block diagram of a fourth embodiment of the tracking device based on an electronic gimbal according to the present invention.
The realization of the objects, functional characteristics and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Description of the embodiments
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
Referring to Fig. 1, a first embodiment of the present invention provides a tracking method based on an electronic gimbal, the method comprising:
S10: determining the original coordinate information of a region selection box in the currently displayed image, the original coordinate information being the coordinate information of the region occupied by the tracking target in the currently displayed image;
It should be noted that the steps of this embodiment are executed by the processor of the electronic gimbal; the processor can execute the method of this embodiment in a loop to track and film the tracking target. Before step S10, the user has identified a tracking target in the picture while shooting and has input a selection instruction. Referring to Fig. 2, the processor generates a target selection box R according to this instruction, i.e. according to the coordinate information of the region occupied by the tracking target in the current picture (output region U), so as to select the tracking target; the processor then identifies the feature information of the selected tracking target. The size of the target selection box R can be set by the user according to personal preference.
It will be appreciated that the current picture is the frame corresponding to the moment at which the user is shooting, and the tracking target is in this frame. The current video frame rate can reach about 26 frames per second, and the current picture is the n-th frame of the 26 frames in that second, where n is a number from 1 to 26.
In a specific implementation, the electronic gimbal in this embodiment establishes a spatial rectangular coordinate system, a polar coordinate system and a plane coordinate system for the output region U. The spatial rectangular coordinate system can be converted into the plane coordinate system of the output region U (the current picture), and any point in the current picture is fixed relative to the plane coordinate system of the output region U. The target selection box R may be a virtual box of fixed size; every point in the virtual box has a corresponding coordinate in the above plane coordinate system, and these points constitute the original coordinate information of the target selection box. After the target selection box R is confirmed, the processor identifies the feature information of the tracking target and, from this feature information, determines the coordinate range of the tracking target's pixels in the plane coordinate system of the output region U.
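For illustration only, the following is a minimal Python sketch of how the target selection box R and its original coordinate information in the plane coordinate system of the output region U could be represented; all names and values are assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class SelectionBox:
    x: int  # top-left corner, in the plane coordinates of output region U
    y: int
    w: int  # user-adjustable box size
    h: int

    def coordinates(self):
        """All pixel coordinates covered by the box (its original coordinate information)."""
        return [(self.x + dx, self.y + dy) for dy in range(self.h) for dx in range(self.w)]

    def contains(self, px, py):
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

# Example: an 80 x 120 box selected by the user around the tracked subject
box = SelectionBox(x=400, y=220, w=80, h=120)
```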
S20: predicting, from the original coordinate information and the predicted movement direction, the target area in which the tracking target will be located in the next image to be displayed, and moving the target selection box to the target area;
It should be noted that the next image to be displayed in step S20 is the frame following the currently displayed image; it is an unknown image (the image sensor has not yet captured it), and the image to be displayed may contain the same tracking target (i.e. the currently displayed image and the image to be displayed contain the same tracking target).
Preferably, before step S20, the feature information of the tracking target in a historical display image may be obtained, and the predicted movement direction of the tracking target may be determined from the feature information of the tracking target in the historical display image and the feature information of the tracking target in the currently displayed image.
It will be appreciated that, because the processor executes the method of this embodiment in a loop, the historical display image is obtained automatically when the processor determines the feature information of the tracking target in the currently displayed image. From the feature information of the tracking target in consecutive frames, the current motion trajectory of the tracking target is determined, and the output area of the tracking target in the next frame (the image to be displayed) is predicted from this current motion trajectory.
In a specific implementation, this embodiment preferably models the historical activity trajectory of the tracking target with a Markov model. Because a Markov model consists of a series of states and state transition matrices, the state obtained at the n-th change of the tracking target depends only on the state at the (n-1)-th change. According to the feature information of the tracking target, the region in which the target has most recently dwelt is obtained; this most recent dwell region is matched against the historical activity pattern of the tracking target stored in the Markov model to obtain the predicted movement trend, and the predicted position and a recommended position are generated from the predicted movement trend.
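A rough sketch of such a first-order Markov prediction is given below, assuming the frame is divided into coarse cells and a transition matrix is learned from the target's historical dwell cells; the grid size and the update rule are illustrative choices, not details from the patent.

```python
import numpy as np

class MarkovRegionPredictor:
    def __init__(self, n_cells):
        # Laplace-smoothed transition counts between dwell cells
        self.counts = np.ones((n_cells, n_cells))

    def update(self, prev_cell, curr_cell):
        # First-order Markov assumption: state n depends only on state n-1
        self.counts[prev_cell, curr_cell] += 1

    def predict(self, curr_cell):
        row = self.counts[curr_cell]
        return int(np.argmax(row / row.sum()))  # most probable next dwell cell

# Usage: cells indexed 0..8 on an assumed 3 x 3 grid over the frame
predictor = MarkovRegionPredictor(n_cells=9)
predictor.update(prev_cell=4, curr_cell=5)
next_cell = predictor.predict(curr_cell=5)
```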
S30: receiving the image to be displayed captured by the image sensor of the electronic gimbal, and outputting the target area.
It will be appreciated that the image sensor is an image sensor of very high resolution, and the electronic gimbal has an electronic image stabilization function. The image sensor captures an image as the image to be displayed; the currently displayed image and the image to be displayed may be two adjacent frames.
In a specific implementation, referring to Fig. 2, the captured image could in principle be output in full, i.e. the output region of the currently displayed image and of the image to be displayed is U, and the image is output for display. The current picture may be displayed on the screen of a small portable high-definition digital camera or of a smartphone; the original image may also be displayed on another display, and this embodiment does not limit this.
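As a short illustration of outputting only the target area from the captured frame, the following sketch crops the region covered by the moved selection box, assuming the frame is available as a NumPy array (e.g. via OpenCV); file names and function names are placeholders.

```python
import cv2

def output_target_area(frame, box_xywh):
    """Crop the target area (the moved selection box) from the captured frame."""
    x, y, bw, bh = box_xywh
    h, w = frame.shape[:2]
    x0, y0 = max(x, 0), max(y, 0)
    x1, y1 = min(x + bw, w), min(y + bh, h)  # clamp the box to the output region U
    return frame[y0:y1, x0:x1]

frame = cv2.imread("captured_frame.jpg")  # stand-in for a frame from the gimbal's sensor
target_view = output_target_area(frame, (400, 220, 80, 120))
cv2.imwrite("target_area.jpg", target_view)
```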
The electronic gimbal of the present invention comprises at least one movable zoom lens group, for example at least two movable groups: a zoom group and a compensation group. The zoom group moves to lengthen or shorten the viewing focal length, while the compensation group compensates for the resulting pixel differences. By moving the zoom group and the compensation group, the electronic gimbal can change the shooting focal length and field of view so as to enlarge or reduce the image. When the user shoots with the portable shooting device, the user can operate the zoom or adjust the size of the target selection box to magnify the view and obtain a clearer close-up of the tracking target.
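The zoom described above is optical, achieved by the movable zoom and compensation lens groups. Purely as an illustrative digital analogue of "shrinking the selection box to obtain a closer view of the target", the sketch below crops a smaller box around the same centre and upscales it; it is not the gimbal's optical zoom mechanism and all parameters are assumptions.

```python
import cv2

def digital_close_up(frame, box_xywh, zoom=2.0):
    """Crop a smaller box around the same centre and upscale it to the original box size."""
    x, y, w, h = box_xywh
    cx, cy = x + w / 2.0, y + h / 2.0
    nw, nh = int(w / zoom), int(h / zoom)  # shrink the selection box by the zoom factor
    crop = frame[int(cy - nh / 2):int(cy - nh / 2) + nh,
                 int(cx - nw / 2):int(cx - nw / 2) + nw]
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)

close_up = digital_close_up(cv2.imread("captured_frame.jpg"), (400, 220, 80, 120), zoom=2.0)
```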
In this embodiment, the original coordinate information of the region selection box in the currently displayed image is determined; the target area in which the tracking target will be located in the next image to be displayed is predicted from the original coordinate information and the predicted movement direction, and the target selection box is moved to that target area; the image to be displayed captured by the image sensor of the electronic gimbal is received, and the target area is output. In this way, when the user shoots with a portable shooting device, a fast-moving target in the picture can be accurately tracked, and video of better image quality can be captured.
Referring to Fig. 3, Fig. 3 is a schematic flowchart of a second embodiment of the tracking method based on an electronic gimbal according to the present invention. Based on the embodiment shown in Fig. 1, a second embodiment of the tracking method based on an electronic gimbal of the present invention is proposed.
In this embodiment, the feature information comprises a feature point set;
correspondingly, obtaining the feature information of the tracking target in the historical display image and determining the predicted movement direction of the tracking target from the feature information of the tracking target in the historical display image and in the currently displayed image specifically comprises:
S101: obtaining a feature point set A of the tracking target in the historical display image;
S102: determining, from the original coordinate information, a feature point set B of the tracking target in the currently displayed image;
Perform the method for the present embodiment to complete to clap the tracking for tracking target it should be noted that processor can be circulated Take the photograph, therefore, can be to the first frame for being gathered by imaging sensor when processor performs the method for the present embodiment for the first time Tracking clarification of objective point in image is identified to obtain the tracking clarification of objective point in the first two field picture;When tracking mesh When target translational speed is very fast, the history displaying image is the previous frame image of the current presentation image;When tracking target Without it is mobile or in it is static when, and 26 frame per second or so can be reached in view of current video frame rate, processor can root According to actual conditions image is shown using the n-th frame image before current presentation image as history;
It will be appreciated that each characteristic point in original coordinates packet B containing feature point set;
In a specific implementation, feature point recognition on an image in this embodiment may be based on the scale-invariant feature transform (SIFT) matching algorithm, on the SURF (speeded-up robust features) algorithm, on the FAST corner feature extraction algorithm, on the Harris corner feature extraction algorithm, or on the BRIEF algorithm; this embodiment does not limit this. Taking the SIFT matching algorithm as an example, the algorithm can roughly be divided into three steps. Step 1: initialization, simulating the multi-scale features of the currently displayed image and constructing, with a structure of octaves and layers, a pyramid-shaped scale space with a linear relationship; the multi-scale features of the currently displayed image data are established so that feature points can be searched for efficiently on this core scale. Step 2: during the search for feature points, each pixel is sampled to find extreme points; each sample point is compared with all of its neighbours to see whether it is larger or smaller than its neighbours in both the image domain and the scale domain, so as to ensure that extreme points are detected in the scale space as well as the two-dimensional image space. If a sample point is a maximum or a minimum within its image domain and scale domain, it is considered a feature point. It should also be noted that a local extreme point is not necessarily a true extreme point; true extreme points may fall in the gaps between discrete points, so these gap positions must be interpolated before the coordinates of the extreme point are recomputed. Step 3: determining the orientation of each feature point; the orientation is obtained by computing a histogram of the gradient directions of the points in the feature point's neighbourhood, the direction with the largest proportion in the histogram is taken as the principal direction of the feature point, and an auxiliary direction may also be chosen. When computing the feature point's vector, the local image must be rotated along the principal direction before the gradient histogram of the neighbourhood is computed again. With the above algorithm, the vectors of the feature points (feature points with direction) in the currently displayed image can be accurately identified, and the set of these feature points is defined as the feature point set B.
By recognizing the feature points in the currently displayed image with the SIFT matching algorithm, good detection and recognition results can still be obtained even when the rotation angle, the image brightness or the shooting angle changes.
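The following is a short sketch of such SIFT-based keypoint extraction using OpenCV's implementation; the patent only names the algorithm, so the file names and default parameters below are assumptions.

```python
import cv2

def extract_feature_point_set(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()  # builds the scale-space pyramid internally
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    # Each keypoint carries position, scale and dominant orientation (a
    # "feature point with direction"); descriptors are 128-dimensional vectors.
    return keypoints, descriptors

kps_A, desc_A = extract_feature_point_set(cv2.imread("history_frame.jpg"))
kps_B, desc_B = extract_feature_point_set(cv2.imread("current_frame.jpg"))
```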
S103: connecting the feature point set A with the corresponding feature points in the feature point set B to obtain the lines between corresponding feature points;
It will be appreciated that steps S101 and S102 obtain the sets of feature points of the tracking target in the historical display image and in the currently displayed image respectively, and the tracking target differs in shape and size between the two. The method may use the Euclidean distance to measure the similarity between feature points in the feature point set A and the feature point set B. For example, for a feature point x of the feature point set A in the historical display image, the most similar point y of the feature point set B in the currently displayed image is sought; the simplest way is to compare the similarity of x with every point in the feature point set B, and the point with the smallest distance is the matched feature point.
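A minimal sketch of this Euclidean-distance matching is shown below: for every feature point in set A, the point in set B with the smallest descriptor distance is taken as its match. A brute-force search is used here for clarity; the patent does not prescribe a particular search method, and the random arrays are only stand-ins for SIFT descriptors.

```python
import numpy as np

def match_by_euclidean_distance(desc_A, desc_B):
    matches = []
    for i, d in enumerate(desc_A):
        dists = np.linalg.norm(desc_B - d, axis=1)  # Euclidean distance to every point in B
        j = int(np.argmin(dists))
        matches.append((i, j, float(dists[j])))     # (index in A, index in B, distance)
    return matches

rng = np.random.default_rng(0)
desc_A, desc_B = rng.random((40, 128)), rng.random((55, 128))  # stand-ins for SIFT descriptors
matches_AB = match_by_euclidean_distance(desc_A, desc_B)
```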
S104: performing a weighted vector calculation on the lines between corresponding feature points to determine the predicted movement direction of the tracking target.
It should be noted that when the tracking target moves slowly, the motion tracks of the same tracking target in two frames taken within a short time (i.e. the lines between corresponding feature points) are likely to be highly similar: for example, roughly parallel, close together, and with roughly the same displacement between adjacent track points. Therefore, before step S104, this embodiment may further include: clustering the lines (motion tracks) between corresponding feature points according to this high similarity (e.g. roughly parallel, close together, similar adjacent track point positions), initializing the target in the first few frames in which the tracking target appears, then updating the target in subsequent frames, continuously adding and deleting tracks. This simplifies the processor's computation, so that the weighted vector calculation need not be performed on the feature points of every subsequent frame, which improves the response time of target tracking.
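The sketch below illustrates the weighted vector calculation: each matched pair gives a displacement vector (the "line" between corresponding feature points), and a weighted average of those vectors gives the predicted movement direction. Weighting by inverse match distance is an assumed scheme; the patent does not fix the weights.

```python
import numpy as np

def predicted_movement_direction(pts_A, pts_B, match_dists):
    vectors = pts_B - pts_A                            # one displacement per matched feature pair
    weights = 1.0 / (np.asarray(match_dists) + 1e-6)   # closer matches count more (assumed scheme)
    mean_vec = np.average(vectors, axis=0, weights=weights)
    norm = np.linalg.norm(mean_vec)
    return mean_vec / norm if norm > 0 else mean_vec   # unit vector of the predicted direction

pts_A = np.array([[100.0, 50.0], [120.0, 60.0], [110.0, 80.0]])  # matched points in history frame
pts_B = np.array([[108.0, 50.5], [128.0, 60.2], [118.0, 80.4]])  # corresponding points in current frame
direction = predicted_movement_direction(pts_A, pts_B, match_dists=[0.2, 0.3, 0.25])
```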
In this embodiment, the lines between the feature points in the historical display image and the feature points in the currently displayed image are obtained to determine the motion trajectory of the tracking target, and the predicted movement direction of the tracking target is then determined from the motion trajectory. The target area in which the tracking target will be located in the image to be displayed is predicted from the predicted movement direction, and the target selection box is moved to that target area. Weighted vector matching effectively reduces the error in computing the predicted movement direction of the target selection box; the algorithm is stable and fast, which helps achieve rapid and accurate positioning of the tracking target.
Referring to Fig. 4, Fig. 4 is a schematic flowchart of a third embodiment of the tracking method based on an electronic gimbal according to the present invention. Based on the embodiment shown in Fig. 3, a third embodiment of the tracking method based on an electronic gimbal of the present invention is proposed.
In this embodiment, after receiving the image to be displayed captured by the image sensor of the electronic gimbal and outputting the target area, the method further comprises:
S401: matching the feature points in the feature point set B of the tracking target with the image to be displayed to determine a feature point set C of the tracking target in the image to be displayed;
It will be appreciated that steps S101 and S102 of the second embodiment have already determined the feature points in the currently displayed image and their directions; in step S401 of this embodiment, the SIFT matching algorithm can likewise be used to identify the feature points in the image to be displayed. With the above algorithm, the vectors of the feature points (feature points with direction) in the image to be displayed can be accurately identified, and the set of these feature points is defined as the feature point set C.
The currently displayed image and the image to be displayed contain the same tracking target; the processor compares the two frames (i.e. the currently displayed image and the image to be displayed) according to the feature information of the tracking target, and can thereby identify the corresponding feature information of the tracking target in the image to be displayed. Every pixel of the tracking target in the image to be displayed has a corresponding coordinate in the above plane coordinate system, and these points constitute the target coordinate information.
The Euclidean distance may be used to measure the similarity between feature points in the feature point set B and the feature point set C. For example, for a feature point y of the feature point set B in the currently displayed image, the most similar point z of the feature point set C in the image to be displayed is sought; the simplest way is to compare the similarity of y with every point in the feature point set C, and the point with the smallest distance is the matched feature point.
S402: clustering the feature points in the feature point set C, and taking the cluster center as the centroid of the tracking target in the image to be displayed;
It will be appreciated that the same tracking target differs in shape and size between the two frames; its feature points are therefore clustered according to the feature information of the tracking target, a cluster center is obtained, and the cluster center is taken as the centroid of the tracking target.
S403: obtaining the center point of the target selection box, and moving the target selection box to the position where the center point of the target selection box coincides with the centroid of the tracking target.
In this embodiment, because the interval between frames is very short, the target essentially cannot leave the output region. After the image to be displayed has been presented, the centroid of the tracking target in the picture has already been obtained in step S402; the position of the target selection box before step S403 is the region determined by prediction, and by fine-tuning the target selection box through the above steps, the tracking target can be kept in the central area of the output region (the target selection box).
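A compact sketch of steps S402 and S403 follows: the matched feature points of the target in the image to be displayed are reduced to a single cluster center (their mean is used here as one simple choice, since the patent does not name a specific clustering algorithm), and the selection box is shifted so that its center coincides with that centroid. Names and coordinates are illustrative.

```python
import numpy as np

def recenter_box(box_xywh, feature_points_C):
    pts = np.asarray(feature_points_C, dtype=float)
    centroid = pts.mean(axis=0)              # cluster center of the target's feature points
    x, y, w, h = box_xywh
    new_x = int(round(centroid[0] - w / 2))  # place the box center on the centroid
    new_y = int(round(centroid[1] - h / 2))
    return (new_x, new_y, w, h)

feature_points_C = [(412, 233), (430, 250), (418, 268), (441, 241)]
adjusted_box = recenter_box((400, 220, 80, 120), feature_points_C)
```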
Referring to Fig. 5, Fig. 5 is a schematic flowchart of a fourth embodiment of the tracking method based on an electronic gimbal according to the present invention. Based on the embodiment shown in Fig. 1, a fourth embodiment of the tracking method based on an electronic gimbal of the present invention is proposed.
In this embodiment, before determining the original coordinate information of the region selection box in the currently displayed image, the method further comprises:
S001: preprocessing the currently displayed image to remove noise pixels from the currently displayed image.
It will be appreciated that the image captured by the image sensor has usually been scaled, and random black-and-white bright or dark noise points may be produced by the image sensor, the transmission channel, the decoding process and so on, or random points with raised grey values or black intensity values may appear, affecting the image quality of the final output.
In a specific implementation, the image noise that may occur as described above can be removed by smoothing filtering, i.e. the average grey value of the pixels in the neighbourhood determined by a filter mask replaces the value of each pixel of the image to be shown, which improves the currently displayed image when noise interference is present.
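As a one-line illustration of this smoothing-filter preprocessing, the sketch below replaces each pixel with the mean of its neighbourhood under a box filter mask, which suppresses isolated bright or dark noise pixels; the 3x3 kernel size is an illustrative choice.

```python
import cv2

def denoise(image):
    return cv2.blur(image, (3, 3))  # neighbourhood-mean (box) filter

clean = denoise(cv2.imread("current_frame.jpg"))
```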
In this embodiment, the pixel noise that may appear in the current picture is removed by smoothing filtering, so that an image display of better quality is obtained.
Referring to Fig. 6, a first embodiment of the present invention provides a tracking device based on an electronic gimbal, the device comprising:
a positioning module 10, configured to determine the original coordinate information of a region selection box in the currently displayed image, the original coordinate information being the coordinate information of the region occupied by the tracking target in the currently displayed image.
It should be noted that, while shooting the picture, the user has identified a tracking target in the picture and has input a selection instruction. Referring to Fig. 2, the processor of the electronic gimbal in the device of this embodiment generates a target selection box R according to this instruction, i.e. according to the coordinate information of the region occupied by the tracking target in the current picture (output region U), so as to select the tracking target; the processor then identifies the feature information of the selected tracking target. The size of the target selection box R can be set by the user according to personal preference.
It will be appreciated that the current picture is the frame corresponding to the moment at which the user is shooting, and the tracking target is in this frame. The current video frame rate can reach about 26 frames per second, and the current picture is the n-th frame of the 26 frames in that second, where n is a number from 1 to 26.
In a specific implementation, the electronic gimbal in this embodiment establishes a spatial rectangular coordinate system, a polar coordinate system and a plane coordinate system for the output region U. The spatial rectangular coordinate system can be converted into the plane coordinate system of the output region U (the current picture), and any point in the current picture is fixed relative to the plane coordinate system of the output region U. The target selection box R may be a virtual box of fixed size; every point in the virtual box has a corresponding coordinate in the above plane coordinate system, and these points constitute the original coordinate information of the target selection box. After the target selection box R is confirmed, the processor identifies the feature information of the tracking target and, from this feature information, determines the coordinate range of the tracking target's pixels in the plane coordinate system of the output region U.
A direction determination module 15 is configured to obtain the feature information of the tracking target in a historical display image and to determine the predicted movement direction of the tracking target from the feature information of the tracking target in the historical display image and in the currently displayed image.
A prediction module 20 is configured to predict, from the original coordinate information and the predicted movement direction, the target area in which the tracking target will be located in the next image to be displayed, and to move the target selection box to the target area.
It should be noted that the next image to be displayed is the frame following the currently displayed image; it is an unknown image (the image sensor has not yet captured it), and the image to be displayed may contain the same tracking target (i.e. the currently displayed image and the image to be displayed contain the same tracking target).
It will be appreciated that, when the processor determines the feature information of the tracking target in the currently displayed image, the historical display image is obtained automatically. From the feature information of the tracking target in consecutive frames, the current motion trajectory of the tracking target is determined, and the output area of the tracking target in the next frame (the image to be displayed) is predicted from this current motion trajectory.
In a specific implementation, this embodiment preferably models the historical activity trajectory of the tracking target with a Markov model. Because a Markov model consists of a series of states and state transition matrices, the state obtained at the n-th change of the tracking target depends only on the state at the (n-1)-th change. According to the feature information of the tracking target, the region in which the target has most recently dwelt is obtained; this most recent dwell region is matched against the historical activity pattern of the tracking target stored in the Markov model to obtain the predicted movement trend, and the predicted position and a recommended position are generated from the predicted movement trend.
An output module 30 is configured to receive the image to be displayed captured by the image sensor of the electronic gimbal and to output the target area.
It will be appreciated that the image sensor is an image sensor of very high resolution, and the electronic gimbal has an electronic image stabilization function. The image sensor captures an image as the image to be displayed; the currently displayed image and the image to be displayed may be two adjacent frames.
In a specific implementation, referring to Fig. 2, the captured image could in principle be output in full, i.e. the output region of the currently displayed image and of the image to be displayed is U, and the image is output for display. The current picture may be displayed on the screen of a small portable high-definition digital camera or of a smartphone; the original image may also be displayed on another display, and this embodiment does not limit this.
The electronic gimbal of the present invention comprises at least one movable zoom lens group, for example at least two movable groups: a zoom group and a compensation group. The zoom group moves to lengthen or shorten the viewing focal length, while the compensation group compensates for the resulting pixel differences. By moving the zoom group and the compensation group, the electronic gimbal can change the shooting focal length and field of view so as to enlarge or reduce the image. When the user shoots with the portable shooting device, the user can operate the zoom or adjust the size of the target selection box to magnify the view and obtain a clearer close-up of the tracking target.
This embodiment determines the original coordinate information of the region selection box in the currently displayed image; predicts, from the original coordinate information and the predicted movement direction, the target area in which the tracking target will be located in the next image to be displayed, and moves the target selection box to that target area; and receives the image to be displayed captured by the image sensor of the electronic gimbal and outputs the target area. In this way, when the user shoots with a portable shooting device, a fast-moving target in the picture can be accurately tracked, and video of better image quality can be captured.
Referring to Fig. 7, Fig. 7 is a functional block diagram of a second embodiment of the tracking device based on an electronic gimbal according to the present invention. Based on the embodiment shown in Fig. 6, a second embodiment of the tracking device based on an electronic gimbal of the present invention is proposed.
In this embodiment, the feature information comprises a feature point set;
correspondingly, the direction determination module 15 specifically comprises:
a feature point acquisition unit 101, configured to obtain a feature point set A of the tracking target in the historical display image, and to determine, from the original coordinate information, a feature point set B of the tracking target in the currently displayed image.
It should be noted that the processor operates in a loop to track and film the tracking target; therefore, when the processor runs for the first time, the feature points of the tracking target in the first frame captured by the image sensor are identified to obtain the feature points of the tracking target in that first frame. When the tracking target moves quickly, the historical display image is the frame preceding the currently displayed image; when the tracking target is not moving or is stationary, and considering that the current video frame rate can reach about 26 frames per second, the processor may, depending on the actual situation, use the n-th frame before the currently displayed image as the historical display image.
It will be appreciated that the original coordinate information contains every feature point in the feature point set B.
In a specific implementation, feature point recognition on an image in this embodiment may be based on the scale-invariant feature transform (SIFT) matching algorithm, on the SURF (speeded-up robust features) algorithm, on the FAST corner feature extraction algorithm, on the Harris corner feature extraction algorithm, or on the BRIEF algorithm; this embodiment does not limit this. Taking the SIFT matching algorithm as an example, the algorithm can roughly be divided into three steps. Step 1: initialization, simulating the multi-scale features of the currently displayed image and constructing, with a structure of octaves and layers, a pyramid-shaped scale space with a linear relationship; the multi-scale features of the currently displayed image data are established so that feature points can be searched for efficiently on this core scale. Step 2: during the search for feature points, each pixel is sampled to find extreme points; each sample point is compared with all of its neighbours to see whether it is larger or smaller than its neighbours in both the image domain and the scale domain, so as to ensure that extreme points are detected in the scale space as well as the two-dimensional image space. If a sample point is a maximum or a minimum within its image domain and scale domain, it is considered a feature point. It should also be noted that a local extreme point is not necessarily a true extreme point; true extreme points may fall in the gaps between discrete points, so these gap positions must be interpolated before the coordinates of the extreme point are recomputed. Step 3: determining the orientation of each feature point; the orientation is obtained by computing a histogram of the gradient directions of the points in the feature point's neighbourhood, the direction with the largest proportion in the histogram is taken as the principal direction of the feature point, and an auxiliary direction may also be chosen. When computing the feature point's vector, the local image must be rotated along the principal direction before the gradient histogram of the neighbourhood is computed again. With the above algorithm, the vectors of the feature points (feature points with direction) in the currently displayed image can be accurately identified, and the set of these feature points is defined as the feature point set B.
By recognizing the feature points in the currently displayed image with the SIFT matching algorithm, good detection and recognition results can still be obtained even when the rotation angle, the image brightness or the shooting angle changes.
A connection unit 102 is configured to connect the feature point set A with the corresponding feature points in the feature point set B to obtain the lines between corresponding feature points.
It will be appreciated that the feature point acquisition unit 101 obtains the sets of feature points of the tracking target in the historical display image and in the currently displayed image respectively, and the tracking target differs in shape and size between the two. The method may use the Euclidean distance to measure the similarity between feature points in the feature point set A and the feature point set B. For example, for a feature point x of the feature point set A in the historical display image, the most similar point y of the feature point set B in the currently displayed image is sought; the simplest way is to compare the similarity of x with every point in the feature point set B, and the point with the smallest distance is the matched feature point.
A direction determination unit 103 is configured to perform a weighted vector calculation on the lines between corresponding feature points to determine the predicted movement direction of the tracking target.
It should be noted that when the tracking target moves slowly, the motion tracks of the same tracking target in two frames taken within a short time (i.e. the lines between corresponding feature points) are likely to be highly similar: for example, roughly parallel, close together, and with roughly the same displacement between adjacent track points. Therefore, this embodiment may further include: clustering the lines (motion tracks) between corresponding feature points according to this high similarity (e.g. roughly parallel, close together, similar adjacent track point positions), initializing the target in the first few frames in which the tracking target appears, then updating the target in subsequent frames, continuously adding and deleting tracks. This simplifies the processor's computation, so that the weighted vector calculation need not be performed on the feature points of every subsequent frame, which improves the response time of target tracking.
In this embodiment, the lines between the feature points in the historical display image and the feature points in the currently displayed image are obtained to determine the motion trajectory of the tracking target, and the predicted movement direction of the tracking target is then determined from the motion trajectory. The target area in which the tracking target will be located in the image to be displayed is predicted from the predicted movement direction, and the target selection box is moved to that target area. Weighted vector matching effectively reduces the error in computing the predicted movement direction of the target selection box; the algorithm is stable and fast, which helps achieve rapid and accurate positioning of the tracking target.
Referring to Fig. 8, Fig. 8 is a functional block diagram of a third embodiment of the tracking device based on an electronic gimbal according to the present invention. Based on the embodiment shown in Fig. 7, a third embodiment of the tracking device based on an electronic gimbal of the present invention is proposed.
In this embodiment, the device further comprises an adjustment module 40, which specifically comprises:
a matching unit 401, configured to match the feature points in the feature point set B of the tracking target with the image to be displayed to determine a feature point set C of the tracking target in the image to be displayed.
It will be appreciated that, in this embodiment, the SIFT matching algorithm can likewise be used to identify the feature points in the image to be displayed. With the above algorithm, the vectors of the feature points (feature points with direction) in the image to be displayed can be accurately identified, and the set of these feature points is defined as the feature point set C.
The currently displayed image and the image to be displayed contain the same tracking target; the processor compares the two frames (i.e. the currently displayed image and the image to be displayed) according to the feature information of the tracking target, and can thereby identify the corresponding feature information of the tracking target in the image to be displayed. Every pixel of the tracking target in the image to be displayed has a corresponding coordinate in the above plane coordinate system, and these points constitute the target coordinate information.
The Euclidean distance may be used to measure the similarity between feature points in the feature point set B and the feature point set C. For example, for a feature point y of the feature point set B in the currently displayed image, the most similar point z of the feature point set C in the image to be displayed is sought; the simplest way is to compare the similarity of y with every point in the feature point set C, and the point with the smallest distance is the matched feature point.
A clustering unit 402 is configured to cluster the feature points in the feature point set C and take the cluster center as the centroid of the tracking target in the image to be displayed.
It will be appreciated that the same tracking target differs in shape and size between the two frames; its feature points are therefore clustered according to the feature information of the tracking target, a cluster center is obtained, and the cluster center is taken as the centroid of the tracking target.
An adjustment unit 403 is configured to obtain the center point of the target selection box and to move the target selection box to the position where the center point of the target selection box coincides with the centroid of the tracking target.
In this embodiment, because the interval between frames is very short, the target essentially cannot leave the output region; by fine-tuning the target selection box, the tracking target can be kept in the central area of the output region (the target selection box).
Referring to Fig. 9, Fig. 9 is a functional block diagram of a fourth embodiment of the tracking device based on an electronic gimbal according to the present invention. Based on any of the embodiments shown in Figs. 6 to 8, a fourth embodiment of the tracking device based on an electronic gimbal of the present invention is proposed.
In this embodiment, the device further comprises:
a preprocessing module 01, configured to preprocess the currently displayed image to remove noise pixels from the currently displayed image.
It will be appreciated that the image captured by the image sensor has usually been scaled, and random black-and-white bright or dark noise points may be produced by the image sensor, the transmission channel, the decoding process and so on, or random points with raised grey values or black intensity values may appear, affecting the image quality of the final output.
In a specific implementation, the image noise that may occur as described above can be removed by smoothing filtering, i.e. the average grey value of the pixels in the neighbourhood determined by a filter mask replaces the value of each pixel of the image, which improves the currently displayed image when noise interference is present.
In this embodiment, the pixel noise that may appear in the current picture is removed by smoothing filtering, so that an image display of better quality is obtained.
In a specific implementation, the possible image noise described above may likewise be removed from the image to be displayed by smoothing filtering, i.e. the average grey value of the pixels in the neighbourhood determined by a filter mask replaces the value of each pixel, which improves the image to be displayed when noise interference is present.
In this embodiment, the pixel noise that may appear in the picture to be displayed is removed by smoothing filtering, so that an image display of better quality is obtained.
It should be noted that, in this document, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, article or device. In the absence of further limitations, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article or device comprising that element.
The numbering of the embodiments of the present invention is for description only and does not indicate the superiority or inferiority of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the part of the technical solution of the present invention that essentially contributes to the prior art can be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, a magnetic disk or an optical disc) and includes instructions for causing a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, etc.) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the protection scope of the present invention.

Claims (10)

1. a kind of tracking based on electronic platform, it is characterised in that the described method comprises the following steps:
Regional choice frame original coordinate information in current presentation image is determined, the original coordinates information is tracking target in institute State the coordinate information in residing region in current presentation image;
According to residing for the original coordinates information and the pre- moving direction prediction tracking target in next frame image to be presented Target area, the target selection frame is moved to the target area;
The image to be presented grabbed by the imaging sensor of electronic platform is received, and the target area is exported.
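Purely as an illustration of how the three steps recited in claim 1 fit together (every function name below is a hypothetical placeholder, not an identifier from the disclosure), one iteration of the claimed method might be sketched as:

```python
def track_one_frame(selection_box, current_image, history_features, next_image):
    """One iteration: predict where the tracking target will be in the next
    frame, move the target selection frame there, and output that region of
    the newly captured frame."""
    direction = estimate_pre_moving_direction(history_features, current_image)  # cf. claims 2-3
    target_area = move_box(selection_box, direction)                            # predicted region
    return crop(next_image, target_area)                                        # output the target area
```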
2. The method according to claim 1, characterized in that, before predicting, according to the original coordinate information and the pre-moving direction, the target area occupied by the tracking target in the next frame of image to be presented, the method further comprises:
obtaining feature information of the tracking target in a history presentation image, and determining the pre-moving direction of the tracking target according to the feature information of the tracking target in the history presentation image and the feature information of the tracking target in the current presentation image.
3. The method according to claim 2, characterized in that the feature information comprises a feature point set;
correspondingly, obtaining the feature information of the tracking target in the history presentation image, and determining the pre-moving direction of the tracking target according to the feature information of the tracking target in the history presentation image and the feature information of the tracking target in the current presentation image, specifically comprises:
obtaining a feature point set A of the tracking target in the history presentation image;
determining a feature point set B of the tracking target in the current presentation image from the original coordinate information;
connecting the feature point set A with the corresponding feature points in the feature point set B, to obtain a line between each pair of corresponding feature points;
performing a vector-weighted calculation on the lines between the corresponding feature points, to determine the pre-moving direction of the tracking target.
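One plausible reading of the vector-weighted calculation recited in claim 3 (the uniform default weights and the use of NumPy are assumptions of this sketch, not fixed by the disclosure) is to form the displacement vector of each corresponding point pair and take their weighted sum:

```python
import numpy as np

def pre_moving_direction(points_a, points_b, weights=None):
    """points_a, points_b: (N, 2) arrays of corresponding feature points from
    the history presentation image and the current presentation image.  The
    line between each pair is treated as a displacement vector; the weighted
    sum of these vectors gives the pre-moving direction of the tracking target."""
    points_a = np.asarray(points_a, dtype=np.float32)
    points_b = np.asarray(points_b, dtype=np.float32)
    vectors = points_b - points_a                         # one line per corresponding pair
    if weights is None:                                   # default: equal weighting
        weights = np.full(len(vectors), 1.0 / len(vectors), dtype=np.float32)
    direction = (vectors * np.asarray(weights)[:, None]).sum(axis=0)
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 0 else direction    # unit vector, or zero if no motion
```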
4. The method according to claim 3, characterized in that, after receiving the image to be presented captured by the image sensor of the electronic platform and outputting the target area, the method further comprises:
matching the feature points in the feature point set B of the tracking target with the image to be presented, to determine a feature point set C of the tracking target in the image to be presented;
clustering the feature points in the feature point set C, and taking the cluster centre as the centre of gravity of the tracking target in the image to be presented;
obtaining the centre point of the target selection frame, and moving the target selection frame to a position where the centre point of the target selection frame coincides with the centre of gravity of the tracking target.
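A minimal sketch of the re-centring step of claim 4 follows; taking the cluster centre simply as the mean of the matched points is an assumption of this example, since the disclosure does not fix a particular clustering algorithm:

```python
import numpy as np

def recenter_selection_frame(box, points_c):
    """box: (x, y, w, h) of the target selection frame; points_c: (N, 2)
    feature points of set C matched in the image to be presented.  The
    centroid of the points serves as the target's centre of gravity, and
    the frame is translated so that its centre point coincides with it."""
    cx, cy = np.asarray(points_c, dtype=np.float32).mean(axis=0)
    x, y, w, h = box
    return (cx - w / 2.0, cy - h / 2.0, w, h)
```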
5. The method according to any one of claims 1 to 4, characterized in that, before determining the original coordinate information of the region selection frame in the current presentation image, the method further comprises:
pre-processing the current presentation image to remove noise pixels in the current presentation image.
6. A tracking device based on an electronic platform, characterized in that the device comprises:
a locating module, configured to determine original coordinate information of a region selection frame in a current presentation image, wherein the original coordinate information is the coordinate information of the region occupied by a tracking target in the current presentation image;
a prediction module, configured to predict, according to the original coordinate information and a pre-moving direction, the target area occupied by the tracking target in the next frame of image to be presented, and to move the target selection frame to the target area;
an output module, configured to receive the image to be presented captured by an image sensor of the electronic platform, and to output the target area.
7. The device according to claim 6, characterized in that the device further comprises:
a direction determining module, configured to obtain feature information of the tracking target in a history presentation image, and to determine the pre-moving direction of the tracking target according to the feature information of the tracking target in the history presentation image and the feature information of the tracking target in the current presentation image.
8. The device according to claim 7, characterized in that the feature information comprises a feature point set;
correspondingly, the direction determining module specifically comprises:
a feature point obtaining unit, configured to obtain a feature point set A of the tracking target in the history presentation image, and to determine a feature point set B of the tracking target in the current presentation image from the original coordinate information;
a connecting unit, configured to connect the feature point set A with the corresponding feature points in the feature point set B, to obtain a line between each pair of corresponding feature points;
a direction determining unit, configured to perform a vector-weighted calculation on the lines between the corresponding feature points, to determine the pre-moving direction of the tracking target.
9. The device according to claim 8, characterized in that the device further comprises an adjusting module, and the adjusting module specifically comprises:
a matching unit, configured to match the feature points in the feature point set B of the tracking target with the image to be presented, to determine a feature point set C of the tracking target in the image to be presented;
a clustering unit, configured to cluster the feature points in the feature point set C, and to take the cluster centre as the centre of gravity of the tracking target in the image to be presented;
an adjusting unit, configured to obtain the centre point of the target selection frame, and to move the target selection frame to a position where the centre point of the target selection frame coincides with the centre of gravity of the tracking target.
10. The device according to any one of claims 6 to 9, characterized in that the device further comprises:
a pre-processing module, configured to pre-process the current presentation image so as to remove noise pixels in the current presentation image.
CN201710323431.5A 2017-05-05 2017-05-05 A kind of tracking and device based on electronic platform Pending CN107315992A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710323431.5A CN107315992A (en) 2017-05-05 2017-05-05 A kind of tracking and device based on electronic platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710323431.5A CN107315992A (en) 2017-05-05 2017-05-05 A kind of tracking and device based on electronic platform

Publications (1)

Publication Number Publication Date
CN107315992A true CN107315992A (en) 2017-11-03

Family

ID=60185571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710323431.5A Pending CN107315992A (en) 2017-05-05 2017-05-05 A kind of tracking and device based on electronic platform

Country Status (1)

Country Link
CN (1) CN107315992A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140340524A1 (en) * 2013-05-17 2014-11-20 Leap Motion, Inc. Systems and methods for providing normalized parameters of motions of objects in three-dimensional space
CN104796612A (en) * 2015-04-20 2015-07-22 河南弘金电子科技有限公司 High-definition radar linkage tracking control camera shooting system and linkage tracking method
CN105678809A (en) * 2016-01-12 2016-06-15 湖南优象科技有限公司 Handheld automatic follow shot device and target tracking method thereof
CN106303453A (en) * 2016-08-30 2017-01-04 上海大学 A kind of active tracking based on high-speed ball-forming machine
CN106375682A (en) * 2016-08-31 2017-02-01 深圳市大疆创新科技有限公司 Image processing method and apparatus, mobile device, drone remote controller and drone system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO Yanqi: "Research on Moving Target Recognition and Tracking Algorithms", China Master's Theses Full-text Database (Master), Information Science and Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108848304A (en) * 2018-05-30 2018-11-20 深圳岚锋创视网络科技有限公司 A kind of method for tracking target of panoramic video, device and panorama camera
WO2019228196A1 (en) 2018-05-30 2019-12-05 深圳岚锋创视网络科技有限公司 Method for tracking target in panoramic video, and panoramic camera
CN108848304B (en) * 2018-05-30 2020-08-11 影石创新科技股份有限公司 Target tracking method and device of panoramic video and panoramic camera
JP2021527865A (en) * 2018-05-30 2021-10-14 影石創新科技股份有限公司 Arashi Vision Inc. Panorama video target tracking method and panoramic camera
JP7048764B2 (en) 2018-05-30 2022-04-05 影石創新科技股份有限公司 Panorama video target tracking method and panoramic camera
JP7048764B6 (en) 2018-05-30 2022-05-16 影石創新科技股份有限公司 Panorama video target tracking method and panoramic camera
US11509824B2 (en) 2018-05-30 2022-11-22 Arashi Vision Inc. Method for tracking target in panoramic video, and panoramic camera
CN109074657A (en) * 2018-07-18 2018-12-21 深圳前海达闼云端智能科技有限公司 Target tracking method and device, electronic equipment and readable storage medium
CN110213611A (en) * 2019-06-25 2019-09-06 宫珉 A kind of ball competition field camera shooting implementation method based on artificial intelligence Visual identification technology
CN114430457A (en) * 2020-10-29 2022-05-03 北京小米移动软件有限公司 Shooting method, shooting device, electronic equipment and storage medium
CN114430457B (en) * 2020-10-29 2024-03-08 北京小米移动软件有限公司 Shooting method, shooting device, electronic equipment and storage medium
CN114500839A (en) * 2022-01-25 2022-05-13 青岛根尖智能科技有限公司 Vision holder control method and system based on attention tracking mechanism

Similar Documents

Publication Publication Date Title
US20210377460A1 (en) Automatic composition of composite images or videos from frames captured with moving camera
Hayman et al. Statistical background subtraction for a mobile observer
US20220417590A1 (en) Electronic device, contents searching system and searching method thereof
Felsberg et al. The thermal infrared visual object tracking VOT-TIR2015 challenge results
CN107315992A (en) A kind of tracking and device based on electronic platform
CN107481270A (en) Table tennis target following and trajectory predictions method, apparatus, storage medium and computer equipment
US20200380779A1 (en) Embedding complex 3d objects into an augmented reality scene using image segmentation
TWI359387B (en) Robust camera pan vector estimation using iterativ
CN108765394A (en) Target identification method based on quality evaluation
CN114339054B (en) Method and device for generating photographing mode and computer readable storage medium
CN111260687B (en) Aerial video target tracking method based on semantic perception network and related filtering
CN113873166A (en) Video shooting method and device, electronic equipment and readable storage medium
CA3061908C (en) Ball trajectory tracking
Sun et al. Learning adaptive patch generators for mask-robust image inpainting
CN110309721A (en) Method for processing video frequency, terminal and storage medium
CN117336526A (en) Video generation method and device, storage medium and electronic equipment
JP2022060900A (en) Control device and learning device and control method
WO2022061631A1 (en) Optical tracking for small objects in immersive video
Yang et al. Design and implementation of intelligent analysis technology in sports video target and trajectory tracking algorithm
Kaur Background subtraction in video surveillance
Li et al. Human behavior recognition based on attention mechanism
CN114222065A (en) Image processing method, image processing apparatus, electronic device, storage medium, and program product
CN113723168A (en) Artificial intelligence-based subject identification method, related device and storage medium
Wang et al. Research and implementation of the sports analysis system based on 3D image technology
CN116309918B (en) Scene synthesis method and system based on tablet personal computer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171103