CN106875429B - Fish school three-dimensional tracking method and system - Google Patents

Fish school three-dimensional tracking method and system

Info

Publication number
CN106875429B
CN106875429B (application CN201710119403.1A)
Authority
CN
China
Prior art keywords
target
point
principal skeleton
top view
Prior art date
Legal status
Active
Application number
CN201710119403.1A
Other languages
Chinese (zh)
Other versions
CN106875429A (en)
Inventor
钱志明
寸天睿
秦海菲
刘晓青
王春林
Current Assignee
Chuxiong Normal University
Original Assignee
Chuxiong Normal University
Priority date
Filing date
Publication date
Application filed by Chuxiong Normal University filed Critical Chuxiong Normal University
Priority to CN201710119403.1A
Publication of CN106875429A
Application granted
Publication of CN106875429B


Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06T: Image Data Processing or Generation, in General
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; image sequence
    • G06T 2207/10021: Stereoscopic video; stereoscopic image sequence
    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06T: Image Data Processing or Generation, in General
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; context of image processing
    • G06T 2207/30241: Trajectory

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a fish-school three-dimensional tracking method, including: extracting the principal skeleton of each target in the top-view direction and in v side-view directions, where the principal skeleton includes multiple skeleton points and v = 1 or 2; after screening three skeleton points from the principal skeleton as feature points, representing, according to the apparent complexity of the target, top-view targets with a two-feature-point model and side-view targets with a three-feature-point model; matching feature points of targets across viewing directions by the epipolar constraint, and/or matching feature points of targets across viewing directions by trajectory consistency; and distinguishing occluded targets from non-occluded targets after matching according to the length or endpoint count of the principal skeleton, applying motion association to the non-occluded targets and matching association to the occluded targets. The invention also provides a fish-school three-dimensional tracking system. The method and system greatly improve the efficiency and accuracy of target tracking.

Description

Fish school three-dimensional tracking method and system
Technical field
The present invention relates to the technical field of computer vision, and in particular to a fish-school three-dimensional tracking method and system.
Background technology
Fish behavior refers to the external responses that fish exhibit to environmental change, including the various individual and group behaviors of fish under natural or experimental conditions. Research on fish behavior is of great significance both to the study of the evolution of animal behavior and to the development of fish production. Among the various modes of studying fish behavior, video-based approaches have gradually become an important paradigm owing to their simplicity and wide applicability.
To analyze fish behavior with a video-based approach, the motion of the fish must first be recorded with a video capture device; each fish in the video images is then analyzed quantitatively to obtain its trajectory. Conventional methods generally obtain these trajectories by manually marking every frame, which is not only inefficient but also imprecise. In recent years, with the development of computer technology, computer-vision-based fish tracking has provided researchers with a new and effective approach, and it has increasingly become a focus of research.
Depending on the fish's environment, computer-vision-based fish tracking can be divided into two modes: two-dimensional tracking and three-dimensional tracking. Two-dimensional tracking confines the fish to a container of shallow water, so that their motion can be approximated as planar. Although this mode allows the behavior of fish to be analyzed, tracking is restricted to a two-dimensional space and cannot fully describe the fish's motion. Three-dimensional tracking reproduces the fish's motion in its natural environment, and the resulting trajectory data better reflect the fish's true behavior; it has therefore attracted the attention of more researchers.
Tracking fish trajectories in three-dimensional space is a multi-target three-dimensional tracking problem in computer vision, and it faces several difficulties. Fish are non-rigid targets with little texture, and they deform in many ways while swimming, which complicates detector design. Moreover, fish frequently occlude one another while swimming, which makes detection harder still. Binocular-vision-based three-dimensional tracking is affected by refraction at the water surface and struggles with the occlusions that arise during tracking. To overcome the shortcomings of binocular vision, a preferable scheme is to use multiple cameras shooting perpendicular to the water surface from different directions. However, this scheme increases the difficulty of stereo matching, because the target's appearance changes between viewing directions; the splitting and merging of targets during occlusion greatly complicates target association; and errors in target detection and stereo matching also directly affect the accuracy of target association.
Although current tracking methods can obtain three-dimensional trajectories of fish under some experimental conditions, they do not fully solve the problems above. Tracking fish in three-dimensional space therefore remains an extremely challenging problem.
Summary of the invention
In view of the above problems, the present invention proposes a fish-school three-dimensional tracking method and system that can improve the accuracy of target tracking.
To this end, one aspect of the present invention proposes a fish-school three-dimensional tracking method, including:
extracting the principal skeleton of each target in the top-view direction and in v side-view directions, where the principal skeleton includes multiple skeleton points and v = 1 or 2;
after screening three skeleton points from the principal skeleton as feature points, representing, according to the apparent complexity of the target, top-view targets with a two-feature-point model and side-view targets with a three-feature-point model;
matching feature points of targets across viewing directions by the epipolar constraint, and/or matching feature points of targets across viewing directions by trajectory consistency;
distinguishing occluded targets from non-occluded targets after matching according to the length or endpoint count of the principal skeleton, applying motion association to the non-occluded targets and matching association to the occluded targets.
Further, "extracting the principal skeleton of each target in the top-view direction and in v side-view directions" includes:
acquiring images of the target in the top-view direction and in the v side-view directions;
segmenting the moving region of the target from the acquired images in the different viewing directions based on background subtraction, where the moving region includes multiple pixels;
extracting the principal skeleton of the target from its images in the different viewing directions based on the fast marching algorithm.
Further, the feature points include a centre feature point, a head feature point and a tail feature point; "after screening three skeleton points from the principal skeleton as feature points, representing, according to the apparent complexity of the target, top-view targets with a two-feature-point model and side-view targets with a three-feature-point model" includes:
screening from the principal skeleton the skeleton point that best represents the shape of the target as the centre feature point;
in the v side-view directions, representing the target with a three-feature-point model composed of the two endpoints of the principal skeleton and the centre feature point;
in the top-view direction, obtaining the width of each skeleton point from its shortest distance to the edge of the moving region; computing the average width of all skeleton points between the centre feature point and each of the two endpoints of the principal skeleton, denoted the first mean width and the second mean width respectively; comparing the first mean width and the second mean width, dividing the principal skeleton into a rigid region and a non-rigid region according to the comparison, and labelling the endpoint of the principal skeleton in the rigid region as the head feature point and the endpoint in the non-rigid region as the tail feature point; and representing the top-view target with a two-feature-point model composed of the head feature point and the centre feature point.
Further, "matching feature points of targets across viewing directions by the epipolar constraint, and/or matching feature points of targets across viewing directions by trajectory consistency" includes:
selecting eight matched points at the same positions in the top-view image and the side-view image of the target, and computing the fundamental matrix by the eight-point method;
obtaining, from the fundamental matrix, the epipolar line in a side-view direction corresponding to the centre feature point of a top-view target, and using that epipolar line to match the centre feature points of the top-view target and the side-view targets;
when the distance from the centre feature point of a side-view target to the epipolar line is less than a set stereo-matching threshold, stereo matching under the epipolar constraint succeeds; otherwise, stereo matching under the epipolar constraint fails;
when stereo matching under the epipolar constraint fails, stereo-matching the top-view target and the side-view targets by trajectory consistency.
Further, "distinguishing occluded targets from non-occluded targets after matching according to the length or endpoint count of the principal skeleton, applying motion association to the non-occluded targets and matching association to the occluded targets" includes:
judging whether the skeleton-point count of a target exceeds the maximum skeleton-point count stored for it, and judging whether the number of endpoints of its principal skeleton exceeds 2; when either judgement holds, labelling the target as occluded, otherwise labelling it as non-occluded;
when a target is non-occluded, constructing, from the position change pc(i_{t-1}, i_t) and the direction change dc(i_{t-1}, i_t) of the current-frame target i_t relative to the previous-frame target i_{t-1}, the association cost function of target i_t:

c(i_{t-1}, i_t) = w * dc(i_{t-1}, i_t)/dc_max + (1 - w) * pc(i_{t-1}, i_t)/pc_max,

where dc_max and pc_max are respectively the maximum direction change and the maximum position change between adjacent frames, and w and 1 - w are respectively the weights of the direction change and the position change in the association cost function; and, according to the association cost function, performing suboptimal association of the non-occluded targets based on a greedy algorithm;
when a target is occluded, establishing a status flag for each target i_t in the top-view direction;
when the status flag of the current-frame target i_t is +1, performing matching association, based on the n frames of side-view images nearest the current frame, on the targets i_t whose status flags before and after the occlusion are -1, so as to ensure that the top-view targets before and after the occlusion lie on the same trajectory-curve segment in the side-view direction.
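The occlusion test in the claim above amounts to two comparisons. A minimal sketch (the stored maximum and the counts below are illustrative, not taken from the patent):

```python
def is_occluded(skeleton_pts, endpoints, max_stored_pts):
    """A target is labelled occluded when its current skeleton-point count
    exceeds the maximum count stored for it (e.g. two fish merged into one
    silhouette) or when its principal skeleton has more than two endpoints
    (the skeleton has branched); otherwise it is non-occluded."""
    return skeleton_pts > max_stored_pts or endpoints > 2
```

Both conditions are cheap to evaluate per frame, which matters when every detected target must be classified before association.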
Another aspect of the present invention provides a fish-school three-dimensional tracking system, including:
a principal-skeleton extraction module, configured to extract the principal skeleton of each target in the top-view direction and in v side-view directions, where the principal skeleton includes multiple skeleton points and v = 1 or 2;
a feature-point extraction module, configured to screen three skeleton points from the principal skeleton as feature points and then, according to the apparent complexity of the target, represent top-view targets with a two-feature-point model and side-view targets with a three-feature-point model;
a stereo-matching module, configured to match feature points of targets across viewing directions by the epipolar constraint, and/or match feature points of targets across viewing directions by trajectory consistency;
a target-association module, configured to distinguish occluded targets from non-occluded targets after matching according to the length or endpoint count of the principal skeleton, apply motion association to the non-occluded targets and apply matching association to the occluded targets.
Further, the principal-skeleton extraction module includes:
an image acquisition unit, configured to acquire images of the target in the top-view direction and in the v side-view directions;
a region segmentation unit, configured to segment the moving region of the target from the acquired images in the different viewing directions based on background subtraction, where the moving region includes multiple pixels;
a skeleton extraction unit, configured to extract the principal skeleton of the target from its images in the different viewing directions based on the fast marching algorithm.
Further, the feature points include a centre feature point, a head feature point and a tail feature point; the feature-point extraction module includes:
a screening unit, configured to screen from the principal skeleton the skeleton point that best represents the shape of the target as the centre feature point;
a three-feature-point model unit, configured to represent, in the v side-view directions, the target with a three-feature-point model composed of the two endpoints of the principal skeleton and the centre feature point;
a two-feature-point model unit, configured to, in the top-view direction, obtain the width of each skeleton point from its shortest distance to the edge of the moving region; compute the average width of all skeleton points between the centre feature point and each of the two endpoints of the principal skeleton, denoted the first mean width and the second mean width respectively; compare the first mean width and the second mean width, divide the principal skeleton into a rigid region and a non-rigid region according to the comparison, and label the endpoint of the principal skeleton in the rigid region as the head feature point and the endpoint in the non-rigid region as the tail feature point; and represent the top-view target with a two-feature-point model composed of the head feature point and the centre feature point.
Further, the stereo-matching module includes:
an epipolar-constraint matching unit, configured to select eight matched points at the same positions in the top-view image and the side-view image of the target, compute the fundamental matrix by the eight-point method, obtain from the fundamental matrix the epipolar line in a side-view direction corresponding to the centre feature point of a top-view target, and use that epipolar line to match the centre feature points of the top-view target and the side-view targets; when the distance from the centre feature point of a side-view target to the epipolar line is less than a set stereo-matching threshold, stereo matching under the epipolar constraint succeeds; otherwise, stereo matching under the epipolar constraint fails;
a trajectory-consistency matching unit, configured to, when stereo matching under the epipolar constraint fails, stereo-match the top-view target and the side-view targets by trajectory consistency.
Further, the target-association module includes:
an occlusion judgement unit, configured to judge whether the skeleton-point count of a target exceeds the maximum skeleton-point count stored for it, and judge whether the number of endpoints of its principal skeleton exceeds 2; when either judgement holds, label the target as occluded, otherwise label it as non-occluded;
a motion association unit, configured to, when a target is non-occluded, construct, from the position change pc(i_{t-1}, i_t) and the direction change dc(i_{t-1}, i_t) of the current-frame target i_t relative to the previous-frame target i_{t-1}, the association cost function of target i_t:

c(i_{t-1}, i_t) = w * dc(i_{t-1}, i_t)/dc_max + (1 - w) * pc(i_{t-1}, i_t)/pc_max,

where dc_max and pc_max are respectively the maximum direction change and the maximum position change between adjacent frames, and w and 1 - w are respectively the weights of the direction change and the position change in the association cost function; and, according to the association cost function, perform suboptimal association of the non-occluded targets based on a greedy algorithm;
a matching association unit, configured to, when a target is occluded, establish a status flag for each target i_t in the top-view direction;
when the status flag of the current-frame target i_t is +1, the matching association unit performs matching association, based on the n frames of side-view images nearest the current frame, on the targets i_t whose status flags before and after the occlusion are -1, so as to ensure that the top-view targets before and after the occlusion lie on the same trajectory-curve segment in the side-view direction.
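The motion-association step described above can be sketched as follows. The normalised weighted-sum form of the cost is an assumption consistent with the terms the patent names (dc, pc, their per-frame maxima, and the weights w and 1 - w); the greedy loop simply links the globally cheapest remaining pair, which is the suboptimal behaviour a greedy algorithm gives:

```python
import numpy as np

def association_cost(dc, pc, dc_max, pc_max, w=0.5):
    """Assumed normalised cost c = w*dc/dc_max + (1-w)*pc/pc_max combining
    the direction change dc and the position change pc of a target between
    adjacent frames (the exact closed form is not given in the text)."""
    return w * dc / dc_max + (1 - w) * pc / pc_max

def greedy_associate(cost):
    """Suboptimal greedy association on a cost matrix (rows: previous-frame
    targets, cols: current-frame detections): repeatedly take the cheapest
    remaining (row, col) pair and strike out its row and column."""
    cost = np.array(cost, dtype=float)
    pairs = []
    while np.isfinite(cost).any():
        r, c = np.unravel_index(int(np.argmin(cost)), cost.shape)
        pairs.append((int(r), int(c)))
        cost[r, :] = np.inf
        cost[:, c] = np.inf
    return pairs
```

Greedy assignment is cheaper than an optimal assignment solver and, with well-separated fish, usually yields the same links; the patent explicitly accepts this suboptimality.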
The fish-school three-dimensional tracking method and system provided by the invention represent targets with feature-point models during detection, effectively reducing the difficulty of tracking; they use the trajectory consistency of targets across the stereo images to eliminate the ambiguity of stereo matching under the epipolar constraint, improving tracking accuracy; and they propose a strategy of tracking mainly in the top view, supplemented by side-view tracking, which solves the hardest problem in tracking, occlusion, and markedly improves occlusion handling.
To make the above objects, features and advantages of the present invention more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly introduced below. It should be understood that the following drawings illustrate only certain embodiments of the invention and should therefore not be regarded as limiting its scope; from these drawings, those of ordinary skill in the art can obtain other related drawings without creative effort.
Fig. 1 shows a flow diagram of the fish-school three-dimensional tracking method proposed by an embodiment of the present invention;
Fig. 2 shows a schematic diagram of the apparatus for acquiring target images in the method shown in Fig. 1;
Fig. 3 shows a schematic distribution of the principal skeleton in the method shown in Fig. 1;
Fig. 4 shows a schematic structure of the fish-school three-dimensional tracking system proposed by an embodiment of the present invention.
Description of the main element symbols:
100 - fish-school three-dimensional tracking system; 10 - principal-skeleton extraction module; 20 - feature-point extraction module; 30 - stereo-matching module; 40 - target-association module; 1 - skeleton point; 2 - circle; 3 - centre feature point; 4 - rigid region; 5 - non-rigid region; 6 - head feature point; 7 - tail feature point.
Detailed description of the embodiments
To facilitate understanding of the present invention, the fish-school three-dimensional tracking method and system are described more clearly and completely below with reference to the relevant drawings, in which preferred embodiments are shown. The method and system may, however, be embodied in many different forms and are not limited to the embodiments described herein. The detailed description of the embodiments given in the drawings is therefore not intended to limit the scope of the claimed invention but merely represents selected embodiments of the invention. All other embodiments obtained from these embodiments by those skilled in the art without creative effort fall within the scope of protection of the invention.
It should be noted that the fish-school three-dimensional tracking method provided by this embodiment mainly comprises two parts: target detection and target tracking. Target detection further comprises moving-region segmentation, principal-skeleton extraction and feature-point extraction; target tracking comprises stereo matching and target association. Notably, the method tracks targets three-dimensionally with a strategy that relies mainly on the top-view direction, in which the target's appearance changes little, supplemented by tracking in the side-view directions. See the description of Embodiment 1 for details.
Embodiment 1
Fig. 1 shows a flow diagram of the fish-school three-dimensional tracking method proposed by the embodiment of the present invention.
As shown in Fig. 1, the fish-school three-dimensional tracking method proposed by the embodiment of the present invention includes:
Step S1: extract the principal skeleton of each target in the top-view direction and in v side-view directions, where the principal skeleton includes multiple skeleton points and v = 1 or 2.
Specifically, images of the fish from different viewing angles are acquired by image capture devices arranged around the fish's habitat. Referring also to Fig. 2, in this embodiment one image capture device is arranged above the habitat (e.g. a fish tank or fishpond) and one at each of two sides, so that images of the fish are obtained in the top-view direction and in v side-view directions, where v = 1 or 2. It will be understood that in this embodiment images are obtained in two side-view directions, though images may of course be obtained in only one side-view direction. In another embodiment, the top-view and side-view images of the fish are obtained sequentially by the same image capture device.
It should be noted that in this embodiment the tracked targets are fish; this specification refers to a fish generically as a "target", and tracking several fish, i.e. a fish school, simultaneously constitutes multi-target three-dimensional tracking.
Further, the moving region of the target is segmented from its images in the different viewing directions based on background subtraction. The moving region refers to the outline of the fish obtained from the shape features and bulk of the fish body, and includes multiple pixels. A pixel is the smallest unit composing an image: magnifying an image shows that its seemingly continuous tones are in fact composed of many small squares of colour, each of which is one pixel. More specifically, according to

R_t(x, y) = 1, if |f_t(x, y) - f_b(x, y)| > T_g; R_t(x, y) = 0, otherwise,

the moving region of the target is segmented from its images in the different viewing directions, where R_t(x, y) is the segmentation result, i.e. the moving region of the target, f_t(x, y) is the image of the target in a viewing direction, f_b(x, y) is the background image, and T_g is the segmentation threshold. Because the edge of the obtained moving region is rather rough, which is unfavourable to principal-skeleton extraction and analysis, the region must be pre-processed. Preferably, morphological operations are performed on the segmented moving region to fill the holes inside it and delete interference regions of small area, and the moving region is then smoothed with median filtering.
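The background-subtraction rule above can be sketched minimally as follows (the array shapes and threshold value are illustrative assumptions, not values from the patent):

```python
import numpy as np

def segment_moving_region(frame, background, t_g):
    """Background subtraction: a pixel belongs to the moving region R_t
    when |f_t(x, y) - f_b(x, y)| exceeds the segmentation threshold T_g."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return (diff > t_g).astype(np.uint8)
```

In practice the binary mask produced here would then be cleaned with morphological hole filling, removal of small components, and median filtering, as the text describes.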
Further, the principal skeleton of the target is extracted from its images in the different viewing directions based on the fast marching algorithm, where the principal skeleton includes multiple skeleton points. More specifically, according to

s = (i, j) | max(|u_x|, |u_y|) > T_u,
u_x = U(i+1, j) - U(i, j),
u_y = U(i, j+1) - U(i, j),

the principal skeleton of the target in each viewing direction is extracted, where s is a skeleton point, u_x and u_y are respectively the differences of the U values of point (i, j) in the x direction and the y direction within the moving region, U(i, j) is the arrival time of pixel (i, j), and T_u is the principal-skeleton extraction threshold. The extraction threshold T_u acts like a scale factor: when T_u is smaller, the obtained principal skeleton retains more detail, and when T_u is larger, it retains less. As T_u increases, the detail of the principal skeleton gradually diminishes and the skeleton's ability to describe the main structure of the target gradually strengthens.
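The thresholding rule above can be sketched as follows, assuming an arrival-time map U has already been computed by fast marching from the region boundary (the toy U below is invented for illustration; a library such as scikit-fmm would produce a real one):

```python
import numpy as np

def skeleton_points(U, t_u):
    """Keep grid points whose forward difference of the arrival time U in
    the x or y direction exceeds the extraction threshold T_u; large jumps
    in U occur where fronts from opposite boundaries meet, i.e. the skeleton."""
    mask = np.zeros(U.shape, dtype=bool)
    ux = np.abs(U[1:, :] - U[:-1, :])   # u_x = U(i+1, j) - U(i, j)
    uy = np.abs(U[:, 1:] - U[:, :-1])   # u_y = U(i, j+1) - U(i, j)
    mask[:-1, :] |= ux > t_u
    mask[:, :-1] |= uy > t_u
    return np.argwhere(mask)
```

Raising `t_u` prunes small jumps and hence skeleton detail, matching the scale-factor behaviour the text describes.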
Step S2: after screening three skeleton points from the principal skeleton as feature points, represent, according to the apparent complexity of the target, top-view targets with a two-feature-point model and side-view targets with a three-feature-point model.
Specifically, once the principal skeleton of the target is extracted, the skeleton point that best represents the shape of the target is screened from it as the centre feature point, further simplifying the structure of the target. In the principal skeleton obtained by the fast marching algorithm, the point of maximum U value generally lies at the centre of the skeleton and reflects the centre of the target's shape well; this skeleton point is therefore defined as the centre feature point of the target.
Further, referring also to Fig. 3, in the top-view direction a circle 2 is drawn centred on a skeleton point 1 with radius equal to the shortest distance from skeleton point 1 to the edge of the moving region; the diameter of circle 2 gives the width of that skeleton point. The centre feature point 3 divides the principal skeleton into two parts: from centre feature point 3 to one endpoint of the skeleton is the first part, and from centre feature point 3 to the other endpoint is the second part. The average width of all skeleton points of each part is computed; in other words, the average width of all skeleton points between centre feature point 3 and one endpoint, and the average width of all skeleton points between centre feature point 3 and the other endpoint, are taken as the mean widths of the two parts of the skeleton. Comparing the mean widths of the two parts, the principal skeleton is divided into a rigid region 4 and a non-rigid region 5; it is readily understood that the mean width of the skeleton points 1 in the rigid region 4 exceeds that of the skeleton points 1 in the non-rigid region 5. The endpoint of the principal skeleton in the rigid region 4 is labelled the head feature point 6 and the endpoint in the non-rigid region 5 the tail feature point 7; the top-view target is then represented by the two-feature-point model composed of head feature point 6 and centre feature point 3. An endpoint is a skeleton point at an end of the principal skeleton.
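A sketch of the head/tail decision above: given the ordered widths of the skeleton points from one endpoint to the other and the index of the centre feature point, the endpoint on the wider (rigid) side is labelled the head. The example widths in the test are invented:

```python
import numpy as np

def label_head_tail(widths, center_idx):
    """Split the principal skeleton at the centre feature point, compare
    the mean widths of the two halves, and return (head_idx, tail_idx):
    the endpoint index of the wider, rigid half is the head feature point."""
    first_mean = np.mean(widths[:center_idx + 1])   # endpoint 0 .. centre
    second_mean = np.mean(widths[center_idx:])      # centre .. endpoint 1
    if first_mean > second_mean:
        return 0, len(widths) - 1
    return len(widths) - 1, 0
```

The centre point is deliberately counted in both halves, mirroring the text's "between the centre feature point and each of the two endpoints".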
It is readily understood that fish-body width is difficult to estimate in the side-view directions, so the head feature point and the tail feature point cannot be distinguished among the two endpoints of the principal skeleton. Therefore, in the v side-view directions, unlike the two-feature-point model, the target is represented by the three-feature-point model composed of the two endpoints of the principal skeleton and the centre feature point.
Step S3: match feature points of targets across viewing directions by the epipolar constraint, and/or match feature points of targets across viewing directions by trajectory consistency.
Specifically, eight matched points are selected at the same positions in the top-view image and the side-view image of the target, and the fundamental matrix is computed by the eight-point method. The fundamental matrix encodes the geometric relation between two two-dimensional images of the same three-dimensional scene taken from two different viewpoints; in this embodiment, it is the relation between the top-view image and a side-view image.
Further, the epipolar line in side view direction v corresponding to the central feature point of a top view target is obtained from the fundamental matrix, and the central feature points of the top view target and the side view targets are matched using this epipolar line. In other words, the fundamental matrix of the stereo pair formed by the image of the target in the first side view direction and the top view image is computed; from it, the epipolar line from the top view direction into the first side view direction is obtained, and stereo matching is performed between the targets of the top view direction and the first side view direction. Likewise, the fundamental matrix of the stereo pair formed by the image of the target in the second side view direction and the top view image is computed; from it, the epipolar line from the top view direction into the second side view direction is obtained, and stereo matching is performed between the targets of the top view direction and the second side view direction. The purpose of stereo matching is to determine which side view target corresponds to each top view target, so that the same target is matched across a set of stereo images.
More specifically, let c(i_t) and c(j_t) denote the central feature points of target i_t in the top view direction and target j_t in side view direction v, respectively. The epipolar line l(i_t) corresponding to c(i_t) in side view direction v is obtained from the fundamental matrix, and stereo matching between the top view target i_t and the side view target j_t is decided by the epipolar test e_m(i_t, j_t), where e_m(i_t, j_t) is the stereo matching result under the epipolar constraint, distance(·) is the distance from the central feature point to the epipolar line, and T_m is the stereo matching threshold. When the distance from the central feature point of a side view target to the epipolar line is below the set stereo matching threshold and the top view target has only one such matching object in side view direction v under the epipolar constraint, stereo matching succeeds; conversely, when the top view target has several matching objects in side view direction v under the epipolar constraint, stereo matching fails.
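The epipolar test e_m can be sketched as a point-to-line distance threshold; the toy matrix F, the point coordinates, and the threshold Tm are illustrative assumptions, not calibrated values:

```python
import numpy as np

def epipolar_match(F, p_top, p_side, Tm):
    """Project the top-view central feature point through the fundamental
    matrix F to get its epipolar line a*x + b*y + c = 0 in the side view,
    then accept the side-view candidate when its point-to-line distance
    is below the stereo matching threshold Tm."""
    x = np.array([p_top[0], p_top[1], 1.0])
    a, b, c = F @ x
    d = abs(a * p_side[0] + b * p_side[1] + c) / np.hypot(a, b)
    return d < Tm, float(d)

# Toy matrix whose epipolar line for any top-view point is y = 2 in the
# side view (purely illustrative, not a physically calibrated F).
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, -2.0]])
ok_near, d_near = epipolar_match(F, (7.0, 7.0), (5.0, 2.4), Tm=0.5)
ok_far, d_far = epipolar_match(F, (7.0, 7.0), (5.0, 4.0), Tm=0.5)
print(ok_near, ok_far)  # True False
```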
When stereo matching under the epipolar constraint fails, the trajectory consistency of the targets under the epipolar constraint is used to resolve the ambiguity; that is, the top view target and the side view targets are stereo-matched by trajectory consistency. Specifically, let T_i^top and T_j^side denote the motion trajectories up to frame t of target i_t in the top view direction and target j_t in side view direction v. Suppose that, under the epipolar constraint, the top view target i_t has k matching targets in side view direction v; the n frames of trajectory nearest to frame t are then selected from the trajectories of these k candidates and matched. Stereo matching between the top view target and the side view targets is decided by the trajectory test s_m(i_t, j_t), where s_m(i_t, j_t) is the stereo matching result under the trajectory consistency condition and ml(·) is the matching length, under the epipolar constraint, of the two motion trajectories in the top view and side view directions. When exactly one of the k candidates shows good trajectory consistency with target i_t, stereo matching under the trajectory consistency condition succeeds. Since the motion trajectory of a target is unique, only a few frames are usually needed for a successful match. Once stereo matching succeeds, the three-feature-point model can be reduced directly to the two-feature-point model.
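The trajectory-consistency fallback can be sketched as choosing, among the ambiguous candidates, the side-view track with the largest matching length ml over the last n frames; the toy matrix F, the trajectories, and the threshold are illustrative assumptions:

```python
import numpy as np

def match_length(traj_top, traj_side, F, Tm, n):
    """ml(.): over the n most recent frames, count the frames in which the
    side-view point lies within Tm of the epipolar line of the top-view
    point of the same frame."""
    count = 0
    for (xt, yt), (xs, ys) in zip(traj_top[-n:], traj_side[-n:]):
        a, b, c = F @ np.array([xt, yt, 1.0])
        if abs(a * xs + b * ys + c) / np.hypot(a, b) < Tm:
            count += 1
    return count

def resolve_by_trajectory(traj_top, candidates, F, Tm, n):
    """Among the k side-view candidates left ambiguous by the epipolar
    test, keep the one whose recent trajectory best matches the
    top-view track."""
    lengths = [match_length(traj_top, t, F, Tm, n) for t in candidates]
    return int(np.argmax(lengths))

# Toy F whose epipolar line for a top-view point (x, y) is the side-view
# line y = x, so a consistent side-view track mirrors the top-view x values.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, 0.0, 0.0]])
traj_top = [(1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
good = [(9.0, 1.0), (9.0, 2.0), (9.0, 3.0)]   # y follows the top-view x
bad = [(9.0, 5.0), (9.0, 5.0), (9.0, 5.0)]
winner = resolve_by_trajectory(traj_top, [bad, good], F, Tm=0.5, n=3)
print(winner)  # 1: the consistent candidate
```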
Step S4: according to the length of the main framing or the number of its endpoints, the matched targets are divided into occluded targets and non-occluded targets; motion association is performed on the non-occluded targets, and matching association on the occluded targets.
It should be noted that when occlusion occurs between moving targets, detection errors are inevitable, which puts target association at risk. To keep target association accurate, targets are first divided into two types, occluded and non-occluded. Specifically, while a target moves without occlusion, its main framing has only the two endpoints at the head and the tail. Once occlusion occurs, the length of the main framing or the number of its endpoints increases, which can be used to decide whether the target is occluded. More specifically, whether a target is occluded after stereo matching is judged from its skeleton statistics, where sp is the number of skeletal points, ep is the number of main framing endpoints, and MAL is the stored maximum number of skeletal points of the main framing. When the number of skeletal points of a target exceeds the stored maximum MAL, or when the number of its main framing endpoints exceeds 2, the target is calibrated as an occluded target; otherwise it is calibrated as a non-occluded target.
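The occlusion test reduces to two comparisons; the numbers below are illustrative assumptions:

```python
def is_occluded(sp, ep, mal):
    """Occlusion test described above: flag a target as occluded when its
    skeletal-point count sp exceeds the stored maximum MAL, or when its
    main framing has more than the normal two (head and tail) endpoints."""
    return sp > mal or ep > 2

flags = [is_occluded(sp=40, ep=2, mal=35),   # merged skeleton: too many points
         is_occluded(sp=30, ep=4, mal=35),   # extra branch endpoints
         is_occluded(sp=30, ep=2, mal=35)]   # normal, unoccluded target
print(flags)  # [True, True, False]
```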
When a target is a non-occluded target, motion association is performed on it. Specifically, from the position change pc(i_{t-1}, i_t) and the direction change dc(i_{t-1}, i_t) of the current-frame target i_t relative to the previous-frame target i_{t-1}, the association cost function of target i_t is built:

cv(i_{t-1}, i_t) = w · (dc(i_{t-1}, i_t) / dc_max) + (1 − w) · (pc(i_{t-1}, i_t) / pc_max)

where dc_max and pc_max are the maximum direction change and the maximum position change between adjacent frames, and w and 1 − w are the weights of the direction change and the position change in the association cost function. According to this association cost function, suboptimal association of the non-occluded targets is performed with a greedy algorithm. A greedy algorithm always makes the choice that looks best at the moment; instead of pursuing a global optimum, it yields a locally optimal solution in some sense.
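A sketch of the association cost and of a greedy assignment honoring the pc_max early-abandon rule; the weight w, the cost and displacement matrices, and the use of −1 for "unmatched" are illustrative assumptions:

```python
import numpy as np

def association_cost(dc, pc, dc_max, pc_max, w=0.5):
    """cv(i_{t-1}, i_t) = w*(dc/dc_max) + (1-w)*(pc/pc_max); w = 0.5 is an
    illustrative equal weighting of direction and position change."""
    return w * (dc / dc_max) + (1.0 - w) * (pc / pc_max)

def greedy_associate(cost, pc, pc_max):
    """For each current-frame target, greedily pick the lowest-cost
    previous-frame target not yet taken, skipping pairs whose displacement
    exceeds pc_max.  `cost` and `pc` are (prev x curr) matrices; returns a
    mapping curr_index -> prev_index, with -1 meaning unmatched."""
    assignment, used = {}, set()
    for j in range(cost.shape[1]):
        assignment[j] = -1
        for i in np.argsort(cost[:, j]):          # cheapest candidates first
            if i not in used and pc[i, j] <= pc_max:
                assignment[j] = int(i)
                used.add(i)
                break
    return assignment

cost = np.array([[0.1, 0.9],
                 [0.8, 0.2]])
pc = np.array([[1.0, 9.0],
               [8.0, 1.5]])
assignment = greedy_associate(cost, pc, pc_max=5.0)
print(assignment)  # {0: 0, 1: 1}
```

As the text notes, this is a locally optimal (suboptimal) association, not a globally optimal one such as the Hungarian assignment would give.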
It is worth noting that in the top view direction the two-feature-point model carries direction information, so association can be performed directly; in side view direction v, the three-feature-point model must first be simplified to a two-feature-point model according to the relative positions of the targets before association. Preferably, in this embodiment, to improve association efficiency, association is abandoned whenever the distance between targets exceeds pc_max.
When a target is an occluded target, matching association is performed on it. Specifically, a status flag is established for each target i_t in the top view direction. When the status flag of the current-frame target i_t is +1, matching association is performed, using the n frames of side view direction v nearest to the current frame, on all targets i_t whose status flag is −1 before and after the occlusion, so as to ensure that the top view target lies on the same strip-shaped trajectory curve segment of side view direction v both before and after the occlusion.
It is easy to see that the fish school three-dimensional tracking method of this embodiment tracks targets with a strategy that relies mainly on tracking in the top view direction, where the appearance of the target changes least, supplemented by tracking in the side view directions. This top-view-first, side-view-assisted strategy effectively handles the frequent occlusions in multi-target tracking.
Embodiment 2
Fig. 4 shows the structural diagram of the fish school three-dimensional tracking system proposed by the embodiment of the present invention.
As shown in Fig. 4, the fish school three-dimensional tracking system 100 proposed by the embodiment of the present invention includes a main framing extraction module 10, a feature point extraction module 20, a stereo matching module 30 and a target association module 40.
The main framing extraction module 10 performs main framing extraction on the targets in the top view direction and in side view direction v, where the main framing includes a plurality of skeletal points and v = 1 or 2. In this embodiment, the main framing extraction module 10 includes:
an image acquisition unit for acquiring images of the target in the top view direction and in side view direction v;
a region segmentation unit for segmenting, by background subtraction, the moving region of the target from its images in the different view directions, the moving region including a plurality of image points;
a skeleton extraction unit for extracting, by the fast marching algorithm, the main framing of the target from its images in the different view directions.
After screening three skeletal points from the main framing as feature points, the feature point extraction module 20, according to the apparent complexity of the target, represents the top view target with the two-feature-point model and the side view targets with the three-feature-point model. In this embodiment, the feature point extraction module 20 includes:
a screening unit for screening from the main framing, as the central feature point, the skeletal point that best represents the shape feature of the target;
a three-feature-point model unit for representing, in side view direction v, the target with a three-feature-point model composed of the two endpoints of the main framing and the central feature point;
a two-feature-point model unit for, in the top view direction: obtaining the width of each skeletal point from its shortest distance to the edge of the moving region; computing the average widths of all skeletal points between the central feature point and each of the two endpoints of the main framing, denoted the first mean width and the second mean width respectively; comparing the first mean width and the second mean width and, according to the comparison, dividing the main framing into a rigid region and a non-rigid region; calibrating the endpoints of the main framing in the rigid region and the non-rigid region as the head feature point and the tail feature point, respectively; and representing the top view target with a two-feature-point model composed of the head feature point and the central feature point.
The stereo matching module 30 performs feature point matching on the targets in the different view directions by the epipolar constraint, and/or by trajectory consistency. In this embodiment, the stereo matching module 30 includes:
an epipolar constraint matching unit for selecting eight matching points at the same positions of the target in the top view image and in the side view image, computing the fundamental matrix with the eight-point algorithm, obtaining from the fundamental matrix the epipolar line in side view direction v corresponding to the central feature point of the top view target, and matching the central feature points of the top view and side view targets using this epipolar line; when the distance from the central feature point of a side view target to the epipolar line is below the set stereo matching threshold, stereo matching under the epipolar constraint succeeds, otherwise stereo matching under the epipolar constraint fails;
a trajectory consistency matching unit for stereo-matching the top view target and the side view targets by trajectory consistency when stereo matching under the epipolar constraint fails.
The target association module 40 divides the matched targets, according to the length of the main framing or the number of its endpoints, into occluded targets and non-occluded targets, performs motion association on the non-occluded targets, and performs matching association on the occluded targets. In this embodiment, the target association module 40 includes:
an occlusion judging unit for judging whether the number of skeletal points of a target exceeds the stored maximum number of skeletal points of the main framing, and whether the number of main framing endpoints of the target exceeds 2; when either judgment holds, the target is calibrated as an occluded target, otherwise as a non-occluded target;
a motion association unit for, when a target is a non-occluded target, building, from the position change pc(i_{t-1}, i_t) and the direction change dc(i_{t-1}, i_t) of the current-frame target i_t relative to the previous-frame target i_{t-1}, the association cost function of target i_t:

cv(i_{t-1}, i_t) = w · (dc(i_{t-1}, i_t) / dc_max) + (1 − w) · (pc(i_{t-1}, i_t) / pc_max)

where dc_max and pc_max are the maximum direction change and the maximum position change between adjacent frames, and w and 1 − w are the weights of the direction change and the position change in the association cost function; and for performing, according to this association cost function, suboptimal association of the non-occluded targets with a greedy algorithm;
a matching association unit for, when a target is an occluded target, establishing a status flag for each target i_t in the top view direction; when the status flag of the current-frame target i_t is +1, matching association is performed, using the n frames of side view direction v nearest to the current frame, on all targets i_t whose status flag is −1 before and after the occlusion, so as to ensure that the top view target lies on the same strip-shaped trajectory curve segment of side view direction v both before and after the occlusion.
With the fish school three-dimensional tracking method and system provided by the present invention, targets are represented by feature point models during detection, which effectively reduces the difficulty of tracking; the trajectory consistency of a target across the stereo images removes the ambiguity of stereo matching under the epipolar constraint, which improves tracking accuracy; and the proposed top-view-first, side-view-assisted strategy solves the hardest problem in tracking, occlusion, markedly improving occlusion handling.
The system provided by the embodiment of the present invention has the same realization principle and technical effects as the foregoing method embodiment; for brevity, where the system embodiment is silent, refer to the corresponding content of the method embodiment.
In all examples illustrated and described herein, any specific value should be interpreted as merely exemplary rather than limiting; other examples of the exemplary embodiments may therefore use different values. It should be noted that similar reference signs and letters denote similar items in the following drawings; once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be realized in other ways. The apparatus embodiment described above is merely schematic; for example, the division into units is only a division by logical function, and other divisions are possible in actual realization; for example, several units or components may be combined or integrated into another system, or some features may be ignored or not executed. Moreover, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses or units, and may be electrical, mechanical or of other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over several network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any change or replacement that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. The protection scope of the present invention shall therefore be defined by the claims.

Claims (10)

  1. A fish school three-dimensional tracking method, characterized by comprising:
    performing main framing extraction on the targets in the top view direction and in side view direction v, wherein the main framing includes a plurality of skeletal points, v = 1 or 2;
    after screening three skeletal points from the main framing as feature points, representing, according to the apparent complexity of the target, the top view target with a two-feature-point model and the side view target with a three-feature-point model;
    performing feature point matching on the targets in the different view directions by the epipolar constraint, and/or by trajectory consistency;
    dividing the matched targets, according to the length of the main framing or the number of its endpoints, into occluded targets and non-occluded targets, performing motion association on the non-occluded targets, and performing matching association on the occluded targets.
  2. The fish school three-dimensional tracking method according to claim 1, characterized in that said "performing main framing extraction on the targets in the top view direction and in side view direction v" comprises:
    acquiring images of the target in the top view direction and in side view direction v;
    segmenting, by background subtraction, the moving region of the target from the acquired images in the different view directions, wherein the moving region includes a plurality of image points;
    extracting, by the fast marching algorithm, the main framing of the target from its images in the different view directions.
  3. The fish school three-dimensional tracking method according to claim 2, characterized in that the feature points include a central feature point, a head feature point and a tail feature point;
    said "after screening three skeletal points from the main framing as feature points, representing, according to the apparent complexity of the target, the top view target with a two-feature-point model and the side view target with a three-feature-point model" comprises:
    screening from the main framing, as the central feature point, the skeletal point that best represents the shape feature of the target;
    in side view direction v, representing the target with a three-feature-point model composed of the two endpoints of the main framing and the central feature point;
    in the top view direction, obtaining the width of each skeletal point from its shortest distance to the edge of the moving region; computing the average widths of all skeletal points between the central feature point and each of the two endpoints of the main framing, denoted the first mean width and the second mean width respectively; comparing the first mean width and the second mean width and, according to the comparison, dividing the main framing into a rigid region and a non-rigid region; calibrating the endpoints of the main framing in the rigid region and the non-rigid region as the head feature point and the tail feature point, respectively; and representing the top view target with a two-feature-point model composed of the head feature point and the central feature point.
  4. The fish school three-dimensional tracking method according to claim 1, characterized in that said "performing feature point matching on the targets in the different view directions by the epipolar constraint, and/or by trajectory consistency" comprises:
    selecting eight matching points at the same positions of the target in the top view image and in the side view image, and computing the fundamental matrix with the eight-point algorithm;
    obtaining, from the fundamental matrix, the epipolar line in side view direction v corresponding to the central feature point of the top view target, and matching the central feature points of the top view target and the side view target using the epipolar line;
    when the distance from the central feature point of the side view target to the epipolar line is below the set stereo matching threshold, stereo matching under the epipolar constraint succeeds; otherwise, stereo matching under the epipolar constraint fails;
    when stereo matching under the epipolar constraint fails, stereo-matching the top view target and the side view target by trajectory consistency.
  5. The fish school three-dimensional tracking method according to claim 1, characterized in that said "dividing the matched targets, according to the length of the main framing or the number of its endpoints, into occluded targets and non-occluded targets, performing motion association on the non-occluded targets, and performing matching association on the occluded targets" comprises:
    judging whether the number of skeletal points of a target exceeds the stored maximum number of skeletal points of the main framing, and whether the number of main framing endpoints of the target exceeds 2; when either judgment holds, calibrating the target as an occluded target, otherwise calibrating it as a non-occluded target;
    when a target is a non-occluded target, building, from the position change pc(i_{t-1}, i_t) and the direction change dc(i_{t-1}, i_t) of the current-frame target i_t relative to the previous-frame target i_{t-1}, the association cost function of target i_t:
    cv(i_{t-1}, i_t) = w · (dc(i_{t-1}, i_t) / dc_max) + (1 − w) · (pc(i_{t-1}, i_t) / pc_max)
    wherein dc_max and pc_max are the maximum direction change and the maximum position change between adjacent frames, and w and 1 − w are the weights of the direction change and the position change in the association cost function; and performing, according to the association cost function, suboptimal association of the non-occluded targets with a greedy algorithm;
    when a target is an occluded target, establishing a status flag for each target i_t in the top view direction;
    when the status flag of the current-frame target i_t is +1, performing matching association, using the n frames of side view direction v nearest to the current frame, on all targets i_t whose status flag is −1 before and after the occlusion, so as to ensure that the top view target lies on the same strip-shaped trajectory curve segment of side view direction v both before and after the occlusion.
  6. A fish school three-dimensional tracking system, characterized by comprising:
    a main framing extraction module for performing main framing extraction on the targets in the top view direction and in side view direction v, wherein the main framing includes a plurality of skeletal points, v = 1 or 2;
    a feature point extraction module for, after screening three skeletal points from the main framing as feature points, representing, according to the apparent complexity of the target, the top view target with a two-feature-point model and the side view target with a three-feature-point model;
    a stereo matching module for performing feature point matching on the targets in the different view directions by the epipolar constraint, and/or by trajectory consistency;
    a target association module for dividing the matched targets, according to the length of the main framing or the number of its endpoints, into occluded targets and non-occluded targets, performing motion association on the non-occluded targets, and performing matching association on the occluded targets.
  7. The fish school three-dimensional tracking system according to claim 6, characterized in that the main framing extraction module comprises:
    an image acquisition unit for acquiring images of the target in the top view direction and in side view direction v;
    a region segmentation unit for segmenting, by background subtraction, the moving region of the target from the acquired images in the different view directions, wherein the moving region includes a plurality of image points;
    a skeleton extraction unit for extracting, by the fast marching algorithm, the main framing of the target from its images in the different view directions.
  8. The fish school three-dimensional tracking system according to claim 7, characterized in that the feature points include a central feature point, a head feature point and a tail feature point; the feature point extraction module comprises:
    a screening unit for screening from the main framing, as the central feature point, the skeletal point that best represents the shape feature of the target;
    a three-feature-point model unit for representing, in side view direction v, the target with a three-feature-point model composed of the two endpoints of the main framing and the central feature point;
    a two-feature-point model unit for, in the top view direction, obtaining the width of each skeletal point from its shortest distance to the edge of the moving region; computing the average widths of all skeletal points between the central feature point and each of the two endpoints of the main framing, denoted the first mean width and the second mean width respectively; comparing the first mean width and the second mean width and, according to the comparison, dividing the main framing into a rigid region and a non-rigid region; calibrating the endpoints of the main framing in the rigid region and the non-rigid region as the head feature point and the tail feature point, respectively; and representing the top view target with a two-feature-point model composed of the head feature point and the central feature point.
  9. The fish school three-dimensional tracking system according to claim 6, characterized in that the stereo matching module comprises:
    an epipolar constraint matching unit for selecting eight matching points at the same positions of the target in the top view image and in the side view image, computing the fundamental matrix with the eight-point algorithm, obtaining from the fundamental matrix the epipolar line in side view direction v corresponding to the central feature point of the top view target, and matching the central feature points of the top view target and the side view target using the epipolar line; when the distance from the central feature point of the side view target to the epipolar line is below the set stereo matching threshold, stereo matching under the epipolar constraint succeeds, otherwise stereo matching under the epipolar constraint fails;
    a trajectory consistency matching unit for stereo-matching the top view target and the side view target by trajectory consistency when stereo matching under the epipolar constraint fails.
  10. The fish school three-dimensional tracking system according to claim 6, characterized in that the target association module comprises:
    an occlusion judging unit for judging whether the number of skeletal points of a target exceeds the stored maximum number of skeletal points of the main framing, and whether the number of main framing endpoints of the target exceeds 2; when either judgment holds, calibrating the target as an occluded target, otherwise calibrating it as a non-occluded target;
    a motion association unit for, when a target is a non-occluded target, building, from the position change pc(i_{t-1}, i_t) and the direction change dc(i_{t-1}, i_t) of the current-frame target i_t relative to the previous-frame target i_{t-1}, the association cost function of target i_t:
    cv(i_{t-1}, i_t) = w · (dc(i_{t-1}, i_t) / dc_max) + (1 − w) · (pc(i_{t-1}, i_t) / pc_max)
    Wherein dc_max and pc_max are respectively the maximum direction change and the maximum position change between adjacent frames, and w and 1-w are respectively the weights of the direction change and the position change in the association cost function; and for performing, according to the association cost function, suboptimal association of the unoccluded targets based on a greedy algorithm;
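The weighted cost and the greedy suboptimal association can be sketched as follows; the default equal weighting `w=0.5` is an assumption, and the greedy pass simply takes the globally cheapest remaining pair, which is fast but, as the claim notes, only suboptimal compared with an optimal assignment:

```python
def association_cost(dc, pc, dc_max, pc_max, w=0.5):
    # cv(i_{t-1}, i_t) = w * dc/dc_max + (1 - w) * pc/pc_max
    return w * (dc / dc_max) + (1 - w) * (pc / pc_max)

def greedy_associate(cost):
    """cost[j][k]: association cost between previous-frame target j and
    current-frame target k. Repeatedly pick the globally cheapest remaining
    (j, k) pair until no unmatched pair is left."""
    pairs = sorted(
        (cost[j][k], j, k)
        for j in range(len(cost))
        for k in range(len(cost[0]))
    )
    used_prev, used_cur, matches = set(), set(), []
    for c, j, k in pairs:
        if j not in used_prev and k not in used_cur:
            used_prev.add(j)
            used_cur.add(k)
            matches.append((j, k))
    return matches
```

For example, with two previous and two current targets, `greedy_associate([[0.1, 0.9], [0.8, 0.2]])` pairs each target with its low-cost counterpart.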
    Matching association unit, for establishing, when a target is an occluded target, a state flag for each target i_t in the top view direction:
    When the state flag of the current-frame target i_t is +1, matching association is performed for target i_t using the n frames of side view images nearest to the current frame, before and after the occlusion, in which all state flags are -1, so as to ensure that the target in the top view direction lies on the same trajectory curve segment in the side view direction both before and after the occlusion.
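One plausible reading of this trajectory-consistency check is to fit a curve to the side-view points observed just before the occlusion and require the post-occlusion points to lie near its extrapolation. A hedged sketch (the polynomial degree, tolerance, and all names are assumptions, not the patent's exact procedure):

```python
import numpy as np

def same_trajectory_segment(before_pts, after_pts, tol=5.0, deg=2):
    """Fit a polynomial y(t) to the (t, y) side-view points observed before
    the occlusion, extrapolate it across the occlusion gap, and accept the
    match when every post-occlusion point stays within `tol` pixels of the
    extrapolated curve."""
    t_b, y_b = zip(*before_pts)
    coeffs = np.polyfit(t_b, y_b, deg)
    t_a, y_a = zip(*after_pts)
    pred = np.polyval(coeffs, t_a)
    return bool(np.max(np.abs(pred - np.asarray(y_a))) < tol)
```

A target whose re-detected points fall on the extrapolated segment keeps its identity; otherwise the candidate match is rejected.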
CN201710119403.1A 2017-03-02 2017-03-02 Shoal of fish three-dimensional tracking method and system Active CN106875429B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710119403.1A CN106875429B (en) 2017-03-02 2017-03-02 Shoal of fish three-dimensional tracking method and system

Publications (2)

Publication Number Publication Date
CN106875429A CN106875429A (en) 2017-06-20
CN106875429B true CN106875429B (en) 2018-02-02

Family

ID=59168420

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710119403.1A Active CN106875429B (en) 2017-03-02 2017-03-02 Shoal of fish three-dimensional tracking method and system

Country Status (1)

Country Link
CN (1) CN106875429B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107818574B (en) * 2017-09-21 2021-08-27 楚雄师范学院 Fish shoal three-dimensional tracking method based on skeleton analysis
JP7287430B2 (en) * 2021-09-27 2023-06-06 日本電気株式会社 Fish detection device, fish detection method and program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102236893A (en) * 2010-04-30 2011-11-09 中国人民解放军装备指挥技术学院 Space-position-forecast-based corresponding image point matching method for lunar surface image
CN103955688A (en) * 2014-05-20 2014-07-30 楚雄师范学院 Zebra fish school detecting and tracking method based on computer vision
CN104766346A (en) * 2015-04-15 2015-07-08 楚雄师范学院 Zebra fish tracking method based on video images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"An effective and robust method for tracking multiple fish in video image based on fish head detection"; Zhi-Ming Qian et al.; BMC Bioinformatics (2016); 2016-12-31; pp. 1-11 *
"An image-based dynamic tracking algorithm for robotic fish" (一种基于图像的机器鱼动态跟踪算法); Fang Fei, Xie Guangming; Robot Technique and Application (机器人技术与应用); 2009-07-30; No. 4; pp. 22-25 *

Also Published As

Publication number Publication date
CN106875429A (en) 2017-06-20

Similar Documents

Publication Publication Date Title
CN104766058B (en) A kind of method and apparatus for obtaining lane line
CN105427314B (en) SAR image object detection method based on Bayes&#39;s conspicuousness
CN104599275B (en) The RGB-D scene understanding methods of imparametrization based on probability graph model
CN112733812B (en) Three-dimensional lane line detection method, device and storage medium
CN112883820B (en) Road target 3D detection method and system based on laser radar point cloud
CN104715251B (en) A kind of well-marked target detection method based on histogram linear fit
CN110110797B (en) Water surface target training set automatic acquisition method based on multi-sensor fusion
CN111340855A (en) Road moving target detection method based on track prediction
CN110111283A (en) The reminding method and system of infrared suspected target under a kind of complex background
CN107025657A (en) A kind of vehicle action trail detection method based on video image
CN110288659A (en) A kind of Depth Imaging and information acquisition method based on binocular vision
CN109871829A (en) A kind of detection model training method and device based on deep learning
CN106875429B (en) Shoal of fish three-dimensional tracking and system
CN104050685A (en) Moving target detection method based on particle filtering visual attention model
CN111913177A (en) Method and device for detecting target object and storage medium
CN111126459A (en) Method and device for identifying fine granularity of vehicle
CN107704867A (en) Based on the image characteristic point error hiding elimination method for weighing the factor in a kind of vision positioning
CN113313047A (en) Lane line detection method and system based on lane structure prior
CN109063549A (en) High-resolution based on deep neural network is taken photo by plane video moving object detection method
CN116071283A (en) Three-dimensional point cloud image fusion method based on computer vision
CN111259733A (en) Point cloud image-based ship identification method and device
CN115685102A (en) Target tracking-based radar vision automatic calibration method
CN116245949A (en) High-precision visual SLAM method based on improved quadtree feature point extraction
CN101572770A (en) Method for testing motion available for real-time monitoring and device thereof
CN113096184A (en) Diatom positioning and identifying method under complex background

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant