CN106875429A - Fish school three-dimensional tracking method and system - Google Patents

Fish school three-dimensional tracking method and system

Info

Publication number
CN106875429A
CN106875429A (application number CN201710119403.1A; granted publication CN106875429B)
Authority
CN
China
Prior art keywords
target
point
main skeleton
top view
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710119403.1A
Other languages
Chinese (zh)
Other versions
CN106875429B (en)
Inventor
钱志明
寸天睿
秦海菲
刘晓青
王春林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chuxiong Normal University
Original Assignee
Chuxiong Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chuxiong Normal University filed Critical Chuxiong Normal University
Priority to CN201710119403.1A
Publication of CN106875429A
Application granted
Publication of CN106875429B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G06T2207/10021 - Stereoscopic video; Stereoscopic image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30241 - Trajectory

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a fish school three-dimensional tracking method, comprising: extracting the main skeleton of the target in the top-view direction and in v side-view directions respectively, wherein the main skeleton comprises multiple skeleton points and v = 1 or 2; after selecting three skeleton points from the main skeleton as feature points, representing, according to the apparent complexity of the target, the top-view target with a two-feature-point model and the side-view targets with a three-feature-point model; matching feature points of targets in different view directions by the epipolar constraint and/or by trajectory consistency; classifying the matched targets as occluded or unoccluded according to the length or endpoint count of the main skeleton, applying motion association to the unoccluded targets and matching association to the occluded targets. The invention also provides a fish school three-dimensional tracking system. The method and system greatly improve the efficiency and accuracy of target tracking.

Description

Fish school three-dimensional tracking method and system
Technical field
The present invention relates to the technical field of computer vision, and in particular to a fish school three-dimensional tracking method and system.
Background technology
Fish behavior refers to the external responses fish show to environmental change, including the various individual and group behaviors of fish under natural or experimental conditions. Research on fish behavior is important both for the study of the evolution of animal behavior and for the development of fish farming. Among the various approaches to studying fish behavior, video-based approaches, being simple to apply and widely applicable, have gradually become an important paradigm of fish behavior research.
To analyze fish behavior from video, the motion of the fish must first be recorded with video capture equipment, after which each fish in the video images is analyzed quantitatively to obtain its trajectory. Conventionally these trajectories are annotated manually, frame by frame, which is not only inefficient but also imprecise. In recent years, with the development of computer technology, fish tracking based on computer vision has provided researchers with a new and effective approach and has increasingly become a focus of research.
Depending on the environment of the fish, computer-vision fish tracking divides into two modes, two-dimensional tracking and three-dimensional tracking. Two-dimensional tracking confines the fish to a container of shallow water, so that their motion can be approximated as planar. Although this mode allows the behavior of the fish to be analyzed, tracking restricted to a two-dimensional space can hardly describe the motion of fish comprehensively. Three-dimensional tracking approximates the way fish move in their natural environment, and the resulting trajectory data better reflect their true behavior; it has therefore received the attention of more and more researchers.
Tracking the trajectories of fish in three dimensions is a multi-target three-dimensional tracking problem in computer vision, and it faces several difficulties. Fish are weakly textured, non-rigid targets whose shape varies greatly while swimming, which complicates detector design. In addition, fish frequently occlude one another while swimming, which makes detection harder. Three-dimensional tracking based on binocular vision is affected by refraction at the water surface and copes poorly with the occlusions that arise during tracking. To overcome the shortcomings of binocular vision, filming perpendicular to the water surface from several directions with multiple cameras is a preferable scheme. That scheme, however, increases the difficulty of stereo matching because the target's appearance changes between view directions; the splitting and merging of targets during occlusion greatly complicate target association; and errors in target detection and stereo matching also directly affect the accuracy of target association.
Although current tracking methods can obtain three-dimensional fish trajectories under some experimental conditions, they do not fully solve the problems above. Fish tracking in three dimensions therefore remains a highly challenging problem.
Summary of the invention
In view of the problems above, the present invention proposes a fish school three-dimensional tracking method and system that can improve the accuracy of target tracking.
Accordingly, one aspect of the present invention provides a fish school three-dimensional tracking method, comprising:
extracting the main skeleton of the target in the top-view direction and the v side-view directions respectively, wherein the main skeleton comprises multiple skeleton points and v = 1 or 2;
after selecting three skeleton points from the main skeleton as feature points, representing, according to the apparent complexity of the target, the top-view target with a two-feature-point model and the side-view targets with a three-feature-point model;
matching feature points of targets in different view directions by the epipolar constraint, and/or matching feature points of targets in different view directions by trajectory consistency;
classifying the matched targets as occluded or unoccluded according to the length or endpoint count of the main skeleton, applying motion association to the unoccluded targets and matching association to the occluded targets.
Further, "extracting the main skeleton of the target in the top-view direction and the v side-view directions respectively" comprises:
acquiring images of the target in the top-view direction and in the v side-view directions;
segmenting the moving region of the target from the acquired images of the different view directions by background subtraction, wherein the moving region comprises multiple pixels;
extracting the main skeleton of the target from the images of the different view directions by the fast marching algorithm.
Further, the feature points comprise a center feature point, a head feature point and a tail feature point, and "after selecting three skeleton points from the main skeleton as feature points, representing, according to the apparent complexity of the target, the top-view target with a two-feature-point model and the side-view targets with a three-feature-point model" comprises:
selecting from the main skeleton the skeleton point that best represents the target's shape as the center feature point;
in the v side-view directions, representing the target with the three-feature-point model formed by the two endpoints of the main skeleton and the center feature point;
in the top-view direction, obtaining the width of each skeleton point from its shortest distance to the edge of the moving region; computing the mean widths of the skeleton points between the center feature point and each of the two endpoints of the main skeleton, denoted the first mean width and the second mean width respectively; comparing the first and second mean widths and dividing the main skeleton accordingly into a rigid region and a non-rigid region; designating the endpoints of the main skeleton lying in the rigid region and the non-rigid region as the head feature point and the tail feature point respectively; and representing the top-view target with the two-feature-point model formed by the head feature point and the center feature point.
Further, "matching feature points of targets in different view directions by the epipolar constraint, and/or matching feature points of targets in different view directions by trajectory consistency" comprises:
selecting eight matching points at the same positions in the top-view image and the side-view image of the target, and computing the fundamental matrix by the eight-point method;
obtaining from the fundamental matrix the epipolar line in the side view corresponding to the center feature point of the top-view target, and matching the center feature points of the top-view and side-view targets using this epipolar line;
when the distance from the center feature point of the side-view target to the epipolar line is below a set stereo-matching threshold, stereo matching under the epipolar constraint succeeds; otherwise, stereo matching under the epipolar constraint fails;
when stereo matching under the epipolar constraint fails, matching the top-view target and the side-view target by trajectory consistency.
Further, "classifying the matched targets as occluded or unoccluded according to the length or endpoint count of the main skeleton, applying motion association to the unoccluded targets and matching association to the occluded targets" comprises:
judging whether a target's skeleton-point count exceeds the stored maximum main-skeleton point count, and whether the target's main skeleton has more than two endpoints; when either judgment holds, marking the target as occluded; otherwise, marking it as unoccluded;
when a target is unoccluded, building the association cost function of the current-frame target i_t from its position change pc(i_(t-1), i_t) and direction change dc(i_(t-1), i_t) relative to the previous-frame target i_(t-1):
c(i_(t-1), i_t) = w · dc(i_(t-1), i_t)/dc_max + (1 - w) · pc(i_(t-1), i_t)/pc_max,
wherein dc_max and pc_max are the maximum direction change and the maximum position change between adjacent frames, and w and 1 - w are the weights of the direction change and the position change in the association cost function; and, according to the association cost function, associating the unoccluded targets sub-optimally by a greedy algorithm;
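The association cost and greedy association described above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the function names and the toy cost matrix are invented here. The cost combines the normalized direction change and position change with weights w and 1 - w, and a greedy pass then repeatedly links the lowest-cost pair of targets.

```python
def association_cost(dc, pc, dc_max, pc_max, w=0.5):
    """Cost of linking targets across adjacent frames: weighted sum of the
    normalized direction change dc and position change pc."""
    return w * dc / dc_max + (1 - w) * pc / pc_max

def greedy_associate(cost):
    """Sub-optimally link previous-frame targets (rows) to current-frame
    targets (columns) by repeatedly taking the lowest remaining cost."""
    pairs = sorted(
        (cost[r][c], r, c)
        for r in range(len(cost))
        for c in range(len(cost[0]))
    )
    links, used_rows, used_cols = {}, set(), set()
    for _, r, c in pairs:
        if r not in used_rows and c not in used_cols:
            links[r] = c
            used_rows.add(r)
            used_cols.add(c)
    return links

# Two targets per frame: target 0 stays near column 0, target 1 near column 1.
cost = [[0.1, 0.9],
        [0.8, 0.2]]
print(greedy_associate(cost))   # {0: 0, 1: 1}
```

A greedy pass is cheaper than an optimal assignment (e.g. the Hungarian algorithm) and, as the claim itself says, yields a sub-optimal association.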
when a target is occluded, establishing a status flag for each top-view target i_t;
when the status flag of the current-frame target i_t is +1, performing matching association, using the n side-view frames nearest the current frame, on all targets i_t whose status flag is -1 before and after the occlusion, so as to ensure that the top-view target lies on the same side-view trajectory segment before and after the occlusion.
Another aspect of the present invention provides a fish school three-dimensional tracking system, comprising:
a main skeleton extraction module for extracting the main skeleton of the target in the top-view direction and the v side-view directions respectively, wherein the main skeleton comprises multiple skeleton points and v = 1 or 2;
a feature point extraction module for, after three skeleton points have been selected from the main skeleton as feature points, representing, according to the apparent complexity of the target, the top-view target with a two-feature-point model and the side-view targets with a three-feature-point model;
a stereo matching module for matching feature points of targets in different view directions by the epipolar constraint and/or by trajectory consistency;
a target association module for classifying the matched targets as occluded or unoccluded according to the length or endpoint count of the main skeleton, applying motion association to the unoccluded targets and matching association to the occluded targets.
Further, the main skeleton extraction module comprises:
an image acquisition unit for acquiring images of the target in the top-view direction and in the v side-view directions;
a region segmentation unit for segmenting the moving region of the target from the acquired images of the different view directions by background subtraction, wherein the moving region comprises multiple pixels;
a skeleton extraction unit for extracting the main skeleton of the target from the images of the different view directions by the fast marching algorithm.
Further, the feature points comprise a center feature point, a head feature point and a tail feature point, and the feature point extraction module comprises:
a screening unit for selecting from the main skeleton the skeleton point that best represents the target's shape as the center feature point;
a three-feature-point model unit for, in the v side-view directions, representing the target with the three-feature-point model formed by the two endpoints of the main skeleton and the center feature point;
a two-feature-point model unit for, in the top-view direction, obtaining the width of each skeleton point from its shortest distance to the edge of the moving region; computing the mean widths of the skeleton points between the center feature point and each of the two endpoints of the main skeleton, denoted the first mean width and the second mean width respectively; comparing the first and second mean widths and dividing the main skeleton accordingly into a rigid region and a non-rigid region; designating the endpoints of the main skeleton lying in the rigid region and the non-rigid region as the head feature point and the tail feature point respectively; and representing the top-view target with the two-feature-point model formed by the head feature point and the center feature point.
Further, the stereo matching module comprises:
an epipolar constraint matching unit for selecting eight matching points at the same positions in the top-view image and the side-view image of the target, computing the fundamental matrix by the eight-point method, obtaining from the fundamental matrix the epipolar line in the side view corresponding to the center feature point of the top-view target, and matching the center feature points of the top-view and side-view targets using this epipolar line, wherein stereo matching under the epipolar constraint succeeds when the distance from the center feature point of the side-view target to the epipolar line is below a set stereo-matching threshold, and fails otherwise;
a trajectory consistency matching unit for matching the top-view target and the side-view target by trajectory consistency when stereo matching under the epipolar constraint fails.
Further, the target association module comprises:
an occlusion judging unit for judging whether a target's skeleton-point count exceeds the stored maximum main-skeleton point count and whether the target's main skeleton has more than two endpoints, marking the target as occluded when either judgment holds and as unoccluded otherwise;
a motion association unit for, when a target is unoccluded, building the association cost function of the current-frame target i_t from its position change pc(i_(t-1), i_t) and direction change dc(i_(t-1), i_t) relative to the previous-frame target i_(t-1):
c(i_(t-1), i_t) = w · dc(i_(t-1), i_t)/dc_max + (1 - w) · pc(i_(t-1), i_t)/pc_max,
wherein dc_max and pc_max are the maximum direction change and the maximum position change between adjacent frames, and w and 1 - w are the weights of the direction change and the position change in the association cost function, and for associating the unoccluded targets sub-optimally by a greedy algorithm according to the association cost function;
a matching association unit for, when a target is occluded, establishing a status flag for each top-view target i_t, and, when the status flag of the current-frame target i_t is +1, performing matching association, using the n side-view frames nearest the current frame, on all targets i_t whose status flag is -1 before and after the occlusion, so as to ensure that the top-view target lies on the same side-view trajectory segment before and after the occlusion.
In the fish school three-dimensional tracking method and system provided by the invention, targets are represented by feature point models during detection, which effectively reduces the difficulty of tracking; the trajectory consistency of targets across stereo images removes the ambiguity of stereo matching under the epipolar constraint alone, which improves the accuracy of tracking; and the strategy of tracking primarily in the top view, supplemented by side-view tracking, addresses occlusion, the hardest problem in tracking, significantly improving occlusion handling.
To make the above objects, features and advantages of the present invention clearer, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly described below. The following drawings show only certain embodiments of the invention and are therefore not to be regarded as limiting its scope; those of ordinary skill in the art may obtain other related drawings from them without creative effort.
Fig. 1 shows a flow diagram of the fish school three-dimensional tracking method proposed by an embodiment of the present invention;
Fig. 2 shows a schematic diagram of the device for acquiring target images in the method shown in Fig. 1;
Fig. 3 shows a schematic diagram of the main skeleton distribution in the method shown in Fig. 1;
Fig. 4 shows a structural diagram of the fish school three-dimensional tracking system proposed by an embodiment of the present invention.
Main element symbols:
100 - fish school three-dimensional tracking system; 10 - main skeleton extraction module; 20 - feature point extraction module; 30 - stereo matching module; 40 - target association module; 1 - skeleton point; 2 - circle; 3 - center feature point; 4 - rigid region; 5 - non-rigid region; 6 - head feature point; 7 - tail feature point.
Detailed description of the embodiments
To facilitate understanding of the present invention, the fish school three-dimensional tracking method and system are described more clearly and fully below with reference to the relevant drawings, in which preferred embodiments are shown. The method and system may, however, be embodied in many different forms and are not limited to the embodiments described herein; the detailed description of the embodiments given in the drawings is thus not intended to limit the scope of the claimed invention but merely represents selected embodiments of the invention. All other embodiments obtained from the embodiments of the invention by those skilled in the art without creative effort fall within the scope of protection of the invention.
It should be noted that the fish school three-dimensional tracking method provided by this embodiment mainly comprises two parts, target detection and target tracking. Further, target detection comprises moving-region segmentation, main skeleton extraction and feature point extraction, while target tracking comprises stereo matching and target association. Notably, the method tracks targets three-dimensionally with a strategy based on tracking in the top-view direction, where the target's appearance changes less, supplemented by tracking in the side-view directions. See the description of Embodiment 1 for details.
Embodiment 1
Fig. 1 shows the schematic flow sheet of the shoal of fish three-dimensional tracking that the embodiment of the present invention is proposed.
As shown in Fig. 1, the fish school three-dimensional tracking method proposed by the embodiment of the present invention comprises:
Step S1, extracting the main skeleton of the target in the top-view direction and the v side-view directions respectively, wherein the main skeleton comprises multiple skeleton points and v = 1 or 2.
Specifically, images of the fish from different views are acquired by image capture devices arranged around the fish's living environment. Referring also to Fig. 2, in this embodiment one image capture device is mounted at the top and one at each of two sides of the fish's living environment, such as a fish tank or fish pond, so that images of the fish are acquired in the top-view direction and in v side-view directions, where v = 1 or 2. It will be understood that in this embodiment images of the target are acquired in two side-view directions, though images may of course be acquired in only one side-view direction. In another embodiment, the top-view and side-view images of the fish are acquired sequentially by the same image capture device.
It should be noted that in this embodiment the targets of the three-dimensional tracking are fish, and this specification refers to a fish generically as a "target". When several fish, i.e. a fish school, are to be tracked simultaneously, the tracking is multi-target three-dimensional tracking.
Further, the moving region of the target is segmented from the images of the different view directions by background subtraction. The moving region is the outline of the fish obtained from the shape features and body volume of the fish, and comprises multiple pixels, a pixel being the smallest unit of an image: under magnification, the apparently continuous tone of an image resolves into many small colored squares, each of which is one pixel. More specifically, the moving region of the target is segmented from the images of the different view directions according to
R_t(x, y) = 1 if |f_t(x, y) - f_b(x, y)| > T_g, and R_t(x, y) = 0 otherwise,
where R_t(x, y) is the segmentation result, i.e. the moving region of the target, f_t(x, y) is the image in a given view direction, f_b(x, y) is the background image, and T_g is the segmentation threshold. Because the edge of the moving region so obtained is rough, which hinders the extraction and analysis of the main skeleton, the region must be pre-processed. Preferably, morphological operations are applied to the segmented moving region to fill internal holes and delete small interference regions, and the region is then smoothed with a median filter.
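The thresholded background subtraction just described can be sketched as follows. This is a sketch under stated assumptions (the function name and the toy 5x5 frames are invented here), and the morphological hole filling and median smoothing mentioned above are omitted.

```python
import numpy as np

def segment_moving_region(frame, background, tg):
    """R_t(x, y) = 1 where |f_t(x, y) - f_b(x, y)| > T_g, else 0.
    Cast to a signed type first so the difference cannot wrap around."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return (diff > tg).astype(np.uint8)

background = np.zeros((5, 5), dtype=np.uint8)
frame = background.copy()
frame[1:4, 1:4] = 180                      # a bright "fish" blob
mask = segment_moving_region(frame, background, tg=50)
print(int(mask.sum()))                     # 9 moving pixels
```

In practice the cleanup steps would follow on `mask` before skeleton extraction, e.g. with standard morphological-filtering routines.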
Further, the main skeleton of the target is extracted from the images of the different view directions by the fast marching algorithm, the main skeleton comprising multiple skeleton points. More specifically, the main skeleton of the target in each view direction is extracted according to
s = (i, j) | max(|u_x|, |u_y|) > T_u,
u_x = U(i + 1, j) - U(i, j),
u_y = U(i, j + 1) - U(i, j),
where s is a skeleton point, u_x and u_y are the differences of the U value of point (i, j) in the x and y directions within the moving region, U(i, j) is the arrival time of pixel (i, j), and T_u is the main skeleton extraction threshold. T_u acts much like a scale factor: when T_u is small, the extracted main skeleton retains more detail, and when T_u is large, it retains less. As T_u increases, the detail of the main skeleton gradually diminishes and the main skeleton describes the overall structure of the target increasingly well.
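The skeleton-point criterion above, keeping (i, j) when max(|u_x|, |u_y|) > T_u, can be sketched directly on an arrival-time array U. The toy ridge-shaped U and the function name are invented for illustration, and computing U itself by fast marching is not shown.

```python
import numpy as np

def main_skeleton_points(U, tu):
    """Keep point (i, j) when max(|u_x|, |u_y|) > T_u, where
    u_x = U(i+1, j) - U(i, j) and u_y = U(i, j+1) - U(i, j) are the
    forward differences of the arrival-time field U."""
    ux = np.abs(np.diff(U, axis=0))[:, :-1]   # |u_x|, trimmed to common shape
    uy = np.abs(np.diff(U, axis=1))[:-1, :]   # |u_y|, trimmed to common shape
    return np.argwhere(np.maximum(ux, uy) > tu)

# A toy arrival-time field with a ridge along the middle row
U = np.array([[0.0, 0.0, 0.0, 0.0],
              [4.0, 4.0, 4.0, 4.0],
              [0.0, 0.0, 0.0, 0.0]])
print(len(main_skeleton_points(U, tu=2.0)))   # 6
```

Raising `tu` prunes detail, matching the scale-factor behavior described in the text.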
Step S2, after selecting three skeleton points from the main skeleton as feature points, representing, according to the apparent complexity of the target, the top-view target with a two-feature-point model and the side-view targets with a three-feature-point model.
Specifically, after the main skeleton of the target has been extracted, the skeleton point that best represents the target's shape is selected from it as the center feature point, further simplifying the structure of the target. In a main skeleton obtained by the fast marching algorithm, the point of maximum U value generally lies at the center of the skeleton and best reflects the center of the target's shape; this skeleton point is therefore taken as the center feature point of the target.
Further, in the top-view direction, referring also to Fig. 3, a circle 2 is drawn about each skeleton point 1 with radius equal to that point's shortest distance to the edge of the moving region, and the diameter of the circle 2 is taken as the width of the skeleton point. The center feature point 3 divides the main skeleton into two parts: from the center feature point 3 to one endpoint of the main skeleton is the first part, and from the center feature point 3 to the other endpoint is the second part. The mean width of the skeleton points of each part is then computed; in other words, the mean width of all skeleton points between the center feature point 3 and one endpoint, and the mean width of all skeleton points between the center feature point 3 and the other endpoint, are taken as the mean widths of the two parts of the main skeleton. Comparing the two mean widths, the main skeleton is divided into a rigid region 4 and a non-rigid region 5; it is readily seen that the mean width of the skeleton points 1 in the rigid region 4 exceeds that of the skeleton points 1 in the non-rigid region 5. The endpoints of the main skeleton lying in the rigid region 4 and the non-rigid region 5 are designated the head feature point 6 and the tail feature point 7 respectively, an endpoint being a skeleton point at an end of the main skeleton. The top-view target is then represented by the two-feature-point model formed by the head feature point 6 and the center feature point 3.
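The head/tail discrimination above can be sketched as follows, assuming the skeleton points' widths are already known and ordered from one endpoint to the other; the function name and the toy width profile are invented for illustration.

```python
def label_head_tail(widths, centre):
    """Split the skeleton at the centre feature point (index `centre`) and
    compare the mean widths of the two parts: the wider part is the rigid
    region, whose endpoint is the head; the narrower part's endpoint is
    the tail. Returns (head_index, tail_index)."""
    first = widths[:centre + 1]            # centre point to one endpoint
    second = widths[centre:]               # centre point to the other endpoint
    w1 = sum(first) / len(first)           # first mean width
    w2 = sum(second) / len(second)         # second mean width
    head = 0 if w1 > w2 else len(widths) - 1
    tail = len(widths) - 1 if w1 > w2 else 0
    return head, tail

# A fish-like width profile: thick rigid body, thin flexible tail
widths = [6.0, 8.0, 9.0, 7.0, 3.0, 2.0, 1.0]
print(label_head_tail(widths, centre=3))   # (0, 6)
```

This relies on the observation in the text that the rigid (head-side) half of a fish is on average wider than the non-rigid tail half.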
It is difficult to estimate the width of the fish body in the v side-view directions, so the head and tail feature points cannot be distinguished between the two endpoints of the main skeleton there. In the v side-view directions, therefore, unlike the two-feature-point model, the target is represented by the three-feature-point model formed by the two endpoints of the main skeleton and the center feature point.
Step S3, matching feature points of targets in different view directions by the epipolar constraint, and/or matching feature points of targets in different view directions by trajectory consistency.
Specifically, eight matching points are selected at the same positions in the top-view image and a side-view image of the target, and the fundamental matrix is computed by the eight-point method. The fundamental matrix captures the geometric relation between two two-dimensional images of the same three-dimensional scene taken from two different viewpoints, which in this embodiment is the relation between the top-view image and a side-view image.
Further, the epipolar line in side-view direction v corresponding to the central feature point of a target in the top-view direction is obtained from the fundamental matrix, and the central feature points of the top-view target and the side-view target are matched using this epipolar line. In other words, from the stereo pair formed by the target's image in the first side-view direction and its top-view image, the corresponding fundamental matrix of the target is computed; from this fundamental matrix the epipolar line from the top-view direction into the first side-view direction is obtained, so that the targets in the top-view and first side-view directions can be stereo-matched. Similarly, from the stereo pair formed by the target's image in the second side-view direction and its top-view image, the corresponding fundamental matrix is computed, yielding the epipolar line from the top-view direction into the second side-view direction, so that the targets in the top-view and second side-view directions can be stereo-matched. The purpose of stereo matching is to determine which target in side-view direction v corresponds to a given target in the top-view direction, i.e. to match the same target across one group of stereo images.
More specifically, let c_{i_t}^{top} and c_{j_t}^v denote the central feature points of target i_t in the top-view direction and of target j_t in side-view direction v, respectively. The epipolar line l_{i_t} in side-view direction v corresponding to the central feature point c_{i_t}^{top} is obtained from the fundamental matrix. Then, according to
e_m(i_t, j_t) = 1 if distance(c_{j_t}^v, l_{i_t}) < T_m, and e_m(i_t, j_t) = 0 otherwise,
the central feature points of target i_t in the top-view direction and target j_t in side-view direction v are stereo-matched, where e_m(i_t, j_t) is the stereo matching result under the epipolar constraint, distance(·) is the distance from a central feature point to the epipolar line, and T_m is the stereo-matching threshold. When the distance from the central feature point of a side-view target to the epipolar line is below the set stereo-matching threshold for exactly one candidate, the top-view target has a single matching object in side-view direction v under the epipolar constraint and stereo matching succeeds; otherwise, the top-view target has several matching objects in side-view direction v and stereo matching fails.
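A minimal sketch of this matching rule, reading distance(·) as point-to-line distance (the toy fundamental matrix and all names are illustrative, not from the patent):

```python
import numpy as np

def epipolar_match(center_top, side_centers, F, Tm):
    """Match a top-view centre point against side-view centre points
    under the epipolar constraint: return the index of the unique
    candidate within Tm of the epipolar line, or None when zero or
    several candidates qualify (stereo matching fails)."""
    l = F @ np.array([center_top[0], center_top[1], 1.0])  # line a*x + b*y + c = 0
    denom = np.hypot(l[0], l[1])
    hits = [j for j, (x, y) in enumerate(side_centers)
            if abs(l[0] * x + l[1] * y + l[2]) / denom < Tm]
    return hits[0] if len(hits) == 1 else None

# toy rectified-pair fundamental matrix: epipolar lines are horizontal
F_toy = np.array([[0, 0, 0], [0, 0, -1.0], [0, 1.0, 0]])
unique = epipolar_match((4.0, 2.0), [(3.0, 2.1), (5.0, 7.0)], F_toy, Tm=0.5)
ambiguous = epipolar_match((4.0, 2.0), [(3.0, 2.1), (5.0, 1.9)], F_toy, Tm=0.5)
```

The second call returns None because two side-view candidates lie within the threshold, which is exactly the ambiguous case the trajectory-consistency step below is designed to resolve.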
When stereo matching under the epipolar constraint fails, the trajectory consistency of the targets under the epipolar constraint must be used to resolve the ambiguity of stereo matching, i.e. the top-view target and the side-view targets are stereo-matched by trajectory consistency. Specifically, let T_{i_t}^{top} and T_{j_t}^v denote the motion trajectories up to frame t of target i_t in the top-view direction and of target j_t in side-view direction v. Suppose the top-view target i_t has k matching candidates in side-view direction v under the epipolar constraint; the n frames of motion trajectory nearest to frame t are then selected from the k candidate trajectories for matching, and the top-view target and the side-view targets are stereo-matched accordingly, where s_m(i_t, j_t) is the stereo matching result under the trajectory-consistency condition and ml(·) is the matching length of the motion trajectories in the top-view and side-view directions under the epipolar constraint. When one of the k candidates has the best trajectory consistency with the top-view target i_t, stereo matching under the trajectory-consistency condition succeeds. It is readily understood that, since the motion trajectory of a target is unique, only a few matching frames are generally needed. Once stereo matching succeeds, the three-feature-point model can be directly reduced to the two-feature-point model.
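The formula for s_m is not reproduced in this text, so the following sketch only illustrates one plausible reading of the trajectory-consistency test: among the k epipolar candidates, pick the one whose recent trajectory satisfies the epipolar constraint over the most of the last n frames (all names illustrative):

```python
import numpy as np

def trajectory_match(top_traj, side_trajs, F, Tm, n=5):
    """Disambiguate stereo matching via trajectory consistency: return
    the index of the side-view candidate whose last n centre points
    stay within Tm of the epipolar lines of the top-view trajectory
    for the most frames (the matching length ml)."""
    def epi_dist(pt_top, pt_side):
        l = F @ np.array([pt_top[0], pt_top[1], 1.0])
        return abs(l[0] * pt_side[0] + l[1] * pt_side[1] + l[2]) / np.hypot(l[0], l[1])

    best, best_len = None, -1
    for j, traj in enumerate(side_trajs):
        m = min(n, len(top_traj), len(traj))
        # matching length: recent frames satisfying the epipolar constraint
        length = sum(1 for a, b in zip(top_traj[-m:], traj[-m:])
                     if epi_dist(a, b) < Tm)
        if length > best_len:
            best, best_len = j, length
    return best

# toy rectified-pair fundamental matrix: epipolar lines are horizontal
F_toy = np.array([[0, 0, 0], [0, 0, -1.0], [0, 1.0, 0]])
top = [(1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
cand0 = [(9.0, 1.05), (8.0, 2.02), (7.0, 3.01)]   # consistent in every frame
cand1 = [(9.0, 1.0), (8.0, 5.0), (7.0, 6.0)]      # consistent once only
best = trajectory_match(top, [cand0, cand1], F_toy, Tm=0.5, n=3)
```

This matches the patent's observation that, trajectories being unique, a few successfully matched frames normally suffice to single out the right candidate.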
Step S4: according to the length or the endpoint count of the main skeleton, the matched targets are classified as occluded targets and non-occluded targets; motion association is applied to the non-occluded targets, and matching association is applied to the occluded targets.
It should be noted that when occlusion occurs during motion, detection errors are inevitable, which puts target association at risk. To ensure the accuracy of target association, targets are first classified into two types, occluded and non-occluded. Specifically, while a target is not occluded during motion, its main skeleton has only the two head and tail endpoints. Once occlusion occurs, the length of the main skeleton or the number of its endpoints increases, from which occlusion can be detected. More specifically, whether a stereo-matched target is occluded is judged according to the condition sp > MAL or ep > 2, where sp is the number of skeleton points, ep is the number of endpoints of the main skeleton, and MAL is the stored maximum number of skeleton points in a main skeleton. It is readily understood that when the number of skeleton points of a target exceeds the stored maximum, or the number of endpoints of its main skeleton exceeds 2, the target is marked as an occluded target; otherwise, it is marked as a non-occluded target.
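The occlusion criterion reduces to a one-line test (parameter names are illustrative):

```python
def is_occluded(sp, ep, mal):
    """Occlusion test from the patent's criterion: a stereo-matched
    target is occluded when its skeleton point count sp exceeds the
    stored maximum MAL (merged bodies), or its main-skeleton endpoint
    count ep exceeds the two head/tail endpoints (crossing bodies)."""
    return sp > mal or ep > 2

merged = is_occluded(sp=130, ep=2, mal=100)    # overly long skeleton
crossing = is_occluded(sp=80, ep=4, mal=100)   # extra branch endpoints
free = is_occluded(sp=80, ep=2, mal=100)
```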
When a target is a non-occluded target, motion association is applied to it. Specifically, according to the position change pc(i_{t-1}, i_t) and the direction change dc(i_{t-1}, i_t) of the current-frame target i_t relative to the previous-frame target i_{t-1}, the association cost function of target i_t is built:
c_v(i_{t-1}, i_t) = w · (dc(i_{t-1}, i_t) / dc_max) + (1 − w) · (pc(i_{t-1}, i_t) / pc_max)
where dc_max and pc_max are respectively the maximum direction change and the maximum position change between adjacent frames, and w and 1 − w are respectively the weights of the direction change and the position change in the association cost function. According to this cost function, the non-occluded targets are associated sub-optimally by a greedy algorithm. A greedy algorithm always makes the choice that looks best at the moment; in other words, it does not start from global optimality, and what it produces is a locally optimal solution in some sense.
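A sketch of the greedy, sub-optimal association under this cost function, assuming directions are given as angles in degrees and that association is abandoned beyond pc_max (all names illustrative):

```python
import numpy as np

def greedy_associate(prev_pts, prev_dirs, curr_pts, curr_dirs,
                     dc_max, pc_max, w=0.5):
    """Greedily pair previous-frame targets with current-frame targets
    using c = w*dc/dc_max + (1-w)*pc/pc_max; candidate pairs farther
    apart than pc_max are skipped (association abandoned)."""
    pairs = []
    for i, (p, dp) in enumerate(zip(prev_pts, prev_dirs)):
        for j, (q, dq) in enumerate(zip(curr_pts, curr_dirs)):
            pc = float(np.hypot(q[0] - p[0], q[1] - p[1]))
            if pc > pc_max:
                continue
            dc = abs((dq - dp + 180.0) % 360.0 - 180.0)  # smallest angle diff
            pairs.append((w * dc / dc_max + (1 - w) * pc / pc_max, i, j))
    pairs.sort()                      # cheapest candidate pairs first
    used_i, used_j, assoc = set(), set(), {}
    for _, i, j in pairs:             # locally optimal, not globally optimal
        if i not in used_i and j not in used_j:
            assoc[i] = j
            used_i.add(i)
            used_j.add(j)
    return assoc

prev_pts, prev_dirs = [(0.0, 0.0), (10.0, 0.0)], [0.0, 90.0]
curr_pts, curr_dirs = [(1.0, 0.0), (10.0, 1.0)], [5.0, 85.0]
assoc = greedy_associate(prev_pts, prev_dirs, curr_pts, curr_dirs,
                         dc_max=180.0, pc_max=5.0)
```

Taking the cheapest remaining pair at each step is exactly the "currently best" choice the patent ascribes to the greedy algorithm; a globally optimal assignment would instead require e.g. the Hungarian algorithm.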
It is worth noting that in the top-view direction the two-feature-point model carries direction information, so association can be performed directly, whereas in side-view direction v the three-feature-point model must first be reduced to a two-feature-point model according to the relative positions of the targets before association. Preferably, in this embodiment, to improve association efficiency, association is abandoned during the association process whenever the distance between targets exceeds pc_max.
When a target is an occluded target, matching association is applied to it. Specifically, a status flag is established for each target i_t in the top-view direction. When the status flag of the current-frame target i_t is +1, the targets whose status flags are −1 before and after the occlusion are matching-associated according to the n frames of side-view direction v nearest to the current frame, which ensures that the top-view target lies in the same side-view trajectory curve segment before and after the occlusion.
It is easy to see that the fish-school three-dimensional tracking method of this embodiment tracks targets three-dimensionally with a strategy in which tracking in the top-view direction, where the appearance of a target changes least, is primary and tracking in the side-view directions is auxiliary. This top-view-primary, side-view-auxiliary strategy effectively handles the frequent occlusions in multi-target tracking.
Embodiment 2
Fig. 4 shows a schematic structural diagram of the fish-school three-dimensional tracking system proposed by the embodiment of the present invention.
As shown in Fig. 4, the fish-school three-dimensional tracking system 100 proposed by the embodiment of the present invention comprises a main skeleton extraction module 10, a feature point extraction module 20, a stereo matching module 30 and a target association module 40.
The main skeleton extraction module 10 is configured to extract the main skeleton of the target in the top-view direction and in side-view direction v respectively, where the main skeleton comprises a plurality of skeleton points and v = 1 or 2. In this embodiment, the main skeleton extraction module 10 comprises:
an image acquisition unit, configured to acquire images of the target in the top-view direction and in side-view direction v respectively;
a region segmentation unit, configured to segment the moving region of the target out of its images in the different viewing directions by background subtraction, wherein the moving region comprises a plurality of image points;
a skeleton extraction unit, configured to extract the main skeleton of the target from its images in the different viewing directions by the fast marching algorithm.
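A minimal background-subtraction sketch of what the region segmentation unit does, assuming a static grey-level background model (the threshold value and all names are illustrative; the patent does not fix them):

```python
import numpy as np

def moving_region(frame, background, thresh=25):
    """Background subtraction: pixels differing from the background
    model by more than `thresh` grey levels form the moving region."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > thresh

bg = np.zeros((8, 8), dtype=np.uint8)     # static background model
frame = bg.copy()
frame[2:5, 3:6] = 200                     # a bright 3x3 "fish" blob
mask = moving_region(frame, bg)
```

The boolean mask is the moving region handed to the skeleton extraction unit, which would then compute the medial-axis main skeleton (e.g. by fast marching) inside it.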
The feature point extraction module 20 is configured to, after screening three skeleton points from the main skeleton as feature points, represent the target in the top-view direction with a two-feature-point model and the target in side-view direction v with a three-feature-point model according to the apparent complexity of the target. In this embodiment, the feature point extraction module 20 comprises:
a screening unit, configured to screen from the main skeleton the skeleton point that best represents the shape of the target as the central feature point;
a three-feature-point model unit, configured to represent, in side-view direction v, the target with a three-feature-point model composed of the two endpoints of the main skeleton and the central feature point;
a two-feature-point model unit, configured to, in the top-view direction, obtain the width of each skeleton point from its shortest distance to the edge of the moving region; compute the averages of the widths of all skeleton points between the central feature point and each of the two endpoints of the main skeleton, denoted the first mean width and the second mean width respectively; compare the first mean width and the second mean width, divide the main skeleton into a rigid region and a non-rigid region according to the comparison, and mark the endpoints of the main skeleton in the rigid region and the non-rigid region as the head feature point and the tail feature point respectively; and represent the target in the top-view direction with a two-feature-point model composed of the head feature point and the central feature point.
The stereo matching module 30 is configured to perform feature-point matching on the targets in the different viewing directions by the epipolar constraint and/or by trajectory consistency. In this embodiment, the stereo matching module 30 comprises:
an epipolar constraint matching unit, configured to select eight matching points at the same positions of the target in the top-view image and in the side-view image v, compute the fundamental matrix by the eight-point algorithm, obtain from the fundamental matrix the epipolar line in side-view direction v corresponding to the central feature point of the top-view target, and match the central feature points of the top-view target and the side-view target using the epipolar line, wherein stereo matching under the epipolar constraint succeeds when the distance from the central feature point of the side-view target to the epipolar line is below the set stereo-matching threshold, and fails otherwise;
a trajectory consistency matching unit, configured to stereo-match the top-view target and the side-view targets by trajectory consistency when stereo matching under the epipolar constraint fails.
The target association module 40 is configured to classify the matched targets as occluded targets and non-occluded targets according to the length or endpoint count of the main skeleton, apply motion association to the non-occluded targets, and apply matching association to the occluded targets. In this embodiment, the target association module 40 comprises:
an occlusion judging unit, configured to judge whether the number of skeleton points of a target exceeds the stored maximum number of skeleton points in a main skeleton, and whether the number of endpoints of its main skeleton exceeds 2; when either judgment is affirmative, the target is marked as an occluded target, and otherwise as a non-occluded target;
a motion association unit, configured to, when a target is a non-occluded target, build the association cost function of target i_t from the position change pc(i_{t-1}, i_t) and the direction change dc(i_{t-1}, i_t) of the current-frame target i_t relative to the previous-frame target i_{t-1}:
c_v(i_{t-1}, i_t) = w · (dc(i_{t-1}, i_t) / dc_max) + (1 − w) · (pc(i_{t-1}, i_t) / pc_max)
wherein dc_max and pc_max are respectively the maximum direction change and the maximum position change between adjacent frames, and w and 1 − w are respectively the weights of the direction change and the position change in the association cost function, and to associate the non-occluded targets sub-optimally by a greedy algorithm according to the association cost function;
a matching association unit, configured to, when a target is an occluded target, establish a status flag for each target i_t in the top-view direction and, when the status flag of the current-frame target i_t is +1, matching-associate the targets whose status flags are −1 before and after the occlusion according to the n frames of side-view direction v nearest to the current frame, to ensure that the top-view target lies in the same side-view trajectory curve segment before and after the occlusion.
In the fish-school three-dimensional tracking method and system provided by the present invention, targets are represented by feature-point models during detection, which effectively reduces the difficulty of tracking; the trajectory consistency of a target across the stereo images removes the ambiguity of stereo matching under the epipolar constraint, improving the accuracy of tracking; and the proposed top-view-primary, side-view-auxiliary strategy solves the hardest problem in tracking, occlusion, markedly improving the occlusion-handling capability.
The system provided by the embodiment of the present invention has the same realization principle and technical effect as the foregoing method embodiment; for brevity, where the system embodiment is silent, reference may be made to the corresponding content of the foregoing method embodiment.
In all examples shown and described herein, any specific value should be interpreted as merely illustrative rather than limiting, so other examples of the exemplary embodiments may have different values. It should be noted that similar reference numerals and letters denote similar items in the following drawings; once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings.
In the several embodiments provided in this application, it should be understood that the disclosed device may be realized in other ways. The device embodiments described above are merely schematic; for example, the division into units is only a division by logical function, and other divisions are possible in actual realization: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. Moreover, the couplings, direct couplings or communication connections shown or discussed between parts may be indirect couplings or communication connections of devices or units through some communication interfaces, and may be electrical, mechanical or of other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual need to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, may each exist physically alone, or two or more units may be integrated into one unit.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any change or replacement readily conceivable by those familiar with the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the scope of the claims.

Claims (10)

1. A fish-school three-dimensional tracking method, characterized by comprising:
extracting the main skeleton of the target in the top-view direction and in side-view direction v respectively, wherein the main skeleton comprises a plurality of skeleton points and v = 1 or 2;
after screening three skeleton points from the main skeleton as feature points, representing the target in the top-view direction with a two-feature-point model and the target in side-view direction v with a three-feature-point model according to the apparent complexity of the target;
performing feature-point matching on the targets in the different viewing directions by an epipolar constraint, and/or performing feature-point matching on the targets in the different viewing directions by trajectory consistency;
classifying the matched targets as occluded targets and non-occluded targets according to the length or the endpoint count of the main skeleton, applying motion association to the non-occluded targets, and applying matching association to the occluded targets.
2. The fish-school three-dimensional tracking method according to claim 1, characterized in that said "extracting the main skeleton of the target in the top-view direction and in side-view direction v respectively" comprises:
acquiring images of the target in the top-view direction and in side-view direction v respectively;
segmenting the moving region of the target out of the acquired images in the different viewing directions by background subtraction, wherein the moving region comprises a plurality of image points;
extracting the main skeleton of the target from its images in the different viewing directions by a fast marching algorithm.
3. The fish-school three-dimensional tracking method according to claim 2, characterized in that the feature points comprise: a central feature point, a head feature point and a tail feature point;
said "after screening three skeleton points from the main skeleton as feature points, representing the target in the top-view direction with a two-feature-point model and the target in side-view direction v with a three-feature-point model according to the apparent complexity of the target" comprises:
screening from the main skeleton the skeleton point that best represents the shape of the target as the central feature point;
in side-view direction v, representing the target with a three-feature-point model composed of the two endpoints of the main skeleton and the central feature point;
in the top-view direction, obtaining the width of each skeleton point from its shortest distance to the edge of the moving region; computing the averages of the widths of all skeleton points between the central feature point and each of the two endpoints of the main skeleton, denoted the first mean width and the second mean width respectively; comparing the first mean width and the second mean width, dividing the main skeleton into a rigid region and a non-rigid region according to the comparison, and marking the endpoints of the main skeleton in the rigid region and the non-rigid region as the head feature point and the tail feature point respectively; and representing the target in the top-view direction with a two-feature-point model composed of the head feature point and the central feature point.
4. The fish-school three-dimensional tracking method according to claim 2, characterized in that said "performing feature-point matching on the targets in the different viewing directions by an epipolar constraint, and/or performing feature-point matching on the targets in the different viewing directions by trajectory consistency" comprises:
selecting eight matching points at the same positions of the target in the top-view image and in the side-view image v, and computing a fundamental matrix by the eight-point algorithm;
obtaining from the fundamental matrix the epipolar line in side-view direction v corresponding to the central feature point of the target in the top-view direction, and matching the central feature points of the top-view target and the side-view target using the epipolar line;
when the distance from the central feature point of the side-view target to the epipolar line is below a set stereo-matching threshold, stereo matching under the epipolar constraint succeeds, and otherwise stereo matching under the epipolar constraint fails;
when stereo matching under the epipolar constraint fails, stereo-matching the top-view target and the side-view targets by trajectory consistency.
5. The fish-school three-dimensional tracking method according to claim 1, characterized in that said "classifying the matched targets as occluded targets and non-occluded targets according to the length or the endpoint count of the main skeleton, applying motion association to the non-occluded targets, and applying matching association to the occluded targets" comprises:
judging whether the number of skeleton points of a target exceeds a stored maximum number of skeleton points in a main skeleton, and whether the number of endpoints of its main skeleton exceeds 2; when either judgment is affirmative, marking the target as an occluded target, and otherwise marking the target as a non-occluded target;
when a target is a non-occluded target, building the association cost function of target i_t from the position change pc(i_{t-1}, i_t) and the direction change dc(i_{t-1}, i_t) of the current-frame target i_t relative to the previous-frame target i_{t-1}:
c_v(i_{t-1}, i_t) = w · (dc(i_{t-1}, i_t) / dc_max) + (1 − w) · (pc(i_{t-1}, i_t) / pc_max)
wherein dc_max and pc_max are respectively the maximum direction change and the maximum position change between adjacent frames, w and 1 − w are respectively the weights of the direction change and the position change in the association cost function, and the non-occluded targets are associated sub-optimally by a greedy algorithm according to the association cost function;
when a target is an occluded target, establishing a status flag for each target i_t in the top-view direction, and, when the status flag of the current-frame target i_t is +1, matching-associating the targets whose status flags are −1 before and after the occlusion according to the n frames of side-view direction v nearest to the current frame, to ensure that the top-view target lies in the same side-view trajectory curve segment before and after the occlusion.
6. A fish-school three-dimensional tracking system, characterized by comprising:
a main skeleton extraction module, configured to extract the main skeleton of the target in the top-view direction and in side-view direction v respectively, wherein the main skeleton comprises a plurality of skeleton points and v = 1 or 2;
a feature point extraction module, configured to, after screening three skeleton points from the main skeleton as feature points, represent the target in the top-view direction with a two-feature-point model and the target in side-view direction v with a three-feature-point model according to the apparent complexity of the target;
a stereo matching module, configured to perform feature-point matching on the targets in the different viewing directions by an epipolar constraint and/or by trajectory consistency;
a target association module, configured to classify the matched targets as occluded targets and non-occluded targets according to the length or the endpoint count of the main skeleton, apply motion association to the non-occluded targets, and apply matching association to the occluded targets.
7. The fish-school three-dimensional tracking system according to claim 6, characterized in that the main skeleton extraction module comprises:
an image acquisition unit, configured to acquire images of the target in the top-view direction and in side-view direction v respectively;
a region segmentation unit, configured to segment the moving region of the target out of the acquired images in the different viewing directions by background subtraction, wherein the moving region comprises a plurality of image points;
a skeleton extraction unit, configured to extract the main skeleton of the target from its images in the different viewing directions by a fast marching algorithm.
8. The fish-school three-dimensional tracking system according to claim 7, characterized in that the feature points comprise: a central feature point, a head feature point and a tail feature point; the feature point extraction module comprises:
a screening unit, configured to screen from the main skeleton the skeleton point that best represents the shape of the target as the central feature point;
a three-feature-point model unit, configured to represent, in side-view direction v, the target with a three-feature-point model composed of the two endpoints of the main skeleton and the central feature point;
a two-feature-point model unit, configured to, in the top-view direction, obtain the width of each skeleton point from its shortest distance to the edge of the moving region; compute the averages of the widths of all skeleton points between the central feature point and each of the two endpoints of the main skeleton, denoted the first mean width and the second mean width respectively; compare the first mean width and the second mean width, divide the main skeleton into a rigid region and a non-rigid region according to the comparison, and mark the endpoints of the main skeleton in the rigid region and the non-rigid region as the head feature point and the tail feature point respectively; and represent the target in the top-view direction with a two-feature-point model composed of the head feature point and the central feature point.
9. The fish-school three-dimensional tracking system according to claim 7, characterized in that the stereo matching module comprises:
an epipolar constraint matching unit, configured to select eight matching points at the same positions of the target in the top-view image and in the side-view image v, compute a fundamental matrix by the eight-point algorithm, obtain from the fundamental matrix the epipolar line in side-view direction v corresponding to the central feature point of the top-view target, and match the central feature points of the top-view target and the side-view target using the epipolar line, wherein stereo matching under the epipolar constraint succeeds when the distance from the central feature point of the side-view target to the epipolar line is below a set stereo-matching threshold, and fails otherwise;
a trajectory consistency matching unit, configured to stereo-match the top-view target and the side-view targets by trajectory consistency when stereo matching under the epipolar constraint fails.
10. The fish-school three-dimensional tracking system according to claim 6, characterized in that the target association module comprises:
an occlusion judging unit, configured to judge whether the number of skeleton points of a target exceeds a stored maximum number of skeleton points in a main skeleton, and whether the number of endpoints of its main skeleton exceeds 2, the target being marked as an occluded target when either judgment is affirmative and as a non-occluded target otherwise;
a motion association unit, configured to, when a target is a non-occluded target, build the association cost function of target i_t from the position change pc(i_{t-1}, i_t) and the direction change dc(i_{t-1}, i_t) of the current-frame target i_t relative to the previous-frame target i_{t-1}:
c_v(i_{t-1}, i_t) = w · (dc(i_{t-1}, i_t) / dc_max) + (1 − w) · (pc(i_{t-1}, i_t) / pc_max)
wherein dc_max and pc_max are respectively the maximum direction change and the maximum position change between adjacent frames, and w and 1 − w are respectively the weights of the direction change and the position change in the association cost function, and to associate the non-occluded targets sub-optimally by a greedy algorithm according to the association cost function;
a matching association unit, configured to, when a target is an occluded target, establish a status flag for each target i_t in the top-view direction and, when the status flag of the current-frame target i_t is +1, matching-associate the targets whose status flags are −1 before and after the occlusion according to the n frames of side-view direction v nearest to the current frame, to ensure that the top-view target lies in the same side-view trajectory curve segment before and after the occlusion.
CN201710119403.1A 2017-03-02 2017-03-02 Shoal of fish three-dimensional tracking and system Active CN106875429B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710119403.1A CN106875429B (en) 2017-03-02 2017-03-02 Shoal of fish three-dimensional tracking and system

Publications (2)

Publication Number Publication Date
CN106875429A true CN106875429A (en) 2017-06-20
CN106875429B CN106875429B (en) 2018-02-02

Family

ID=59168420

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710119403.1A Active CN106875429B (en) 2017-03-02 2017-03-02 Shoal of fish three-dimensional tracking and system

Country Status (1)

Country Link
CN (1) CN106875429B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102236893A (en) * 2010-04-30 2011-11-09 中国人民解放军装备指挥技术学院 Space-position-forecast-based corresponding image point matching method for lunar surface image
CN103955688A (en) * 2014-05-20 2014-07-30 楚雄师范学院 Zebra fish school detecting and tracking method based on computer vision
CN104766346A (en) * 2015-04-15 2015-07-08 楚雄师范学院 Zebra fish tracking method based on video images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHI-MING QIAN et al.: "An effective and robust method for tracking multiple fish in video image based on fish head detection", BMC Bioinformatics (2016) *
FANG Fei, XIE Guangming: "An image-based dynamic tracking algorithm for robotic fish", Robot Technology and Application *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107818574A (en) * 2017-09-21 2018-03-20 楚雄师范学院 Shoal of fish three-dimensional tracking based on skeleton analysis
CN107818574B (en) * 2017-09-21 2021-08-27 楚雄师范学院 Fish shoal three-dimensional tracking method based on skeleton analysis
JP2023047371A (en) * 2021-09-27 2023-04-06 日本電気株式会社 Fish detection device, fish detection method and program
JP7287430B2 (en) 2021-09-27 2023-06-06 日本電気株式会社 Fish detection device, fish detection method and program

Also Published As

Publication number Publication date
CN106875429B (en) 2018-02-02

Similar Documents

Publication Publication Date Title
CN104766058B (en) A kind of method and apparatus for obtaining lane line
CN106650640B (en) Negative obstacle detection method based on laser radar point cloud local structure characteristics
CN108399643A (en) A kind of outer ginseng calibration system between laser radar and camera and method
CN112883820B (en) Road target 3D detection method and system based on laser radar point cloud
CN112598713A (en) Offshore submarine fish detection and tracking statistical method based on deep learning
CN110110797B (en) Water surface target training set automatic acquisition method based on multi-sensor fusion
WO2002058008A1 (en) Two and three dimensional skeletonization
CN107220647A (en) Crop location of the core method and system under a kind of blade crossing condition
CN109871829A (en) A kind of detection model training method and device based on deep learning
CN113012157B (en) Visual detection method and system for equipment defects
CN110111283A (en) The reminding method and system of infrared suspected target under a kind of complex background
CN116071283B (en) Three-dimensional point cloud image fusion method based on computer vision
CN109063549A (en) High-resolution based on deep neural network is taken photo by plane video moving object detection method
CN107704867A (en) Based on the image characteristic point error hiding elimination method for weighing the factor in a kind of vision positioning
CN106875429B (en) Shoal of fish three-dimensional tracking and system
CN110189390A (en) A kind of monocular vision SLAM method and system
CN113313047A (en) Lane line detection method and system based on lane structure prior
CN111913177A (en) Method and device for detecting target object and storage medium
CN110197206A (en) The method and device of image procossing
CN113096184A (en) Diatom positioning and identifying method under complex background
CN115685102A (en) Target tracking-based radar vision automatic calibration method
CN113222889B (en) Industrial aquaculture counting method and device for aquaculture under high-resolution image
CN112464933B (en) Intelligent identification method for weak and small target through foundation staring infrared imaging
CN110163103A (en) A kind of live pig Activity recognition method and apparatus based on video image
CN117173743A (en) Time sequence-related self-adaptive information fusion fish population tracking method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant