CN108305273B - Object detection method, device and storage medium - Google Patents

Object detection method, device and storage medium

Info

Publication number
CN108305273B
CN108305273B (application CN201711206483.0A)
Authority
CN
China
Prior art keywords
sub-image
current frame
image
frame
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711206483.0A
Other languages
Chinese (zh)
Other versions
CN108305273A
Inventor
陈超
吴伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Tencent Cloud Computing Beijing Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201711206483.0A priority Critical patent/CN108305273B/en
Publication of CN108305273A publication Critical patent/CN108305273A/en
Application granted granted Critical
Publication of CN108305273B publication Critical patent/CN108305273B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/215 - Motion-based segmentation
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G06F 18/23 - Clustering techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention disclose an object detection method, device and storage medium, applied to the field of information processing technology. When the object detection device detects objects in the current-frame left image and current-frame right image captured by a binocular camera, it first segments the current-frame left image and current-frame right image, or the current-frame left image and current-frame right image after at least one round of processing, into current-frame sub-images; it then performs tracking and matching between corresponding sub-images to obtain first motion information and actual position information of multiple current-frame sub-images; finally, it clusters the multiple current-frame sub-images, the current-frame sub-images in each resulting cluster representing one object, so that the corresponding object can be identified from the current-frame sub-images in each cluster. In this way, the reliability of tracking and matching between sub-images is high, the objects in the image can be identified accurately, and the computation required for tracking and matching is small.

Description

Object detection method, device and storage medium
Technical field
The present invention relates to the field of information processing technology, and in particular to an object detection method, device and storage medium.
Background art
Vision-based object detection mainly refers to identifying target objects (including but not limited to pedestrians, vehicles, trees, etc.) in pictures or videos captured by a camera using certain program algorithms. Vision-based object detection technology is widely used in many fields such as robotics, autonomous vehicles and security surveillance.
At present, common object detection methods are based on images captured by a binocular camera. One method performs stereo matching on the two images currently captured by the binocular camera to obtain depth information, and then detects each object in the images according to the depth information; the detection effect of this method is poor.
Another method extracts feature points from the four images captured by the binocular camera over two consecutive frames and performs three-dimensional reconstruction to compute the scene flow of the images, and finally clusters feature points with similar motion according to the scene flow, thereby obtaining the objects in the scene. Compared with the former method, its detection effect is improved, but it is mainly based on processing single pixels; because a single pixel is not stable, the robustness of this algorithm is poor. Moreover, this method requires extracting and matching a large number of pixel features for every frame, so the computation is heavy.
Summary of the invention
Embodiments of the present invention provide an object detection method, device and storage medium, which identify each object in a current-frame left image and a current-frame right image according to first motion information and actual position information corresponding to multiple current-frame sub-images.
A first aspect of the embodiments of the present invention provides an object detection method, comprising:
obtaining a current-frame left image and a current-frame right image of a binocular camera;
segmenting the current-frame left image and the current-frame right image respectively to obtain corresponding current-frame sub-images;
performing tracking and matching between previous-frame sub-images and the current-frame sub-images to obtain image-based first motion information corresponding to multiple current-frame sub-images, the previous-frame sub-images being the sub-images corresponding to the previous-frame left image and previous-frame right image of the current frame;
performing tracking and matching between first current-frame sub-images and second current-frame sub-images to obtain actual position information corresponding to the multiple current-frame sub-images, the first current-frame sub-images being the current-frame sub-images corresponding to the current-frame left image, and the second current-frame sub-images being the current-frame sub-images corresponding to the current-frame right image;
clustering the multiple current-frame sub-images according to the first motion information and the actual position information corresponding to the multiple current-frame sub-images, the current-frame sub-images included in each resulting cluster representing one object; and
identifying the object represented by the current-frame sub-images in each cluster.
A second aspect of the embodiments of the present invention provides an object detection device, comprising:
an image acquisition unit, configured to obtain a current-frame left image and a current-frame right image of a binocular camera;
a segmentation unit, configured to segment the current-frame left image and the current-frame right image respectively to obtain corresponding current-frame sub-images;
a tracking and matching unit, configured to perform tracking and matching between previous-frame sub-images and the current-frame sub-images to obtain image-based first motion information corresponding to multiple current-frame sub-images, and to perform tracking and matching between first current-frame sub-images and second current-frame sub-images to obtain actual position information corresponding to the multiple current-frame sub-images; wherein the previous-frame sub-images are the sub-images corresponding to the previous-frame left image and previous-frame right image of the current frame, the first current-frame sub-images are the current-frame sub-images corresponding to the current-frame left image, and the second current-frame sub-images are the current-frame sub-images corresponding to the current-frame right image;
a clustering unit, configured to cluster the multiple current-frame sub-images according to the first motion information and the actual position information corresponding to the multiple current-frame sub-images, the current-frame sub-images included in each resulting cluster representing one object; and
an object identification unit, configured to identify the object represented by the current-frame sub-images in each cluster.
A third aspect of the embodiments of the present invention provides a storage medium storing a plurality of instructions, the instructions being adapted to be loaded and executed by a processor to perform the object detection method according to the first aspect of the embodiments of the present invention.
A fourth aspect of the embodiments of the present invention provides a terminal device, comprising a processor and a storage medium, the processor being configured to implement each instruction;
the storage medium being configured to store a plurality of instructions, the instructions being loaded and executed by the processor to perform the object detection method according to the first aspect of the embodiments of the present invention.
It can be seen that when the object detection device of the embodiment of the present invention detects objects in the current-frame left image and current-frame right image captured by a binocular camera, it first needs to segment the current-frame left image and current-frame right image, or the current-frame left image and current-frame right image after at least one round of processing, into current-frame sub-images; it then performs tracking and matching between the previous-frame sub-images and the current-frame sub-images, and between the first current-frame sub-images and the second current-frame sub-images, to obtain the first motion information and actual position information of multiple current-frame sub-images; finally, it clusters the multiple current-frame sub-images, the current-frame sub-images in each resulting cluster representing one object, so that the corresponding object can be identified from the current-frame sub-images in each cluster. When tracking and matching is performed between sub-images in this way, the information contained in a sub-image is far greater than that of a single pixel, so the reliability of tracking and matching is higher and the objects in the image can be identified accurately. In addition, the number of pixels in a sub-image is usually in the hundreds; compared with the nearly one million pixels in a whole frame, the computation required for tracking and matching between sub-images is much smaller.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an object detection method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of a method for obtaining the first motion information in an embodiment of the present invention;
Fig. 3 is a flowchart of a method for obtaining the actual position information in an embodiment of the present invention;
Fig. 4 is a flowchart of an object detection method provided by an application embodiment of the present invention;
Fig. 5a is a schematic diagram of the current-frame left image and current-frame right image in an application embodiment of the present invention;
Fig. 5b is a schematic diagram of the current-frame sub-images obtained by segmentation in an application embodiment of the present invention;
Fig. 6 is a schematic diagram of the previous-frame sub-images obtained by segmentation in an application embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an object detection device provided by an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of another object detection device provided by an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a terminal device provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terms "first", "second", "third", "fourth", etc. (if any) in the specification, claims and the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present invention described herein can be implemented in an order other than that illustrated or described herein. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to those steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to such a process, method, product or device.
Embodiments of the present invention provide an object detection method that can be applied to many fields such as robotics, autonomous vehicles and security surveillance, and in particular to specific application apparatuses (such as robots, autonomous vehicles and security surveillance systems). These application apparatuses all include a binocular camera and an object detection device, so that after the binocular camera captures images, the object detection device detects the objects contained in the images captured by the binocular camera. Specifically, when performing object detection, the object detection device:
obtains a current-frame left image and a current-frame right image of the binocular camera; segments the current-frame left image and the current-frame right image respectively, or segments the current-frame left image and current-frame right image after at least one round of processing, to obtain corresponding current-frame sub-images; performs tracking and matching between previous-frame sub-images and the current-frame sub-images to obtain image-based first motion information corresponding to multiple current-frame sub-images, the previous-frame sub-images being the sub-images corresponding to the previous-frame left image and previous-frame right image of the current frame; performs tracking and matching between first current-frame sub-images and second current-frame sub-images to obtain actual position information corresponding to the multiple current-frame sub-images, the first current-frame sub-images being the current-frame sub-images corresponding to the current-frame left image and the second current-frame sub-images being the current-frame sub-images corresponding to the current-frame right image; clusters the multiple current-frame sub-images according to their first motion information and actual position information, the current-frame sub-images included in each resulting cluster representing one object; and identifies the object represented by the current-frame sub-images in each cluster.
When tracking and matching is performed between sub-images in this way, the information contained in a sub-image is far greater than that of a single pixel, so the reliability of tracking and matching is higher and the objects in the image can be identified accurately. In addition, the number of pixels in a sub-image is usually in the hundreds; compared with the nearly one million pixels in a whole frame, the computation required for tracking and matching between sub-images is much smaller.
Embodiments of the present invention provide an object detection method, which is mainly the method performed by the above object detection device. Its flowchart is shown in Fig. 1 and comprises:
Step 101: obtain the current-frame left image and current-frame right image of the binocular camera, i.e., two images in total.
A binocular camera generally comprises two monocular cameras for imaging, referred to as the left camera and the right camera; the two monocular cameras are arranged in the same plane of the binocular camera, and the distance between them is greater than a certain value. In practical applications, the binocular camera is generally used in fields such as robotics, autonomous vehicles or security surveillance. Specifically, the binocular camera may capture images at certain time intervals; the images captured at a certain moment include the left image and right image captured respectively by the left camera and right camera of the binocular camera, i.e., a certain-frame left image and a certain-frame right image.
When the binocular camera captures a frame (i.e., the current frame) of left image and the same frame of right image, the object detection device can start the procedure of this embodiment.
Step 102: segment the current-frame left image and current-frame right image respectively to obtain corresponding current-frame sub-images, or segment the current-frame left image and current-frame right image after at least one round of processing to obtain corresponding current-frame sub-images.
Here, when the object detection device segments an image (such as the current-frame left image or current-frame right image), it does not need to perform fine segmentation; it only needs to produce an over-segmentation result. Over-segmentation means that a complete object may be divided into sub-images of one or more regions, but a resulting sub-image cannot span two or more objects at the same time. Compared with fine image segmentation, the requirement of over-segmentation is lower, and the algorithm complexity and running time required are greatly reduced.
For example, a pedestrian image may be segmented into a complete human-shaped image, or into an upper-body image and a lower-body image, or into a torso image and limb images, etc., but two pedestrians cannot be segmented into the same sub-image.
Specifically, the object detection device may perform segmentation using a region growing method: an initial sub-region is selected, and starting from the initial sub-region, adjacent pixels (or other sub-regions) with the same properties are merged into the initial sub-region, so that the region grows gradually until there are no more pixels or sub-regions to merge. The object detection device may also perform segmentation using methods such as splitting and merging.
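As an illustration only, the following is a minimal sketch of the region growing idea described above, assuming a grayscale image stored in a NumPy array and a fixed intensity-similarity threshold; an actual implementation would typically also cap the region size so that the result remains an over-segmentation.

```python
import numpy as np
from collections import deque

def region_grow_oversegment(gray, threshold=10):
    """Label connected pixels whose intensity stays close to their seed's intensity.

    gray: 2-D uint8 array; returns an int32 label map of the same shape.
    """
    h, w = gray.shape
    labels = np.zeros((h, w), dtype=np.int32)  # 0 = unassigned
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx]:
                continue
            next_label += 1
            seed_val = int(gray[sy, sx])
            labels[sy, sx] = next_label
            queue = deque([(sy, sx)])
            while queue:  # grow until no neighbouring pixel can be merged
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not labels[ny, nx]
                            and abs(int(gray[ny, nx]) - seed_val) <= threshold):
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
    return labels
```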
It should be noted that the object detection device may directly segment the images obtained in the above step 101, or may first process the current-frame left image and current-frame right image at least once and then segment the processed current-frame left image and current-frame right image. The at-least-once processing of an image mainly includes, but is not limited to, the following:
(1) Performing distortion correction on the current-frame left image and the current-frame right image respectively.
Distortion correction of an image here means correcting the portions of the image that are deformed compared with the real object. Specifically, the object detection device may perform distortion correction on the current-frame left image and current-frame right image respectively according to the intrinsic parameters of the binocular camera. The intrinsic parameters of the binocular camera refer to the deformation indices between the captured image and the real object when the left and right cameras of the binocular camera capture images, i.e., the distortion parameters.
For example, distortion correction may be performed on the current-frame left image according to the distortion parameters of the left camera of the binocular camera, and on the current-frame right image according to the distortion parameters of the right camera of the binocular camera. Specifically, curved lines at the edges of the image can be corrected into straight lines, so that the corrected image truly reflects the real object.
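For illustration, a sketch of per-camera distortion correction using OpenCV is given below; the intrinsic matrix and distortion coefficients are placeholder values standing in for the calibrated distortion parameters of the left (or right) camera, not values from this patent.

```python
import cv2
import numpy as np

# Placeholder intrinsics/distortion for one camera (would come from calibration).
K_left = np.array([[700.0, 0.0, 320.0],
                   [0.0, 700.0, 240.0],
                   [0.0, 0.0, 1.0]])
dist_left = np.array([-0.28, 0.07, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

def undistort_frame(img, K, dist):
    """Correct lens distortion so that straight edges in the scene appear straight."""
    return cv2.undistort(img, K, dist)

# Usage: left_corrected = undistort_frame(current_left_image, K_left, dist_left)
```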
(2) Rectifying the current-frame left image and current-frame right image respectively according to the extrinsic parameters of the binocular camera, where the extrinsic parameters of the binocular camera refer to the relationship between the left and right cameras of the binocular camera, such as their spacing, whether the left or right camera is rotated, and the angle between the line connecting the left and right cameras and the vertical line (the vertical line here being the line perpendicular to the horizontal line).
For example, if one of the left and right cameras is rotated by an angle β, then when rectifying the current-frame left image and current-frame right image, the object detection device may take the image of the other camera (e.g., the current-frame left image) as the reference and rotate the image of the rotated camera (e.g., the current-frame right image) by the angle β in one direction;
if the angle α between the line connecting the left and right cameras of the binocular camera and the vertical line is not 90 degrees, this indicates that the left and right cameras mounted in the binocular camera are one in front of the other and not in the same plane. In this case, when rectifying the current-frame left image and current-frame right image, the object detection device may take the current-frame left image as the reference and enlarge the current-frame right image by a certain factor, the enlargement factor having a certain functional relationship with the angle α; alternatively, it may take the current-frame right image as the reference and shrink the current-frame left image by a certain factor, the shrinking factor having a certain functional relationship with the angle α.
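The following is a simplified sketch of the rectification just described: one image is kept as the reference while the other is rotated by the angle β and scaled by a factor assumed to be derived from the angle α. The exact scale function is not specified here, so it is passed in as a parameter; in practice a full stereo rectification (e.g., OpenCV's stereoRectify) is often used instead.

```python
import cv2

def align_right_to_left(right_img, beta_deg=0.0, scale=1.0):
    """Rotate the right image by beta and scale it, keeping the left image as reference."""
    h, w = right_img.shape[:2]
    center = (w / 2.0, h / 2.0)
    # One affine transform that applies both the rotation and the scale factor.
    M = cv2.getRotationMatrix2D(center, beta_deg, scale)
    return cv2.warpAffine(right_img, M, (w, h))
```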
Further, if the current-frame left image and current-frame right image are the first-frame left image and first-frame right image, i.e., the images captured by the binocular camera for the first time after being turned on, the object detection device may store the current-frame sub-images obtained by segmentation after performing this step 102, and end the procedure; if the current-frame left image and current-frame right image are not the first-frame left image and first-frame right image, the object detection device also needs to perform the following steps 103 to 106.
Step 103: perform tracking and matching between the previous-frame sub-images and each current-frame sub-image obtained in the above step 102, to obtain image-based first motion information corresponding to multiple current-frame sub-images among the current-frame sub-images segmented in step 102. The previous-frame sub-images are the sub-images corresponding to the previous-frame left image and previous-frame right image of the current frame, obtained by processing the previous-frame left image and previous-frame right image according to the method of the above step 102.
The tracking and matching of this step mainly performs information matching between each previous-frame sub-image and the corresponding current-frame sub-image, to obtain, for each of the multiple current-frame sub-images, the first motion information of that current-frame sub-image compared with the previous-frame sub-image. The information matching between any two sub-images may be matching of the information of feature points extracted from the two sub-images, or matching of the pixel values of the two sub-images.
The first motion information of a current-frame sub-image mainly refers to the motion information of that current-frame sub-image within the image, and may specifically include direction information and an image-based velocity value. For example, the first motion information of a current-frame sub-image may be: moving horizontally to the right at a speed of one pixel per second.
It should be noted that in one case, the object detection device may not obtain, after this step 103, the first motion information corresponding to all the current-frame sub-images segmented in the above step 102, but only the first motion information corresponding to a part of them (i.e., the multiple current-frame sub-images).
For example, the previous-frame left image and previous-frame right image are images of a certain section of road at a certain moment and contain the images of 3 vehicles 1, 2 and 3; the current-frame left image and current-frame right image are images of the same section of road at the next moment and contain the images of 4 vehicles 1, 2, 3 and 4. Vehicle 4 does not appear in the previous-frame left image or previous-frame right image, so after this step 103 the object detection device cannot obtain the first motion information corresponding to the current-frame sub-images of vehicle 4.
In another case, the object detection device may obtain, after this step 103, the first motion information corresponding to all the current-frame sub-images segmented in the above step 102 (i.e., the multiple current-frame sub-images).
For example, the previous-frame left image and previous-frame right image are images of a certain section of road at a certain moment and contain the images of 3 vehicles 1, 2 and 3, and the current-frame left image and current-frame right image are images of the same section of road at the next moment, also containing the images of the 3 vehicles 1, 2 and 3. Since no new vehicle appears in the current-frame left image and current-frame right image compared with the previous-frame left image and previous-frame right image, after this step 103 the object detection device obtains the first motion information corresponding to all the current-frame sub-images segmented in the above step 102.
Step 104: perform tracking and matching between the first current-frame sub-images and the second current-frame sub-images to obtain actual position information corresponding to multiple current-frame sub-images, where the first current-frame sub-images are the current-frame sub-images corresponding to the current-frame left image, and the second current-frame sub-images are the current-frame sub-images corresponding to the current-frame right image.
The tracking and matching of this step mainly performs information matching between the first current-frame sub-images and the second current-frame sub-images, to obtain the actual position information of the objects represented by the multiple current-frame sub-images relative to the binocular camera. The information matching between any two sub-images may be matching of the information of feature points extracted from the two sub-images, or matching of the pixel values of the two sub-images.
The actual position information of a current-frame sub-image mainly refers to the position of that current-frame sub-image relative to the binocular camera, and may specifically include position coordinate information; for example, the actual position information of a current-frame sub-image may be (x, y).
It should be noted that after this step 104 the object detection device may not obtain the actual position information corresponding to all the current-frame sub-images segmented in the above step 102, but only the actual position information corresponding to a part of them (i.e., the multiple current-frame sub-images).
For example, the current-frame left image and current-frame right image are images of a certain section of road at a certain moment and contain the images of 4 vehicles 1, 2, 3 and 4, where vehicle 4 does not appear in the previous-frame left image or previous-frame right image; then after this step 104 the object detection device may not need to obtain the actual position information corresponding to the current-frame sub-images of vehicle 4.
Step 105: cluster the multiple current-frame sub-images according to the first motion information and actual position information corresponding to the multiple current-frame sub-images; the current-frame sub-images included in each resulting cluster represent one object.
Specifically, in the first case, the object detection device needs to obtain, according to the first motion information and actual position information corresponding to each current-frame sub-image, the actual second motion information of the sub-object represented by each current-frame sub-image. The second motion information includes direction information and an actual velocity value; the actual velocity value here is different from the image-based velocity value in the first motion information, e.g., an actual velocity value of t metres per second.
The object detection device then clusters the multiple current-frame sub-images according to their second motion information and actual position information; the clustering result obtained in this way is more accurate. Specifically, if the deviation between the direction information of two current-frame sub-images is within a preset range, the difference between their actual velocity values is within a preset range, and the difference between their actual position information is within a preset range, the two current-frame sub-images are merged into the same cluster.
For example, the second motion information corresponding to current-frame sub-image 1 is moving horizontally to the left at 10 metres per second, and its actual position information is (20, 10); the second motion information corresponding to current-frame sub-image 2 is moving horizontally to the left at 9 metres per second, and its actual position information is (22, 12); the second motion information corresponding to current-frame sub-image 3 is moving to the upper left at 13 metres per second at an angle of 10 degrees to the horizontal, and its actual position information is (23, 13). Current-frame sub-images 1, 2 and 3 may then be merged into the same cluster.
In the second case, the object detection device may not need to obtain the second motion information, and directly clusters the multiple current-frame sub-images according to their first motion information and actual position information. Specifically, if the deviation between the direction information of two current-frame sub-images is within a preset range, the difference between their image-based velocity values is within a preset range, and the difference between their actual position information is within a preset range, the two current-frame sub-images are merged into the same cluster.
It should be noted that for an object with a given actual velocity value, the image-based velocity values of the individual sub-images of that object are not necessarily identical; for example, in a certain frame of left image and right image, the image-based velocity value of one sub-image 1 of the object is a1 pixels per second, while that of another sub-image 2 of the object is a2 pixels per second. Therefore, clustering according to the second motion information and actual position information corresponding to the multiple current-frame sub-images gives a better result.
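A minimal sketch of the clustering rule described above, assuming each current-frame sub-image has been summarized by a motion direction (unit vector), an actual speed in metres per second and an actual position; the thresholds are illustrative assumptions, not values taken from this embodiment.

```python
import numpy as np

def cluster_subimages(directions, speeds, positions,
                      max_angle_deg=15.0, max_speed_diff=4.0, max_dist=5.0):
    """Greedy merge: sub-images with similar motion and nearby positions share a cluster."""
    clusters = []
    for i in range(len(speeds)):
        placed = False
        for members in clusters:
            j = members[0]  # compare against the cluster's first member
            cos_sim = float(np.dot(directions[i], directions[j]))
            angle = np.degrees(np.arccos(np.clip(cos_sim, -1.0, 1.0)))
            dist = np.linalg.norm(np.asarray(positions[i]) - np.asarray(positions[j]))
            if (angle <= max_angle_deg
                    and abs(speeds[i] - speeds[j]) <= max_speed_diff
                    and dist <= max_dist):
                members.append(i)
                placed = True
                break
        if not placed:
            clusters.append([i])
    return clusters

# Sub-images 1, 2 and 3 from the example above fall into a single cluster.
dirs = [np.array([-1.0, 0.0]), np.array([-1.0, 0.0]), np.array([-0.98, 0.17])]
print(cluster_subimages(dirs, [10.0, 9.0, 13.0], [(20, 10), (22, 12), (23, 13)]))
```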
Step 106: identify the object represented by the current-frame sub-images in each cluster.
Specifically, the object detection device may use a pre-trained classifier such as a support vector machine (SVM), a random forest model or a convolutional neural network (CNN) to perform type identification on each current-frame sub-image included in each cluster, and then combine the types identified for the individual current-frame sub-images into one object, such as a vehicle, a pedestrian or a tree.
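Purely as a toy sketch of this step, the snippet below assumes each sub-image has already been reduced to a fixed-length feature vector and that scikit-learn is available; the SVM is trained on random stand-in data just so the example runs, and the cluster's label is taken as the majority vote over its sub-images.

```python
import numpy as np
from collections import Counter
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in training data: 64-dim features, labels 0=pedestrian, 1=vehicle, 2=tree.
train_x = rng.normal(size=(300, 64))
train_y = rng.integers(0, 3, size=300)
clf = SVC(kernel="rbf").fit(train_x, train_y)

def identify_cluster(subimage_features):
    """Classify every sub-image in a cluster, then vote for the cluster's object type."""
    predictions = clf.predict(np.asarray(subimage_features))
    return Counter(predictions.tolist()).most_common(1)[0][0]

cluster_features = rng.normal(size=(3, 64))  # e.g. features of sub-images n11, n15, n17
print(identify_cluster(cluster_features))
```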
It should be noted that after performing the above step 102 and obtaining the current-frame sub-images, the object detection device may also delete the locally stored previous-frame sub-images and store the current-frame sub-images obtained by segmentation. In this way, when step 103 of this embodiment is performed for the next-frame left image and next-frame right image of the current frame, the locally stored current-frame sub-images can be used, and deleting the previously stored previous-frame sub-images saves local storage space.
It can be seen that when the object detection device of the embodiment of the present invention detects objects in the current-frame left image and current-frame right image captured by a binocular camera, it first needs to segment the current-frame left image and current-frame right image, or the current-frame left image and current-frame right image after at least one round of processing, into current-frame sub-images; it then obtains the first motion information and actual position information of multiple current-frame sub-images through tracking and matching between corresponding sub-images; finally, it clusters the multiple current-frame sub-images according to the first motion information and actual position information, the current-frame sub-images in each resulting cluster representing one object, so that the corresponding object can be identified from the current-frame sub-images in each cluster. Because the information contained in a sub-image is far greater than that of a single pixel, the reliability of tracking and matching between sub-images is higher and the objects in the image can be identified accurately; in addition, the number of pixels in a sub-image is usually in the hundreds, so compared with the nearly one million pixels in a whole frame, the computation required for tracking and matching between sub-images is much smaller.
Referring to Fig. 2, in a specific embodiment, the above step 103 may be implemented by the following steps:
Step 201: in the first current-frame sub-images, determine first candidate sub-images corresponding to each first previous-frame sub-image, where the first previous-frame sub-images are the previous-frame sub-images corresponding to the previous-frame left image. In this way, the region in which each sub-image of a certain-frame left image is likely to appear in the next-frame left image, i.e., the first candidate sub-images, can be predicted.
Step 202: in the second current-frame sub-images, determine second candidate sub-images corresponding to each second previous-frame sub-image, where the second previous-frame sub-images are the previous-frame sub-images corresponding to the previous-frame right image. In this way, the region in which each sub-image of a certain-frame right image is likely to appear in the next-frame right image, i.e., the second candidate sub-images, can be predicted.
Specifically, for the tracking and matching in steps 201 and 202, the object detection device may perform tracking and matching based on a Kalman filtering algorithm, or based on algorithms such as particle filtering.
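A minimal sketch of Kalman-filter-based prediction for a sub-image centre under a constant-velocity motion model, using OpenCV's KalmanFilter; the state is (x, y, vx, vy), the measurement is the observed centre, and the final prediction is where candidate sub-images would be searched for in the next frame. The noise settings are illustrative assumptions.

```python
import cv2
import numpy as np

def make_cv_kalman(dt=1.0):
    """Constant-velocity Kalman filter over state (x, y, vx, vy), measuring (x, y)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                    [0, 1, 0, dt],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.errorCovPost = np.eye(4, dtype=np.float32)
    return kf

kf = make_cv_kalman()
kf.statePost = np.array([[100], [50], [0], [0]], dtype=np.float32)  # initial centre
for cx, cy in [(90, 50), (80, 50), (70, 50)]:  # observed sub-image centres per frame
    kf.predict()
    kf.correct(np.array([[cx], [cy]], dtype=np.float32))
next_centre = kf.predict()[:2].ravel()  # where to look for candidate sub-images next frame
print(next_centre)
```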
For example, the current-frame sub-images obtained in the above step 102 are n1, n2, ..., nm, and the previous-frame sub-images are k1, k2, ..., kp, where the first current-frame sub-images are n1, n2, ..., ni, the second current-frame sub-images are ni+1, ni+2, ..., nm, the first previous-frame sub-images are k1, k2, ..., kj, and the second previous-frame sub-images are kj+1, kj+2, ..., kp. Then, according to step 201, the object detection device may obtain that the first candidate sub-images corresponding to the first previous-frame sub-image k1 are n3, n4 and n5, ..., and the first candidate sub-images corresponding to the first previous-frame sub-image kj are ni-1, ni-2 and ni-4; according to step 202, the second candidate sub-images corresponding to the second previous-frame sub-image kj+1 are ni+3, ni+4 and ni+5, ..., and the second candidate sub-images corresponding to the second previous-frame sub-image kp are nm-1, nm-2 and nm-4.
It should be noted that if the previous frame is the initial frame in which an object appears and the current frame is the second frame of that object, there is no reference information for predicting, from the previous-frame sub-images, the regions in which the object is likely to appear in the current-frame left image and current-frame right image; in that case, the first candidate sub-images and second candidate sub-images determined by the object detection device may be the current-frame sub-images corresponding to the current-frame left image and current-frame right image within a larger range.
If the previous frame is the i-th frame (i is greater than or equal to 2) in which the object appears and the current frame is the (i+1)-th frame of that object, then when performing step 201 the object detection device may determine the first candidate sub-images according to the first motion information of the previous-frame sub-image of the object determined for the previous-frame left image; when performing step 202, it may determine the second candidate sub-images according to the first motion information of the previous-frame sub-image of the object determined for the previous-frame right image.
For example, the first motion information of a previous-frame sub-image a of an object determined for the previous-frame left image is: moving horizontally to the left at a speed of 10 pixels per second, the time interval between the current frame and the previous frame is 1 second, and the position of the previous-frame sub-image a in the previous-frame left image is (x, y). The object detection device may first move the position (x, y) of the previous-frame sub-image a in the previous-frame left image horizontally to the left by 10 pixels to obtain a position (x1, y1); then determine, in the current-frame left image, the current-frame sub-image b located at the position (x1, y1); and finally determine the current-frame sub-image b and its n neighbouring current-frame sub-images as the first candidate sub-images corresponding to the previous-frame sub-image a.
Step 203: select, from the first candidate sub-images, the first best sub-image corresponding to the first previous-frame sub-image. Specifically, the object detection device may compute the information matching value between each sub-image in the first candidate sub-images and the first previous-frame sub-image; if the information matching value between a first sub-image in the first candidate sub-images and the first previous-frame sub-image is the smallest and is within a preset range, the first sub-image is selected as the first best sub-image. This indicates that the object represented by the first best sub-image is the same as the object represented by the first previous-frame sub-image.
The information matching value between a sub-image in the first candidate sub-images and the first previous-frame sub-image specifically refers to the distance between the pixel information of that sub-image and the pixel information of the first previous-frame sub-image, or the distance between the feature extraction value of that sub-image and the feature extraction value of the first previous-frame sub-image.
The object detection device may also select, from the second candidate sub-images, the second best sub-image corresponding to the second previous-frame sub-image. Specifically, it computes the information matching value between each sub-image in the second candidate sub-images and the second previous-frame sub-image; if the information matching value between a second sub-image in the second candidate sub-images and the second previous-frame sub-image is the smallest and is within a preset range, the second sub-image is selected as the second best sub-image.
The information matching value between a sub-image in the second candidate sub-images and the second previous-frame sub-image specifically refers to the distance between the pixel information of that sub-image and the pixel information of the second previous-frame sub-image, or the distance between the feature extraction value of that sub-image and the feature extraction value of the second previous-frame sub-image.
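The two kinds of information matching value mentioned above can be sketched as follows, assuming grayscale uint8 patches: a pixel-level distance computed after resizing both patches to a common size, and a feature-level distance between simple intensity-histogram descriptors. The best match is the candidate with the smallest value, provided it is within a preset range (the threshold here is an assumed placeholder).

```python
import cv2
import numpy as np

def pixel_match_value(patch_a, patch_b, size=(32, 32)):
    """Mean squared pixel difference after resizing both patches to a common size."""
    a = cv2.resize(patch_a, size).astype(np.float32)
    b = cv2.resize(patch_b, size).astype(np.float32)
    return float(np.mean((a - b) ** 2))

def feature_match_value(patch_a, patch_b, bins=32):
    """Distance between simple grayscale-histogram descriptors of the two patches."""
    ha = cv2.calcHist([patch_a], [0], None, [bins], [0, 256]).ravel()
    hb = cv2.calcHist([patch_b], [0], None, [bins], [0, 256]).ravel()
    ha /= ha.sum() + 1e-9
    hb /= hb.sum() + 1e-9
    return float(np.linalg.norm(ha - hb))

def best_match(reference, candidates, max_value=500.0):
    """Pick the candidate with the smallest matching value, if within the preset range."""
    values = [pixel_match_value(reference, c) for c in candidates]
    i = int(np.argmin(values))
    return i if values[i] <= max_value else None
```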
Step 204: according to the first best sub-image and the corresponding first previous-frame sub-image, and the second best sub-image and the corresponding second previous-frame sub-image, determine the first motion information corresponding to the first best sub-image and the second best sub-image, thereby obtaining the first motion information corresponding to the multiple current-frame sub-images.
Since the first best sub-image corresponding to the first previous-frame sub-image is a first current-frame sub-image, and the second best sub-image corresponding to the second previous-frame sub-image is a second current-frame sub-image, when performing this step the object detection device may determine the direction information and image-based velocity value corresponding to the first best sub-image according to position 1 (x1, y1) of the first best sub-image in the current-frame left image and position 2 (x2, y2) of the first previous-frame sub-image in the previous-frame left image. Specifically, the direction information is the direction of position 2 (x2, y2) relative to position 1 (x1, y1), and the image-based velocity value is the pixel distance between position 1 (x1, y1) and position 2 (x2, y2).
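A small sketch of this computation, assuming the two positions are sub-image centres in pixel coordinates and that the frame interval is known:

```python
import math

def first_motion_info(prev_pos, cur_pos, dt=1.0):
    """Image-based motion: direction angle (degrees) and speed in pixels per second."""
    dx = cur_pos[0] - prev_pos[0]
    dy = cur_pos[1] - prev_pos[1]
    direction_deg = math.degrees(math.atan2(dy, dx))
    speed_px_per_s = math.hypot(dx, dy) / dt
    return direction_deg, speed_px_per_s

# Example: a sub-image moves from (120, 80) to (110, 80) in one second,
# i.e., direction 180 degrees (to the left) at 10 pixels per second.
print(first_motion_info((120, 80), (110, 80)))
```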
Referring to Fig. 3, in another specific embodiment, the above step 104 may be implemented by the following steps:
Step 301: in the second current-frame sub-images, determine third candidate sub-images corresponding to the first current-frame sub-image. In this way, in one image of a certain frame (e.g., the right image), the region in which each sub-image of the other image of the frame (e.g., the left image) is likely to appear, i.e., the third candidate sub-images, can be predicted.
For example, the current-frame sub-images obtained in the above step 102 are n1, n2, ..., nm, where the first current-frame sub-images are n1, n2, ..., ni and the second current-frame sub-images are ni+1, ni+2, ..., nm. Then, according to this step 301, the object detection device may obtain that the third candidate sub-images corresponding to the first current-frame sub-image n1 are ni+3, ni+4 and ni+5, ..., and the third candidate sub-images corresponding to the first current-frame sub-image ni are nm-1, nm-3 and nm-5.
It should be noted that if the previous frame is the initial frame in which an object appears and the current frame is the second frame of that object, there is no reference information for predicting, from the first current-frame sub-images, the region in which the object is likely to appear in the current-frame right image; in that case, the third candidate sub-images determined by the object detection device may be the current-frame sub-images corresponding to the current-frame right image within a larger range.
If the previous frame is the i-th frame (i is greater than or equal to 2) in which the object appears and the current frame is the (i+1)-th frame of that object, then when performing step 301 the object detection device may determine the third candidate sub-images according to the actual position information of the previous-frame sub-image of the object determined for the previous-frame left image and previous-frame right image.
For example, the actual position information of the previous-frame sub-image a of an object determined for the previous-frame left image and previous-frame right image is (x, y), and the first current-frame sub-image of the object is denoted b. The object detection device may first compute the position (x2, y2) of the object represented by the first current-frame sub-image b in the current-frame right image according to the actual position information (x, y), the position (x1, y1) of the first current-frame sub-image b in the current-frame left image, and the distance between the left and right cameras of the binocular camera; then determine, in the current-frame right image, the second current-frame sub-image c located at the position (x2, y2); and finally determine the second current-frame sub-image c and its n neighbouring second current-frame sub-images as the third candidate sub-images corresponding to the first current-frame sub-image b.
Step 302: select, from the third candidate sub-images, the third best sub-image corresponding to the first current-frame sub-image.
Specifically, the object detection device may first compute the information matching value between each sub-image in the third candidate sub-images and the first current-frame sub-image; if the information matching value between a third sub-image in the third candidate sub-images and the first current-frame sub-image is the smallest and is within a preset range, the third sub-image is selected as the third best sub-image.
The information matching value between a sub-image in the third candidate sub-images and the first current-frame sub-image specifically refers to the distance between the pixel information of that sub-image and the pixel information of the first current-frame sub-image, or the distance between the feature extraction value of that sub-image and the feature extraction value of the first current-frame sub-image.
Step 303: according to the third best sub-image and the corresponding first current-frame sub-image, determine the actual position information corresponding to the third best sub-image and the first current-frame sub-image.
In general, the positions of the same object in a certain-frame left image and the corresponding right image captured by a binocular camera are not identical, and there is a certain functional relationship between the first position of an object in a certain-frame left image, the second position of the object in the same-frame right image, the distance between the left and right cameras of the binocular camera, and the actual position information of the current-frame sub-image containing the object. As long as the first position, the second position and the distance between the left and right cameras are known, the actual position information of the current-frame sub-image of the object can be computed.
Since the third best sub-image corresponding to the first current-frame sub-image is a second current-frame sub-image, when performing this step the object detection device may determine the actual position information corresponding to the third best sub-image and the first current-frame sub-image according to position 3 (x3, y3) of the third best sub-image in the current-frame right image, position 4 (x4, y4) of the first current-frame sub-image in the current-frame left image, and the distance between the left and right cameras of the binocular camera.
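For rectified images, the functional relationship mentioned above reduces to the standard triangulation formula: with focal length f (in pixels), baseline B (the distance between the left and right cameras) and disparity d = x4 - x3, the depth is Z = f * B / d, and the lateral offset follows from the left-image coordinate. A sketch under these assumptions is given below; f, B and the principal point cx are placeholder calibration values.

```python
def actual_position(x_left, x_right, f=700.0, baseline=0.12, cx=320.0):
    """Triangulate a sub-image's position relative to the camera (rectified images).

    x_left / x_right: horizontal pixel coordinates of the matched sub-images.
    Returns (X, Z): lateral offset and depth in the same units as the baseline.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the camera")
    Z = f * baseline / disparity   # depth
    X = (x_left - cx) * Z / f      # lateral offset from the optical axis
    return X, Z

# Example: matched sub-image centres at x4 = 400 (left image) and x3 = 380 (right image)
# give a depth of 700 * 0.12 / 20 = 4.2 metres.
print(actual_position(400.0, 380.0))
```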
The object detection method of this embodiment is described below with a specific application embodiment. Referring to Fig. 4, the method of this embodiment of the present invention includes:
Step 401: the binocular camera captures images at certain time intervals. The binocular camera includes a left camera and a right camera, so the images captured by the binocular camera at a certain moment include a left image and a right image, such as a current-frame left image and a current-frame right image, i.e., two images in total.
Step 402: the object detection device rectifies the current-frame left image and current-frame right image obtained in the above step 401. Specifically, it performs distortion correction on the current-frame left image according to the distortion parameters of the left camera of the binocular camera, for example correcting curved lines at the edge of the left image into straight lines according to certain distortion parameters, and performs distortion correction on the current-frame right image according to the distortion parameters of the right camera of the binocular camera.
Further, according to the angle between the line connecting the left camera and the right camera and the vertical line, the object detection device also needs to scale one of the left and right images with the other as the reference; if one of the cameras is rotated, one image is taken as the reference and the other image is rotated.
Step 403: the object detection device segments the rectified current-frame left image and current-frame right image respectively, obtaining the current-frame sub-images: the first current-frame sub-images and the second current-frame sub-images.
For example, as shown in Fig. 5a, the current-frame left image and current-frame right image captured by the binocular camera are left image n1 and right image n2, and both left image n1 and right image n2 contain the images of object a and object b.
As shown in Fig. 5b, after segmentation by the object detection device, the first current-frame sub-images corresponding to left image n1 include sub-images n11, n12, n13, ..., n18, and the second current-frame sub-images corresponding to right image n2 include sub-images n21, n22, n23, ..., n28. It can be seen that in both the current-frame left image and the current-frame right image, object a is segmented into 3 sub-images, e.g., sub-images n11, n15 and n17, and object b is segmented into 2 sub-images, e.g., sub-images n12 and n13.
It should be noted that the sub-images shown in Fig. 5b are indicated by dashed boxes; in practical applications, the shapes of the sub-images obtained by segmentation depend on the specific shapes of the objects in the image and are not necessarily rectangular, and Fig. 5b is only a schematic diagram drawn for convenience of illustration. For example, if an image is a ball of a certain colour placed on a lawn, a sub-image obtained after segmentation is the image of the ball and is circular; if the image is a half-blue, half-red ball placed on a lawn, one sub-image obtained after segmentation is the image of the blue hemisphere and is semicircular, and another sub-image is the image of the red hemisphere and is also semicircular.
Step 404: the object detection device performs tracking and matching between the previous-frame sub-images and the current-frame sub-images to obtain the first motion information corresponding to multiple current-frame sub-images, and performs tracking and matching between the first current-frame sub-images and the second current-frame sub-images to obtain the actual position information corresponding to the multiple current-frame sub-images.
For example, as shown in Fig. 6, the previous-frame left image and previous-frame right image are left image k1 and right image k2; the first previous-frame sub-images corresponding to left image k1 include sub-images k11, k12, k13, ..., k18, and the second previous-frame sub-images corresponding to right image k2 include sub-images k21, k22, k23, ..., k28. It can be seen that in both the previous-frame left image and the previous-frame right image, object a is segmented into 3 sub-images, e.g., sub-images k14, k16 and k17, and object b is segmented into 2 sub-images, e.g., sub-images k11 and k13. Likewise, for convenience of drawing, the sub-images in Fig. 6 are indicated by dashed boxes.
When performing this step, on the one hand the object detection device may, according to the steps shown in Fig. 2, obtain the first candidate sub-images corresponding to each of the sub-images k11, k12, k13, ..., k18, each sub-image included in the first candidate sub-images being contained in the current-frame left image n1; for example, the first candidate sub-images corresponding to sub-image k11 of left image k1 may include sub-images n12, n14 and n11 of the current-frame left image n1 shown in Fig. 5b. It may also obtain the second candidate sub-images corresponding to each of the sub-images k21, k22, k23, ..., k28, each sub-image included in the second candidate sub-images being contained in the current-frame right image n2; for example, the second candidate sub-images corresponding to sub-image k23 of right image k2 may include sub-images n23, n24 and n21 of the current-frame right image n2 shown in Fig. 5b.
Then, from the first candidate sub-images corresponding to each of the sub-images k11, k12, k13, ..., k18, the corresponding first best sub-image is selected, e.g., the first best sub-image of sub-image k11 is n12; and from the second candidate sub-images corresponding to each of the sub-images k21, k22, k23, ..., k28, the corresponding second best sub-image is selected, e.g., the second best sub-image of sub-image k26 is n25.
Finally, according to the first best sub-image and the corresponding first previous-frame sub-image, and the second best sub-image and the corresponding second previous-frame sub-image, the first motion information corresponding to the first best sub-image and the second best sub-image is determined. For example, the first motion information of sub-image n12 is determined to be moving horizontally to the right at a speed of 50 pixels per second, and the first motion information of sub-image n15 is moving horizontally to the right at a speed of 10 pixels per second, etc.
On the other hand, the object detection device may, according to the steps shown in Fig. 3, obtain the third candidate sub-images corresponding to each of the sub-images n11, n12, n13, ..., n18, each sub-image included in the third candidate sub-images being contained in the current-frame right image n2; for example, the third candidate sub-images corresponding to sub-image n11 may include sub-images n22, n24 and n21 of the current-frame right image n2 shown in Fig. 5b. Then, from the third candidate sub-images corresponding to each of the sub-images n11, n12, n13, ..., n18, the corresponding third best sub-image is selected, e.g., the third best sub-image of sub-image n11 is n21. Finally, according to the third best sub-image and the corresponding first current-frame sub-image, the actual position information corresponding to the third best sub-image and the first current-frame sub-image is determined.
Step 405: the object detection device clusters the multiple current-frame sub-images according to the first motion information and actual position information corresponding to the multiple current-frame sub-images obtained in the above step 404; for example, as shown in Fig. 5b, sub-images n11, n15 and n17 may be merged into one cluster, and sub-images n12 and n13 may be merged into another cluster.
Step 406: the object detection device identifies the object represented by the current-frame sub-images in each cluster. As shown in Fig. 5b, the object represented by sub-images n11, n15 and n17 is identified as a pedestrian, and the object represented by sub-images n12 and n13 is identified as a pedestrian.
An embodiment of the present invention also provides an object detection device, whose structural schematic diagram is shown in Fig. 7. It may specifically include:
An image acquisition unit 10, configured to obtain the current frame left image and current frame right image of a binocular camera.
A segmentation unit 11, configured to segment the current frame left image and current frame right image obtained by the image acquisition unit 10 respectively to obtain the corresponding current-frame subimages, or to segment the current frame left image and current frame right image after at least one processing step respectively to obtain the corresponding current-frame subimages.
A tracking and matching unit 12, configured to perform tracking and matching between the previous-frame subimages and the current-frame subimages segmented by the segmentation unit 11 to obtain the image-based first motion information corresponding to the multiple current-frame subimages; the previous-frame subimages are the subimages corresponding to the previous frame left image and previous frame right image of the current frame.
Specifically, when obtaining the first motion information, the tracking and matching unit 12 is configured to: determine, among the first current-frame subimages, the first candidate subimages corresponding to a first previous-frame subimage, the first previous-frame subimage being a previous-frame subimage corresponding to the previous frame left image; determine, among the second current-frame subimages, the second candidate subimages corresponding to a second previous-frame subimage, the second previous-frame subimage being a previous-frame subimage corresponding to the previous frame right image; choose from the first candidate subimages the first best subimage corresponding to the first previous-frame subimage, and choose from the second candidate subimages the second best subimage corresponding to the second previous-frame subimage; and determine, according to the first best subimage and the corresponding first previous-frame subimage and the second best subimage and the corresponding second previous-frame subimage, the first motion information corresponding to the first best subimage and the second best subimage.
When choosing the first best subimage, the tracking and matching unit 12 is specifically configured to calculate an information matching value between each subimage in the first candidate subimages and the first previous-frame subimage; if the information matching value between a first subimage in the first candidate subimages and the first previous-frame subimage is the smallest and lies within a preset range, the first subimage is selected as the first best subimage. When choosing the second best subimage, the unit is specifically configured to calculate an information matching value between each subimage in the second candidate subimages and the second previous-frame subimage; if the information matching value between a second subimage in the second candidate subimages and the second previous-frame subimage is the smallest and lies within a preset range, the second subimage is selected as the second best subimage.
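For illustration, the selection rule above (smallest information matching value, and within a preset range) could look like the following Python sketch; the choice of mean absolute grey-level difference as the matching value and the threshold of 25.0 are assumptions, since the disclosure does not fix the metric.

import numpy as np

def pick_best_subimage(prev_patch, candidates, max_match_value=25.0):
    """Return the candidate patch with the smallest information matching
    value against the previous-frame patch, or None if even the best
    value falls outside the preset range."""
    h, w = prev_patch.shape
    best, best_value = None, float("inf")
    for cand in candidates:
        # crude size normalisation (np.resize repeats/truncates values);
        # a real implementation would resample the patch properly
        resized = np.resize(cand, (h, w)).astype(np.float32)
        value = float(np.mean(np.abs(prev_patch.astype(np.float32) - resized)))
        if value < best_value:
            best, best_value = cand, value
    return best if best_value <= max_match_value else None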
The tracking and matching unit 12 is also configured to perform tracking and matching between the first current-frame subimages and the second current-frame subimages to obtain the actual position information corresponding to the multiple current-frame subimages; the first current-frame subimages are the current-frame subimages corresponding to the current frame left image, and the second current-frame subimages are the current-frame subimages corresponding to the current frame right image.
Specifically, when obtaining the actual position information, the tracking and matching unit 12 determines, among the second current-frame subimages, the third candidate subimages corresponding to a first current-frame subimage; chooses from the third candidate subimages the third best subimage corresponding to the first current-frame subimage; and determines, according to the third best subimage and the corresponding first current-frame subimage, the actual position information corresponding to the third best subimage and the first current-frame subimage.
A clustering unit 13, configured to cluster the multiple current-frame subimages according to the first motion information and actual position information corresponding to the multiple current-frame subimages obtained by the tracking and matching unit 12, where the current-frame subimages included in each obtained cluster represent one object.
The first motion information includes direction information and an image-based speed value. The clustering unit 13 is specifically configured to determine, according to the first motion information and actual position information corresponding to the multiple current-frame subimages, the second motion information corresponding to the current-frame subimages, the second motion information including direction information and an actual speed value; if, between two current-frame subimages, the deviation of the direction information is within a preset range, the difference between the actual speed values is within a preset range, and the difference between the actual position information is within a preset range, the two current-frame subimages are merged into the same cluster.
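A minimal Python sketch of this merging rule is given below; the threshold values (20 degrees of direction deviation, 0.5 m/s of speed difference, 1.5 m of position difference) and the greedy cluster-growing strategy are assumptions, as the embodiment only requires that all three quantities lie within preset ranges.

def should_merge(a, b, max_dir_dev=20.0, max_speed_diff=0.5, max_pos_diff=1.5):
    """True when two current-frame subimages have close enough motion
    direction, actual speed and actual position to belong to one object.
    a and b are dicts with 'direction' (degrees), 'speed' (m/s) and
    'pos' (x, y, z in meters)."""
    dev = abs(a["direction"] - b["direction"]) % 360.0
    dev = min(dev, 360.0 - dev)                  # smallest angle between headings
    close_speed = abs(a["speed"] - b["speed"]) <= max_speed_diff
    close_pos = sum((pa - pb) ** 2 for pa, pb in zip(a["pos"], b["pos"])) ** 0.5 <= max_pos_diff
    return dev <= max_dir_dev and close_speed and close_pos

def cluster_subimages(subs):
    """Greedy clustering: a subimage joins the first cluster all of whose
    members it can merge with, otherwise it starts a new cluster."""
    clusters = []
    for s in subs:
        for c in clusters:
            if all(should_merge(s, m) for m in c):
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters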
An object identification unit 14, configured to identify the object represented by the current-frame subimages in each cluster obtained by the clustering unit 13.
In this way, tracking and matching is performed between the current-frame subimages and the previous-frame subimages, and between the first current-frame subimages and the second current-frame subimages. Since the information contained in a subimage is much richer than that of a single pixel, the tracking and matching is more reliable, which makes it possible to identify the objects in an image accurately. In addition, a subimage usually contains on the order of a few hundred pixels, so compared with the nearly one million pixels in a whole frame, the computation required for tracking and matching between subimages is much smaller.
Referring to Fig. 8, in a specific embodiment, the object detection device may further include, in addition to the structure shown in Fig. 7, a correction unit 15 and a storage unit 16, in which:
A correction unit 15, configured to perform deformation correction on the current frame left image obtained by the image acquisition unit 10 according to the distortion parameters of the left camera of the binocular camera, and to perform deformation correction on the current frame right image according to the distortion parameters of the right camera of the binocular camera. In this case the aforementioned segmentation unit 11 is specifically configured to segment the current frame left image and current frame right image after deformation correction by the correction unit 15 respectively to obtain the corresponding current-frame subimages.
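As an illustration of the deformation correction, the following Python sketch undistorts the left image with OpenCV; the camera matrix and distortion coefficients shown are placeholders standing in for the left camera's calibrated parameters, and the right image would be handled in the same way with the right camera's parameters.

import cv2
import numpy as np

# Placeholder calibration values; in practice these come from a prior
# calibration of the left camera of the binocular camera.
K_LEFT = np.array([[700.0, 0.0, 640.0],
                   [0.0, 700.0, 360.0],
                   [0.0, 0.0, 1.0]])
DIST_LEFT = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

def undistort_left(frame_left):
    """Deformation-correct the current frame left image using the left
    camera's distortion parameters."""
    return cv2.undistort(frame_left, K_LEFT, DIST_LEFT)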
The correction unit 15 is also configured to: if one of the left camera and the right camera has rotated by a certain angle, rotate the image of that camera in one direction by that angle, taking the image of the other camera among the current frame left image and current frame right image as the reference; if the angle between the line connecting the left camera and the right camera and the vertical line is not 90 degrees, either enlarge the current frame right image by a certain factor, taking the current frame left image as the reference, the enlargement factor being a function of the angle, or reduce the current frame left image by a certain factor, taking the current frame right image as the reference, the reduction factor being a function of the angle.
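A rough Python sketch of these two extra corrections follows; treating the right image as the one to rotate and scale, and using 1/sin(tilt) as the scale factor, are illustrative choices only, since the disclosure merely states that the factor is some function of the angle.

import math
import cv2

def align_right_image(right_img, roll_deg=0.0, baseline_tilt_deg=90.0):
    """If the right camera is rotated by roll_deg relative to the left
    camera, rotate its image back; if the line between the two cameras is
    not at 90 degrees to the vertical, rescale the image by a factor
    derived from that angle."""
    h, w = right_img.shape[:2]
    out = right_img
    if roll_deg != 0.0:
        rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), roll_deg, 1.0)
        out = cv2.warpAffine(out, rot, (w, h))
    if baseline_tilt_deg != 90.0:
        scale = 1.0 / math.sin(math.radians(baseline_tilt_deg))
        out = cv2.resize(out, None, fx=scale, fy=scale)
    return out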
The aforementioned tracking and matching unit 12 can perform tracking and matching on the previous-frame subimages and current-frame subimages stored by the storage unit 16. Afterwards, the storage unit 16 is configured to delete the locally stored previous-frame subimages after the segmentation unit 11 obtains the current-frame subimages, and to store the current-frame subimages segmented by the segmentation unit 11.
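A minimal sketch of this storage behaviour, with hypothetical names, is shown below: only the most recent frame's subimages are kept, so tracking and matching always runs against exactly one stored previous frame.

class SubimageStore:
    """Keeps only the latest frame's subimages for the tracking and
    matching unit to compare against."""
    def __init__(self):
        self._prev = {"left": [], "right": []}

    def previous(self):
        return self._prev

    def update(self, left_subimages, right_subimages):
        # delete the locally stored previous-frame subimages, then store
        # the newly segmented current-frame subimages in their place
        self._prev = {"left": list(left_subimages), "right": list(right_subimages)}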
An embodiment of the present invention also provides a terminal device, whose structural schematic diagram is shown in Fig. 9. The terminal device may vary considerably depending on configuration or performance, and may include one or more central processing units (CPUs) 20 (for example, one or more processors), a memory 21, and one or more storage media 22 (for example, one or more mass storage devices) storing application programs 221 or data 222. The memory 21 and the storage medium 22 may provide transient storage or persistent storage. The program stored in the storage medium 22 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the terminal device. Further, the central processing unit 20 may be configured to communicate with the storage medium 22 and execute, on the terminal device, the series of instruction operations in the storage medium 22.
Specifically, the application programs 221 stored in the storage medium 22 include an object detection application, and this program may include the image acquisition unit 10, segmentation unit 11, tracking and matching unit 12, clustering unit 13, object identification unit 14, correction unit 15 and storage unit 16 of the object detection device described above, which are not repeated here. Further, the central processing unit 20 may be configured to communicate with the storage medium 22 and execute, on the terminal device, the series of operations corresponding to the object detection application stored in the storage medium 22.
The terminal device may also include one or more power supplies 23, one or more wired or wireless network interfaces 24, one or more input/output interfaces 25, and/or one or more operating systems 223, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The steps performed by the object detection device described in the above method embodiments may be based on the structure of the terminal device shown in Fig. 9.
An embodiment of the present invention also provides a storage medium that stores a plurality of instructions, the instructions being suitable for being loaded by a processor to execute the object detection method performed by the object detection device described above.
An embodiment of the present invention also provides a terminal device, including a processor and a storage medium, the processor being configured to implement each instruction;
the storage medium being configured to store a plurality of instructions, the instructions being loaded by the processor to execute the object detection method performed by the object detection device described above.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware. The program can be stored in a computer-readable storage medium, and the storage medium may include: read-only memory (ROM), random access memory (RAM), magnetic disk, optical disc, etc.
The object detection method, device and storage medium provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementation of the invention, and the description of the above embodiments is only intended to help understand the method of the invention and its core idea. Meanwhile, for those skilled in the art, there will be changes in the specific implementation and application scope according to the idea of the invention. In conclusion, the content of this specification should not be construed as a limitation of the invention.

Claims (14)

1. An object detection method, characterized by comprising:
obtaining a current frame left image and a current frame right image of a binocular camera;
segmenting the current frame left image and the current frame right image respectively to obtain corresponding current-frame subimages;
performing tracking and matching between previous-frame subimages and the current-frame subimages to obtain image-based first motion information corresponding to multiple current-frame subimages, the first motion information including direction information and an image-based speed value, and the previous-frame subimages being the subimages corresponding to the previous frame left image and previous frame right image of the current frame;
performing tracking and matching between first current-frame subimages and second current-frame subimages to obtain actual position information corresponding to the multiple current-frame subimages, the actual position information being the position of each current-frame subimage relative to the binocular camera, the first current-frame subimages being the current-frame subimages corresponding to the current frame left image, and the second current-frame subimages being the current-frame subimages corresponding to the current frame right image;
clustering the multiple current-frame subimages according to the first motion information and the actual position information corresponding to the multiple current-frame subimages, the current-frame subimages included in each obtained cluster representing one object;
identifying the object represented by the current-frame subimages in each cluster.
2. The method according to claim 1, characterized in that the binocular camera includes a left camera and a right camera, and after obtaining the current frame left image and the current frame right image of the binocular camera, the method further comprises:
performing deformation correction on the current frame left image according to distortion parameters of the left camera, and performing deformation correction on the current frame right image according to distortion parameters of the right camera.
3. The method according to claim 1, characterized in that the binocular camera includes a left camera and a right camera, and after obtaining the current frame left image and the current frame right image of the binocular camera, the method further comprises:
if one of the left camera and the right camera has rotated by a certain angle, rotating the image of that camera in one direction by the angle, taking the image of the other camera among the current frame left image and the current frame right image as the reference;
if the angle between the line connecting the left camera and the right camera and the vertical line is not 90 degrees, enlarging the current frame right image by a certain factor, taking the current frame left image as the reference, the enlargement factor being a function of the angle; or reducing the current frame left image by a certain factor, taking the current frame right image as the reference, the reduction factor being a function of the angle.
4. The method according to claim 1, characterized in that after the segmentation obtains the current-frame subimages, the method further comprises:
deleting the locally stored previous-frame subimages, and storing the current-frame subimages.
5. The method according to claim 1, characterized in that performing tracking and matching between the previous-frame subimages and the current-frame subimages to obtain the image-based first motion information corresponding to the multiple current-frame subimages specifically comprises:
determining, among the first current-frame subimages, first candidate subimages corresponding to a first previous-frame subimage, the first previous-frame subimage being a previous-frame subimage corresponding to the previous frame left image;
determining, among the second current-frame subimages, second candidate subimages corresponding to a second previous-frame subimage, the second previous-frame subimage being a previous-frame subimage corresponding to the previous frame right image;
choosing, from the first candidate subimages, a first best subimage corresponding to the first previous-frame subimage, and choosing, from the second candidate subimages, a second best subimage corresponding to the second previous-frame subimage;
determining, according to the first best subimage and the corresponding first previous-frame subimage and the second best subimage and the corresponding second previous-frame subimage, the first motion information corresponding to the first best subimage and the second best subimage.
6. The method according to claim 5, characterized in that:
choosing, from the first candidate subimages, the first best subimage corresponding to the first previous-frame subimage specifically comprises: calculating an information matching value between each subimage in the first candidate subimages and the first previous-frame subimage; and if the information matching value between a first subimage in the first candidate subimages and the first previous-frame subimage is the smallest and lies within a preset range, selecting the first subimage as the first best subimage;
choosing, from the second candidate subimages, the second best subimage corresponding to the second previous-frame subimage specifically comprises: calculating an information matching value between each subimage in the second candidate subimages and the second previous-frame subimage; and if the information matching value between a second subimage in the second candidate subimages and the second previous-frame subimage is the smallest and lies within a preset range, selecting the second subimage as the second best subimage.
7. The method according to claim 1, characterized in that performing tracking and matching between the first current-frame subimages and the second current-frame subimages to obtain the actual position information corresponding to the multiple current-frame subimages specifically comprises:
determining, among the second current-frame subimages, third candidate subimages corresponding to a first current-frame subimage;
choosing, from the third candidate subimages, a third best subimage corresponding to the first current-frame subimage;
determining, according to the third best subimage and the corresponding first current-frame subimage, the actual position information corresponding to the third best subimage and the first current-frame subimage.
8. The method according to any one of claims 1 to 7, characterized in that clustering the multiple current-frame subimages according to the first motion information and the actual position information corresponding to the multiple current-frame subimages specifically comprises:
determining, according to the first motion information and the actual position information corresponding to the multiple current-frame subimages, second motion information corresponding to the current-frame subimages, the second motion information including direction information and an actual speed value;
if, between two current-frame subimages, the deviation of the direction information is within a preset range, the difference between the actual speed values is within a preset range, and the difference between the actual position information is within a preset range, merging the two current-frame subimages into the same cluster.
9. An object detection device, characterized by comprising:
an image acquisition unit, configured to obtain a current frame left image and a current frame right image of a binocular camera;
a segmentation unit, configured to segment the current frame left image and the current frame right image respectively to obtain corresponding current-frame subimages;
a tracking and matching unit, configured to perform tracking and matching between previous-frame subimages and the current-frame subimages to obtain image-based first motion information corresponding to multiple current-frame subimages, and to perform tracking and matching between first current-frame subimages and second current-frame subimages to obtain actual position information corresponding to the multiple current-frame subimages; wherein the first motion information includes direction information and an image-based speed value, the actual position information is the position of each current-frame subimage relative to the binocular camera, the previous-frame subimages are the subimages corresponding to the previous frame left image and previous frame right image of the current frame, the first current-frame subimages are the current-frame subimages corresponding to the current frame left image, and the second current-frame subimages are the current-frame subimages corresponding to the current frame right image;
a clustering unit, configured to cluster the multiple current-frame subimages according to the first motion information and the actual position information corresponding to the multiple current-frame subimages, the current-frame subimages included in each obtained cluster representing one object;
an object identification unit, configured to identify the object represented by the current-frame subimages in each cluster.
10. The device according to claim 9, characterized by further comprising:
a correction unit, configured to perform deformation correction on the current frame left image according to distortion parameters of the left camera of the binocular camera, and to perform deformation correction on the current frame right image according to distortion parameters of the right camera of the binocular camera;
the segmentation unit being further configured to segment the current frame left image and the current frame right image after deformation correction respectively to obtain the corresponding current-frame subimages.
11. The device according to claim 10, characterized in that:
the correction unit is further configured to: if one of the left camera and the right camera has rotated by a certain angle, rotate the image of that camera in one direction by the angle, taking the image of the other camera among the current frame left image and the current frame right image as the reference; if the angle between the line connecting the left camera and the right camera and the vertical line is not 90 degrees, enlarge the current frame right image by a certain factor, taking the current frame left image as the reference, the enlargement factor being a function of the angle, or reduce the current frame left image by a certain factor, taking the current frame right image as the reference, the reduction factor being a function of the angle.
12. The device according to claim 9, characterized by further comprising:
a storage unit, configured to delete the locally stored previous-frame subimages and to store the current-frame subimages.
13. A storage medium, characterized in that the storage medium stores a plurality of instructions, the instructions being suitable for being loaded by a processor to execute the object detection method according to any one of claims 1 to 8.
14. A terminal device, characterized by comprising a processor and a storage medium, the processor being configured to implement each instruction;
the storage medium being configured to store a plurality of instructions, the instructions being loaded by the processor to execute the object detection method according to any one of claims 1 to 8.
CN201711206483.0A 2017-11-27 2017-11-27 A kind of method for checking object, device and storage medium Active CN108305273B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711206483.0A CN108305273B (en) 2017-11-27 2017-11-27 A kind of method for checking object, device and storage medium


Publications (2)

Publication Number Publication Date
CN108305273A CN108305273A (en) 2018-07-20
CN108305273B true CN108305273B (en) 2019-08-27

Family

ID=62870125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711206483.0A Active CN108305273B (en) 2017-11-27 2017-11-27 A kind of method for checking object, device and storage medium

Country Status (1)

Country Link
CN (1) CN108305273B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020042156A1 (en) * 2018-08-31 2020-03-05 深圳市道通智能航空技术有限公司 Motion area detection method and device, and unmanned aerial vehicle

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103729860A (en) * 2013-12-31 2014-04-16 华为软件技术有限公司 Image target tracking method and device
CN105335683A (en) * 2014-05-26 2016-02-17 富士通株式会社 Object detection method and object detection apparatus
CN107392958A (en) * 2016-05-16 2017-11-24 杭州海康机器人技术有限公司 A kind of method and device that object volume is determined based on binocular stereo camera


Also Published As

Publication number Publication date
CN108305273A (en) 2018-07-20

Similar Documents

Publication Publication Date Title
CN109858461B (en) Method, device, equipment and storage medium for counting dense population
CN109816012B (en) Multi-scale target detection method fusing context information
CN109544615B (en) Image-based repositioning method, device, terminal and storage medium
WO2020259481A1 (en) Positioning method and apparatus, electronic device, and readable storage medium
WO2018232837A1 (en) Tracking photography method and tracking apparatus for moving target
KR102113909B1 (en) 3D modeling method and device
CN107633526A (en) A kind of image trace point acquisition methods and equipment, storage medium
JP2018507476A (en) Screening for computer vision
CN109214337A (en) A kind of Demographics' method, apparatus, equipment and computer readable storage medium
CN105427333B (en) Real-time Registration, system and the camera terminal of video sequence image
JP2020119540A (en) Learning method and learning device for object detector capable of hardware optimization based on cnn for detection at long distance or military purpose using image concatenation, and testing method and testing device using the same
CN110930503B (en) Clothing three-dimensional model building method, system, storage medium and electronic equipment
CN111027555B (en) License plate recognition method and device and electronic equipment
CN110023989A (en) A kind of generation method and device of sketch image
CN110796135A (en) Target positioning method and device, computer equipment and computer storage medium
CN110210278A (en) A kind of video object detection method, device and storage medium
CN115329111B (en) Image feature library construction method and system based on point cloud and image matching
CN110675426A (en) Human body tracking method, device, equipment and storage medium
KR101903684B1 (en) Image characteristic estimation method and device
JP2022027464A (en) Method and device related to depth estimation of video
CN113643365A (en) Camera pose estimation method, device, equipment and readable storage medium
CN108305273B (en) A kind of method for checking object, device and storage medium
CN113610967B (en) Three-dimensional point detection method, three-dimensional point detection device, electronic equipment and storage medium
CN106033613B (en) Method for tracking target and device
CN113342055A (en) Unmanned aerial vehicle flight control method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211011

Address after: 518057 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 floors

Patentee after: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

Patentee after: TENCENT CLOUD COMPUTING (BEIJING) Co.,Ltd.

Address before: 518057 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 floors

Patentee before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.