CN108710828A - Method, apparatus, storage medium, and vehicle for identifying a target object - Google Patents

Method, apparatus, storage medium, and vehicle for identifying a target object

Info

Publication number
CN108710828A
Authority
CN
China
Prior art keywords
position information
optical flow
information
clustered
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810350799.5A
Other languages
Chinese (zh)
Other versions
CN108710828B (en)
Inventor
张建国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BAIC Motor Co Ltd
Beijing Automotive Group Co Ltd
Beijing Automotive Research Institute Co Ltd
Original Assignee
BAIC Motor Co Ltd
Beijing Automotive Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BAIC Motor Co Ltd, Beijing Automotive Research Institute Co Ltd filed Critical BAIC Motor Co Ltd
Priority to CN201810350799.5A priority Critical patent/CN108710828B/en
Publication of CN108710828A publication Critical patent/CN108710828A/en
Application granted granted Critical
Publication of CN108710828B publication Critical patent/CN108710828B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/251: Fusion techniques of input or preprocessed data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/215: Motion-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757: Matching configurations of points or features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07: Target detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08: Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to a method, apparatus, storage medium, and vehicle for identifying a target object. The method includes: acquiring a current frame image and a previous frame image within a preset range around a vehicle; obtaining target features from the current frame image and the previous frame image, determining first position information of the target features in the current frame image, and determining the optical flow of the target features according to the first position information; clustering the target features according to the first position information and the optical flow to obtain a first target object; acquiring second position information and velocity information of targets to be clustered within the preset range around the vehicle, and clustering the targets to be clustered according to the second position information and the velocity information to obtain a second target object; and identifying, according to the first position information, the second position information, the optical flow, and the velocity information, whether the first target object and the second target object are the same object.

Description

Method, apparatus, storage medium, and vehicle for identifying a target object
Technical field
The disclosure relates to the field of vehicle technology, and in particular to a method, apparatus, storage medium, and vehicle for identifying a target object.
Background art
To improve the accuracy and reliability of single-sensor detection, multi-sensor information fusion has received growing attention in the field of vehicle safety research.
At present, to improve the accuracy with which a driverless vehicle perceives its forward environment, information from a radar sensor and a vision sensor is typically fused; the radar sensor may be mounted above the front bumper and the vision sensor on the front windshield. However, such fusion relies mainly on position information. A vision sensor is easily affected by changes in weather and illumination, which degrades visual detection accordingly, and when different target objects overlap, they cannot be distinguished by image position alone. Relying on position information alone therefore cannot reliably associate the objects detected by the vision sensor with those detected by the radar sensor, causing the information fusion to fail.
Summary of the invention
To solve the above problems, the disclosure proposes a method, apparatus, storage medium, and vehicle for identifying a target object.
According to a first aspect of the embodiments of the disclosure, a method for identifying a target object is provided, applied to a vehicle, the method including:
acquiring a current frame image and a previous frame image within a preset range around the vehicle;
obtaining target features from the current frame image and the previous frame image, determining first position information of the target features in the current frame image, and determining the optical flow of the target features according to the first position information;
clustering the target features according to the first position information and the optical flow to obtain a first target object;
acquiring second position information and velocity information of targets to be clustered within the preset range around the vehicle, and clustering the targets to be clustered according to the second position information and the velocity information to obtain a second target object; and
identifying, according to the first position information, the second position information, the optical flow, and the velocity information, whether the first target object and the second target object are the same object.
Optionally, obtaining the target features from the current frame image and the previous frame image includes:
detecting image feature points in the current frame image and the previous frame image respectively; and
performing close-region matching on the image feature points to obtain the target features.
Optionally, determining the optical flow of the target features according to the first position information includes:
acquiring third position information of the target features in the previous frame image; and
determining the optical flow of the target features according to the first position information and the third position information.
Optionally, determining the optical flow of the target features according to the first position information and the third position information includes:
determining the movement distance of the target features according to the first position information and the third position information; and
obtaining the optical flow of the target features from the movement distance and a preset collection period.
Optionally, clustering the target features according to the first position information and the optical flow to obtain the first target object includes:
clustering the target features according to the first position information to obtain a first clustering result; and
clustering the target features in the first clustering result according to the optical flow to obtain the first target object.
Optionally, clustering the targets to be clustered according to the second position information and the velocity information to obtain the second target object includes:
clustering the targets to be clustered according to the second position information to obtain a second clustering result; and
clustering the targets to be clustered in the second clustering result according to the velocity information to obtain the second target object.
Optionally, identifying whether the first target object and the second target object are the same object according to the first position information, the second position information, the optical flow, and the velocity information includes:
determining, according to the first position information and the second position information, whether the positions of the first target object and the second target object match;
when the positions of the first target object and the second target object match, further determining whether the optical flow of the first target object matches the velocity information of the second target object; and
when the optical flow of the first target object matches the velocity information of the second target object, determining that the first target object and the second target object are the same object.
Optionally, before determining whether the optical flow of the first target object matches the velocity information of the second target object, the method further includes:
normalizing the optical flow of the first target object and the velocity information of the second target object respectively;
and determining whether the optical flow of the first target object matches the velocity information of the second target object includes:
determining whether the difference between the normalized optical flow and the normalized velocity information is less than or equal to a preset threshold; and
when the difference between the normalized optical flow and the normalized velocity information is less than or equal to the preset threshold, determining that the optical flow of the first target object matches the velocity information of the second target object.
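The normalization-and-threshold test described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the patented implementation: the disclosure fixes neither a normalization scheme nor a threshold, so the sketch scales both quantities by the larger magnitude (equivalent to normalizing each by that magnitude and then differencing) and uses an arbitrary threshold of 0.1.

```python
def speeds_match(flow_mag, radar_speed, threshold=0.1):
    """Declare a match when the normalized difference between the
    camera-derived optical-flow magnitude and the radar speed is at most
    the preset threshold. Scale-by-max normalization is an assumption."""
    m = max(abs(flow_mag), abs(radar_speed))
    if m == 0:
        return True  # both stationary: trivially matched
    return abs(flow_mag - radar_speed) / m <= threshold

print(speeds_match(9.5, 10.0))  # True: within 10% of the larger value
print(speeds_match(4.0, 10.0))  # False
```

Any monotone normalization would serve here; what matters for the claim is that flow and speed are brought to a common scale before the threshold comparison.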
According to a second aspect of the embodiments of the disclosure, an apparatus for identifying a target object is provided, applied to a vehicle, the apparatus including:
an acquisition module configured to acquire a current frame image and a previous frame image within a preset range around the vehicle;
a processing module configured to obtain target features from the current frame image and the previous frame image, determine first position information of the target features in the current frame image, and determine the optical flow of the target features according to the first position information;
a first clustering module configured to cluster the target features according to the first position information and the optical flow to obtain a first target object;
a second clustering module configured to acquire second position information and velocity information of targets to be clustered within the preset range around the vehicle, and cluster the targets to be clustered according to the second position information and the velocity information to obtain a second target object; and
an identification module configured to identify, according to the first position information, the second position information, the optical flow, and the velocity information, whether the first target object and the second target object are the same object.
Optionally, the processing module includes:
a detection submodule configured to detect image feature points in the current frame image and the previous frame image respectively; and
a matching submodule configured to perform close-region matching on the image feature points to obtain the target features.
Optionally, the processing module includes:
an acquisition submodule configured to acquire third position information of the target features in the previous frame image; and
a first determination submodule configured to determine the optical flow of the target features according to the first position information and the third position information.
Optionally, the first determination submodule is configured to determine the movement distance of the target features according to the first position information and the third position information, and to obtain the optical flow of the target features from the movement distance and a preset collection period.
Optionally, the first clustering module includes:
a first clustering submodule configured to cluster the target features according to the first position information to obtain a first clustering result; and
a second clustering submodule configured to cluster the target features in the first clustering result according to the optical flow to obtain the first target object.
Optionally, the second clustering module includes:
a third clustering submodule configured to cluster the targets to be clustered according to the second position information to obtain a second clustering result; and
a fourth clustering submodule configured to cluster the targets to be clustered in the second clustering result according to the velocity information to obtain the second target object.
Optionally, the identification module includes:
a second determination submodule configured to determine, according to the first position information and the second position information, whether the positions of the first target object and the second target object match;
a third determination submodule configured to, when the positions of the first target object and the second target object match, further determine whether the optical flow of the first target object matches the velocity information of the second target object; and
a fourth determination submodule configured to, when the optical flow of the first target object matches the velocity information of the second target object, determine that the first target object and the second target object are the same object.
Optionally, the apparatus further includes:
a normalization submodule configured to normalize the optical flow of the first target object and the velocity information of the second target object respectively;
and the third determination submodule is configured to determine whether the difference between the normalized optical flow and the normalized velocity information is less than or equal to a preset threshold, and to determine, when the difference is less than or equal to the preset threshold, that the optical flow of the first target object matches the velocity information of the second target object.
According to a third aspect of the embodiments of the disclosure, a computer-readable storage medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method described above.
According to a fourth aspect of the embodiments of the disclosure, an apparatus for identifying a target object is provided, including:
a memory on which a computer program is stored; and
one or more processors configured to execute the program in the memory to implement the steps of the method described above.
According to a fifth aspect of the embodiments of the disclosure, a vehicle is provided, including the apparatus for identifying a target object described above.
Through the above technical solution, a current frame image and a previous frame image within a preset range around the vehicle are acquired; target features are obtained from the current frame image and the previous frame image, their first position information in the current frame image is determined, and their optical flow is determined from the first position information; the target features are clustered according to the first position information and the optical flow to obtain a first target object; second position information and velocity information of targets to be clustered within the preset range around the vehicle are acquired, and the targets to be clustered are clustered according to the second position information and the velocity information to obtain a second target object; and whether the first target object and the second target object are the same object is identified according to the first position information, the second position information, the optical flow, and the velocity information. Target objects can thus be identified accurately, solving the prior-art problem of low recognition accuracy when objects are identified from position information alone.
Other features and advantages of the disclosure are described in detail in the following detailed description.
Description of the drawings
The accompanying drawings are provided for a further understanding of the disclosure and constitute a part of the specification. Together with the following detailed description, they serve to explain the disclosure but do not limit it. In the drawings:
Fig. 1 is a flowchart of a method for identifying a target object according to an exemplary embodiment of the disclosure;
Fig. 2 is a flowchart of another method for identifying a target object according to an exemplary embodiment of the disclosure;
Fig. 3 is a block diagram of a first apparatus for identifying a target object according to an exemplary embodiment of the disclosure;
Fig. 4 is a block diagram of a second apparatus for identifying a target object according to an exemplary embodiment of the disclosure;
Fig. 5 is a block diagram of a third apparatus for identifying a target object according to an exemplary embodiment of the disclosure;
Fig. 6 is a block diagram of a fourth apparatus for identifying a target object according to an exemplary embodiment of the disclosure;
Fig. 7 is a block diagram of a fifth apparatus for identifying a target object according to an exemplary embodiment of the disclosure;
Fig. 8 is a block diagram of a sixth apparatus for identifying a target object according to an exemplary embodiment of the disclosure;
Fig. 9 is a block diagram of a seventh apparatus for identifying a target object according to an exemplary embodiment of the disclosure;
Fig. 10 is a block diagram of an eighth apparatus for identifying a target object according to an exemplary embodiment of the disclosure.
Detailed description
Specific embodiments of the disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the embodiments described here are intended only to describe and explain the disclosure, not to limit it.
The disclosure may be applied to information fusion scenarios in which a vision sensor, a radar sensor, and a processor are installed in a vehicle. The vision sensor collects images within a preset range around the vehicle at a preset collection period, and the radar sensor collects angle information, distance information, and velocity information of targets to be clustered within that range. The processor determines a first target object in the current frame image from the previous frame image and the current frame image collected by the vision sensor, and determines a second target object from the angle, distance, and velocity information collected by the radar sensor. It then judges, from the first position information and optical flow of the first target object and the second position information and velocity information of the second target object, whether the two are the same target, and fuses their information when they are. Target objects can thus be identified accurately, solving the prior-art problem of low recognition accuracy when objects are identified from position information alone.
The disclosure is described in detail below with reference to specific embodiments.
Fig. 1 is a flowchart of a method for identifying a target object according to an exemplary embodiment of the disclosure. As shown in Fig. 1, the method is applied to a vehicle and includes:
S101: acquire a current frame image and a previous frame image within a preset range around the vehicle.
In the disclosure, a vision sensor (such as a camera) installed in the vehicle may collect images within the preset range around the vehicle at a preset collection period. The current frame image is the image collected by the vision sensor in the current collection period, and the previous frame image is the image collected in the preceding collection period.
S102: obtain target features from the current frame image and the previous frame image, determine first position information of the target features in the current frame image, and determine the optical flow of the target features according to the first position information.
Image feature points, such as SURF or SIFT feature points, may be detected in the current frame image and the previous frame image respectively, and close-region matching may be performed on the feature points to obtain the target features. Note that there are generally multiple target features. Illustratively, the target features may be parts of a first target object that appears in both the current frame image and the previous frame image: if the same vehicle appears in both images, the target features may be its left rear door, rear window, left rear tire, and so on. When multiple first target objects appear in both images, the target features may be the parts corresponding to each of them. The above examples are merely illustrative and do not limit the disclosure.
In this step, the optical flow of the target features may be determined by acquiring third position information of the target features in the previous frame image and computing the optical flow from the first position information and the third position information.
S103: cluster the target features according to the first position information and the optical flow to obtain a first target object.
In one possible implementation, the target features are clustered according to the first position information to obtain a first clustering result, and the target features in the first clustering result are then clustered according to the optical flow to obtain the first target object.
S104: acquire second position information and velocity information of targets to be clustered within the preset range around the vehicle, and cluster the targets to be clustered according to the second position information and the velocity information to obtain a second target object.
Because the radar sensor may detect a second target object as several separate parts, the targets to be clustered are the parts of a second target object; in this step, the parts corresponding to each of multiple second target objects may be clustered to obtain the respective second target objects.
In one possible implementation, a radar sensor installed in the vehicle collects the angle information, distance information, and velocity information of the targets to be clustered within the preset range around the vehicle, so the second position information of the targets to be clustered can be obtained from the angle information and the distance information.
S105: identify, according to the first position information, the second position information, the optical flow, and the velocity information, whether the first target object and the second target object are the same object.
In this step, whether the positions of the first target object and the second target object match may be determined from the first position information and the second position information. When the positions match, it is further determined whether the optical flow of the first target object matches the velocity information of the second target object; when the optical flow matches the velocity information, the first target object and the second target object are determined to be the same object.
With the above method, the first target object detected by the vision sensor and the second target object detected by the radar sensor are obtained respectively, and once the first position information of the first target object matches the second position information of the second target object, the optical flow of the first target object is matched against the velocity information of the second target object. Matching optical flow against velocity on top of matched positions allows target objects to be identified accurately, solving the prior-art problem of low recognition accuracy when objects are identified from position information alone.
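The two-stage decision of S105 can be condensed into a small sketch: a position gate followed by a normalized speed comparison. The metre unit, both tolerances, and the scale-by-larger-magnitude normalization are illustrative assumptions of this sketch, not values from the disclosure.

```python
import math

def same_object(pos1, pos2, flow_mag, radar_speed,
                pos_tol=1.5, speed_tol=0.1):
    """Associate a vision target and a radar target: positions must agree
    within pos_tol (metres, an assumed unit), and only then must the
    normalized optical-flow magnitude and radar speed differ by no more
    than speed_tol. Tolerances are illustrative."""
    if math.dist(pos1, pos2) > pos_tol:
        return False  # position gate failed; no need to compare speeds
    scale = max(abs(flow_mag), abs(radar_speed)) or 1.0
    return abs(flow_mag - radar_speed) / scale <= speed_tol

print(same_object((5.0, 20.0), (5.4, 20.3), 9.5, 10.0))  # True
print(same_object((5.0, 20.0), (9.0, 28.0), 9.5, 10.0))  # False: positions differ
```

Ordering the checks this way mirrors the claim: the cheaper position test runs first, and the motion test is consulted only for position-matched candidates.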
Fig. 2 is a flowchart of another method for identifying a target object according to an exemplary embodiment of the disclosure. As shown in Fig. 2, the method is applied to a vehicle and includes:
S201: acquire a current frame image and a previous frame image within a preset range around the vehicle.
In the disclosure, a vision sensor (such as a camera) installed in the vehicle may collect images within the preset range around the vehicle at a preset collection period. The current frame image is the image collected by the vision sensor in the current collection period, and the previous frame image is the image collected in the preceding collection period.
S202: detect image feature points in the current frame image and the previous frame image respectively.
Because SURF and SIFT features are both scale invariant and well suited to detecting local image features, SURF or SIFT extraction may be used in the disclosure to extract the image feature points (i.e., SURF or SIFT feature points) from the current frame image and the previous frame image. The extraction methods for SURF or SIFT features are available in the prior art and are not repeated here; these extraction methods are merely illustrative and do not limit the disclosure.
Note that there may be multiple image feature points. Since image feature points are usually the most recognizable pixels in an image (such as corner points, inflection points, and cross points), close-region matching can be performed on them in the subsequent step.
S203: perform close-region matching on the image feature points to obtain the target features.
In this step, the positions of an image feature point in the current frame image and the previous frame image may be determined, and a matching point may be sought within a preset range around the corresponding position in the other image, so that the matching point and the image feature point together constitute a target feature. This method of determining the target features is merely illustrative and does not limit the disclosure.
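The close-region matching of S203 might be sketched as a nearest-point search within a preset radius, as below. Descriptor comparison (the SURF/SIFT part) is omitted, and both the function name `close_region_match` and the 12-pixel radius are assumptions of this illustration, not details fixed by the disclosure.

```python
import math

def close_region_match(prev_pts, curr_pts, radius=12.0):
    """For each feature point of the previous frame, accept the nearest
    current-frame point lying within a preset radius; points with no
    nearby candidate stay unmatched. Radius is an assumed value."""
    matches = []
    for p in prev_pts:
        best, best_d = None, radius
        for c in curr_pts:
            d = math.dist(p, c)
            if d <= best_d:
                best, best_d = c, d
        if best is not None:
            matches.append((p, best))
    return matches

pairs = close_region_match([(10, 10), (200, 50)], [(13, 14), (400, 400)])
print(pairs)  # [((10, 10), (13, 14))] – the far point finds no match
```

A production matcher would combine this spatial gate with descriptor distance; the spatial gate alone already captures the "close region" idea of the step.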
S204: determine first position information of the target features in the current frame image.
S205: acquire third position information of the target features in the previous frame image.
S206: determine the optical flow of the target features according to the first position information and the third position information.
The movement distance of the target features may be determined from the first position information and the third position information, and the optical flow of the target features may then be obtained from the movement distance and the preset collection period.
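As a worked illustration of S206, the sketch below computes a feature's optical flow as its inter-frame displacement divided by the collection period. The pixel coordinates, the 40 ms period, and the function name `optical_flow` are illustrative assumptions.

```python
def optical_flow(first_pos, third_pos, period_s):
    """Per-axis optical flow (pixels per second) of one target feature:
    displacement from the previous-frame position (third_pos) to the
    current-frame position (first_pos), divided by the collection period."""
    dx = first_pos[0] - third_pos[0]
    dy = first_pos[1] - third_pos[1]
    return (dx / period_s, dy / period_s)

# Example: a feature moved from (100, 50) to (106, 58) over a 40 ms period.
flow = optical_flow((106, 58), (100, 50), 0.04)
print(flow)  # (150.0, 200.0)
```

The magnitude of this vector is what the later matching step compares, after normalization, against the radar-measured speed.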
S207: cluster the target features according to the first position information to obtain a first clustering result.
In one possible implementation, the current frame image may be divided into multiple regions, and the target features within each region are grouped into one class, where the number of regions is greater than or equal to a preset quantity so that the clustering is more accurate. The above example is merely illustrative and does not limit the disclosure.
S208: cluster the target features in the first clustering result according to the optical flow to obtain the first target object.
Clustering by the first position information alone may mistakenly group the target features of different first target objects into one class. To avoid this, and because different first target objects have different optical flows, the disclosure further clusters the target features in the first clustering result according to the optical flow, improving the accuracy with which different first target objects are separated.
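Steps S207 and S208 together form a two-stage clustering. The sketch below illustrates the idea on toy data: features `(x, y, flow)` are first grouped by position, and each position group is then split by optical-flow similarity, so that spatially overlapping objects with different motion separate. The simple one-dimensional chaining and both thresholds are assumptions; the disclosure names no specific clustering algorithm.

```python
def two_stage_cluster(features, pos_eps=30.0, flow_eps=2.0):
    """Group features (x, y, flow) by position, then split each position
    cluster by optical flow. Chaining on one coordinate at a time is a
    deliberate simplification for illustration."""
    def chain(items, key, eps):
        # Sort by the key and start a new group whenever the gap exceeds eps.
        items = sorted(items, key=key)
        groups, cur = [], [items[0]]
        for it in items[1:]:
            if key(it) - key(cur[-1]) <= eps:
                cur.append(it)
            else:
                groups.append(cur)
                cur = [it]
        groups.append(cur)
        return groups

    clusters = []
    for pos_group in chain(features, key=lambda f: f[0], eps=pos_eps):
        clusters.extend(chain(pos_group, key=lambda f: f[2], eps=flow_eps))
    return clusters

# Two spatially overlapping cars, distinguishable only by optical flow:
feats = [(100, 40, 5.0), (110, 42, 5.2), (105, 41, 20.0), (115, 43, 20.4)]
print(len(two_stage_cluster(feats)))  # 2
```

On this data the position stage yields a single mixed group, and the flow stage splits it into two clusters, which is exactly the failure mode S208 is meant to fix.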
S209: Obtain second position information and velocity information of targets to be clustered within the preset range around the vehicle.
Since the radar sensor may detect a second target object as multiple separately detected parts, each target to be clustered is one of the parts of a second target object. In this way, subsequent steps can cluster the parts corresponding to multiple second target objects to obtain the multiple second target objects respectively.
In one possible implementation, angle information, distance information, and velocity information of the targets to be clustered within the preset range around the vehicle may be collected by a radar sensor installed in the vehicle; the second position information of the targets to be clustered can then be obtained from the angle information and the distance information.
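The conversion from the radar's angle and distance measurements to second position information mentioned above is a standard polar-to-Cartesian transform. A minimal sketch, assuming azimuth and elevation angles in radians and a vehicle-centred frame (the disclosure does not specify the radar's coordinate convention):

```python
import math

def radar_to_position(distance, azimuth, elevation=0.0):
    """Convert a radar detection (range, azimuth, elevation) into a
    three-dimensional position in the vehicle coordinate frame."""
    x = distance * math.cos(elevation) * math.cos(azimuth)  # forward
    y = distance * math.cos(elevation) * math.sin(azimuth)  # lateral
    z = distance * math.sin(elevation)                      # vertical
    return (x, y, z)

# A detection 20 m away, 30 degrees to the side, at radar height.
print(radar_to_position(20.0, math.radians(30.0)))
```

A 2-D radar reports no elevation, in which case `z` is simply 0 and the result reduces to the planar coordinates used later for position matching.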
S210: Cluster the targets to be clustered according to the second position information to obtain a second clustering result.
Since the collected second position information is usually a three-dimensional coordinate, the preset range around the vehicle may be divided into multiple three-dimensional spaces, so as to determine the target three-dimensional space where each target to be clustered is located, thereby clustering the targets to be clustered. The above example is merely illustrative; of course, the present disclosure may also convert the second position information into two-dimensional coordinates so as to cluster the targets to be clustered in a two-dimensional plane, and the present disclosure is not limited thereto.
S211: Cluster the targets to be clustered in the second clustering result according to the velocity information to obtain the second target object.
Similarly, the targets to be clustered in the second clustering result may be parts of different second target objects. Therefore, the parts of different second target objects within the same target three-dimensional space can be clustered by velocity information, thereby improving the clustering accuracy for different second target objects.
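Steps S210-S211 mirror the visual pipeline: radar detections are first grouped into three-dimensional cells by position, then each cell is split by velocity so that parts of different second target objects sharing a cell are separated. A sketch under assumed cell size and velocity tolerance (both values are illustrative, not from the disclosure):

```python
from collections import defaultdict

def cluster_radar_targets(detections, cell=5.0, vel_tol=2.0):
    """detections: list of ((x, y, z), v). Group detections into 3-D cells
    of `cell` metres, then split each cell into second target objects whose
    velocities lie within `vel_tol` m/s of each other."""
    cells = defaultdict(list)
    for pos, v in detections:
        key = tuple(int(c // cell) for c in pos)
        cells[key].append((pos, v))

    objects = []
    for members in cells.values():
        members.sort(key=lambda m: m[1])  # sort by velocity, split on gaps
        group = [members[0]]
        for m in members[1:]:
            if m[1] - group[-1][1] <= vel_tol:
                group.append(m)
            else:
                objects.append(group)
                group = [m]
        objects.append(group)
    return objects

dets = [((12.0, 1.0, 0.0), 15.0), ((13.0, 2.0, 0.0), 14.5),  # moving car
        ((13.5, 3.0, 0.0), 0.0)]                             # static object
print(len(cluster_radar_targets(dets)))  # 3 parts -> 2 target objects
```

As in S211, the static object and the moving car occupy the same positional cell and are separated only by the velocity stage.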
S212: Determine, according to the first position information and the second position information, whether the positions of the first target object and the second target object match.
Since the first position information is a two-dimensional coordinate and the second position information is a three-dimensional coordinate, coordinate conversion needs to be performed between the first position information and the second position information before position matching. The coordinate transformation process follows the prior art and is not repeated here.
In one possible implementation, a GNN (global nearest neighbor) algorithm may be used to determine whether the positions of the first target object and the second target object match. The above example is merely illustrative, and the present disclosure is not limited thereto.
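A global-nearest-neighbor-style position match, as mentioned above, can be approximated by pairing each camera-derived object with the closest radar-derived object within a gate distance. This greedy sketch is an illustration only (a full GNN solves the assignment jointly, e.g. via the Hungarian algorithm); the names and gate value are assumptions:

```python
import math

def match_positions(first_objects, second_objects, gate=2.0):
    """Greedy nearest-neighbor association between first target objects
    (camera) and second target objects (radar), both given as (x, y)
    after coordinate conversion into a common plane. Returns index pairs
    whose distance is within `gate` metres."""
    pairs = []
    used = set()
    for i, p in enumerate(first_objects):
        best, best_d = None, gate
        for j, q in enumerate(second_objects):
            if j in used:
                continue  # each radar object may match at most one camera object
            d = math.dist(p, q)
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs

cam = [(10.0, 2.0), (25.0, -1.0)]
radar = [(24.5, -0.8), (10.3, 2.1)]
print(match_positions(cam, radar))  # [(0, 1), (1, 0)]
```

Objects left unpaired here correspond to the "positions do not match" branch, i.e. step S214 below.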
When the positions of the first target object and the second target object match, step S213 is performed;
when the positions of the first target object and the second target object do not match, step S214 is performed.
S213: Determine whether the optical flow of the first target object matches the velocity information of the second target object.
It should be noted that, since the optical flow and the velocity information are expressed in different units, they cannot be matched directly. Therefore, before determining whether the optical flow of the first target object matches the velocity information of the second target object, the optical flow of the first target object and the velocity information of the second target object need to be normalized respectively. Determining whether the optical flow of the first target object matches the velocity information of the second target object then includes: determining whether the difference between the normalized optical flow and the normalized velocity information is less than or equal to a preset threshold; when the difference between the normalized optical flow and the normalized velocity information is less than or equal to the preset threshold, determining that the optical flow of the first target object matches the velocity information of the second target object; and when the difference between the normalized optical flow and the normalized velocity information is greater than the preset threshold, determining that the optical flow of the first target object does not match the velocity information of the second target object.
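Because pixel-based optical flow and radar velocity use different units, S213 normalizes both before comparing them against a preset threshold. One way to sketch this — min-max normalization over the current set of candidate pairs is an assumption, since the disclosure does not name a specific normalization:

```python
def normalize(values):
    """Min-max normalize a list of values into [0, 1]."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant lists
    return [(v - lo) / span for v in values]

def flow_velocity_match(flows, velocities, index, threshold=0.1):
    """Normalize optical flows (px/s) and radar velocities (m/s)
    separately, then declare a match for the candidate pair at `index`
    when the normalized difference is within `threshold`."""
    nf = normalize(flows)
    nv = normalize(velocities)
    return abs(nf[index] - nv[index]) <= threshold

flows = [250.0, 30.0, 120.0]        # camera objects, px/s
velocities = [15.0, 0.5, 7.0]       # position-matched radar objects, m/s
print(flow_velocity_match(flows, velocities, 0))  # True: both the fastest
```

After normalization both quantities are dimensionless and lie in [0, 1], so the single preset threshold of S213 applies to every candidate pair.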
When the optical flow of the first target object matches the velocity information of the second target object, step S215 is performed;
when the optical flow of the first target object does not match the velocity information of the second target object, step S214 is performed.
S214: Determine that the first target object and the second target object are different target objects.
S215: Determine that the first target object and the second target object are the same target object.
In this way, the classification information of the target object can be determined from the current frame image collected by the visual sensor, and this classification information can be fused with the velocity information collected by the radar sensor, thereby achieving precise fusion of the visual sensor and the radar sensor.
It should be noted that, for simplicity of description, the above method embodiment is expressed as a series of action combinations. However, those skilled in the art should understand that the present disclosure is not limited by the described action sequence, because according to the present disclosure certain steps may be performed in other orders or simultaneously. For example, steps S201-S208 are the process of obtaining the first target object, and steps S209-S211 are the process of obtaining the second target object; since these are independent processes, steps S209-S211 may be executed before steps S201-S208, or the two processes may be performed simultaneously. In addition, those skilled in the art should also understand that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present disclosure.
Using the above method, the first target object collected by the visual sensor and the second target object collected by the radar sensor can be obtained respectively; after the first position information of the first target object matches the second position information of the second target object, the optical flow of the first target object is matched against the velocity information of the second target object. In this way, optical flow and velocity information are matched on top of the position match, so that target objects can be identified accurately, solving the problem in the prior art of low recognition accuracy when target objects are identified according to position information alone.
Fig. 3 is a block diagram of a device for identifying a target object according to an exemplary embodiment of the present disclosure. As shown in Fig. 3, the device is applied to a vehicle and includes:
an acquisition module 301, configured to obtain a current frame image and a previous frame image within a preset range around the vehicle;
a processing module 302, configured to obtain a target feature according to the current frame image and the previous frame image, determine first position information of the target feature in the current frame image, and determine an optical flow of the target feature according to the first position information;
a first clustering module 303, configured to cluster the target feature according to the first position information and the optical flow to obtain a first target object;
a second clustering module 304, configured to obtain second position information and velocity information of targets to be clustered within the preset range around the vehicle, and cluster the targets to be clustered according to the second position information and the velocity information to obtain a second target object;
an identification module 305, configured to identify, according to the first position information, the second position information, the optical flow, and the velocity information, whether the first target object and the second target object are the same target object.
Fig. 4 is a block diagram of a device for identifying a target object according to an exemplary embodiment of the present disclosure. As shown in Fig. 4, the processing module 302 includes:
a detection submodule 3021, configured to detect image feature points in the current frame image and the previous frame image respectively;
a matching submodule 3022, configured to perform neighborhood matching on the image feature points to obtain the target feature.
Fig. 5 is a block diagram of a device for identifying a target object according to an exemplary embodiment of the present disclosure. As shown in Fig. 5, the processing module 302 includes:
an acquisition submodule 3023, configured to obtain third position information of the target feature in the previous frame image;
a first determination submodule 3024, configured to determine the optical flow of the target feature according to the first position information and the third position information.
Optionally, the first determination submodule 3024 is configured to determine the displacement distance of the target feature according to the first position information and the third position information, and obtain the optical flow of the target feature according to the displacement distance and a preset collection period.
Fig. 6 is a block diagram of a device for identifying a target object according to an exemplary embodiment of the present disclosure. As shown in Fig. 6, the first clustering module 303 includes:
a first clustering submodule 3031, configured to cluster the target features according to the first position information to obtain a first clustering result;
a second clustering submodule 3032, configured to cluster the target features in the first clustering result according to the optical flow to obtain the first target object.
Fig. 7 is a block diagram of a device for identifying a target object according to an exemplary embodiment of the present disclosure. As shown in Fig. 7, the second clustering module 304 includes:
a third clustering submodule 3041, configured to cluster the targets to be clustered according to the second position information to obtain a second clustering result;
a fourth clustering submodule 3042, configured to cluster the targets to be clustered in the second clustering result according to the velocity information to obtain the second target object.
Fig. 8 is a block diagram of a device for identifying a target object according to an exemplary embodiment of the present disclosure. As shown in Fig. 8, the identification module 305 includes:
a second determination submodule 3051, configured to determine, according to the first position information and the second position information, whether the positions of the first target object and the second target object match;
a third determination submodule 3052, configured to, when the positions of the first target object and the second target object match, continue to determine whether the optical flow of the first target object matches the velocity information of the second target object;
a fourth determination submodule 3053, configured to, when the optical flow of the first target object matches the velocity information of the second target object, determine that the first target object and the second target object are the same target object.
Fig. 9 is a block diagram of a device for identifying a target object according to an exemplary embodiment of the present disclosure. As shown in Fig. 9, the device further includes:
a normalization submodule 3054, configured to normalize the optical flow of the first target object and the velocity information of the second target object respectively;
wherein the third determination submodule 3053 is configured to determine whether the difference between the normalized optical flow and the normalized velocity information is less than or equal to a preset threshold, and, when the difference between the normalized optical flow and the normalized velocity information is less than or equal to the preset threshold, determine that the optical flow of the first target object matches the velocity information of the second target object.
With regard to the device in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiment of the related method, and will not be elaborated here.
Using the above device, the first target object collected by the visual sensor and the second target object collected by the radar sensor can be obtained respectively; after the first position information of the first target object matches the second position information of the second target object, the optical flow of the first target object is matched against the velocity information of the second target object. In this way, optical flow and velocity information are matched on top of the position match, so that target objects can be identified accurately, solving the problem in the prior art of low recognition accuracy when target objects are identified according to position information alone.
Figure 10 is a block diagram of a device 1000 for identifying a target object according to an exemplary embodiment. As shown in Figure 10, the device 1000 may include a processor 1001 and a memory 1002. The device 1000 may also include one or more of a multimedia component 1003, an input/output (I/O) interface 1004, and a communication component 1005.
The processor 1001 is configured to control the overall operation of the device 1000 so as to complete all or part of the steps in the above method of identifying a target object. The memory 1002 is configured to store various types of data to support operation on the device 1000; such data may include, for example, instructions of any application program or method operated on the device 1000, as well as application-related data such as contact data, messages sent and received, pictures, audio, and video. The memory 1002 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia component 1003 may include a screen and an audio component, wherein the screen may be, for example, a touch screen, and the audio component is configured to output and/or input audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may be further stored in the memory 1002 or sent through the communication component 1005. The audio component further includes at least one loudspeaker for outputting audio signals. The I/O interface 1004 provides an interface between the processor 1001 and other interface modules, such as a keyboard, a mouse, or buttons; these buttons may be virtual buttons or physical buttons. The communication component 1005 is configured for wired or wireless communication between the device 1000 and other equipment. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them; accordingly, the communication component 1005 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
In one exemplary embodiment, the device 1000 may be implemented by one or more Application Specific Integrated Circuits (ASIC), Digital Signal Processors (DSP), Digital Signal Processing Devices (DSPD), Programmable Logic Devices (PLD), Field Programmable Gate Arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic elements, for executing the above method of identifying a target object.
In a further exemplary embodiment, a computer-readable storage medium including program instructions is also provided; the steps of the above method of identifying a target object are realized when the program instructions are executed by a processor. For example, the computer-readable storage medium may be the above memory 1002 including program instructions, and the program instructions may be executed by the processor 1001 of the device 1000 to complete the above method of identifying a target object.
In yet another exemplary embodiment, a vehicle is also provided, which includes the device for identifying a target object described above.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings. However, the present disclosure is not limited to the specific details in the above embodiments; within the scope of the technical concept of the present disclosure, a variety of simple variants can be made to the technical solution of the present disclosure, and these simple variants belong to the protection scope of the present disclosure.
It should further be noted that the specific technical features described in the above specific embodiments can, where not contradictory, be combined in any suitable manner. In order to avoid unnecessary repetition, the present disclosure does not separately illustrate the various possible combinations.
In addition, the various different embodiments of the present disclosure may also be combined arbitrarily; as long as a combination does not contravene the idea of the present disclosure, it should likewise be considered content disclosed by the present disclosure.

Claims (19)

1. A method of identifying a target object, characterized in that it is applied to a vehicle and comprises:
obtaining a current frame image and a previous frame image within a preset range around the vehicle;
obtaining a target feature according to the current frame image and the previous frame image, determining first position information of the target feature in the current frame image, and determining an optical flow of the target feature according to the first position information;
clustering the target feature according to the first position information and the optical flow to obtain a first target object;
obtaining second position information and velocity information of targets to be clustered within the preset range around the vehicle, and clustering the targets to be clustered according to the second position information and the velocity information to obtain a second target object;
identifying, according to the first position information, the second position information, the optical flow, and the velocity information, whether the first target object and the second target object are the same target object.
2. The method according to claim 1, characterized in that obtaining the target feature according to the current frame image and the previous frame image comprises:
detecting image feature points in the current frame image and the previous frame image respectively;
performing neighborhood matching on the image feature points to obtain the target feature.
3. The method according to claim 2, characterized in that determining the optical flow of the target feature according to the first position information comprises:
obtaining third position information of the target feature in the previous frame image;
determining the optical flow of the target feature according to the first position information and the third position information.
4. The method according to claim 3, characterized in that determining the optical flow of the target feature according to the first position information and the third position information comprises:
determining a displacement distance of the target feature according to the first position information and the third position information;
obtaining the optical flow of the target feature according to the displacement distance and a preset collection period.
5. The method according to claim 1, characterized in that clustering the target feature according to the first position information and the optical flow to obtain the first target object comprises:
clustering the target features according to the first position information to obtain a first clustering result;
clustering the target features in the first clustering result according to the optical flow to obtain the first target object.
6. The method according to claim 1, characterized in that clustering the targets to be clustered according to the second position information and the velocity information to obtain the second target object comprises:
clustering the targets to be clustered according to the second position information to obtain a second clustering result;
clustering the targets to be clustered in the second clustering result according to the velocity information to obtain the second target object.
7. The method according to claim 6, characterized in that identifying, according to the first position information, the second position information, the optical flow, and the velocity information, whether the first target object and the second target object are the same target object comprises:
determining, according to the first position information and the second position information, whether the positions of the first target object and the second target object match;
when the positions of the first target object and the second target object match, continuing to determine whether the optical flow of the first target object matches the velocity information of the second target object;
when the optical flow of the first target object matches the velocity information of the second target object, determining that the first target object and the second target object are the same target object.
8. The method according to claim 7, characterized in that, before determining whether the optical flow of the first target object matches the velocity information of the second target object, the method further comprises:
normalizing the optical flow of the first target object and the velocity information of the second target object respectively;
and determining whether the optical flow of the first target object matches the velocity information of the second target object comprises:
determining whether the difference between the normalized optical flow and the normalized velocity information is less than or equal to a preset threshold;
when the difference between the normalized optical flow and the normalized velocity information is less than or equal to the preset threshold, determining that the optical flow of the first target object matches the velocity information of the second target object.
9. A device for identifying a target object, characterized in that it is applied to a vehicle and comprises:
an acquisition module, configured to obtain a current frame image and a previous frame image within a preset range around the vehicle;
a processing module, configured to obtain a target feature according to the current frame image and the previous frame image, determine first position information of the target feature in the current frame image, and determine an optical flow of the target feature according to the first position information;
a first clustering module, configured to cluster the target feature according to the first position information and the optical flow to obtain a first target object;
a second clustering module, configured to obtain second position information and velocity information of targets to be clustered within the preset range around the vehicle, and cluster the targets to be clustered according to the second position information and the velocity information to obtain a second target object;
an identification module, configured to identify, according to the first position information, the second position information, the optical flow, and the velocity information, whether the first target object and the second target object are the same target object.
10. The device according to claim 9, characterized in that the processing module comprises:
a detection submodule, configured to detect image feature points in the current frame image and the previous frame image respectively;
a matching submodule, configured to perform neighborhood matching on the image feature points to obtain the target feature.
11. The device according to claim 10, characterized in that the processing module comprises:
an acquisition submodule, configured to obtain third position information of the target feature in the previous frame image;
a first determination submodule, configured to determine the optical flow of the target feature according to the first position information and the third position information.
12. The device according to claim 11, characterized in that the first determination submodule is configured to determine a displacement distance of the target feature according to the first position information and the third position information, and obtain the optical flow of the target feature according to the displacement distance and a preset collection period.
13. The device according to claim 9, characterized in that the first clustering module comprises:
a first clustering submodule, configured to cluster the target features according to the first position information to obtain a first clustering result;
a second clustering submodule, configured to cluster the target features in the first clustering result according to the optical flow to obtain the first target object.
14. The device according to claim 9, characterized in that the second clustering module comprises:
a third clustering submodule, configured to cluster the targets to be clustered according to the second position information to obtain a second clustering result;
a fourth clustering submodule, configured to cluster the targets to be clustered in the second clustering result according to the velocity information to obtain the second target object.
15. The device according to claim 14, characterized in that the identification module comprises:
a second determination submodule, configured to determine, according to the first position information and the second position information, whether the positions of the first target object and the second target object match;
a third determination submodule, configured to, when the positions of the first target object and the second target object match, continue to determine whether the optical flow of the first target object matches the velocity information of the second target object;
a fourth determination submodule, configured to, when the optical flow of the first target object matches the velocity information of the second target object, determine that the first target object and the second target object are the same target object.
16. The device according to claim 15, characterized by further comprising:
a normalization submodule, configured to normalize the optical flow of the first target object and the velocity information of the second target object respectively;
wherein the third determination submodule is configured to determine whether the difference between the normalized optical flow and the normalized velocity information is less than or equal to a preset threshold, and, when the difference between the normalized optical flow and the normalized velocity information is less than or equal to the preset threshold, determine that the optical flow of the first target object matches the velocity information of the second target object.
17. A computer-readable storage medium on which a computer program is stored, characterized in that the steps of the method according to any one of claims 1-8 are realized when the program is executed by a processor.
18. A device for identifying a target object, characterized by comprising:
a memory on which a computer program is stored; and
one or more processors configured to execute the program in the memory to realize the steps of the method according to any one of claims 1-8.
19. A vehicle, characterized by comprising:
the device for identifying a target object according to any one of claims 9-16.
CN201810350799.5A 2018-04-18 2018-04-18 Method, device and storage medium for identifying target object and vehicle Active CN108710828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810350799.5A CN108710828B (en) 2018-04-18 2018-04-18 Method, device and storage medium for identifying target object and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810350799.5A CN108710828B (en) 2018-04-18 2018-04-18 Method, device and storage medium for identifying target object and vehicle

Publications (2)

Publication Number Publication Date
CN108710828A true CN108710828A (en) 2018-10-26
CN108710828B CN108710828B (en) 2021-01-01

Family

ID=63867209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810350799.5A Active CN108710828B (en) 2018-04-18 2018-04-18 Method, device and storage medium for identifying target object and vehicle

Country Status (1)

Country Link
CN (1) CN108710828B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110262503A (en) * 2019-07-01 2019-09-20 百度在线网络技术(北京)有限公司 Unmanned sales cart dispatching method, device, equipment and readable storage medium storing program for executing
CN110633625A (en) * 2019-07-31 2019-12-31 北京木牛领航科技有限公司 Identification method and system
CN112689775A (en) * 2020-04-29 2021-04-20 华为技术有限公司 Radar point cloud clustering method and device
CN114442101A (en) * 2022-01-28 2022-05-06 南京慧尔视智能科技有限公司 Vehicle navigation method, device, equipment and medium based on imaging millimeter wave radar

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101930609A (en) * 2010-08-24 2010-12-29 东软集团股份有限公司 Approximate target object detecting method and device
CN102542843A (en) * 2010-12-07 2012-07-04 比亚迪股份有限公司 Early warning method for preventing vehicle collision and device
CN103246896A (en) * 2013-05-24 2013-08-14 成都方米科技有限公司 Robust real-time vehicle detection and tracking method
CN103587467A (en) * 2013-11-21 2014-02-19 中国科学院合肥物质科学研究院 Dangerous-overtaking early-warning prompting method and system
US20140139369A1 (en) * 2012-11-22 2014-05-22 Denso Corporation Object detection apparatus
CN104658249A (en) * 2013-11-22 2015-05-27 上海宝康电子控制工程有限公司 Method for rapidly detecting vehicle based on frame difference and light stream
CN105701479A (en) * 2016-02-26 2016-06-22 重庆邮电大学 Intelligent vehicle multi-laser radar fusion recognition method based on target features
CN106204640A (en) * 2016-06-29 2016-12-07 长沙慧联智能科技有限公司 A kind of moving object detection system and method
CN106379319A (en) * 2016-10-13 2017-02-08 上汽大众汽车有限公司 Automobile driving assistance system and control method
CN106842188A (en) * 2016-12-27 2017-06-13 上海思致汽车工程技术有限公司 A kind of object detection fusing device and method based on multisensor
CN107507231A (en) * 2017-09-29 2017-12-22 智造未来(北京)机器人系统技术有限公司 Trinocular vision identifies follow-up mechanism and method
CN107705560A (en) * 2017-10-30 2018-02-16 福州大学 A kind of congestion in road detection method for merging visual signature and convolutional neural networks
CN107918386A (en) * 2017-10-25 2018-04-17 北京汽车集团有限公司 Multi-Sensor Information Fusion Approach, device and vehicle for vehicle


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110262503A (en) * 2019-07-01 2019-09-20 百度在线网络技术(北京)有限公司 Unmanned vending vehicle scheduling method, device, equipment and readable storage medium
CN110262503B (en) * 2019-07-01 2022-06-03 百度在线网络技术(北京)有限公司 Unmanned vending vehicle scheduling method, device, equipment and readable storage medium
CN110633625A (en) * 2019-07-31 2019-12-31 北京木牛领航科技有限公司 Identification method and system
CN112689775A (en) * 2020-04-29 2021-04-20 华为技术有限公司 Radar point cloud clustering method and device
CN112689775B (en) * 2020-04-29 2022-06-14 华为技术有限公司 Radar point cloud clustering method and device
CN114442101A (en) * 2022-01-28 2022-05-06 南京慧尔视智能科技有限公司 Vehicle navigation method, device, equipment and medium based on imaging millimeter wave radar
CN114442101B (en) * 2022-01-28 2023-11-14 南京慧尔视智能科技有限公司 Vehicle navigation method, device, equipment and medium based on imaging millimeter wave radar

Also Published As

Publication number Publication date
CN108710828B (en) 2021-01-01

Similar Documents

Publication Publication Date Title
CN109255352B (en) Target detection method, device and system
KR101758576B1 (en) Method and apparatus for detecting object with radar and camera
CN108710828A (en) Method and apparatus for identifying object, storage medium, and vehicle
US20190034714A1 (en) System and method for detecting hand gestures in a 3d space
WO2016025713A1 (en) Three-dimensional hand tracking using depth sequences
CN106648078B (en) Multi-mode interaction method and system applied to intelligent robot
JP2012068965A (en) Image recognition device
CN112927303B (en) Lane line-based automatic driving vehicle-mounted camera pose estimation method and system
CN112859125B (en) Entrance and exit position detection method, navigation method, device, equipment and storage medium
CN111428644A (en) Zebra crossing region monitoring method, system and medium based on deep neural network
JP7185419B2 (en) Method and device for classifying objects for vehicles
CN111382637A (en) Pedestrian detection tracking method, device, terminal equipment and medium
CN116129350B (en) Intelligent monitoring method, device, equipment and medium for safety operation of data center
CN110705338A (en) Vehicle detection method and device and monitoring equipment
CN105913034B (en) Vehicle identification method and device and vehicle
CN104680145A (en) Method and device for detecting door opening/closing state change
CN111291749B (en) Gesture recognition method and device and robot
JP2012221162A (en) Object detection device and program
CN112104838B (en) Image distinguishing method, monitoring camera and monitoring camera system thereof
JP6077785B2 (en) Object detection apparatus and program
CN112749727A (en) Local server, image identification system and updating method thereof
CN109543610B (en) Vehicle detection tracking method, device, equipment and storage medium
CN110163032B (en) Face detection method and device
CN115546762A (en) Image clustering method, device, storage medium and server
CN114255321A (en) Method and device for collecting pet nose print, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant