CN108845574B - Target identification and tracking method, device, equipment and medium - Google Patents


Info

Publication number
CN108845574B
CN108845574B (granted publication of application CN201810675774.2A)
Authority
CN
China
Prior art keywords
target object
position information
information
radar
image
Prior art date
Legal status
Active
Application number
CN201810675774.2A
Other languages
Chinese (zh)
Other versions
CN108845574A (en)
Inventor
赵仲夏
彭广平
陶涛
Current Assignee
Beijing Kuangshi Technology Co Ltd
Beijing Kuangshi Robot Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Beijing Kuangshi Robot Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd, Beijing Kuangshi Robot Technology Co Ltd filed Critical Beijing Kuangshi Technology Co Ltd
Priority to CN201810675774.2A priority Critical patent/CN108845574B/en
Publication of CN108845574A publication Critical patent/CN108845574A/en
Application granted granted Critical
Publication of CN108845574B publication Critical patent/CN108845574B/en

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 — Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 — Control of position or course in two dimensions
    • G05D1/021 — Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257 — Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0231 — Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 — Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a target identification and tracking method, device, equipment and medium, and belongs to the field of computer technology. The method comprises: determining whether a target object exists within a radar sensing range according to collected radar data; when a target object exists within the radar sensing range, acquiring a visual image corresponding to the radar data; identifying whether an image of the target object exists in the visual image; and if so, tracking the target object. On one hand, compared with prior-art methods, the scheme is simple and consumes few computing resources. On the other hand, because pre-recognition is performed by the radar, the frame-rate requirement on visual recognition is low, so the requirements on the image-acquisition equipment are also low: an ordinary, lower-cost camera can be used instead of the depth camera or binocular camera that prior-art methods require, making the method cheaper.

Description

Target identification and tracking method, device, equipment and medium
Technical Field
The invention relates to the technical field of computers, in particular to a target identification and tracking method, device, equipment and medium.
Background
Existing automated guided vehicles identify and track targets with one of two schemes: detection-device tracking or visual tracking. In the detection-device scheme, a vehicle-mounted receiver tracks signals sent by a transmitter carried by a person. However, this scheme places hard requirements on the positions of the transmitting and receiving devices: the vehicle and the person must not be occluded, and confusion arises when multiple vehicles follow multiple people, so robustness is poor in obstructed environments with many vehicles and people. The visual tracking scheme uses a depth or binocular camera together with machine learning or deep learning algorithms; it demands substantial computing resources, and depth cameras are expensive.
Disclosure of Invention
The target identification and tracking method, device, equipment and medium provided by the embodiments of the invention can solve the technical problems of complexity and high cost in prior-art target identification and tracking methods.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides a target identification and tracking method, including: determining whether a target object exists in a radar sensing range according to the collected radar data; when a target object exists in the radar sensing range, acquiring a visual image corresponding to the radar data, wherein the visual image comprises a region located in the radar sensing range; identifying whether an image of the target object is present in the visual image; and if the image of the target object exists in the visual image, tracking the target object.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where determining whether a target object exists within a radar sensing range according to the collected radar data includes: carrying out threshold segmentation on the radar data according to the catastrophe points to obtain a plurality of point set areas to be identified; judging whether any one point set area to be identified in the plurality of point set areas to be identified meets a preset condition or not; and if so, judging that the target object exists in the point set region to be identified which meets the preset condition.
With reference to the first possible implementation manner of the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where the determining whether any one to-be-identified point set area in the multiple to-be-identified point set areas meets a preset condition includes: judging whether the distance between a starting point and a terminating point in any one to-be-identified point set area in the multiple to-be-identified point set areas is within a preset range or not and whether the shape of any to-be-identified point set area is matched with the shape of a preset target object or not; if the distance is within the preset range and the shape is matched with the preset target object, representing that the any one point set region to be identified meets the preset condition.
With reference to the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where determining whether a target object exists within a radar sensing range according to the collected radar data includes: respectively carrying out threshold segmentation on multiple frames of continuously collected radar data according to catastrophe points to obtain multiple to-be-identified point set regions corresponding to each frame of radar data; judging whether the point set areas to be identified at the same positions in the plurality of point set areas to be identified of each frame of radar data meet preset conditions or not; and if so, determining that the target object exists in the point set region to be identified which meets the preset condition.
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where the tracking the target object includes: determining first position information of the target object according to the radar data; and acquiring the instant position information of the target object in real time according to the first position information.
With reference to the fourth possible implementation manner of the first aspect, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where predicting instant location information of the target object according to the first location information includes: determining a time interval for receiving two adjacent radar data; determining the speed of the target object according to the first position information corresponding to two adjacent moments and the time interval; and predicting the instant position information of the target object according to the speed and the first position information corresponding to the current moment.
With reference to the first aspect, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, where after identifying whether an image of the target object exists in the visual image, the method further includes: if the image of the target object exists in the visual image, acquiring the area information of the image of the target object in the visual image; determining first position information of the target object according to the radar data, wherein the first position information comprises distance information and direction information; determining specification information of the target object according to the first position information and the area information; determining second position information of the target object according to the specification information and the area information, wherein the second position information comprises distance information and direction information; and acquiring the instant position information of the target object in real time according to the second position information.
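The specification-then-second-position step above can be sketched with a pinhole camera model, in which the target's physical height serves as the specification information and the bounding-box height in pixels as the area information. The model, the function names, and the `focal_px` parameter are illustrative assumptions, not formulas stated by the patent:

```python
def specification_from_radar(first_distance_m, bbox_height_px, focal_px):
    """While the radar still sees the target, fix its physical height (the
    specification information) from the radar-derived distance and the
    image-area height, via a pinhole model: height = distance * pixels / focal."""
    return first_distance_m * bbox_height_px / focal_px

def second_position(spec_height_m, bbox_height_px, bearing_deg, focal_px):
    """Later, the known height plus a new bounding-box height yields a
    vision-only distance; together with the image-derived direction this
    forms the second position information (distance, direction)."""
    return spec_height_m * focal_px / bbox_height_px, bearing_deg
```

With these assumptions, a target measured by radar at 4 m with a 100-pixel bounding box (focal length 500 px) fixes a 0.8 m height; a later 200-pixel box then implies the target has closed to 2 m.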
With reference to the sixth possible implementation manner of the first aspect, an embodiment of the present invention provides a seventh possible implementation manner of the first aspect, where predicting instant location information of the target object according to the second location information includes: and predicting instant position information of the target object according to the first position information and the second position information.
With reference to any one implementation manner of the fourth possible implementation manner to the seventh possible implementation manner of the first aspect, an embodiment of the present invention provides an eighth possible implementation manner of the first aspect, where the method further includes: and setting a tracking distance according to the instant position information.
In a second aspect, an embodiment of the present invention provides a target identification and tracking apparatus, including: the radar pre-recognition module is used for determining whether a target object exists in a radar sensing range according to the collected radar data; the visual identification module is used for acquiring a visual image corresponding to the radar data when a target object exists in the radar sensing range, wherein the visual image comprises an area located in the radar sensing range; an identification module for identifying whether an image of the target object exists in the visual image; a tracking module for tracking the target object when the image of the target object exists in the visual image.
In a third aspect, a terminal device provided in an embodiment of the present invention includes: a body; a radar disposed on the body; the image acquisition device is arranged on the body; the processor is arranged on the body and is respectively connected with the radar and the image acquisition device, and the processor is used for determining whether a target object exists in a radar sensing range according to radar data acquired by the radar and acquiring a visual image, corresponding to the moment when the target object exists, in the radar sensing range acquired by the image acquisition device when the target object exists; and identifying whether an image of the target object exists in the visual image.
With reference to the third aspect, an embodiment of the present invention provides a first possible implementation manner of the third aspect, where the processor is configured to track the target object, and includes: determining first position information of the target object according to the radar data; and acquiring the instant position information of the target object in real time according to the first position information.
With reference to the third aspect, an embodiment of the present invention provides a second possible implementation manner of the third aspect, where the processor is configured to track the target object, and includes: acquiring region information of the image of the target object in the visual image; determining first position information of the target object according to the radar data, wherein the first position information comprises distance information and direction information; determining specification information of the target object according to the first position information and the area information; determining second position information of the target object according to the specification information and the area information, wherein the second position information comprises distance information and direction information; and acquiring the instant position information of the target object in real time according to the second position information.
With reference to the third aspect, an embodiment of the present invention provides a third possible implementation manner of the third aspect, where the obtaining, by the processor, instant location information of the target object in real time according to the second location information includes: and acquiring instant position information of the target object in real time according to the first position information and the second position information.
In a fourth aspect, an embodiment of the present invention provides a storage medium, where the storage medium stores instructions that, when executed on a computer, cause the computer to execute the target identifying and tracking method according to any one of the first aspect.
Compared with the prior art, the embodiments of the invention have the following beneficial effects. Whether a target object exists within the radar sensing range is first determined from the radar data, i.e., the target is pre-identified by radar; when the radar pre-identification reports a target, visual identification then confirms whether the target exists; and if the target exists in the visual image, it is tracked. On one hand, compared with prior-art methods, the scheme is simple and consumes few computing resources. On the other hand, because pre-recognition is performed by the radar, the frame-rate requirement on visual recognition is low, so the requirements on the image-acquisition equipment are also low: an ordinary, lower-cost camera can be used instead of the depth camera or binocular camera that prior-art methods require, making the method cheaper.
Additional features and advantages of the disclosure will be set forth in the description which follows, or in part may be learned by the practice of the above-described techniques of the disclosure, or may be learned by practice of the disclosure.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the invention and should not be considered limiting of its scope; those skilled in the art can obtain other related drawings from them without inventive effort.
FIG. 1 is a flowchart illustrating a target identification and tracking method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of the partitioning of radar data in the target recognition and tracking method of FIG. 1;
FIG. 3 is a functional block diagram of a target recognition and tracking device according to a second embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal device according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of the terminal device shown in fig. 4.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
First embodiment
Since existing target recognition and tracking methods cannot stably and quickly follow a designated person at low cost, this embodiment provides a target recognition and tracking method to improve stable following of a designated person. First, it should be noted that the steps shown in the flowcharts of the drawings may be executed in a computer system as a set of computer-executable instructions, and although a logical order is shown in each flowchart, in some cases the steps shown or described may be executed in a different order. The present embodiment is described in detail below.
Please refer to fig. 1, which is a flowchart illustrating a target identification and tracking method according to an embodiment of the present invention. The specific process shown in FIG. 1 will be described in detail below.
And S101, determining whether a target object exists in a radar sensing range according to the collected radar data.
In an embodiment of the invention, the radar data is a set of points acquired in real time, each carrying distance information and direction information. The data may be acquired by a radar module with radar-data acquisition capability: the module emits a plurality of beams and collects the result returned by each beam as radar data. Such a module may be mounted, for example, on an automated guided vehicle or on a robot.
Alternatively, the radar sensing range may be a circular plane or a sector plane, such as a sector with a central angle of 90 degrees or 50 degrees and a radius of r. In general, the value of r can be related to the measurement parameters of the radar module or set according to the requirements of the user within the measurement parameters.
For example, a rectangular coordinate system is established with a radar acquisition point as a center, and a radar sensing range can be a first quadrant, a second quadrant, a third quadrant, a fourth quadrant, or even any combination of two adjacent quadrants, such as a range formed by the first quadrant and the second quadrant as the radar sensing range.
In this embodiment, the target object may be an object without vital signs, such as a vehicle or a ship, or an object with vital signs, such as an animal or a human; it is not specifically limited here.
As a first possible implementation manner, step S101 includes: carrying out threshold segmentation on radar data according to catastrophe points to obtain a plurality of point set areas to be identified; judging whether any one point set area to be identified in the plurality of point set areas to be identified meets a preset condition or not; and if so, judging that a target object exists in the point set region to be identified which meets the preset condition.
The catastrophe points (also called mutation points below) are points where the distance between two adjacent points in the radar data exceeds a preset value, which can be set according to user requirements. For example, when the distance between two adjacent points A and B is greater than the preset value, threshold segmentation places A in one point-set region to be identified and B in another.
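The threshold segmentation by catastrophe (mutation) points can be sketched as follows, assuming the radar points arrive as Cartesian coordinates ordered by beam angle; the function name and threshold parameter are illustrative:

```python
import math

def segment_by_mutation(points, threshold):
    """Split an ordered radar scan into point-set regions to be identified.

    points: list of (x, y) tuples, ordered by beam angle.
    threshold: the preset value; a gap between consecutive points larger
    than this is a mutation point and starts a new region.
    """
    if not points:
        return []
    regions = [[points[0]]]
    for prev, curr in zip(points, points[1:]):
        if math.dist(prev, curr) > threshold:
            regions.append([curr])   # mutation: curr opens a new region
        else:
            regions[-1].append(curr)
    return regions
```

A scan containing two clusters separated by a gap wider than the threshold is thus split into two point-set regions, each of which is then tested against the preset condition below.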
Each point set area to be identified comprises a plurality of points, and each point corresponds to the result acquired by the light beam emitted by the radar. For example, the radar collects the result C by the emitted C-beam, and the result C exists in the radar data in the form of a point, for example, a point including two-dimensional information of distance and direction.
Optionally, the threshold used for threshold segmentation may also be set according to user requirements, and is not specifically limited herein.
Optionally, determining whether any one of the multiple point-set regions to be identified meets the preset condition includes: judging whether the distance between the starting point and the terminating point of that region is within a preset range, and whether the shape of that region matches the shape of a preset target object; if the distance is within the preset range and the shape matches, the region meets the preset condition. That is, the size and shape of the region are matched against the preset target object. Alternatively, the size, amplitude and shape formed between the region's starting and ending points are matched against features of the preset target object such as its height and shape.
The shape of the point set region to be identified refers to a shape formed by sequentially connecting points from a starting point to an end point in the point set region to be identified. For example, assuming that the point set region to be recognized includes three points, which are a start point, a middle point, and an end point, respectively, the formed shape is formed by connecting the start point, the middle point, and the end point in this order.
Within the radar sensing range, the point on the positive boundary is taken as the starting point and the point on the negative boundary as the ending point. For example, if the radar sensing range spans 270 degrees, from plus 135 degrees to minus 135 degrees, the point at plus 135 degrees is the starting point and the point at minus 135 degrees is the ending point.
In practical use, the point-set regions to be identified may be determined by walking the points from the starting point to the ending point within the radar sensing range. For example, when the distance from the first point (the starting point) to the second point is within the preset range, the distance from the second point to the third point is calculated; if that distance is not within the preset range, a mutation is determined between the second and third points. The first and second points then form one point-set region to be identified, with the second point as its end point; the third point becomes the starting point of the next point-set region to be identified, and so on, until all points within the radar sensing range are divided.
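The preset-condition check described above can be sketched as follows; the size range and the shape-matching predicate stand in for the preset range and the preset target shape, which the patent leaves application-specific:

```python
import math

def meets_preset_condition(region, size_range, shape_matcher):
    """Check whether a point-set region to be identified may contain the target.

    region: ordered list of (x, y) points; the first element is the region's
    starting point and the last its terminating point.
    size_range: (min_d, max_d), the preset range for the start-to-end distance.
    shape_matcher: callable returning True when the polyline formed by the
    region's points matches the preset target shape (e.g. the arc a person's
    legs leave in a 2-D laser scan); its definition is an assumption.
    """
    if len(region) < 2:
        return False
    span = math.dist(region[0], region[-1])
    return size_range[0] <= span <= size_range[1] and shape_matcher(region)
```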
For example, as shown in fig. 2, a point set region D formed by radar data is subjected to threshold segmentation according to a mutation point to obtain a plurality of to-be-identified point set regions, where the plurality of to-be-identified point set regions include a to-be-identified point set region D1, a to-be-identified point set region D2, a to-be-identified point set region D3, and a to-be-identified point set region D4. And matching the size, the amplitude and the shape formed between the starting point and the ending point of any one point set area to be identified in the point set area d1, the point set area d2, the point set area d3 and the point set area d4 to be identified with the graph formed by the target object in the radar data, thereby realizing the purpose of determining whether the target object exists in the radar sensing range through the radar data.
As a second possible implementation manner, step S101 includes: respectively performing threshold segmentation on multiple frames of continuously collected radar data according to mutation points to obtain multiple point-set regions to be identified for each frame of radar data; judging whether the point-set regions at the same position across the frames meet the preset condition; and if so, determining that the target object exists in the point-set region that meets the preset condition. That is, the point-set regions corresponding to the same position are identified in every frame of radar data and checked against the preset condition; if the condition is met in each frame, a target object is determined to exist within the radar sensing range.
The preset condition may refer to the content recorded in the first possible implementation manner, and is not described herein again.
In this embodiment, by identifying the point-set regions corresponding to the same position across frames, cases in which no target object is present can be effectively filtered out, removing irrelevant items and further improving the accuracy of target identification.
Alternatively, the multiple frames may be selected from radar data acquired in real time at 20 frames per second; that is, each frame is the radar data collected within a predetermined unit time. For example, if 20 frames can be collected in one second, all 20 may be used, or a run of n consecutive frames (where n is a positive integer greater than 1 and less than 20) may be selected from the 20 for threshold segmentation according to the mutation points.
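The multi-frame confirmation of this second implementation can be sketched as follows; matching regions across frames by list index is a simplifying assumption, since the patent only requires that the regions be at the same position:

```python
def target_indices_in_all_frames(frames_of_regions, condition):
    """frames_of_regions: for each of n consecutive radar frames, the list of
    point-set regions produced by threshold segmentation. A target is declared
    only at positions where the region satisfies the preset condition in
    every frame; index-based position matching is a simplifying assumption
    (a real system would match by spatial overlap)."""
    n_regions = min(len(frame) for frame in frames_of_regions)
    return [i for i in range(n_regions)
            if all(condition(frame[i]) for frame in frames_of_regions)]
```

A region that satisfies the condition in only some frames (a transient echo, for instance) is filtered out, which is the accuracy benefit described above.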
And step S102, when it is determined that a target object exists in the radar sensing range, acquiring a visual image corresponding to the radar data, wherein the visual image comprises a region located in the radar sensing range.
Alternatively, the visual image is an image selected from images acquired in real time corresponding to the time at which the radar data is acquired. Wherein the image is a real-time image within the radar perception range acquired by an image acquisition device (such as a monocular camera).
Alternatively, the acquisition time of the visual image and the radar data may be within a preset time interval. The preset time interval may be set according to actual requirements, and is not specifically limited herein.
Continuing the example in step S101, an inexpensive ordinary camera (cheaper than a depth camera or binocular camera) is installed on the automated guided vehicle to acquire visual images of the radar sensing range in real time; when step S101 determines that the target object exists, a visual image corresponding to the radar data is acquired.
For example, after it is determined that the target object exists through step S101, assuming that radar data of the target object exists and is acquired at time t1, a visual image corresponding to time t1 is acquired from a plurality of visual images acquired by a camera in real time.
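Selecting the visual image that corresponds to the radar data at time t1 can be sketched as a nearest-timestamp lookup, with the preset time interval as a cutoff (names are illustrative):

```python
def image_for_radar_time(images, t_radar, max_interval):
    """images: list of (timestamp, frame) pairs captured in real time.
    Returns the frame whose timestamp is closest to the radar acquisition
    moment t_radar, or None when even the closest frame falls outside the
    preset time interval."""
    ts, frame = min(images, key=lambda tf: abs(tf[0] - t_radar))
    return frame if abs(ts - t_radar) <= max_interval else None
```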
It should be noted that the acquisition range of the visual image may be equal to the radar sensing range; larger than it, so that the radar sensing range lies within the acquisition range; or slightly smaller than it, so that the acquisition range lies within the radar sensing range. In some embodiments, for a more accurate tracking effect, the acquisition range may be set equal to the radar sensing range.
For example, assuming that the radar sensing range coincides with the area of the first quadrant, the acquisition range for acquiring the visual image also coincides with the area of the first quadrant.
Step S103, identifying whether the image of the target object exists in the visual image.
As a possible implementation manner, template matching is used to identify whether an image of the target object exists in the visual image: the visual image is matched against a preset template; if they match, an image of the target object is determined to exist in the visual image, otherwise it does not.
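A minimal template-matching sketch follows, scoring each window by the sum of absolute differences over grayscale pixel grids; production systems typically use normalized cross-correlation instead (e.g. OpenCV's matchTemplate), and `max_sad` stands in for the preset matching criterion:

```python
def match_template(image, template, max_sad):
    """Slide the template over every window of the image and report a match
    when some window's sum of absolute differences (SAD) is at most max_sad.
    image and template are 2-D lists of grayscale values."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            sad = sum(abs(image[y + j][x + i] - template[j][i])
                      for j in range(th) for i in range(tw))
            if sad <= max_sad:
                return True
    return False
```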
In practical applications, whether the image of the target object exists in the visual image may be identified by other means. On one hand, for a low-performance camera, a traditional machine learning method may be used, or, when network connectivity is good, a cloud-based scheme. On the other hand, for a high-performance camera, a deep learning method may be employed to recognize whether an image of the target object exists in the visual image.
And step S104, if the image of the target object exists in the visual image, tracking the target object.
In a first optional embodiment, when a clear visual image cannot be acquired during tracking because of changing illumination, in order to better track the target object, step S104 includes: determining first position information of the target object according to the radar data; and acquiring the instant position information of the target object in real time according to the first position information.
The first position information refers to a position of a target object in a two-dimensional plane, wherein the position is obtained through radar data, and the first position information comprises distance information and direction information.
Optionally, determining the first position information of the target object according to the radar data includes: determining the first position information according to the distance information and the direction information carried by the radar data. Because the radar data is two-dimensional data obtained on a two-dimensional plane, the distance and direction of the target object relative to the radar acquisition point can be derived from it, and the first position information is obtained from that distance and direction.
The radar acquisition point is the place from which the radar beam is transmitted; assuming the radar data is acquired by a radar mounted on an automated guided vehicle, it is the position of the vehicle.
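Converting such a (distance, direction) return into a planar position is a simple polar-to-Cartesian transform. A minimal sketch — the bearing convention (measured counter-clockwise from the x-axis, in degrees) is an assumption, not specified by the patent:

```python
import math

def first_position(distance, bearing_deg):
    """Convert a radar return (range, bearing) into a 2-D position
    relative to the radar acquisition point. The angle convention
    (counter-clockwise from the x-axis) is an illustrative assumption."""
    theta = math.radians(bearing_deg)
    return (distance * math.cos(theta), distance * math.sin(theta))

# A target 5 m away, 90 degrees from the x-axis:
x, y = first_position(5.0, 90.0)  # (≈0.0, 5.0)
```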
Optionally, obtaining the instant position information of the target object in real time according to the first position information includes: inputting the real-time first position information into an extended Kalman filter tracking system for processing, so as to predict the instant position information of the target object. A specific implementation is as follows:
determining a time interval for receiving two adjacent radar data; determining the speed of the target object according to the first position information corresponding to two adjacent moments and the time interval; and inputting the first position information corresponding to the speed and the current moment into an extended Kalman filtering tracking system to predict the instant position information.
The extended Kalman filter tracking system uses the position and velocity of the target object as the state quantities (i.e., the first position information and the velocity serve as the state), uses a motion model for prediction, and uses the radar data or the visual recognition result as the observation, thereby constructing the tracking system. The motion model can be expressed as: position at the current moment (i.e., the first position at the current moment) = position at the previous moment + velocity x time difference, where the time difference is the difference between the two adjacent moments.
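The predict/update cycle of such a tracker can be sketched in one dimension. Note this is a plain linear Kalman filter over the constant-velocity motion model stated above (the patent's system is an extended Kalman filter; the noise levels q and r and all numbers below are assumptions for illustration):

```python
import numpy as np

def kf_step(x, P, z, dt, q=1e-3, r=0.05):
    """One predict/update cycle with the constant-velocity model:
    position_t = position_{t-1} + velocity * dt.
    State x = [position, velocity]; z is the observed position
    (from radar data or the visual recognition result)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # motion model
    H = np.array([[1.0, 0.0]])              # only position is observed
    Q = q * np.eye(2)
    R = np.array([[r]])
    # Predict with the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the observation.
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track an object moving at 1 m/s, observed every 0.1 s.
x, P = np.array([0.0, 0.0]), np.eye(2)
for k in range(1, 50):
    x, P = kf_step(x, P, np.array([0.1 * k]), dt=0.1)
# x[0] approaches the latest position, x[1] the 1 m/s velocity.
```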
The instant location information refers to a location corresponding to the target object at the current moment.
In the embodiment of the invention, the instant position information of the target object is obtained, so that the position output at any moment is ensured for tracking, and the real-time performance of tracking is further ensured.
Continuing with the example in step S101, after the automated guided vehicle obtains the instant location information of the target object based on the target identification and tracking method provided by the embodiment of the present invention, the target object can be tracked through the instant location information.
In a second optional embodiment, when the radar data is occluded, in order to better track the target object, step S104 includes: acquiring area information of an image of a target object in a visual image; determining first position information of a target object according to the radar data, wherein the first position information comprises distance information and direction information; determining specification information of the target object according to the first position information and the area information; determining second position information of the target object according to the specification information and the area information, wherein the second position information comprises distance information and direction information; and acquiring the instant position information of the target object in real time according to the second position information.
Alternatively, the area information may be a rectangular box or an irregular area; its form is not specifically limited here.
The first position information may refer to the description in the first optional embodiment, and is not described herein again.
The region information refers to a range of the image of the target object in the visual image.
Optionally, determining the specification information of the target object according to the first position information and the area information includes: the distance of the target object from the device that acquires the radar data (for example, the distance between the target object and the automated guided vehicle) is obtained from the radar data, and the region occupied by the target object is obtained from the visual image. Specifically, using the a-priori relative pose of the radar and the camera, the radar return can be projected into the camera image, which gives the width of the target object on the camera plane and the position in the visual image corresponding to that width. These data, together with the camera imaging principle (namely pinhole imaging), yield the specification information of the target object as well as the relative position between the target object and the device that acquires the visual image or the radar data (e.g., the relative position between the target object and the automated guided vehicle).
The specification information may be size information of the target object, such as length and width. When the target object is a human body, the specification information is the height of the human body and the width of the human body.
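Under the pinhole-imaging principle mentioned above, the real-world width follows from the pixel width, the radar-measured distance, and the focal length. A minimal sketch — the numbers are illustrative, not from the patent:

```python
def object_width(pixel_width, distance, focal_length_px):
    """Pinhole-imaging estimate of real-world size: with focal length f
    (in pixels) and depth Z (from the radar), a span of w pixels on the
    image plane corresponds to w * Z / f metres in the world."""
    return pixel_width * distance / focal_length_px

# A person 4 m away spanning 60 px under a 600 px focal length:
width_m = object_width(60, 4.0, 600.0)  # 0.4 m
```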
Optionally, as an implementation scenario, when a specific part of the target object needs to be tracked, the second position information further includes depth information. And the real-time tracking of the target object is realized through the depth information, the distance information and the direction information.
For example, if the target object is a human body and the head of the human body needs to be tracked currently, the position of the head of the human body can be obtained through the depth information in the second position information, so that the head can be tracked.
Alternatively, the acquisition of the instant position information may be obtained by using the extended kalman filter tracking system mentioned in the first optional embodiment. For a detailed implementation, please refer to the description in the first alternative embodiment, which is not repeated herein.
In this embodiment, by acquiring the specification information of the target object, rapid identification and tracking can still be completed even when the vehicle or the person is occluded, which effectively improves robustness.
In a third alternative embodiment, in the case of no illumination change in vision and when the radar data is not occluded, step S104 includes: acquiring area information of an image of a target object in a visual image; determining first position information of a target object according to the radar data; determining specification information of the target object according to the first position information and the area information; determining second position information of the target object according to the specification information and the area information; and acquiring instant position information of the target object in real time according to the first position information and the second position information.
Alternatively, the acquisition of the instant position information may be obtained by using the extended kalman filter tracking system mentioned in the first optional embodiment. For a detailed implementation, please refer to the description in the first alternative embodiment, which is not repeated herein.
In a fourth optional embodiment, after step S104, the method further includes: if the image of the target object does not exist in the visual image, returning to step S101 and executing it again. After the target object is again determined to exist in the radar sensing range according to the acquired radar data, steps S102 to S104 continue to be executed until the image of the target object exists in the visual image, and the first, second, or third optional embodiment is then executed again.
In a fifth optional embodiment, the method further includes setting a tracking distance. The tracking distance is the distance maintained between the target object and the device tracking it while tracking is in progress. For example, the tracking distance of the target object is set according to the instant position information of the target object, so that the tracking distance is adjusted in real time and real-time tracking is facilitated. By setting the tracking distance, the tracking apparatus or equipment (such as an automated guided vehicle) can track at any distance, which makes real-time tracking easier to realize when the distance between the tracking target and the tracking equipment changes. For example, when an obstacle appears between the target object and the tracking device, the device must bypass the obstacle in order to keep tracking the target object, which increases the distance between them; by setting the tracking distance in real time, effective tracking remains possible even as the distance grows or shrinks during tracking, further improving the accuracy of stably tracking a fixed target object. In addition, the specification information of the target object can be referred to when setting the tracking distance, so that different tracking distances can be set for target objects of different sizes, making tracking more convenient.
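One plausible way to act on a set tracking distance (an assumption for illustration — the patent does not specify a control law, and the proportional gain below is arbitrary) is a rule that moves the tracker toward the target when it is farther than the set distance and backs off when it is closer:

```python
import math

def follow_command(target_pos, tracker_pos, keep_distance, gain=0.5):
    """Proportional velocity command for maintaining the set tracking
    distance. target_pos comes from the instant position information;
    gain and the whole rule are illustrative assumptions."""
    dx = target_pos[0] - tracker_pos[0]
    dy = target_pos[1] - tracker_pos[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-9:
        return (0.0, 0.0)
    error = dist - keep_distance  # positive: too far, approach
    return (gain * error * dx / dist, gain * error * dy / dist)

# Target 4 m ahead along x, set distance 1.5 m: approach along x.
vx, vy = follow_command((4.0, 0.0), (0.0, 0.0), 1.5)
```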
According to the target identification and tracking method provided by the embodiment of the invention, radar data is used to determine whether a target object exists in the radar sensing range, i.e., the target object is pre-identified by radar; when the radar pre-identification indicates that a target object exists, visual identification then determines whether the target object is present, and if its image exists in the visual image, the target object is tracked. On the one hand, compared with prior-art methods, this scheme is simple and consumes few computing resources. On the other hand, because pre-identification is performed by radar, the frame-rate requirement on visual identification is low, so the requirement on the equipment acquiring the visual image is also low: an ordinary, low-cost camera can be used rather than the depth camera or binocular camera that prior-art methods necessarily require, making the method of the embodiment of the invention lower in cost.
Second embodiment
Fig. 2 shows a target recognition and tracking apparatus corresponding to the target recognition and tracking method of the first embodiment. As shown in fig. 2, the target recognition and tracking device 400 includes a radar pre-recognition module 410, a vision recognition module 420, a recognition module 430, and a tracking module 440. The functions implemented by the radar pre-recognition module 410, the visual recognition module 420, the recognition module 430, and the tracking module 440 correspond one-to-one to the corresponding steps in the first embodiment; to avoid redundancy, detailed descriptions are not repeated in this embodiment.
And a radar pre-recognition module 410, configured to determine whether a target object exists in a radar sensing range according to the collected radar data.
Optionally, the radar pre-recognition module 410 may be further configured to perform threshold segmentation on the radar data according to the catastrophe points to obtain a plurality of to-be-recognized point set regions; judging whether any one point set area to be identified in the plurality of point set areas to be identified meets a preset condition or not; and if so, judging that the target object exists in the point set region to be identified which meets the preset condition.
Wherein judging whether any one of the plurality of point set regions to be identified satisfies the preset condition includes: judging whether the distance between the starting point and the terminating point of the region is equal to a preset value and whether the shape of the region matches the target object; if so, the region is deemed to satisfy the preset condition.
Optionally, the radar pre-recognition module 410 may be further configured to perform threshold segmentation on continuously acquired multiple frames of radar data according to the catastrophe points, respectively, to obtain multiple to-be-recognized point set regions corresponding to each frame of radar data; judging whether the point set areas to be identified at the same positions in the multiple point set areas to be identified of each frame of radar data meet preset conditions or not; and if so, determining that the target object exists in the point set region to be identified which meets the preset condition.
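The segmentation at abrupt-change ("catastrophe") points described for the radar pre-recognition module can be sketched on a 1-D scan of ranges. The jump threshold, the expected span, and the stand-in predicate below are all assumptions — the patent's actual preset condition compares the start-to-end distance against a preset value and the region shape against the target object:

```python
import numpy as np

def segment_scan(ranges, jump=0.5):
    """Split a radar scan into candidate point-set regions at the
    abrupt-change points where the range jumps by more than `jump`
    between neighbouring beams. Threshold value is illustrative."""
    ranges = np.asarray(ranges, dtype=float)
    breaks = np.where(np.abs(np.diff(ranges)) > jump)[0] + 1
    return np.split(ranges, breaks)

def looks_like_target(segment, expected_span=(0.2, 0.8)):
    """Crude stand-in for the preset condition: accept a region whose
    start-to-end range difference falls in an expected span and that
    contains enough points."""
    lo, hi = expected_span
    span = abs(segment[-1] - segment[0])
    return lo <= span <= hi and len(segment) >= 3

# A wall at ~5 m with a rounded object at ~2 m in the middle:
scan = [5.0, 5.0, 5.1, 2.0, 2.2, 2.4, 2.5, 5.0, 5.0]
segments = segment_scan(scan)          # three regions
hits = [s for s in segments if looks_like_target(s)]
```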
A visual identification module 420, configured to, when it is determined that a target object exists in the radar sensing range, obtain a visual image corresponding to the radar data, where the visual image includes an area located in the radar sensing range.
An identifying module 430 for identifying whether an image of the target object exists in the visual image.
A tracking module 440, configured to track the target object when an image of the target object exists in the visual image.
In a first optional embodiment, when a clear visual image cannot be acquired during tracking because of changing illumination, in order to better track the target object, the tracking module 440 is configured to determine first position information of the target object according to the radar data, and to acquire the instant position information of the target object in real time according to the first position information, so as to track the target object in real time through the instant position information.
Optionally, obtaining the instant position information of the target object in real time according to the first position information includes: inputting the real-time first position information into an extended Kalman filter tracking system for processing, so as to predict the instant position information of the target object. A specific implementation is as follows:
determining a time interval for receiving two adjacent radar data; determining the speed of the target object according to the first position information corresponding to two adjacent moments and the time interval; and inputting the first position information corresponding to the speed and the current moment into an extended Kalman filtering tracking system to predict the instant position information.
In a second optional embodiment, when the radar data is blocked, in order to better track the target object, the tracking module 440 is configured to obtain region information of the image of the target object in the visual image; determining first position information of a target object according to the radar data, wherein the first position information comprises distance information and direction information; determining specification information of the target object according to the first position information and the area information; determining second position information of the target object according to the specification information and the area information, wherein the second position information comprises distance information and direction information; and acquiring the instant position information of the target object in real time according to the second position information.
In a third optional embodiment, in order to better track the target object under the condition that there is no illumination change in the vision and when the radar data is not occluded, the tracking module 440 is configured to obtain the region information of the image of the target object in the visual image; determining first position information of a target object according to the radar data; determining specification information of the target object according to the first position information and the area information; determining second position information of the target object according to the specification information and the area information; and acquiring instant position information of the target object in real time according to the first position information and the second position information.
Further, the target recognition and tracking device 400 further includes: and a tracking distance setting module.
And the tracking distance setting module is used for setting a tracking distance according to the instant position information.
The target recognition and tracking device 400 may be a robotic vehicle or a smart robot, or the target recognition and tracking device 400 may be mounted on a robotic vehicle or a smart robot.
Third embodiment
Fig. 4 and fig. 5 are schematic diagrams of a terminal device 300. The terminal device 300 includes a memory 302, a processor 304, a computer program 303 stored in the memory 302 and executable on the processor 304, a radar 305, an image acquisition apparatus 306, and a body 307. When executed by the processor 304, the computer program 303 implements the target identification and tracking method of the first embodiment, which is not described here again to avoid repetition. Alternatively, the computer program 303 is executed by the processor 304 to implement the functions of each module/unit in the target identification and tracking apparatus of the second embodiment; to avoid repetition, details are likewise not repeated here.
Illustratively, the computer program 303 may be partitioned into one or more modules/units, which are stored in the memory 302 and executed by the processor 304 to implement the present invention. One or more of the modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 303 in the terminal device 300. For example, the computer program 303 may be divided into the radar pre-recognition module 410, the vision recognition module 420, the recognition module 430, and the tracking module 440 in the second embodiment, and specific functions of each module are as described in the first embodiment or the second embodiment, which are not described herein again.
The memory 302 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 302 is used for storing a program; the processor 304 executes the program after receiving an execution instruction, and the method defined by the flow disclosed in any of the foregoing embodiments of the present invention may be applied to, or implemented by, the processor 304.
The processor 304 may be an integrated circuit chip having signal processing capabilities. The processor 304 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed by it. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
In this embodiment, the radar 305 is disposed on the body 307 and connected to the processor 304. The radar 305 is used to collect radar data in real time within a radar sensing range.
The image capturing device 306 is disposed on the body 307 and connected to the processor 304. The image acquisition device 306 is used for acquiring a visual image in a radar perception range in real time.
Optionally, the image capture device 306 is a monocular camera.
Optionally, the radar 305 and the image acquisition device 306 are disposed on the same axis or the same horizontal line of the body 307. As shown in fig. 5, the radar 305 and the image acquisition device 306 are on the same axis in the direction indicated by the arrow, so that the radar 305 and the image acquisition device 306 can acquire data in the same area.
In actual use, the radar 305 and the image acquisition device 306 may be separated by a preset distance. In general, the value of the preset distance may be set according to the performance of the radar 305 and the image acquisition device 306, or the acquisition angle of view.
Optionally, in order to better cover the radar sensing range of the radar 305 with the image capturing range of the image capturing device 306, the radar 305 is integrated with the image capturing device 306.
Optionally, the processor 304 is configured to execute steps S101 to S104 in the foregoing first embodiment.
Optionally, the processor 304 is further configured to perform the steps in the fifth optional embodiment of the foregoing first embodiment.
The terminal device 300 is used as a tracking device for tracking a target object, and the terminal device 300 may be, but is not limited to, an automated guided vehicle or an intelligent robot.
It is understood that the structures shown in fig. 4 and 5 are only schematic diagrams of the terminal device 300, and the terminal device 300 may include more or fewer components than those shown in fig. 4 and 5. The components shown in fig. 4 may be implemented in hardware, software, or a combination thereof.
Fourth embodiment
An embodiment of the present invention further provides a storage medium storing instructions. When the instructions run on a computer, the computer program is executed by a processor to implement the target identification and tracking method of the first embodiment, which is not described here again to avoid repetition. Alternatively, the computer program is executed by the processor to implement the functions of each module/unit in the target identification and tracking apparatus of the second embodiment; details are likewise not repeated here to avoid repetition.
From the above description of the embodiments, it is clear to those skilled in the art that the present invention can be implemented by hardware, or by software plus a necessary general hardware platform, and based on such understanding, the technical solution of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.), and includes several instructions to make a computer device (which can be a personal computer, a server, or a network device, etc.) execute the method of the various implementation scenarios of the present invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.

Claims (13)

1. A method for target identification and tracking, the method comprising:
determining whether a target object exists in a radar sensing range according to the collected radar data;
when a target object exists in the radar sensing range, acquiring a visual image corresponding to the radar data, wherein the visual image comprises a region located in the radar sensing range; the visual image is an image selected from the acquired images and corresponding to the moment when the radar data is acquired;
identifying whether an image of the target object is present in the visual image;
if the image of the target object exists in the visual image, tracking the target object;
the tracking the target object comprises:
acquiring region information of the image of the target object in the visual image;
determining first position information of the target object according to the radar data, wherein the first position information comprises distance information and direction information;
determining specification information of the target object according to the first position information and the area information;
determining second position information of the target object according to the specification information and the area information, wherein the second position information comprises distance information and direction information;
and predicting the instant position information of the target object according to the second position information.
2. The method of claim 1, wherein determining whether a target object is present within a radar sensing range based on the collected radar data comprises:
carrying out threshold segmentation on the radar data according to the catastrophe points to obtain a plurality of point set areas to be identified;
judging whether any one point set area to be identified in the plurality of point set areas to be identified meets a first preset condition or not;
and if so, judging that the target object exists in the point set region to be identified which meets the preset condition.
3. The method according to claim 2, wherein the determining whether any one of the to-be-identified point set regions satisfies a preset condition comprises:
judging whether the distance between a starting point and a terminating point in any one to-be-identified point set area in the multiple to-be-identified point set areas is within a preset range or not and whether the shape of any to-be-identified point set area is matched with the shape of a preset target object or not; if the distance is within the preset range and the shape is matched with the preset target object, representing that the any one point set region to be identified meets the preset condition.
4. The method of claim 1, wherein determining whether a target object is present within a radar sensing range based on the collected radar data comprises:
respectively carrying out threshold segmentation on multiple frames of continuously collected radar data according to catastrophe points to obtain multiple to-be-identified point set regions corresponding to each frame of radar data;
judging whether the point set areas to be identified at the same positions in the plurality of point set areas to be identified of each frame of radar data meet preset conditions or not;
and if so, determining that the target object exists in the point set region to be identified which meets the preset condition.
5. The method according to any one of claims 1 to 4, wherein the tracking the target object comprises:
determining first position information of the target object according to the radar data;
and predicting the instant position information of the target object according to the first position information.
6. The method of claim 5, wherein predicting the immediate location information of the target object based on the first location information comprises:
determining a time interval for receiving two adjacent radar data;
determining the speed of the target object according to the first position information corresponding to two adjacent moments and the time interval;
and predicting the instant position information of the target object according to the speed and the first position information corresponding to the current moment.
7. The method of claim 1, wherein predicting the immediate location information of the target object based on the second location information comprises:
and predicting instant position information of the target object according to the first position information and the second position information.
8. The method of claim 5, further comprising:
and setting a tracking distance according to the instant position information.
9. An apparatus for identifying and tracking objects, comprising:
the radar pre-recognition module is used for determining whether a target object exists in a radar sensing range according to the collected radar data;
the visual identification module is used for acquiring a visual image corresponding to the radar data when a target object exists in the radar sensing range, wherein the visual image comprises an area located in the radar sensing range; the visual image is an image selected from the acquired images and corresponding to the moment when the radar data is acquired;
an identification module for identifying whether an image of the target object exists in the visual image;
a tracking module for tracking the target object when an image of the target object exists in the visual image;
the tracking module is used for acquiring the area information of the image of the target object in the visual image; determining first position information of the target object according to the radar data, wherein the first position information comprises distance information and direction information; determining specification information of the target object according to the first position information and the area information; determining second position information of the target object according to the specification information and the area information, wherein the second position information comprises distance information and direction information; and predicting the instant position information of the target object according to the second position information.
10. A terminal device, comprising:
a body;
a radar disposed on the body;
an image acquisition device disposed on the body; and
a processor disposed on the body and connected to the radar and the image acquisition device respectively, the processor being configured to: determine, according to radar data acquired by the radar, whether a target object exists in the radar sensing range; acquire, when the target object exists, a visual image captured by the image acquisition device within the radar sensing range and corresponding to the moment at which the target object exists; identify whether an image of the target object exists in the visual image; and track the target object when an image of the target object exists in the visual image; wherein the visual image is the image, selected from the acquired images, that corresponds to the moment at which the radar data was acquired;
wherein the processor being configured to track the target object comprises being configured to: acquire region information of the image of the target object in the visual image; determine first position information of the target object according to the radar data, the first position information comprising distance information and direction information; determine specification information of the target object according to the first position information and the region information; determine second position information of the target object according to the specification information and the region information, the second position information comprising distance information and direction information; and acquire instant position information of the target object in real time according to the second position information.
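The claim pairs each radar measurement with the captured frame closest to it in time. A minimal sketch of that timestamp matching (the buffer layout and names are illustrative assumptions):

```python
def select_visual_image(frames, radar_timestamp):
    """Pick, from a buffer of (timestamp, image) pairs, the frame whose
    capture time is closest to the moment the radar data was acquired."""
    if not frames:
        return None
    return min(frames, key=lambda f: abs(f[0] - radar_timestamp))[1]

# A 25 fps camera buffers frames every 0.04 s; radar data arrives at t = 0.05 s.
frames = [(0.00, "frame0"), (0.04, "frame1"), (0.08, "frame2")]
print(select_visual_image(frames, 0.05))  # frame1 is nearest to t = 0.05
```

In practice the two sensors run on independent clocks, so this nearest-timestamp selection (or interpolation between two frames) is what keeps radar-triggered detections aligned with the right image.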
11. The terminal device of claim 10, wherein the processor being configured to track the target object comprises being configured to: determine first position information of the target object according to the radar data; and acquire instant position information of the target object in real time according to the first position information.
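One simple way to turn periodic radar position fixes into real-time "instant" position information is constant-velocity extrapolation between updates. The sketch below assumes 2-D Cartesian positions; the names and the motion model are illustrative, not prescribed by the patent:

```python
def extrapolate(p_prev, t_prev, p_curr, t_curr, t_query):
    """Constant-velocity extrapolation of a 2-D position (x, y):
    estimate where the target is at t_query, after the latest fix."""
    dt = t_curr - t_prev
    if dt <= 0:
        return p_curr  # degenerate input: fall back to the latest fix
    vx = (p_curr[0] - p_prev[0]) / dt
    vy = (p_curr[1] - p_prev[1]) / dt
    lag = t_query - t_curr
    return (p_curr[0] + vx * lag, p_curr[1] + vy * lag)

# Two radar fixes 0.1 s apart; where is the target 0.05 s after the last one?
print(extrapolate((0.0, 0.0), 0.0, (1.0, 0.5), 0.1, 0.15))
```

A Kalman filter would smooth the velocity estimate as well, but the constant-velocity step above is the core of bridging the gap between sensor updates.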
12. The terminal device according to claim 10, wherein the processor being configured to acquire the instant position information of the target object in real time according to the second position information comprises being configured to: acquire the instant position information of the target object in real time according to both the first position information and the second position information.
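Claim 12 combines the radar-derived (first) and vision-derived (second) position information. One common way to fuse two independent estimates of the same quantity, shown here purely as an illustration rather than as the patent's method, is inverse-variance weighting:

```python
def fuse(x_radar, var_radar, x_vision, var_vision):
    """Inverse-variance weighted fusion of two independent estimates of the
    same scalar (e.g. target distance): the lower-variance source counts more."""
    w_r = 1.0 / var_radar
    w_v = 1.0 / var_vision
    fused = (w_r * x_radar + w_v * x_vision) / (w_r + w_v)
    fused_var = 1.0 / (w_r + w_v)  # fused estimate is tighter than either input
    return fused, fused_var

# Radar range is precise (variance 0.01 m^2), vision range less so (0.04 m^2).
d, v = fuse(10.0, 0.01, 10.5, 0.04)
print(d, v)
```

The fused value lands closer to the radar reading because radar is the more certain range sensor, while vision typically contributes the more certain bearing; weighting each axis by its own variance captures that complementarity.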
13. A storage medium having instructions stored thereon which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 8.
CN201810675774.2A 2018-06-26 2018-06-26 Target identification and tracking method, device, equipment and medium Active CN108845574B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810675774.2A CN108845574B (en) 2018-06-26 2018-06-26 Target identification and tracking method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810675774.2A CN108845574B (en) 2018-06-26 2018-06-26 Target identification and tracking method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN108845574A CN108845574A (en) 2018-11-20
CN108845574B true CN108845574B (en) 2021-01-12

Family

ID=64202564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810675774.2A Active CN108845574B (en) 2018-06-26 2018-06-26 Target identification and tracking method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN108845574B (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018176000A1 (en) 2017-03-23 2018-09-27 DeepScale, Inc. Data synthesis for autonomous control systems
US11157441B2 (en) 2017-07-24 2021-10-26 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US10671349B2 (en) 2017-07-24 2020-06-02 Tesla, Inc. Accelerated mathematical engine
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
US11215999B2 (en) 2018-06-20 2022-01-04 Tesla, Inc. Data pipeline and deep learning system for autonomous driving
US11361457B2 (en) 2018-07-20 2022-06-14 Tesla, Inc. Annotation cross-labeling for autonomous control systems
US11636333B2 (en) 2018-07-26 2023-04-25 Tesla, Inc. Optimizing neural network structures for embedded systems
US11562231B2 (en) 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
AU2019357615B2 (en) 2018-10-11 2023-09-14 Tesla, Inc. Systems and methods for training machine models with augmented data
US11196678B2 (en) 2018-10-25 2021-12-07 Tesla, Inc. QOS manager for system on a chip communications
US11816585B2 (en) 2018-12-03 2023-11-14 Tesla, Inc. Machine learning models operating at different frequencies for autonomous vehicles
US11537811B2 (en) 2018-12-04 2022-12-27 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11610117B2 (en) 2018-12-27 2023-03-21 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US11150664B2 (en) 2019-02-01 2021-10-19 Tesla, Inc. Predicting three-dimensional features for autonomous driving
US10997461B2 (en) 2019-02-01 2021-05-04 Tesla, Inc. Generating ground truth for machine learning from time series elements
US11567514B2 (en) 2019-02-11 2023-01-31 Tesla, Inc. Autonomous and user controlled vehicle summon to a target
US10956755B2 (en) 2019-02-19 2021-03-23 Tesla, Inc. Estimating object properties using visual image data
CN110412563A * 2019-07-29 2019-11-05 Harbin Institute of Technology Multi-sensor-fusion-based portable rangefinder for assisting train carriage mounting, and working method thereof
DE102019213155A1 * 2019-08-30 2021-03-04 Robert Bosch Gmbh Method and device for operating a vehicle
CN110550579B * 2019-09-10 2021-05-04 Lingdong Technology (Beijing) Co., Ltd. Automatic guided forklift
CN113111682A * 2020-01-09 2021-07-13 Alibaba Group Holding Ltd. Target object sensing method and device, sensing base station and sensing system
CN111409584A * 2020-02-27 2020-07-14 GAC NIO New Energy Vehicle Technology Co., Ltd. Pedestrian protection method, device, computer equipment and storage medium
CN111736140B * 2020-06-15 2023-07-28 Hangzhou Hikmicro Sensing Technology Co., Ltd. Object detection method and image pickup device
CN111929672A * 2020-08-06 2020-11-13 Zhejiang Dahua Technology Co., Ltd. Method and device for determining movement track, storage medium and electronic device
CN112356845B * 2020-11-19 2022-02-18 China FAW Co., Ltd. Method, device and equipment for predicting motion state of target, and vehicle
CN113030944B * 2021-04-16 2024-02-02 Shenzhen Zhongyun Information Technology Co., Ltd. Radar target tracking method
CN113282082A * 2021-04-30 2021-08-20 Suzhou Youshida Intelligent Technology Co., Ltd. Unmanned ship autonomous tracking system based on combination of binocular vision and radar

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1032745A (en) * 1996-07-16 1998-02-03 Japan Radio Co Ltd Moving target follow-up image pickup device
CN101526995A * 2009-01-19 2009-09-09 Xidian University Synthetic aperture radar target identification method based on diagonal subclass discriminant analysis
CN102303605A * 2011-06-30 2012-01-04 China Automotive Technology and Research Center Multi-sensor information fusion-based collision and departure pre-warning device and method
CN102508246A * 2011-10-13 2012-06-20 Jilin University Method for detecting and tracking obstacles in front of a vehicle

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101697007B * 2008-11-28 2012-05-16 Beihang University Radar-image-based flying target identification and tracking method
WO2012122589A1 * 2011-03-11 2012-09-20 The University Of Sydney Image processing
EP2827099A1 * 2013-07-16 2015-01-21 Leica Geosystems AG Laser tracker with target searching functionality
CN104796612B * 2015-04-20 2017-12-19 Henan Hongjin Electronic Technology Co., Ltd. High-definition radar linkage tracking control camera system and linkage tracking method
CN105137421A * 2015-06-25 2015-12-09 Suzhou Tushi Electronic Technology Co., Ltd. Photoelectric composite low-altitude early warning detection system
CN106405540A * 2016-08-31 2017-02-15 Shanghai Yingjue Technology Co., Ltd. Radar and photoelectric device complementation-based detection and identification device and method
CN108078555A * 2016-11-23 2018-05-29 Nanjing University of Science and Technology Vital sign remote monitoring device based on Kalman filtering and target tracking
CN107092039A * 2017-03-10 2017-08-25 Nanjing Woyang Machinery Technology Co., Ltd. Farm environment perception method for agricultural machinery navigation
CN107944382B * 2017-11-20 2019-07-12 Beijing Kuangshi Technology Co., Ltd. Target tracking method and device, and electronic device

Also Published As

Publication number Publication date
CN108845574A (en) 2018-11-20

Similar Documents

Publication Publication Date Title
CN108845574B (en) Target identification and tracking method, device, equipment and medium
CN112292711B (en) Associating LIDAR data and image data
CN110163904B (en) Object labeling method, movement control method, device, equipment and storage medium
CN109541583B (en) Front vehicle distance detection method and system
CN111753609B (en) Target identification method and device and camera
US11205276B2 (en) Object tracking method, object tracking device, electronic device and storage medium
EP3007099B1 (en) Image recognition system for a vehicle and corresponding method
CN111160302A (en) Obstacle information identification method and device based on automatic driving environment
US9513108B2 (en) Sensor system for determining distance information based on stereoscopic images
CN110850859B (en) Robot and obstacle avoidance method and obstacle avoidance system thereof
CN115049700A (en) Target detection method and device
EP3703008A1 (en) Object detection and 3d box fitting
TW201539378A (en) Object detection system
CN111913177A (en) Method and device for detecting target object and storage medium
CN111382637A (en) Pedestrian detection tracking method, device, terminal equipment and medium
CN104915642A (en) Method and apparatus for measurement of distance to vehicle ahead
WO2018103024A1 (en) Intelligent guidance method and apparatus for visually handicapped person
Sorial et al. Towards a real time obstacle detection system for unmanned surface vehicles
CN110673607A (en) Feature point extraction method and device in dynamic scene and terminal equipment
US20230009925A1 (en) Object detection method and object detection device
JP2013069045A (en) Image recognition device, image recognition method, and image recognition program
CN115601435B (en) Vehicle attitude detection method, device, vehicle and storage medium
CN112733678A (en) Ranging method, ranging device, computer equipment and storage medium
US9183448B2 (en) Approaching-object detector, approaching object detecting method, and recording medium storing its program
CN113516685B (en) Target tracking method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100000 Beijing Haidian District, Dongbei Wangxi Road, No. 8 Building, No. 2 District 106-1

Applicant after: Beijing Wide-sighted Robot Technology Co., Ltd.

Applicant after: MEGVII INC.

Address before: 100000 Beijing Haidian District, Dongbei Wangxi Road, No. 8 Building, No. 2 District 106-1

Applicant before: Beijing AI Ruisi Robot Technology Co Ltd

Applicant before: MEGVII INC.

GR01 Patent grant