CN111213153A - Target object motion state detection method, device and storage medium

Info

Publication number
CN111213153A
Authority
CN
China
Prior art keywords
three-dimensional
target object
feature point
determining
three-dimensional feature
Prior art date
Legal status
Pending
Application number
CN201980004912.7A
Other languages
Chinese (zh)
Inventor
周游
赵峰
杜劼熹
Current Assignee
SZ DJI Technology Co Ltd
Shenzhen DJ Innovation Industry Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN111213153A publication Critical patent/CN111213153A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods


Abstract

A method, a detection device, and a storage medium for detecting the motion state of a target object are provided. The method includes: acquiring a first driving environment image at a first time (S201); acquiring a second driving environment image at a second time (S202); determining three-dimensional information of a first three-dimensional feature point of the target object in a world coordinate system according to the first driving environment image (S203); determining three-dimensional information of a second three-dimensional feature point of the target object in the world coordinate system according to the second driving environment image (S204); and determining the motion state of the target object according to the three-dimensional information of the first three-dimensional feature point and of the second three-dimensional feature point in the world coordinate system (S205). The accuracy of detecting the motion state of a vehicle can thereby be improved.

Description

Target object motion state detection method, device and storage medium
Technical Field
Embodiments of the present invention relate to the field of intelligent control, and in particular to a method, a device, and a storage medium for detecting the motion state of a target object.
Background
With the continuous iterative development of intelligent control technology, more and more vehicles are equipped with automatic driving systems or driver-assistance systems, which bring great convenience to drivers. In an automatic driving system or a driver-assistance system, a very important function is estimating the motion state of surrounding vehicles. For example, the motion state of a vehicle may include its position information, speed information, moving direction, and so on. Existing automatic driving systems or driver-assistance systems estimate the motion state of a surrounding vehicle by observing its center of mass. However, existing systems often cannot accurately determine the center of mass of a surrounding vehicle. For example, fig. 1 is a schematic diagram of the 3D scan points of a vehicle-mounted lidar projected to a bird's-eye view. Initially, the target vehicle is at the front right of the host vehicle, and the host vehicle's lidar can only scan the rear left side of the target vehicle. The dotted portion in fig. 1 shows the projection range of the target vehicle's 3D scan points at the bird's-eye view angle, and the vehicle frame in fig. 1 is manually labeled. As shown in fig. 1, if the centroid is computed at this moment by averaging the 3D scan points, it falls at the lower left corner of the vehicle.
If the center of mass of a surrounding vehicle cannot be determined accurately, the prediction of its motion state will also be inaccurate. Therefore, how to accurately estimate the motion state of surrounding vehicles is a problem to be solved.
Disclosure of Invention
The embodiment of the invention provides a method, a device and a storage medium for detecting the motion state of a target object, which can improve the accuracy of detecting the motion state of a vehicle.
In a first aspect, an embodiment of the present invention provides a method for detecting a motion state of a target object, where the method includes:
acquiring a first driving environment image at a first time;
acquiring a second driving environment image at a second time, wherein the first time is a time before the second time, and the first driving environment image and the second driving environment image comprise target objects;
determining three-dimensional information of a first three-dimensional feature point of the target object in a world coordinate system according to the first driving environment image;
determining three-dimensional information of a second three-dimensional feature point of the target object in a world coordinate system according to the second driving environment image, wherein the second three-dimensional feature point is matched with the first three-dimensional feature point;
and determining the motion state of the target object according to the three-dimensional information of the first three-dimensional characteristic point in the world coordinate system and the three-dimensional information of the second three-dimensional characteristic point in the world coordinate system.
In a second aspect, an embodiment of the present invention provides another method for detecting a motion state of a target object, where the method includes:
acquiring a first driving environment image at a first time;
acquiring a second driving environment image at a second time, wherein the first time is a time before the second time, and the first driving environment image and the second driving environment image comprise target objects;
extracting at least one first two-dimensional feature point of a target object from the first driving environment image;
establishing an object model of the target object according to the at least one first two-dimensional feature point;
extracting at least one second two-dimensional feature point of the target object from the second driving environment image according to the object model of the target object;
updating the object model of the target object according to the second two-dimensional feature points to obtain an updated object model of the target object;
and determining the motion state of the target object according to the first two-dimensional feature point and the two-dimensional feature point matched with the first two-dimensional feature point in the second driving environment image.
In a third aspect, an embodiment of the present invention provides a detection apparatus, where the detection apparatus includes a memory, a processor, and an image capture device;
the camera device is used for acquiring a driving environment image;
the memory is used for storing program instructions;
the processor calls the program instructions stored in the memory to execute the following steps:
shooting at a first time through the camera device to obtain a first driving environment image; shooting at a second time to obtain a second driving environment image, wherein the first time is a time before the second time, and the first driving environment image and the second driving environment image comprise target objects;
determining three-dimensional information of a first three-dimensional feature point of the target object in a world coordinate system according to the first driving environment image;
determining three-dimensional information of a second three-dimensional feature point of the target object in a world coordinate system according to the second driving environment image, wherein the second three-dimensional feature point is matched with the first three-dimensional feature point;
and determining the motion state of the target object according to the three-dimensional information of the first three-dimensional characteristic point in the world coordinate system and the three-dimensional information of the second three-dimensional characteristic point in the world coordinate system.
In a fourth aspect, an embodiment of the present invention provides a detection apparatus, where the detection apparatus includes: a memory, a processor, and a camera device;
the camera device is used for acquiring a driving environment image;
the memory is used for storing program instructions;
the processor calls the program instructions stored in the memory to execute the following steps:
acquiring a first driving environment image at a first time through the camera device; acquiring a second driving environment image at a second time, wherein the first time is a time before the second time, and the first driving environment image and the second driving environment image comprise target objects;
extracting at least one first two-dimensional feature point of a target object from the first driving environment image;
establishing an object model of the target object according to the at least one first two-dimensional feature point;
extracting at least one second two-dimensional feature point of the target object from the second driving environment image according to the object model of the target object;
updating the object model of the target object according to the second two-dimensional feature points to obtain an updated object model of the target object;
and determining the motion state of the target object according to the first two-dimensional feature point and the two-dimensional feature point matched with the first two-dimensional feature point in the second driving environment image.
In the embodiment of the present invention, the detection device may determine, according to the first driving environment image, the three-dimensional information of a first three-dimensional feature point of the target object in a world coordinate system, determine, according to the second driving environment image, the three-dimensional information of a second three-dimensional feature point of the target object in the world coordinate system, and determine the motion state of the target object according to the three-dimensional information of the first three-dimensional feature point and of the second three-dimensional feature point in the world coordinate system. The first three-dimensional feature point is matched with the second three-dimensional feature point, that is, the motion state of the target object can be determined from the three-dimensional coordinate information of matched three-dimensional feature points on the target object, without determining it through the center of mass of the target object, which improves the accuracy of detecting the motion state of the target object. This in turn can improve driving safety and make driving more automated and intelligent.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that a person skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a 3D scanning point of a vehicle-mounted laser radar projected to a bird's-eye view angle according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a method for detecting a motion state of a target object according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of another method for detecting a motion state of a target object according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a target image provided by an embodiment of the invention;
fig. 5 is a schematic diagram of a first driving environment image according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a moving speed profile of a target object according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a method for fitting the moving speed of a target object according to an embodiment of the present invention;
fig. 8 is a schematic diagram illustrating a relationship between two-dimensional feature points and three-dimensional feature points according to an embodiment of the present invention;
fig. 9 is a schematic flow chart of a method for detecting a motion state of a target object according to another embodiment of the present invention;
FIG. 10 is a schematic diagram of another relationship between two-dimensional feature points and three-dimensional feature points according to an embodiment of the present invention;
fig. 11 is a schematic flow chart of a method for detecting a motion state of a target object according to another embodiment of the present invention;
fig. 12 is a schematic flow chart of a method for detecting a motion state of a target object according to another embodiment of the present invention;
fig. 13 is a schematic diagram illustrating a relationship between a two-dimensional feature point and a three-dimensional feature point according to another embodiment of the present invention;
fig. 14A is a schematic view of a driving environment image according to an embodiment of the present invention;
FIG. 14B is a schematic view of another driving environment image provided by an embodiment of the present invention;
fig. 15 is a schematic structural diagram of a detection apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The method for detecting the motion state of a target object provided by the embodiments of the present invention can be applied to a detection device. The detection device may be a device deployed on an object, such as a vehicle event data recorder, or a device connected to and carried in the object, such as a mobile phone or a tablet computer. The detection device detects target objects present around the object on which it is located. Both the object on which the detection device is located and the target object may be a vehicle, a mobile robot, an unmanned aerial vehicle, or the like, and the vehicle may be a smart electric vehicle, a scooter, a balance vehicle, an automobile, a truck, a machinery vehicle, or the like. The object on which the detection device is located and the target object may be of the same type, e.g. both vehicles, or of different types, e.g. the target object is a vehicle while the object on which the detection device is located is a mobile robot moving on the ground. In the embodiments of the present invention, both the object on which the detection device is located and the target object are vehicles; for ease of distinction, the object on which the detection device is located is referred to as the host vehicle, and the vehicle corresponding to the target object is referred to as the target vehicle.
The following further describes a method for detecting a motion state of a target object and related devices.
Referring to fig. 2, fig. 2 is a schematic flow chart of a method for detecting a motion state of a target object according to an embodiment of the present invention, and optionally, the method may be executed by a detection device. As shown in fig. 2, the method for detecting the motion state of the target object may include the following steps.
S201, the detection device acquires a first driving environment image at a first time.
S202, the detection device acquires a second running environment image at a second time.
Wherein the first time is a time before the second time, and the first driving environment image and the second driving environment image include a target object.
In S201 and S202, the detection device may include an image pickup device, and the driving environment image may be obtained by photographing the driving environment by the image pickup device. In one embodiment, the detection device may obtain the driving environment image by photographing the driving environment at regular time intervals by the image pickup device. For example, the fixed time interval may be 0.1s (seconds), the detection apparatus may start counting time from when the own vehicle is started, obtain a first running environment image by the imaging device when the running time of the own vehicle is 0.1s (i.e., a first time), obtain a second running environment image by the imaging device when the running time of the own vehicle is 0.2s (i.e., a second time), and so on. The fixed time interval may be set by a user or may be automatically set by the detection device. The detection equipment can also shoot the running environment at random time intervals through the camera device to obtain running environment images. For example, the random time interval may be 0.2s, 0.1s, or the like, and the first travel environment image is captured by the image capturing device when the travel time of the own vehicle is 0.2s (i.e., the first time), and the second travel environment image is captured by the image capturing device when the travel time of the own vehicle is 0.3s (i.e., the second time), or the like.
In another embodiment, the detection device may capture a driving environment by a camera to obtain a video stream, and select a specific frame in the video stream for detection. For example, the selection of a specific frame in the video stream may be continuous adjacent frames, or may be performed according to a fixed frame number interval, or may be performed according to a random frame number interval, which is similar to the foregoing time interval and is not described again.
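As a non-limiting illustration of this frame-selection step, the following Python sketch (an assumption for this description, not part of the original disclosure) grabs frames from a driving-environment video stream at a fixed frame-number interval using OpenCV; the interval value is only an example, and a random interval could be drawn per iteration instead.

```python
import cv2

def sample_frames(video_path, frame_interval=3):
    """Select every `frame_interval`-th frame of the driving-environment video stream."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:            # end of stream
            break
        if idx % frame_interval == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames
```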
Wherein the first driving environment image and the second driving environment image may include one or more objects therein. The target object may be any one of objects in the first travel environment image. The target object may be a moving object in the driving environment image, such as a surrounding driving vehicle, a non-motor vehicle, a walking pedestrian, or the like, and the target object may also be a non-moving object in the driving environment, such as a surrounding stationary vehicle or pedestrian, a road surface fixture, or the like. When the target object is a moving object, the motion state of the target object detected by the detection device may include speed information; when the target object is a non-moving object, it may be considered that the motion state of the target object detected by the detection device includes velocity information having a velocity of zero.
Taking the target object as another vehicle on the road as an example, the detection device obtains a driving environment image through the camera device, such as a captured image or a frame of a video stream, and performs visual recognition on it (e.g. vehicle recognition using a CNN) to obtain the regions in the image that are considered to be vehicles, marking each such region with a recognized bounding box. After the candidate vehicle bounding boxes are obtained, they are screened against preset reference thresholds (such as size and shape thresholds), and boxes that are obviously not vehicles, for example elongated strip-shaped boxes, are removed. Among the remaining bounding boxes, a larger box is considered to correspond to a vehicle that is closer to the host vehicle or larger, and is given a higher weight; a box closer to the host lane is considered to correspond to a vehicle closer to, or within, the lane of the host vehicle, and is also given a higher weight. The higher the weight, the higher the potential risk that the recognized vehicle poses to the host vehicle, so in subsequent processing only a predetermined number of bounding-box regions with the highest weights may be selected, which reduces the amount of computation. Of course, in some other embodiments the number of processed regions is not limited, and all recognized regions may be processed, in which case the selection step is omitted.
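A minimal sketch of this screening-and-weighting step is given below. It assumes bounding boxes in (x, y, w, h) pixel format; the aspect-ratio limits, the equal weighting of the two cues, and the top-N value are illustrative assumptions rather than values from the disclosure.

```python
def screen_and_rank_boxes(boxes, image_w, image_h, keep_n=5):
    """Drop boxes that are obviously not vehicles and rank the rest so that
    larger boxes and boxes nearer the ego lane receive higher weight."""
    ranked = []
    for (x, y, w, h) in boxes:
        aspect = w / float(h) if h else 0.0
        if aspect > 4.0 or aspect < 0.25:       # elongated strip: clearly not a vehicle
            continue
        size_weight = (w * h) / float(image_w * image_h)           # larger -> closer / bigger
        centre_offset = abs((x + w / 2.0) - image_w / 2.0) / (image_w / 2.0)
        lane_weight = 1.0 - centre_offset                           # nearer own lane -> higher
        ranked.append(((x, y, w, h), size_weight + lane_weight))
    ranked.sort(key=lambda item: item[1], reverse=True)
    return [box for box, _ in ranked[:keep_n]]   # keep only the highest-weight regions
```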
In one embodiment, the detection device may detect the motion states of a plurality of objects, i.e. the target object may refer to a plurality of objects.
S203, the detection device determines the three-dimensional information of the first three-dimensional feature point of the target object in a world coordinate system according to the first driving environment image.
In this embodiment of the present invention, the first three-dimensional feature point may be the three-dimensional feature point corresponding to a first two-dimensional feature point of the target object in the first driving environment image, and the first two-dimensional feature point may be any one of a plurality of two-dimensional feature points of the target object in the first driving environment image. The detection device may determine the three-dimensional information of the first three-dimensional feature point of the target object in the world coordinate system by means of a point cloud sensor, or determine the world coordinates of the first three-dimensional feature point of the target object from a binocular image. The binocular image is an image captured by a binocular camera device; for example, it may comprise a left-view image and a right-view image. Three-dimensional feature points are feature points having three-dimensional information, and two-dimensional feature points are feature points having two-dimensional information.
And S204, determining the three-dimensional information of a second three-dimensional characteristic point of the target object in a world coordinate system by the detection equipment according to the second driving environment image, wherein the second three-dimensional characteristic point is matched with the first three-dimensional characteristic point.
In this embodiment of the present invention, the second three-dimensional feature point may be a three-dimensional feature point corresponding to a second two-dimensional feature point of the target object in the second driving environment image. The second two-dimensional feature point may be a two-dimensional feature point that matches the first two-dimensional feature point in the second travel environment image, so that the second three-dimensional feature point matches the first three-dimensional feature point. That is, the second two-dimensional feature point and the first two-dimensional feature point may refer to two-dimensional feature points at the same position on the target object, and the second three-dimensional feature point and the first three-dimensional feature point may refer to three-dimensional feature points at the same position on the target object. The detection equipment can determine the three-dimensional information of the second three-dimensional characteristic point of the target object in a world coordinate system through the point cloud sensor, or determine the world coordinate of the second three-dimensional characteristic point of the target object according to the binocular image.
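For the binocular case, one possible back-projection is sketched below under the assumption of a rectified stereo pair with known intrinsics (fx, fy, cx, cy), baseline, and a camera-to-world pose T_world_cam; none of these symbols come from the disclosure, and a point cloud sensor could supply the three-dimensional information directly instead.

```python
import numpy as np

def feature_point_to_world(u, v, disparity, fx, fy, cx, cy, baseline, T_world_cam):
    """Back-project a matched 2D feature point (u, v) of the left-view image,
    with its left/right disparity, to 3D information in the world coordinate
    system using the 4x4 camera-to-world transform T_world_cam."""
    z = fx * baseline / disparity        # depth from disparity
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    p_cam = np.array([x, y, z, 1.0])     # homogeneous point in the camera frame
    return (T_world_cam @ p_cam)[:3]     # 3D coordinates in the world frame
```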
S205, the detection equipment determines the motion state of the target object according to the three-dimensional information of the first three-dimensional characteristic point in the world coordinate system and the three-dimensional information of the second three-dimensional characteristic point in the world coordinate system.
In the embodiment of the invention, the detection equipment can determine the motion state of the target object according to the three-dimensional information of the first three-dimensional characteristic point in the world coordinate system and the three-dimensional information of the second three-dimensional characteristic point in the world coordinate system; therefore, the vehicle can control the running state of the vehicle according to the running state of the target object, can realize automatic driving, can avoid traffic accidents and improve the driving safety of the vehicle. The motion state of the target object includes a moving speed and/or a rotating direction of the target object, and if the target object is a target vehicle, the moving speed is a driving speed, and the rotating direction is a driving direction.
In one embodiment, step S205 may include steps S11-S13 as follows.
s11, the detection device projects the three-dimensional information of the first three-dimensional feature point in the world coordinate system to the bird's-eye view angle to obtain a first bird's-eye view coordinate.
And s12, the detection device projects the three-dimensional information of the second three-dimensional feature point in the world coordinate system to the bird's-eye view angle to obtain a second bird's-eye view coordinate.
s13, the detection device determines the motion state of the target object according to the first bird's-eye view coordinate and the second bird's-eye view coordinate.
In steps s11 to s13, if the motion state of the target object were computed directly from the three-dimensional information of the first three-dimensional feature point and of the second three-dimensional feature point in the world coordinate system, the amount of computation would be large, more resources would be consumed, and the efficiency would be low. Therefore, to improve computational efficiency and save resources, the detection device may determine the motion state of the target object from bird's-eye-view coordinates, specifically from the first bird's-eye-view coordinate and the second bird's-eye-view coordinate. The first and second bird's-eye-view coordinates are two-dimensional, i.e. they contain only the longitudinal and lateral ground-plane coordinates, whereas the three-dimensional information of the first and second three-dimensional feature points in the world coordinate system contains three coordinates, i.e. height in addition to the longitudinal and lateral coordinates. Determining the motion state of the target object from the first and second bird's-eye-view coordinates therefore saves resources and improves computational efficiency.
In one implementation, step s13 includes: the detection device determines displacement information of the target object in a first direction and a second direction and/or rotation angle information in a first rotation direction according to the first bird's-eye-view coordinate and the second bird's-eye-view coordinate, and determines the motion state of the target object according to the displacement information and/or the rotation angle information of the target object.
When the target object is an object moving on the ground, it generally remains level with the ground, so its displacement in the height direction and its inclination relative to the ground can be ignored. The detection device may therefore determine the displacement information of the target object in the first direction and the second direction, and/or the rotation angle information in the first rotation direction, from the first and second bird's-eye-view coordinates; at most, only three degrees of freedom of the target object need to be solved for, which greatly reduces the amount of computation. The first direction and the second direction may be horizontal directions of the target object, i.e. the x-axis direction (e.g. the front or rear direction of the target object, the longitudinal direction) and the y-axis direction (e.g. the left or right direction of the target object, the lateral direction), where the front direction of the target object refers to the heading of the target vehicle. The first rotation direction is the direction about the z-axis (i.e. the axis perpendicular to the ground) and can be regarded as the direction of the moving speed of the target object, i.e. the heading direction of the target vehicle. Further, the detection device may determine the motion state of the target object from the displacement information and/or the rotation angle information of the target object. For example, the motion state of the target object includes the moving speed and/or the rotation angle of the target object; the detection device may determine the moving speed of the target object from the displacement information, and the rotation direction of the target object from the rotation angle information.
In one embodiment, the relationship between the displacement information and the rotation angle information of the target object can be represented by formula (1), and the detection device can solve formula (1) by an optimization method to obtain the optimal displacement information and the optimal rotation angle information.
P2_i = R · P1_i + t    (1)
In formula (1), P1_i is the i-th first bird's-eye-view coordinate, P2_i is the i-th second bird's-eye-view coordinate, t is the displacement information of the target object in the first direction and the second direction, and R is the rotation angle information of the target object in the first rotation direction.
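One conventional way to obtain such an optimal R and t from the matched bird's-eye-view coordinates is a least-squares rigid alignment (Kabsch/Umeyama style); the sketch below is an assumed implementation of that idea, not the specific solver of the disclosure.

```python
import numpy as np

def estimate_rigid_2d(p1, p2):
    """Least-squares R and t such that p2_i ≈ R @ p1_i + t (formula (1)).
    p1, p2: (N, 2) arrays of matched first / second bird's-eye-view coordinates."""
    c1, c2 = p1.mean(axis=0), p2.mean(axis=0)
    q1, q2 = p1 - c1, p2 - c2                # centre both point sets
    h = q1.T @ q2                            # 2x2 cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    r = vt.T @ np.diag([1.0, d]) @ u.T       # closest proper rotation
    t = c2 - r @ c1                          # displacement in the two directions
    yaw = np.arctan2(r[1, 0], r[0, 0])       # rotation angle about the vertical axis
    return r, t, yaw
```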
In another embodiment, step S205 may include steps s21 and s22 as follows.
s21, the detection device determines the displacement information of the target object in the first direction and the second direction and/or the rotation angle information in the first rotation direction according to the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system.
s22, the detection device determines the motion state of the target object according to the displacement information and/or the rotation angle information of the target object.
In steps s21 to s22, the detection device may determine the displacement information of the target object in the first direction and the second direction, and/or the rotation angle information in the first rotation direction, directly from the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system, and then determine the motion state of the target object according to the displacement information and/or the rotation angle information of the target object.
In one embodiment, the relationship between the displacement information and the rotation angle information of the target object can be represented by formula (2), and the detection device can solve formula (2) by an optimization method to obtain the optimal displacement information and the optimal rotation angle information.
Q2_i = R · Q1_i + t    (2)
In formula (2), Q1_i is the three-dimensional information of the i-th first three-dimensional feature point in the world coordinate system, Q2_i is the three-dimensional information of the i-th second three-dimensional feature point in the world coordinate system, t is the displacement information of the target object in the first direction and the second direction, and R is the rotation angle information of the target object in the first rotation direction.
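The least-squares alignment idea sketched after formula (1) extends directly to the three-dimensional case of formula (2); a hedged 3D variant is shown below, again an assumed implementation rather than the disclosure's own solver.

```python
import numpy as np

def estimate_rigid_3d(q1, q2):
    """Least-squares R and t such that q2_i ≈ R @ q1_i + t (formula (2)).
    q1, q2: (N, 3) arrays of matched first / second 3D feature-point coordinates."""
    c1, c2 = q1.mean(axis=0), q2.mean(axis=0)
    h = (q1 - c1).T @ (q2 - c2)                  # 3x3 cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))       # keep a proper rotation
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = c2 - r @ c1
    return r, t
```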
In one embodiment, the motion state includes the moving speed and/or the rotation direction of the target object, and determining the motion state of the target object according to the displacement information and/or the rotation angle information of the target object includes: determining the moving speed of the target object according to the displacement information of the target object; and/or determining the rotation direction of the target object according to the rotation angle information.
The displacement information of the target object may include the distance moved by the target object from the first time to the second time. The rotation angle information may include the rotation angle of the target object in the first rotation direction from the first time to the second time, that is, the rotation angle of the target vehicle's moving speed direction from the first time to the second time.
The detection device may determine the moving duration of the target object according to the first time and the second time, and divide the moving distance of the target object from the first time to the second time by the moving duration to obtain a moving speed. Further, the detection device may directly use the calculated moving speed as the moving speed of the target object, or filter the calculated moving speed to obtain the moving speed of the target object. For example, assume that the target object is a target vehicle and the distance moved by the target vehicle from the first time to the second time is 1 m, where the first time corresponds to a travel time of the host vehicle of 0.1 s and the second time to 0.2 s. The moving duration of the target vehicle calculated from the first time and the second time is 0.1 s, and the moving speed of the target vehicle calculated from the moving distance and the moving duration is 1 m / 0.1 s = 10 m/s = 36 km/h.
If the rotation angle of the target object in the first rotation direction (i.e., the moving speed direction) from the first time to the second time is smaller than a preset angle, the moving speed direction of the target object has not changed, and the rotation direction of the target object may be determined according to the moving speed direction of the target object at the first time or at the second time. If the rotation angle of the target object in the first rotation direction from the first time to the second time is greater than or equal to the preset angle, the moving speed direction of the target object has changed, and the rotation direction of the target object may be determined according to the moving speed direction of the target object at the second time.
For example, assume that the target object is a target vehicle, the preset angle is 90 degrees, the moving speed direction of the target vehicle at the first time is the x-axis direction (i.e., the 90-degree direction), and the moving speed direction of the target vehicle at the second time is 3 degrees to the left of the x-axis (i.e., the 93-degree direction). The rotation angle of the target vehicle in the moving speed direction from the first time to the second time is then 3 degrees, which is smaller than the preset angle, indicating that the target vehicle has not changed its driving direction; the driving direction of the target vehicle may be determined according to its moving speed direction at either the first time or the second time. It is therefore determined that the driving direction of the target vehicle is the x-axis direction, i.e., the target vehicle travels in the same direction as the host vehicle. Now assume instead that the moving speed direction of the target vehicle at the first time is the x-axis direction (i.e., the 90-degree direction) and at the second time is opposite to the x-axis direction (i.e., towards the rear of the host vehicle). The rotation angle of the target vehicle in the moving speed direction from the first time to the second time is then 180 degrees, which is greater than the preset angle, indicating that the target vehicle has changed its driving direction; the driving direction of the target vehicle may be determined according to its moving speed direction at the second time, and is therefore towards the rear of the host vehicle, i.e., the target vehicle travels in the direction opposite to the host vehicle.
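The decision logic described above can be summarised by the following sketch; the function name, the degree convention, and the default preset angle of 90 degrees are assumptions for illustration.

```python
import numpy as np

def motion_state(displacement_xy, rot_angle_deg, t1, t2,
                 dir_first_deg, dir_second_deg, preset_angle_deg=90.0):
    """Moving speed from the displacement and time stamps, and rotation
    direction chosen according to how much the speed direction rotated."""
    duration = t2 - t1                                    # moving duration in seconds
    speed = float(np.hypot(*displacement_xy)) / duration  # e.g. 1 m / 0.1 s = 10 m/s
    if abs(rot_angle_deg) < preset_angle_deg:
        # speed direction essentially unchanged: either time's direction may be used
        rotation_direction = dir_first_deg
    else:
        # speed direction has changed: use the direction at the second time
        rotation_direction = dir_second_deg
    return speed, rotation_direction
```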
In one embodiment, the determining the moving speed of the target object according to the displacement information of the target object includes: and determining a target moving speed according to the displacement information of the target object, and determining the speed obtained after filtering the target moving speed as the moving speed of the target object.
Wherein the displacement information of the target object includes a moving distance of the target object in the first direction from the first time to the second time, and a moving distance of the target object in the second direction from the first time to the second time. Therefore, the moving speed of the target object calculated from the displacement information includes a moving speed in the first direction and a moving speed in the second direction.
The detection device may determine a moving duration of the target object according to the first time and the second time, and calculate according to the moving information and the moving duration of the target object to obtain a target moving speed, where the target moving speed includes a speed in a first direction and a speed in a second direction. The target speed calculated in this way has large noise, and therefore, the speed obtained by filtering the target moving speed by the detection device can be determined as the moving speed of the target object, so as to improve the accuracy of the moving speed of the target object. For example, the detection device may filter the target moving speed through a filter to obtain the moving speed of the target object. The filter may be a kalman filter or a wide band filter, etc.
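A minimal scalar Kalman filter for smoothing the noisy target moving speed is sketched below; the process and measurement noise values are tuning assumptions, and one instance would be run per velocity component (first and second direction).

```python
class SpeedKalman:
    """One-dimensional Kalman filter that smooths a noisy speed measurement."""

    def __init__(self, q=0.1, r=1.0):
        self.x = None        # filtered speed estimate
        self.p = 1.0         # estimate variance
        self.q, self.r = q, r

    def update(self, measured_speed):
        if self.x is None:   # initialise with the first measurement
            self.x = measured_speed
            return self.x
        self.p += self.q                      # predict (constant-speed model)
        k = self.p / (self.p + self.r)        # Kalman gain
        self.x += k * (measured_speed - self.x)
        self.p *= (1.0 - k)                   # correct the variance
        return self.x
```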
In the embodiment of the present invention, the detection device may determine, according to the first driving environment image, the three-dimensional information of a first three-dimensional feature point of the target object in a world coordinate system, determine, according to the second driving environment image, the three-dimensional information of a second three-dimensional feature point of the target object in the world coordinate system, and determine the motion state of the target object according to the three-dimensional information of the two feature points in the world coordinate system. The first three-dimensional feature point is matched with the second three-dimensional feature point, that is, the motion state of the target object can be determined from the three-dimensional information of matched three-dimensional feature points on the target object, without determining it through the center of mass of the target object, which improves the accuracy of detecting the motion state of the target object. This in turn can improve driving safety and make driving more automated and intelligent.
Based on the above description of the method for detecting the motion state of the target object, another method for detecting the motion state of the target object is provided in the embodiment of the present invention, please refer to fig. 3. Optionally, the method may be performed by a detection device, and the detection device may include a camera device, where the camera device may include a main camera device and a binocular camera device, and the binocular camera device includes a left-vision camera device and a right-vision camera device. As shown in fig. 3, the method for detecting the motion state of the target object may include the following steps.
S301, the detection device acquires a first driving environment image at a first time.
S302, the detection device acquires a second driving environment image at a second time, wherein the first time is a time before the second time.
In steps S301 to S302, the detection apparatus may capture a first driving environment image by the left vision camera at a first time, and capture a second driving environment image by the left vision camera at a second time.
S303, extracting a first two-dimensional feature point of the target object from the first driving environment image by the detection equipment.
In the embodiment of the present invention, the detection device may extract all feature points of the target object from the first driving environment image, or extract key feature points of the target object. The first two-dimensional feature point is any one of all feature points or key feature points of the target object.
In another embodiment, to reduce the amount of computation by the detection device, the detection device may extract the corner points of the first feature region using a corner detection algorithm; the corner points may serve as key feature points, and the key feature points are used as the first two-dimensional feature points. The corner detection algorithm may be any one of Features from Accelerated Segment Test (FAST), the Smallest Univalue Segment Assimilating Nucleus (SUSAN) algorithm, or the Harris corner detection algorithm. Taking the Harris corner detection algorithm as an example, for any point (u, v) of the first driving environment image the detection device may compute its structure tensor A, which can be expressed by formula (3):
A = [ ⟨Ix²⟩    ⟨Ix·Iy⟩ ]
    [ ⟨Ix·Iy⟩  ⟨Iy²⟩   ]    (3)
In formula (3), A is the structure tensor, Ix and Iy are the gradients of the point (u, v) of the first driving environment image in the x-axis and y-axis directions, w(u, v) denotes the window sliding over the first driving environment image, and the angle brackets ⟨ ⟩ denote averaging over this window.
Further, a function M_C may be used to determine whether the point (u, v) is a key feature point: when M_C > M_th, the point (u, v) is determined to be a key feature point; when M_C ≤ M_th, the point (u, v) is determined not to be a key feature point. The function M_C can be expressed by formula (4), where M_th is a set threshold:
M_C = det(A) - k · trace²(A)    (4)
In formula (4), k is a parameter that adjusts the sensitivity and may be an empirical value, e.g. any value in the range [0.04, 0.15]; det(A) is the determinant of matrix A, and trace(A) is the trace of matrix A.
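The structure-tensor and response computation of formulas (3) and (4) can be sketched as follows; the window size, k value, and threshold are example choices (k within the stated range), and OpenCV is assumed only for gradients and box averaging.

```python
import cv2
import numpy as np

def harris_key_points(gray, k=0.06, win=5, m_th=1e6):
    """Return points (u, v) whose Harris response M_C exceeds the threshold M_th."""
    g = gray.astype(np.float64)
    ix = cv2.Sobel(g, cv2.CV_64F, 1, 0, ksize=3)     # gradient along x
    iy = cv2.Sobel(g, cv2.CV_64F, 0, 1, ksize=3)     # gradient along y
    # windowed averages <Ix^2>, <Iy^2>, <Ix*Iy> over w(u, v) -- formula (3)
    ixx = cv2.boxFilter(ix * ix, -1, (win, win))
    iyy = cv2.boxFilter(iy * iy, -1, (win, win))
    ixy = cv2.boxFilter(ix * iy, -1, (win, win))
    det_a = ixx * iyy - ixy * ixy
    trace_a = ixx + iyy
    mc = det_a - k * trace_a ** 2                    # formula (4)
    vs, us = np.nonzero(mc > m_th)                   # keep points with M_C > M_th
    return list(zip(us, vs))
```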
In one embodiment, step S303 includes steps S31 and S32 as follows.
s31, the detection device determines a first characteristic region of the target object in the first driving environment image.
s32, the detection device extracts a first two-dimensional feature point of the target object from the first feature region.
In steps s31 and s32, the detection device may determine a first feature region of the target object in the first driving environment image, where the first feature region may include all feature information or part of feature information of the target object, and the first feature region may be obtained by projection. Further, a first two-dimensional feature point of the target object is extracted from the first feature region. The first two-dimensional feature point may be any feature point of the first feature region or any key feature point of the first feature region.
In one embodiment, step s31 includes: capturing a target image by a camera device, determining a second characteristic region of the target object in the target image, projecting the second characteristic region into the first driving environment image, and determining the obtained projection region as the first characteristic region of the target object.
Since the image captured by the main camera device (i.e., the main camera) has a higher resolution, the detection device can more easily recognize the target object (e.g. a vehicle) in it. Therefore, the main camera device can capture a target image that includes the target object, and the second characteristic region of the target object can be determined in the target image. The second characteristic region may be a bounding box, and it is projected into the first driving environment image captured by the binocular camera device to obtain the first characteristic region. The image captured by the binocular camera device is a grayscale image that is well suited to feature-point extraction, which facilitates the subsequent extraction by the detection device of the first two-dimensional feature point of the target object from the first driving environment image and hence the determination of the three-dimensional information, in the world coordinate system, of the first three-dimensional feature point corresponding to the first two-dimensional feature point.
Specifically, the detection device may capture the target image by the main camera device at the first time. The target image may be processed by a preceding detection algorithm to obtain second characteristic regions of a plurality of objects, and at least one of these may be selected as the second characteristic region of the target object. The preceding detection algorithm may be, for example, a detection algorithm based on a convolutional neural network (CNN). The detection device projects the second characteristic region into the first driving environment image and determines the obtained projection region as the first characteristic region of the target object. For example, as shown in fig. 4, the white-framed regions in fig. 4 are the characteristic regions of a plurality of objects; it is assumed that the characteristic region with the largest size in fig. 4 is the second characteristic region of the target object. The detection device projects this second characteristic region into the first driving environment image, and the projected first characteristic region may be the region corresponding to the rightmost frame in fig. 5.
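As a rough illustration of this projection step, the sketch below assumes the mapping between the main-camera view and the left-view image can be approximated by a 3x3 homography H_main_to_left obtained from calibration; in general the projection would use the cameras' full intrinsic and extrinsic parameters, which the disclosure does not spell out here.

```python
import cv2
import numpy as np

def project_region(box_xywh, H_main_to_left):
    """Map the corners of a second characteristic region (bounding box in the
    main-camera target image) into the first driving environment image and
    return the axis-aligned box enclosing the projected corners."""
    x, y, w, h = box_xywh
    corners = np.float32([[x, y], [x + w, y], [x + w, y + h], [x, y + h]])
    projected = cv2.perspectiveTransform(corners.reshape(-1, 1, 2), H_main_to_left)
    projected = projected.reshape(-1, 2)
    x0, y0 = projected.min(axis=0)
    x1, y1 = projected.max(axis=0)
    return float(x0), float(y0), float(x1 - x0), float(y1 - y0)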
In one embodiment, in order to reduce the amount of calculation of the detection device and save resources, the detection device may filter the second feature region of the target object in the target image before projecting the second feature region of the target object to the first driving environment image. Alternatively, the detection device may screen the first feature region of the target object obtained by projection after projecting the second feature region of the target object in the target image to the first driving environment image. That is, the execution sequence of the step of projecting the second feature region in the target image and the step of screening the feature region may not be limited. In one embodiment, the execution sequence of the two steps may be set by the user. In another embodiment, the execution sequence of the two steps may be set according to the characteristics of the image pickup device of the detection apparatus. For example, when the difference between the viewing angles of the main camera and the binocular camera is large, the step of projecting may be performed first, and then the step of screening may be performed; when the difference in the view angles between the main image pickup device and the binocular image pickup device is small, the order of execution of the two may not be limited.
The following describes a manner of screening the second feature region of the target object before projecting the second feature region of the target object in the target image onto the first driving environment image:
in one embodiment, the detection device determines whether a first parameter of the target object satisfies a preset condition; if yes, the step of projecting the second characteristic region into the first driving environment image and determining the obtained projection region as the first characteristic region of the target object is executed.
The first parameter includes at least one of an attribute corresponding to the second characteristic region, a lane in which the target object is located, and a driving direction of the target object, and the attribute includes a size and/or a shape of the second characteristic region.
Before projecting the second characteristic region of the target object in the target image into the first driving environment image, and in order to reduce the amount of computation of the detection device and save resources, the detection device may screen the characteristic regions of the objects in the target image so as to filter out erroneous characteristic regions and characteristic regions of objects that have little influence on the driving safety of the host vehicle. Specifically, the detection device may determine whether the first parameter of the target object satisfies a preset condition. If the preset condition is satisfied, indicating that the running state of the target object has a large influence on the driving safety of the host vehicle, the detection device may take the second characteristic region of the target object in the target image as a valid region and perform the step of projecting the second characteristic region into the first driving environment image. If the preset condition is not satisfied, indicating that the running state of the target object has little influence on the driving safety of the host vehicle, or that the second characteristic region is not a region where an object is located (i.e. the second characteristic region is erroneous), the detection device may take the second characteristic region of the target object in the target image as an invalid region and screen it out.
For example, it is assumed that the attribute includes the shape of the second feature region, and the target image includes the second feature region of the object 1, the second feature region of the object 2, and the second feature region of the object 3. If the shape of the second characteristic region of the object 1 is an elongated strip shape, it indicates that the object 1 is not a vehicle, and it is determined that the parameter of the object 1 does not satisfy the preset condition, and the second characteristic region of the object 1 may be used as an invalid characteristic region to screen out the second characteristic region of the object 1. If the shapes of the second feature regions of the object 2 and the object 3 are both rectangles, it is determined that the object 2 and the object 3 are vehicles, and it is determined that the parameters of the object 2 and the object 3 satisfy the preset conditions, the second feature regions of the object 2 and the object 3 may be used as valid feature regions, and the second feature region of the target object may be any one of the second feature regions of the object 2 and the object 3.
In one embodiment, the detection device may filter the characteristic region of the object in the target image according to the size of the characteristic region of the object, the lane in which the object is located, and the driving direction of the object. Specifically, the detection device may set a first weight for the second characteristic region according to a size of the second characteristic region of the target object in the target image, set a second weight for the second characteristic region according to a lane in which the target object is located in the target image, and set a third weight for the second characteristic region according to a driving direction of the target object in the target image. Summing the first weight, the second weight and the third weight to obtain a total weight of a second characteristic region, and if the total weight of the second characteristic region is greater than a preset value, determining the second characteristic region as an effective characteristic region; and if the total weight of the second characteristic region is less than or equal to the preset value, determining the second characteristic region as an invalid characteristic region, and screening out the second characteristic region.
The larger the size of the second characteristic region is, the closer the target object is to the host vehicle, that is, the higher the influence of the running state of the target object on the driving safety of the host vehicle is, so the first weight may be set to a larger value (e.g., 5). Conversely, the smaller the size of the second characteristic region is, the longer the distance between the target object and the host vehicle is, that is, the lower the influence of the running state of the target object on the driving safety of the host vehicle is, so the first weight may be set to a smaller value (e.g., 2).
When the target object is a target vehicle, the closer the lane in which the target vehicle is located is to the lane in which the host vehicle is located (for example, the target vehicle and the host vehicle are in the same lane or in adjacent lanes), the higher the influence of the running state of the target vehicle on the driving safety of the host vehicle is, so the second weight may be set to a larger value (e.g., 3). Conversely, the farther the lane in which the target vehicle is located is from the lane in which the host vehicle is located (for example, the target vehicle is in the first lane and the host vehicle is in the third lane), the lower the influence of the running state of the target vehicle on the driving safety of the host vehicle is, so the second weight may be set to a smaller value (e.g., 1).
If the driving direction of the target vehicle in the target image is the same as the driving direction of the host vehicle, the probability of a rear-end or other collision between the target vehicle and the host vehicle is high, that is, the influence of the running state of the target vehicle on the driving safety of the host vehicle is high, so the third weight may be set to a larger value (e.g., 3). Conversely, if the driving direction of the target vehicle in the target image is opposite to the driving direction of the host vehicle, the probability of a collision between the target vehicle and the host vehicle is small, that is, the influence of the running state of the target vehicle on the driving safety of the host vehicle is low, so the third weight may be set to a smaller value (e.g., 2).
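The weight-based screening described above can be sketched as follows; this is only an illustrative example, in which the region fields, the concrete weight values (5/2, 3/1, 3/2) and the preset value of 7 are assumptions made for the sketch rather than values mandated by the embodiment:

```python
def region_total_weight(region, host_lane, host_direction):
    """Total weight of a candidate second characteristic region."""
    # First weight: a larger region implies a closer object, hence a larger weight.
    first = 5 if region["width"] * region["height"] > 80 * 80 else 2
    # Second weight: same or adjacent lane implies a larger weight.
    second = 3 if abs(region["lane"] - host_lane) <= 1 else 1
    # Third weight: same driving direction implies a larger weight.
    third = 3 if region["direction"] == host_direction else 2
    return first + second + third

def screen_regions(regions, host_lane, host_direction, preset_value=7):
    """Split candidate regions into valid and invalid ones by total weight."""
    valid, invalid = [], []
    for r in regions:
        total = region_total_weight(r, host_lane, host_direction)
        (valid if total > preset_value else invalid).append(r)
    return valid, invalid

regions = [
    {"width": 120, "height": 90, "lane": 2, "direction": "same"},      # close, same lane
    {"width": 30, "height": 20, "lane": 4, "direction": "opposite"},   # far, distant lane
]
valid, invalid = screen_regions(regions, host_lane=2, host_direction="same")
```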
The following describes a manner of screening the first feature region of the target object after projecting the second feature region of the target object in the target image onto the first driving environment image:
in another embodiment, after step s32, the method further includes: and determining whether the second parameter of the target object meets a preset condition, if so, executing the step of extracting the first two-dimensional feature point from the first feature area.
The second parameter includes at least one of an attribute corresponding to the first characteristic region, a lane in which the target object is located, and a driving direction of the target object, and the attribute includes a size and/or a shape of the first characteristic region.
After the detection device projects the second feature region into the first driving environment image to obtain the first feature region of the target object, the detection device may screen the feature regions of the objects in the first driving environment image in order to reduce the calculation amount and save resources, so as to filter out erroneous feature regions and feature regions of objects having little influence on the driving safety of the host vehicle. Specifically, the detection device may determine whether the second parameter of the target object meets a preset condition. If the preset condition is met, indicating that the running state of the target object has a large influence on the driving safety of the host vehicle, the first feature region of the target object in the first driving environment image may be taken as an effective region, and step S304 is performed. If the preset condition is not met, indicating that the running state of the target object has a small influence on the driving safety of the host vehicle, or that the first feature region is not a region where an object is located, that is, the first feature region is an erroneous feature region, the first feature region of the target object in the first driving environment image may be taken as an invalid region and removed.
For example, the second parameter includes the size of the first feature region, and the first driving environment image includes the projected first feature regions of the object 1, the object 2, and the object 3. If the size of the first characteristic region of the object 1 is smaller than a preset size, that is, the parameter of the object 1 does not meet the preset condition, the distance between the object 1 and the host vehicle is long, that is, the running state of the object 1 has a small influence on the driving safety of the host vehicle. The first feature region of the object 1 in the first driving environment image may therefore be taken as an invalid feature region and screened out. If the sizes of the first characteristic regions of the object 2 and the object 3 are larger than or equal to the preset size, that is, the parameters of the object 2 and the object 3 meet the preset condition, the distances between the object 2 and the object 3 and the host vehicle are short, that is, the running states of the object 2 and the object 3 have a large influence on the driving safety of the host vehicle. The first feature regions of the object 2 and the object 3 in the first driving environment image may therefore be taken as valid feature regions, and the first feature region of the target object may be either of the feature regions of the object 2 and the object 3.
In one embodiment, s32 includes: and extracting a plurality of two-dimensional feature points from the first feature region, and screening the plurality of two-dimensional feature points according to a preset algorithm to obtain first two-dimensional feature points.
In order to reduce the calculation amount of the detection device and save resources, the detection device may screen the plurality of two-dimensional feature points extracted from the first feature region. Specifically, the detection device may extract a plurality of two-dimensional feature points from the first feature region and exclude non-compliant feature points from them by using a preset algorithm, so as to obtain compliant first two-dimensional feature points. For example, the preset algorithm may be the Random Sample Consensus (RANSAC) algorithm: if the position distribution of the two-dimensional feature points of the first feature region is as shown in fig. 6, RANSAC is used to fit the two-dimensional feature points of the first feature region to a line segment. The line segment may be as shown in fig. 7; the feature points located on the line segment are compliant feature points and are retained, that is, the first two-dimensional feature point is any feature point on the line segment, while the feature points not on the line segment are non-compliant feature points and may be excluded.
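A minimal sketch of such RANSAC-based screening is shown below; it assumes, as in fig. 6 and fig. 7, that the compliant feature points lie near a straight line, and the iteration count and inlier distance are illustrative values only:

```python
import numpy as np

def ransac_line_inliers(points, n_iters=200, inlier_dist=2.0, seed=0):
    """Fit a 2D line to the feature points with RANSAC and return the inlier mask."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)            # shape (N, 2)
    best_mask = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, q = pts[i], pts[j]
        norm = np.linalg.norm(q - p)
        if norm < 1e-9:
            continue
        dx, dy = (q - p) / norm
        # Perpendicular distance of every point to the candidate line through p and q.
        dist = np.abs(dx * (pts[:, 1] - p[1]) - dy * (pts[:, 0] - p[0]))
        mask = dist < inlier_dist
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

pts = [(10, 12), (20, 21), (30, 31), (40, 39), (55, 90)]   # last point is an outlier
mask = ransac_line_inliers(pts)
compliant_points = [p for p, keep in zip(pts, mask) if keep]
```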
S304, the detection equipment determines the three-dimensional information of the first three-dimensional feature point corresponding to the first two-dimensional feature point in a world coordinate system.
In the embodiment of the invention, the detection equipment can determine the three-dimensional information of the first three-dimensional characteristic point corresponding to the first two-dimensional characteristic point in the world coordinate system through the point cloud sensor or the binocular image.
S305, determining a second two-dimensional feature point matched with the first two-dimensional feature point from the second driving environment image by the detection equipment.
In the embodiment of the present invention, the detection device may extract all the feature points in the second driving environment image, or extract key feature points in the second driving environment image, and compare the first two-dimensional feature point with the feature points extracted from the second driving environment image to determine a second two-dimensional feature point matched with the first two-dimensional feature point. The second two-dimensional feature point matching the first two-dimensional feature point may mean that the similarity between the pixel information of the second two-dimensional feature point and the pixel information of the first two-dimensional feature point is greater than a preset threshold, that is, the two matched two-dimensional feature points may correspond to the same position on the target object.
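One possible way to find the matched second two-dimensional feature point is sparse optical-flow tracking. The sketch below uses OpenCV's pyramidal Lucas-Kanade tracker purely as an illustration; the embodiment does not prescribe a particular matching algorithm, and the window size and pyramid level are assumed values:

```python
import numpy as np
import cv2

def match_feature_points(first_img_gray, second_img_gray, first_pts):
    """Track 2D feature points from the first driving environment image into the
    second driving environment image with pyramidal Lucas-Kanade optical flow."""
    prev_pts = np.float32(first_pts).reshape(-1, 1, 2)
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        first_img_gray, second_img_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    ok = status.reshape(-1).astype(bool)
    # Keep only the first points that were tracked successfully, paired with
    # their matched second two-dimensional feature points.
    return prev_pts.reshape(-1, 2)[ok], next_pts.reshape(-1, 2)[ok]
```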
S306, the detection equipment determines the three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in a world coordinate system.
In the embodiment of the invention, the detection equipment can determine the three-dimensional information of the second three-dimensional characteristic point corresponding to the second two-dimensional characteristic point in a world coordinate system through the point cloud sensor or the binocular image.
S307, the detection equipment determines the motion state of the target object according to the three-dimensional information of the first three-dimensional characteristic point in the world coordinate system and the three-dimensional information of the second three-dimensional characteristic point in the world coordinate system.
In the embodiment of the invention, since the first two-dimensional feature point matches the second two-dimensional feature point, the first three-dimensional feature point matches the second three-dimensional feature point, and the detection device can therefore determine the motion state of the target object according to the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system.
For example, as shown in fig. 8, the first two-dimensional feature point is p1, the second two-dimensional feature point is p2, the first three-dimensional feature point is D1, and the second three-dimensional feature point is D2. Since p1 and p2 are two-dimensional feature points matched with each other, D1 is the three-dimensional feature point corresponding to p1, and D2 is the three-dimensional feature point corresponding to p2, D1 and D2 are matched with each other, that is, D1 and D2 are the 3D points at the same position on the target object at different times. Because the target object translates or rotates while moving, the three-dimensional information of D1 in the world coordinate system differs from the three-dimensional information of D2 in the world coordinate system, so the motion state of the target object can be determined according to the three-dimensional information of D1 and of D2 in the world coordinate system.
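A minimal sketch of deriving a motion state from one matched pair of three-dimensional feature points in the world coordinate system is given below; the time stamps, coordinates and the speed threshold used to decide whether the target object is moving are assumptions made for the example:

```python
import numpy as np

def motion_state_from_matched_points(d1_world, d2_world, t1, t2, speed_threshold=0.5):
    """Estimate the target object's velocity from one matched pair of 3D points
    (D1 at the first time t1, D2 at the second time t2) in the world coordinate system."""
    d1 = np.asarray(d1_world, dtype=float)
    d2 = np.asarray(d2_world, dtype=float)
    displacement = d2 - d1                      # motion of this point on the target object
    velocity = displacement / (t2 - t1)         # average velocity over the interval
    moving = np.linalg.norm(velocity) > speed_threshold   # illustrative decision, in m/s
    return velocity, moving

velocity, moving = motion_state_from_matched_points(
    d1_world=(12.0, 3.0, 0.8), d2_world=(13.1, 3.0, 0.8), t1=0.0, t2=0.1)
```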
In the embodiment of the present invention, the detection device extracts a first two-dimensional feature point of the target object from the first driving environment image, determines a second two-dimensional feature point matching the first two-dimensional feature point from the second driving environment image, determines the three-dimensional information of the first three-dimensional feature point corresponding to the first two-dimensional feature point in the world coordinate system, and determines the three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in the world coordinate system. The detection device can then determine the motion state of the target object according to the three-dimensional information of the first three-dimensional feature point and of the second three-dimensional feature point in the world coordinate system. In this way, matched three-dimensional feature points are determined through matched two-dimensional feature points, and the motion state of the target object is determined according to the three-dimensional information of the matched three-dimensional feature points in the world coordinate system. The motion state of the target object does not need to be determined through the centroid of the target object, so the accuracy of detecting the motion state of the target object can be improved, the safety of vehicle driving can be improved, and vehicle driving becomes more automatic and intelligent.
Based on the above description of the method for detecting the motion state of the target object, another method for detecting the motion state of the target object is provided in the embodiments of the present invention, please refer to fig. 9. Optionally, the method may be performed by a detection device, and the detection device may include a camera device, where the camera device may include a main camera device and a binocular camera device, and the binocular camera device includes a left-vision camera device and a right-vision camera device. In the embodiment of the present invention as shown in fig. 9, the method for detecting the motion state of the target object may include the following steps.
S901, the detection device acquires a first driving environment image at a first time.
And S902, the detection device acquires a second driving environment image at a second time, wherein the first time is a time before the second time.
In the embodiment of the present invention, for the explanation of step S901 and step S902, refer to the explanation of step S301 and step S302 in fig. 3, and repeated parts are not repeated.
S903, extracting a first two-dimensional feature point of the target object from the first driving environment image by the detection equipment.
S904, the detection device determines a second two-dimensional feature point matched with the first two-dimensional feature point from the second driving environment image.
S905, the detection equipment determines three-dimensional information of the first three-dimensional feature point corresponding to the first two-dimensional feature point in a camera coordinate system.
In the embodiment of the present invention, the detection device may determine the three-dimensional information of the first three-dimensional feature point in the camera coordinate system according to a Kanade-Lucas-Tomasi (KLT) tracking algorithm and a camera model.
In one embodiment, step S905 includes: and determining a third two-dimensional feature point matched with the first two-dimensional feature point from a third driving environment image, wherein the third driving environment image and the first driving environment image are two images simultaneously shot by a binocular camera. And determining first depth information according to the first two-dimensional feature point and the third two-dimensional feature point, and determining three-dimensional information of the first three-dimensional feature point in a camera coordinate system according to the first depth information.
For example, it is assumed that the first driving environment image is captured at the first time by the left-vision camera of a binocular camera, that is, the first driving environment image may also be referred to as a first left image, and the third driving environment image is captured by the right-vision camera of the binocular camera at the first time, that is, the third driving environment image may also be referred to as a first right image. As shown in fig. 10, the first two-dimensional feature point is p1, and the first three-dimensional feature point is D1. The detection device may determine a third two-dimensional feature point matching p1 from the first right image with a feature point matching algorithm, which may be a KLT algorithm or the like, determine first depth information according to p1 and the third two-dimensional feature point, and determine the three-dimensional information of D1 in the camera coordinate system according to the first depth information and a camera model. The camera model may be a model indicating the conversion relationship between the depth information of a three-dimensional feature point and its three-dimensional information in the camera coordinate system.
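For a rectified binocular pair, the first depth information and the three-dimensional information in the camera coordinate system can be sketched with a pinhole camera model as follows; the focal length, principal point and baseline values are placeholders, not parameters of any particular camera:

```python
def stereo_point_to_camera_coords(p_left, p_right, fx, fy, cx, cy, baseline):
    """Triangulate a matched left/right feature point pair from rectified images
    into a 3D point in the left camera coordinate system."""
    u_l, v_l = p_left
    u_r, _ = p_right
    disparity = u_l - u_r                 # in pixels; assumed positive
    z = fx * baseline / disparity         # depth derived from the first depth information
    x = (u_l - cx) * z / fx
    y = (v_l - cy) * z / fy
    return (x, y, z)

# Placeholder parameters: fx = fy = 700 px, principal point (640, 360), baseline 0.12 m.
D1_cam = stereo_point_to_camera_coords(
    p_left=(812.0, 401.0), p_right=(776.0, 401.0),
    fx=700.0, fy=700.0, cx=640.0, cy=360.0, baseline=0.12)
```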
S906, the detection equipment determines the three-dimensional information of the first three-dimensional feature point in a world coordinate system according to the three-dimensional information of the first three-dimensional feature point in a camera coordinate system.
In the embodiment of the invention, the detection device can determine the three-dimensional information of the first three-dimensional feature point in the world coordinate system according to the three-dimensional information of the first three-dimensional feature point in the camera coordinate system and the conversion relationship between the three-dimensional information of a three-dimensional feature point in the camera coordinate system and its three-dimensional information in the world coordinate system.
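A minimal sketch of this conversion is given below, assuming the camera pose in the world coordinate system (rotation R_wc and translation t_wc, for example obtained from the vehicle's ego-motion estimate) is known; the variable names and example pose are illustrative:

```python
import numpy as np

def camera_to_world(point_cam, R_wc, t_wc):
    """Convert a 3D point from camera coordinates to world coordinates, given the
    camera-to-world rotation R_wc (3x3) and translation t_wc (3,)."""
    return R_wc @ np.asarray(point_cam, dtype=float) + np.asarray(t_wc, dtype=float)

# Illustrative pose: camera axes aligned with the world axes, mounted 1.5 m high.
R_wc = np.eye(3)
t_wc = np.array([0.0, 0.0, 1.5])
D1_world = camera_to_world((0.3, -0.1, 12.0), R_wc, t_wc)
```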
S907, the detection device determines three-dimensional information of a second three-dimensional feature point corresponding to the second two-dimensional feature point in a camera coordinate system.
In the embodiment of the present invention, the detection device may determine the three-dimensional information of the second three-dimensional feature point in the camera coordinate system according to a tracking algorithm and a camera model.
In one embodiment, step S907 includes: and determining a fourth two-dimensional feature point matched with the second two-dimensional feature point from a fourth driving environment image, wherein the fourth driving environment image and the second driving environment image are two images simultaneously shot by a binocular camera. And determining second depth information according to the second two-dimensional feature point and the fourth two-dimensional feature point, and determining three-dimensional information of the second three-dimensional feature point in a camera coordinate system according to the second depth information.
For example, it is assumed that the second driving environment image is captured at the second time by the left-vision camera of the binocular camera, that is, the second driving environment image may also be referred to as a second left image, and the fourth driving environment image is captured by the right-vision camera of the binocular camera at the second time, that is, the fourth driving environment image may also be referred to as a second right image. As shown in fig. 10, the second two-dimensional feature point is p2, and the second three-dimensional feature point is D2. The detection apparatus may determine a fourth two-dimensional feature point matching p2 from the second right image with a feature point matching algorithm, determine second depth information according to p2 and the fourth two-dimensional feature point, and determine the three-dimensional information of D2 in the camera coordinate system according to the second depth information and a camera model.
And S908, determining the three-dimensional information of the second three-dimensional feature point in a world coordinate system by the detection equipment according to the three-dimensional information of the second three-dimensional feature point in the camera coordinate system.
In the embodiment of the invention, the detection device can determine the three-dimensional information of the second three-dimensional feature point in the world coordinate system according to the three-dimensional information of the second three-dimensional feature point in the camera coordinate system and the conversion relationship between the three-dimensional information of a three-dimensional feature point in the camera coordinate system and its three-dimensional information in the world coordinate system.
And S909, the detection equipment determines the motion state of the target object according to the three-dimensional information of the first three-dimensional characteristic point in the world coordinate system and the three-dimensional information of the second three-dimensional characteristic point in the world coordinate system.
In the embodiment of the present invention, the detection device may determine the three-dimensional information of the first three-dimensional feature point in the world coordinate system according to its three-dimensional information in the camera coordinate system, and determine the three-dimensional information of the second three-dimensional feature point in the world coordinate system according to its three-dimensional information in the camera coordinate system. Further, the motion state of the target object is determined according to the three-dimensional information of the first three-dimensional feature point and of the second three-dimensional feature point in the world coordinate system. The accuracy of acquiring the motion state of the target object can thereby be improved.
Based on the above description of the method for detecting the motion state of the target object, another method for detecting the motion state of the target object is provided in the embodiments of the present invention, please refer to fig. 11. In the embodiment of the present invention as shown in fig. 11, the method for detecting the motion state of the target object may include the following steps.
S110, the detection device acquires a first driving environment image at the first time.
And S111, the detection device acquires a second driving environment image at a second time, wherein the first time is a time before the second time.
S112, extracting a first two-dimensional feature point of the target object from the first driving environment image by the detection equipment.
S113, determining a second two-dimensional feature point matched with the first two-dimensional feature point from the second driving environment image by the detection device.
S114, the detection equipment acquires three-dimensional information of the three-dimensional point obtained by the point cloud sensor at the first time in a world coordinate system.
In the embodiment of the invention, the detection device can acquire the three-dimensional information, in the world coordinate system, of the three-dimensional points obtained by the point cloud sensor at the first time. The point cloud sensor can be a radar sensor, a binocular stereo vision sensor, a structured light sensor, or the like.
S115, the detection equipment determines the three-dimensional information of the first three-dimensional feature point corresponding to the first two-dimensional feature point in a world coordinate system according to the three-dimensional information of the three-dimensional point obtained at the first time. For example, the three-dimensional information of the first three-dimensional feature point in the world coordinate system may be determined according to the three-dimensional information of the three-dimensional point matched with the first three-dimensional feature point in the three-dimensional points acquired at the first time.
In one embodiment, step S115 includes: the detection equipment projects the three-dimensional points obtained by the point cloud sensor at the first time into two-dimensional feature points, determines a fifth two-dimensional feature point matched with the first two-dimensional feature point from the two-dimensional feature points obtained by projection, and determines three-dimensional information of the three-dimensional point corresponding to the fifth two-dimensional feature point as the three-dimensional information of the first three-dimensional feature point corresponding to the first two-dimensional feature point in a world coordinate system.
The detection device may project the three-dimensional points obtained by the point cloud sensor at the first time as two-dimensional feature points, and determine a fifth two-dimensional feature point matched with the first two-dimensional feature point from the two-dimensional feature points obtained by the projection. The first two-dimensional feature point is matched with the fifth two-dimensional feature point, and the first two-dimensional feature point and the fifth two-dimensional feature point are two-dimensional feature points acquired at the first time, so that the three-dimensional point corresponding to the fifth two-dimensional feature point is matched with the first three-dimensional feature point corresponding to the first two-dimensional feature point. The three-dimensional information of the three-dimensional point corresponding to the fifth two-dimensional feature point may be determined as the three-dimensional information of the first three-dimensional feature point corresponding to the first two-dimensional feature point in the world coordinate system.
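A minimal sketch of this projection-and-matching step is given below; it assumes a pinhole intrinsic matrix K and a known world-to-camera pose (R_cw, t_cw), and the pixel-distance threshold used to accept a match is an illustrative value:

```python
import numpy as np

def project_and_match(cloud_points_world, feature_pt_2d, K, R_cw, t_cw, max_px=3.0):
    """Project point cloud points (world coordinates) into the image and return the
    3D point whose projection lies closest to the given 2D feature point.
    K is the 3x3 intrinsic matrix; R_cw, t_cw map world to camera coordinates
    (p_cam = R_cw @ p_world + t_cw)."""
    pts_w = np.asarray(cloud_points_world, dtype=float)     # shape (N, 3)
    t_cw = np.asarray(t_cw, dtype=float)
    pts_c = (R_cw @ pts_w.T).T + t_cw                       # camera coordinates
    in_front = pts_c[:, 2] > 0.1                            # keep points in front of the camera
    if not np.any(in_front):
        return None
    pts_c = pts_c[in_front]
    proj = (K @ pts_c.T).T
    uv = proj[:, :2] / proj[:, 2:3]                         # projected two-dimensional points
    dist = np.linalg.norm(uv - np.asarray(feature_pt_2d, dtype=float), axis=1)
    best = int(np.argmin(dist))
    if dist[best] > max_px:
        return None                                         # no projected point close enough
    # Three-dimensional information of the matched point in the world coordinate system.
    return pts_w[in_front][best]
```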
And S116, acquiring three-dimensional information of the three-dimensional point obtained by the point cloud sensor at the second time in a world coordinate system by the detection equipment.
In the embodiment of the invention, the detection device can acquire the three-dimensional information of the three-dimensional point obtained by the point cloud sensor at the second time in the world coordinate system.
And S117, determining, by the detection device, the three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in the world coordinate system according to the three-dimensional information of the three-dimensional points acquired at the second time. For example, the three-dimensional information of the second three-dimensional feature point in the world coordinate system may be determined from the three-dimensional information of the three-dimensional point, among the three-dimensional points acquired at the second time, that matches the second three-dimensional feature point.
In one embodiment, step S117 includes: the detection equipment projects the three-dimensional points acquired through the point cloud sensor at the second time into two-dimensional feature points, determines a sixth two-dimensional feature point matched with the second two-dimensional feature point from the two-dimensional feature points obtained through projection, and determines three-dimensional information of the three-dimensional point corresponding to the sixth two-dimensional feature point as the three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in a world coordinate system.
The detection device may project the three-dimensional points acquired by the point cloud sensor at the second time as two-dimensional feature points, and determine a sixth two-dimensional feature point matched with the second two-dimensional feature point from the two-dimensional feature points obtained by projection. The second two-dimensional feature point is matched with the sixth two-dimensional feature point, and the second two-dimensional feature point and the sixth two-dimensional feature point are both two-dimensional feature points acquired at the second time, so that the three-dimensional point corresponding to the sixth two-dimensional feature point is matched with the second three-dimensional feature point corresponding to the second two-dimensional feature point. The three-dimensional information of the three-dimensional point corresponding to the sixth two-dimensional feature point may be determined as the three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in the world coordinate system.
And S118, determining the motion state of the target object by the detection equipment according to the three-dimensional information of the first three-dimensional characteristic point in the world coordinate system and the three-dimensional information of the second three-dimensional characteristic point in the world coordinate system.
In the embodiment of the invention, the detection device can use the three-dimensional information obtained through the point cloud sensor to determine the three-dimensional information of the first three-dimensional feature point and of the second three-dimensional feature point in the world coordinate system. The motion state of the target object is then determined according to the three-dimensional information of the first three-dimensional feature point and of the second three-dimensional feature point in the world coordinate system, which can improve the accuracy of obtaining the motion state of the target object.
Based on the above description of the method for detecting the motion state of the target object, another method for detecting the motion state of the target object is provided in the embodiments of the present invention, please refer to fig. 12. In the embodiment of the present invention as shown in fig. 12, the method for detecting the motion state of the target object may include the following steps.
S130, the detection device acquires a first driving environment image at the first time.
S131, the detection device acquires a second driving environment image at a second time, wherein the first time is a time before the second time. For example, the detection apparatus may capture an Nth driving environment image (i.e., an Nth left image) with the left-vision camera at an Nth time, and capture an (N+1)th driving environment image (i.e., an (N+1)th left image) with the left-vision camera at an (N+1)th time, wherein N may be a positive integer. Accordingly, N may be one, so the detection apparatus obtains a first driving environment image (i.e., a first left image) captured by the left-vision camera at the first time and a second driving environment image (i.e., a second left image) captured by the left-vision camera at the second time.
S132, extracting at least one first two-dimensional feature point of the target object from the first driving environment image by the detection equipment. For example, as shown in fig. 13, the detection apparatus may extract at least one Nth two-dimensional feature point of the target object from the Nth left image, and the at least one Nth two-dimensional feature point may be represented as pN.
In one embodiment, step S132 includes: a first feature region of a target object is determined in the first driving environment image, and at least one first two-dimensional feature point of the target object is extracted from the first feature region.
The detection device can adopt a preceding detection algorithm to determine a first characteristic region of the target object in the first driving environment image, and extract at least one first two-dimensional characteristic point of the target object from the first characteristic region.
S133, the detection device establishes an object model of the target object according to the at least one first two-dimensional feature point, wherein the object model comprises the at least one first two-dimensional feature point.
In an embodiment of the present invention, in order to continuously update the motion state of the target object, the detection device may establish an object model of the target object according to the at least one first two-dimensional feature point. It is to be understood that including the at least one first two-dimensional feature point in the object model means that the object model is generated based on the at least one first two-dimensional feature point and is capable of matching the first two-dimensional feature point. For example, as shown in fig. 14A, assuming that the range of three-dimensional points in the first driving environment image is indicated by the dotted line in the figure, three first two-dimensional feature points of the target object, p1, p2, and p3, may be extracted at this time. The object model established according to these three first two-dimensional feature points can match the two-dimensional feature points p1, p2, and p3, that is, the object model at least comprises the two-dimensional feature points p1, p2, and p3.
And S134, extracting at least one second two-dimensional feature point of the target object from the second driving environment image by the detection device according to the object model of the target object, wherein the second two-dimensional feature point is a two-dimensional feature point of the target object in the second driving environment image other than the two-dimensional feature points in the object model. For example, as shown in fig. 14B, the range of three-dimensional points in the second driving environment image is indicated by the dotted line; the target object traveling ahead of the host vehicle has shifted to one side relative to the host vehicle, so the range indicated by the dotted line has also changed. The two-dimensional feature points of the target object may include p2, p3, p4, p5, and p6, and the object model of the target object includes p1, p2, and p3. Here p2 and p3 are repeated two-dimensional feature points; the repeated two-dimensional feature points are removed from the two-dimensional feature points of the target object in the second driving environment image to obtain the at least one second two-dimensional feature point of the target object, which comprises p4, p5, and p6.
Returning to fig. 13, the detection device may extract all feature points of the target object from the (N+1)th left image, and the extracted feature points may be represented as p'N+1. The feature points in p'N+1 are compared with the feature points in the object model of the target object, and the feature points in p'N+1 that repeat feature points in the object model of the target object are removed, so as to obtain at least one (N+1)th two-dimensional feature point, which may be represented as p''N+1. Then p''N+1 is added to the object model of the target object to update the object model of the target object. Therefore, for the object model of the target object shown in fig. 14B, after adding the three new second two-dimensional feature points p4, p5 and p6, a new object model may be generated based on at least the five two-dimensional feature points p2 to p6, so as to update the original object model of the target object.
In one embodiment, step S134 includes: and determining a second characteristic region of the target object in the second driving environment image according to the object model of the target object, and extracting at least one second two-dimensional characteristic point of the target object from the second characteristic region.
The detection device may determine a feature region of the second driving environment image, in which the number of feature points matching the object model of the target object is the largest, as a second feature region of the target object, and extract one or more second two-dimensional feature points of the target object from the second feature region.
In one embodiment, the determining the second characteristic region of the target object in the second driving environment image according to the object model of the target object includes: acquiring at least one object feature region in the second driving environment image, determining the number of feature points in each object feature region that match feature points in the object model of the target object, and determining the feature region, among the at least one object feature region, whose number of matched feature points is greater than a target preset value as the second feature region of the target object. The target preset value may be set by a user, or may be a default of the detection device. For example, the target preset value may be 3, and the second driving environment image includes the characteristic regions of the object 1 and the object 2. If the number of feature points in the feature region of the object 1 that match feature points in the object model of the target object is 5, and the corresponding number for the object 2 is 2, the characteristic region of the object 1 is determined as the second characteristic region of the target object.
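A minimal sketch of this selection is given below; the coordinate-proximity test used to decide whether two feature points match is a simplification made for the example, and the region labels and point values are illustrative:

```python
def count_matches(region_points, model_points, tol=2.0):
    """Number of feature points in a candidate region that match the object model."""
    return sum(
        1 for p in region_points
        if any(abs(p[0] - q[0]) < tol and abs(p[1] - q[1]) < tol for q in model_points))

def select_second_feature_region(candidate_regions, model_points, target_preset_value=3):
    """Return the label of the region whose match count exceeds target_preset_value."""
    for label, pts in candidate_regions.items():
        if count_matches(pts, model_points) > target_preset_value:
            return label
    return None

regions = {
    "object_1": [(100, 50), (120, 52), (140, 55), (160, 60), (180, 62)],
    "object_2": [(400, 80), (420, 81)],
}
model_points = [(100, 50), (120, 52), (140, 55), (160, 60), (180, 62)]
chosen = select_second_feature_region(regions, model_points)   # -> "object_1"
```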
And S135, the detection equipment updates the object model of the target object according to the second two-dimensional feature points to obtain the updated object model of the target object. It can be understood that updating the object model of the target object according to the second two-dimensional feature points may refer to adding the second two-dimensional feature points to the object model of the target object; that is, the object model is regenerated based on the original two-dimensional feature points and the added two-dimensional feature points, and the newly generated object model (i.e., the updated object model) can match the original two-dimensional feature points and the newly added two-dimensional feature points. In other words, the updated object model can match the first two-dimensional feature points and the second two-dimensional feature points. For example, as shown in fig. 14A and 14B, the second two-dimensional feature points include p4, p5, and p6, and the object model of the target object includes p1, p2, and p3. The object model of the target object is updated with p4, p5, and p6, and the updated object model can match the original two-dimensional feature points p2 and p3 and the newly added two-dimensional feature points p4, p5, and p6, that is, the updated object model at least comprises p2, p3, p4, p5, and p6. In some embodiments, the updated object model may also include the original two-dimensional feature point p1; in other embodiments, the updated object model may not include p1.
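A minimal sketch of maintaining and updating such an object model as a collection of two-dimensional feature points is given below; the proximity-based matching test and the numeric coordinates are assumptions made for the example:

```python
import numpy as np

class ObjectModel:
    """Toy object model that stores the target object's 2D feature points."""

    def __init__(self, initial_points):
        self.points = [tuple(p) for p in initial_points]     # e.g. p1, p2, p3

    def matches(self, point, tol=2.0):
        return any(np.hypot(point[0] - q[0], point[1] - q[1]) < tol
                   for q in self.points)

    def update(self, candidate_points):
        """Add only the candidate points not already in the model and return them
        (these correspond to the second two-dimensional feature points)."""
        new_points = [tuple(p) for p in candidate_points if not self.matches(p)]
        self.points.extend(new_points)
        return new_points

model = ObjectModel([(100, 50), (120, 52), (140, 55)])        # p1, p2, p3
second_pts = model.update(
    [(120, 52), (140, 55), (160, 60), (180, 62), (200, 66)])  # only the new points are added
```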
S136, the detection device determines the motion state of the target object according to the first two-dimensional feature point and the two-dimensional feature point matched with the first two-dimensional feature point in the second driving environment image. For example, as shown in fig. 13, if the two-dimensional feature point pN+1 in the (N+1)th left image matches the Nth two-dimensional feature point pN in the Nth left image, the Nth depth information is calculated according to the Nth right image, wherein the Nth left image and the Nth right image are captured by the binocular camera device at the same time. The three-dimensional information, in the world coordinate system, of the three-dimensional feature point PN corresponding to the two-dimensional feature point pN is determined according to the Nth depth information. The (N+1)th depth information may be calculated according to the (N+1)th right image, wherein the (N+1)th left image and the (N+1)th right image are captured by the binocular camera device at the same time. The three-dimensional information, in the world coordinate system, of the three-dimensional feature point PN+1 corresponding to the two-dimensional feature point pN+1 is determined according to the (N+1)th depth information. Further, the motion state of the target object is determined according to the three-dimensional information of PN+1 in the world coordinate system and the three-dimensional information of PN in the world coordinate system. Accordingly, the detection device may determine the motion state of the target object according to the first two-dimensional feature point and the two-dimensional feature point in the second left image that matches the first two-dimensional feature point.
In one embodiment, the detection device acquires a third driving environment image at a third time, the second time is a time before the third time, and the motion state of the target object is updated according to the two-dimensional feature points of the target object matched in the second driving environment image and the third driving environment image.
The detection device can update the motion state of the target object in real time according to the driving environment images, so as to improve the accuracy of acquiring the motion state of the target object. Specifically, the detection device may update the motion state of the target object according to the two-dimensional feature points of the target object that are matched in the second driving environment image and the third driving environment image.
In the embodiment of the present invention, the detection device may establish an object model of the target object according to at least one first two-dimensional feature point of the target object in the first driving environment image, and update the object model of the target object according to at least one second two-dimensional feature point in the second driving environment image. And determining the motion state of the target object according to the first two-dimensional feature point and the two-dimensional feature point matched with the first two-dimensional feature point in the second driving environment image. The continuous motion state of the target object can be obtained, and the accuracy of obtaining the motion state of the target object can be improved.
Referring to fig. 15, fig. 15 is a schematic structural diagram of a detection apparatus according to an embodiment of the present invention. Specifically, the detection device includes: a processor 101, a memory 102, and an imaging device 103.
The memory 102 may include a volatile memory; the memory 102 may also include a non-volatile memory; the memory 102 may also comprise a combination of the above kinds of memories. The processor 101 may be a Central Processing Unit (CPU). The processor 101 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), or any combination thereof.
The camera 103 may be used to take images or video. For example, the camera 103 mounted on the vehicle may capture an image or video of the running surroundings while the vehicle is running.
In one embodiment, the memory is to store program instructions; the processor calls the program instructions stored in the memory to execute the following steps:
shooting at a first time through the camera device to obtain a first driving environment image; shooting at a second time to obtain a second driving environment image, wherein the first time is a time before the second time, and the first driving environment image and the second driving environment image comprise target objects;
determining three-dimensional information of a first three-dimensional feature point of the target object in a world coordinate system according to the first driving environment image;
determining three-dimensional information of a second three-dimensional feature point of the target object in a world coordinate system according to the second driving environment image, wherein the second three-dimensional feature point is matched with the first three-dimensional feature point;
and determining the motion state of the target object according to the three-dimensional information of the first three-dimensional characteristic point in the world coordinate system and the three-dimensional information of the second three-dimensional characteristic point in the world coordinate system.
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
extracting a first two-dimensional feature point of a target object from the first driving environment image;
determining three-dimensional information of a first three-dimensional feature point corresponding to the first two-dimensional feature point in a world coordinate system;
the determining of the three-dimensional information of the second three-dimensional feature point of the target object in the world coordinate system according to the second driving environment image includes:
determining a second two-dimensional feature point matched with the first two-dimensional feature point from the second driving environment image;
and determining the three-dimensional information of the second three-dimensional characteristic point corresponding to the second two-dimensional characteristic point in a world coordinate system.
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
determining a first characteristic region of a target object in the first driving environment image;
and extracting a first two-dimensional feature point of the target object from the first feature area.
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
shooting a target image through a camera device;
determining a second characteristic region of the target object in the target image;
the determining of the first feature region of the target object in the first driving environment image includes:
and projecting the second characteristic region into the first driving environment image, and determining the obtained projection region as a first characteristic region of the target object.
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
determining whether a first parameter of the target object meets a preset condition, wherein the first parameter comprises at least one of an attribute corresponding to the second characteristic region, a lane where the target object is located and a driving direction of the target object, and the attribute comprises the size and/or the shape of the second characteristic region;
and if so, executing the step of projecting the second characteristic region into the first driving environment image, and determining the obtained projection region as the first characteristic region of the target object.
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
determining whether a second parameter of the target object meets a preset condition, wherein the second parameter comprises at least one of an attribute corresponding to the first characteristic region, a lane where the target object is located and a driving direction of the target object, and the attribute comprises the size and/or the shape of the first characteristic region;
and if so, executing the step of extracting the first two-dimensional feature point from the first feature area.
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
determining three-dimensional information of a first three-dimensional feature point corresponding to the first two-dimensional feature point in a camera coordinate system;
determining the three-dimensional information of the first three-dimensional feature point in a world coordinate system according to the three-dimensional information of the first three-dimensional feature point in a camera coordinate system;
the determining three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in a world coordinate system includes:
determining three-dimensional information of a second three-dimensional feature point corresponding to the second two-dimensional feature point in a camera coordinate system;
and determining the three-dimensional information of the second three-dimensional feature point in a world coordinate system according to the three-dimensional information of the second three-dimensional feature point in the camera coordinate system.
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
determining a third two-dimensional feature point matched with the first two-dimensional feature point from a third driving environment image, wherein the third driving environment image and the first driving environment image are two images shot by a binocular camera device at the same time;
determining first depth information according to the first two-dimensional feature point and the third two-dimensional feature point;
determining three-dimensional information of the first three-dimensional feature point in a camera coordinate system according to the first depth information;
the determining the three-dimensional information of the second three-dimensional feature point in the camera coordinate system includes:
determining a fourth two-dimensional feature point matched with the second two-dimensional feature point from a fourth driving environment image, wherein the fourth driving environment image and the second driving environment image are two images shot by a binocular camera device at the same time;
determining second depth information according to the second two-dimensional feature points and the fourth two-dimensional feature points;
and determining the three-dimensional information of the second three-dimensional feature point in a camera coordinate system according to the second depth information.
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
acquiring three-dimensional information of the three-dimensional point obtained by the point cloud sensor at the first time in a world coordinate system;
and determining the three-dimensional information of the first three-dimensional characteristic point corresponding to the first two-dimensional characteristic point in a world coordinate system according to the three-dimensional information of the three-dimensional point obtained at the first time.
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
acquiring three-dimensional information of the three-dimensional point obtained by the point cloud sensor at the second time in a world coordinate system;
and determining the three-dimensional information of the second three-dimensional characteristic point corresponding to the second two-dimensional characteristic point in a world coordinate system according to the three-dimensional information of the three-dimensional point acquired at the second time.
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
projecting the three-dimensional points obtained by the point cloud sensor at the first time into two-dimensional feature points;
determining a fifth two-dimensional feature point matched with the first two-dimensional feature point from the two-dimensional feature points obtained by projection;
and determining the three-dimensional information of the three-dimensional point corresponding to the fifth two-dimensional feature point as the three-dimensional information of the first three-dimensional feature point corresponding to the first two-dimensional feature point in a world coordinate system.
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
projecting the three-dimensional points obtained by the point cloud sensor at the second time into two-dimensional feature points;
determining a sixth two-dimensional feature point matched with the second two-dimensional feature point from the two-dimensional feature points obtained by projection;
and determining the three-dimensional information of the three-dimensional point corresponding to the sixth two-dimensional feature point as the three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in a world coordinate system.
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
projecting the three-dimensional information of the first three-dimensional characteristic point in a world coordinate system to a bird's-eye view angle to obtain a first bird's-eye view visual coordinate;
projecting the three-dimensional information of the second three-dimensional feature point in a world coordinate system to a bird's-eye view angle to obtain a second bird's-eye view visual coordinate;
and determining the motion state of the target object according to the first aerial view visual coordinate and the second aerial view visual coordinate.
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
determining displacement information of the target object in a first direction and a second direction and/or rotation angle information in a first direction according to the first aerial view visual coordinate and the second aerial view visual coordinate;
and determining the motion state of the target object according to the displacement information and/or the rotation angle information of the target object.
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
determining displacement information of the target object in a first direction and a second direction and/or rotation angle information around a first direction according to three-dimensional information of the first three-dimensional characteristic point in a world coordinate system and three-dimensional information of the second three-dimensional characteristic point in the world coordinate system;
and determining the motion state of the target object according to the displacement information and/or the rotation angle information of the target object.
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
determining the moving speed of the target object according to the displacement information of the target object; and/or determining the rotating direction of the target object according to the rotating angle information.
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
determining the target moving speed according to the displacement information of the target object;
and determining the speed obtained after filtering the target moving speed as the moving speed of the target object.
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
extracting a plurality of two-dimensional feature points from the first feature region;
and screening the plurality of two-dimensional characteristic points according to a preset algorithm to obtain a first two-dimensional characteristic point.
Optionally, the number of the target objects is one or more.
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
acquiring a first driving environment image at a first time through the camera device; acquiring a second driving environment image at a second time, wherein the first time is a time before the second time, and the first driving environment image and the second driving environment image comprise target objects;
extracting at least one first two-dimensional feature point of a target object from the first driving environment image;
establishing an object model of the target object according to the at least one first two-dimensional feature point;
extracting at least one second two-dimensional feature point of the target object from the second driving environment image according to the object model of the target object;
updating the object model of the target object according to the second two-dimensional feature points to obtain an updated object model of the target object;
and determining the motion state of the target object according to the first two-dimensional feature point and the two-dimensional feature point matched with the first two-dimensional feature point in the second driving environment image.
Optionally, the updated object model of the target object includes the at least one first two-dimensional feature point and the at least one second two-dimensional feature point.
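Purely as an illustration of matching the object-model feature points into the second driving environment image and updating the model, the sketch below uses pyramidal Lucas-Kanade optical flow; this matching strategy is an assumption, not a requirement of the disclosure:

```python
import cv2
import numpy as np

def track_and_update_model(prev_gray, cur_gray, model_points):
    """Match object-model feature points into the second image and update the model.

    model_points: Nx2 float32 array of first two-dimensional feature points.
    Pyramidal Lucas-Kanade optical flow is used here purely as an
    illustrative matching strategy.
    """
    pts = np.asarray(model_points, dtype=np.float32).reshape(-1, 1, 2)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    ok = status.reshape(-1) == 1
    matched_first = pts.reshape(-1, 2)[ok]
    matched_second = nxt.reshape(-1, 2)[ok]
    # The updated object model keeps both the first and the matched second points.
    updated_model = np.vstack([matched_first, matched_second])
    return matched_first, matched_second, updated_model
```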
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
acquiring a third driving environment image at a third time, wherein the second time is a time before the third time;
and updating the motion state of the target object according to the two-dimensional characteristic points of the target object matched in the second driving environment image and the third driving environment image.
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
determining a first characteristic region of a target object in the first driving environment image;
extracting at least one first two-dimensional feature point of the target object from the first feature region;
extracting a second two-dimensional feature point of the target object from the second driving environment image according to the object model of the target object, including:
determining a second characteristic region of the target object in the second driving environment image according to the object model of the target object;
and extracting at least one second two-dimensional feature point of the target object from the second feature region.
Optionally, the processor calls the program instructions stored in the memory to execute the following steps:
acquiring at least one object characteristic region in the second driving environment image;
determining the number of feature points in each object feature region matched with the feature points in the object model of the target object;
and determining, as the second feature region of the target object, an object feature region, of the at least one object feature region, in which the number of matched feature points is greater than a target preset value.
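A minimal sketch of this selection step, assuming the per-region match counts have already been computed (the data layout is an assumption):

```python
def select_second_feature_region(matches_in_region, preset_value):
    """Pick the object feature region whose count of matched feature points
    exceeds a target preset value.

    matches_in_region: dict mapping a region identifier to the number of its
    feature points that matched feature points in the object model.
    """
    for region, count in matches_in_region.items():
        if count > preset_value:
            return region            # second feature region of the target object
    return None                      # no candidate region matched well enough
```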
In an embodiment of the present invention, a computer-readable storage medium is further provided, where a computer program is stored. When the computer program is executed by a processor, the method for detecting a motion state of a target object described in the embodiments corresponding to fig. 2, fig. 3, fig. 9, and fig. 11 of the present invention is implemented, and the detection device according to the embodiment of the present invention shown in fig. 15 may also be implemented, which is not described herein again.
The computer-readable storage medium may be an internal storage unit of the detection device according to any of the foregoing embodiments, for example, a hard disk or a memory of the device. The computer-readable storage medium may also be an external storage device of the detection device, such as a plug-in hard disk provided on the device, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the detection device. The computer-readable storage medium may also be used to temporarily store data that has been output or is to be output.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and the processes of the embodiments of the methods described above are carried out when the computer program is executed. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like. The above disclosure describes only preferred embodiments of the present invention and is not intended to limit the scope of the present invention, which is defined by the appended claims.

Claims (45)

1. A method for detecting a motion state of a target object, the method comprising:
acquiring a first driving environment image at a first time;
acquiring a second driving environment image at a second time, wherein the first time is a time before the second time, and the first driving environment image and the second driving environment image comprise target objects;
determining three-dimensional information of a first three-dimensional feature point of the target object in a world coordinate system according to the first driving environment image;
determining three-dimensional information of a second three-dimensional feature point of the target object in a world coordinate system according to the second driving environment image, wherein the second three-dimensional feature point is matched with the first three-dimensional feature point;
and determining the motion state of the target object according to the three-dimensional information of the first three-dimensional characteristic point in the world coordinate system and the three-dimensional information of the second three-dimensional characteristic point in the world coordinate system.
2. The method according to claim 1, wherein the determining three-dimensional information of a first three-dimensional feature point of the target object in a world coordinate system according to the first driving environment image comprises:
extracting a first two-dimensional feature point of a target object from the first driving environment image;
determining three-dimensional information of a first three-dimensional feature point corresponding to the first two-dimensional feature point in a world coordinate system;
the determining of the three-dimensional information of the second three-dimensional feature point of the target object in the world coordinate system according to the second driving environment image includes:
determining a second two-dimensional feature point matched with the first two-dimensional feature point from the second driving environment image;
and determining the three-dimensional information of the second three-dimensional characteristic point corresponding to the second two-dimensional characteristic point in a world coordinate system.
3. The method according to claim 2, wherein the extracting a first two-dimensional feature point of a target object from the first driving environment image comprises:
determining a first characteristic region of a target object in the first driving environment image;
and extracting a first two-dimensional feature point of the target object from the first feature area.
4. The method of claim 3, further comprising:
shooting a target image through a camera device;
determining a second characteristic region of the target object in the target image;
the determining of the first feature region of the target object in the first driving environment image includes:
and projecting the second characteristic region into the first driving environment image, and determining the obtained projection region as a first characteristic region of the target object.
5. The method of claim 4, further comprising:
determining whether a first parameter of the target object meets a preset condition, wherein the first parameter comprises at least one of an attribute corresponding to the second characteristic region, a lane where the target object is located and a driving direction of the target object, and the attribute comprises the size and/or the shape of the second characteristic region;
and if so, executing the step of projecting the second characteristic region into the first driving environment image, and determining the obtained projection region as the first characteristic region of the target object.
6. The method according to claim 4, wherein after determining the obtained projection region as the first feature region of the target object, the method further comprises:
determining whether a second parameter of the target object meets a preset condition, wherein the second parameter comprises at least one of an attribute corresponding to the first characteristic region, a lane where the target object is located and a driving direction of the target object, and the attribute comprises the size and/or the shape of the first characteristic region;
and if so, executing the step of extracting the first two-dimensional feature point from the first feature area.
7. The method according to any one of claims 2-6, wherein the determining three-dimensional information of the first three-dimensional feature point corresponding to the first two-dimensional feature point in a world coordinate system comprises:
determining three-dimensional information of a first three-dimensional feature point corresponding to the first two-dimensional feature point in a camera coordinate system;
determining the three-dimensional information of the first three-dimensional feature point in a world coordinate system according to the three-dimensional information of the first three-dimensional feature point in a camera coordinate system;
the determining three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in a world coordinate system includes:
determining three-dimensional information of a second three-dimensional feature point corresponding to the second two-dimensional feature point in a camera coordinate system;
and determining the three-dimensional information of the second three-dimensional feature point in a world coordinate system according to the three-dimensional information of the second three-dimensional feature point in the camera coordinate system.
8. The method of claim 7, wherein determining three-dimensional information of the first three-dimensional feature point corresponding to the first two-dimensional feature point in a camera coordinate system comprises:
determining a third two-dimensional feature point matched with the first two-dimensional feature point from a third driving environment image, wherein the third driving environment image and the first driving environment image are two images shot by a binocular camera device at the same time;
determining first depth information according to the first two-dimensional feature point and the third two-dimensional feature point;
determining three-dimensional information of the first three-dimensional feature point in a camera coordinate system according to the first depth information;
the determining three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in the camera coordinate system includes:
determining a fourth two-dimensional feature point matched with the second two-dimensional feature point from a fourth driving environment image, wherein the fourth driving environment image and the second driving environment image are two images shot by a binocular camera device at the same time;
determining second depth information according to the second two-dimensional feature points and the fourth two-dimensional feature points;
and determining the three-dimensional information of the second three-dimensional feature point in a camera coordinate system according to the second depth information.
9. The method according to any one of claims 2-6, wherein the determining three-dimensional information of the first three-dimensional feature point corresponding to the first two-dimensional feature point in a world coordinate system comprises:
acquiring three-dimensional information of the three-dimensional point obtained by the point cloud sensor at the first time in a world coordinate system;
determining the three-dimensional information of the first three-dimensional characteristic point corresponding to the first two-dimensional characteristic point in a world coordinate system according to the three-dimensional information of the three-dimensional point obtained at the first time;
the determining three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in a world coordinate system includes:
acquiring three-dimensional information of the three-dimensional point obtained by the point cloud sensor at the second time in a world coordinate system;
and determining the three-dimensional information of the second three-dimensional characteristic point corresponding to the second two-dimensional characteristic point in a world coordinate system according to the three-dimensional information of the three-dimensional point acquired at the second time.
10. The method of claim 9, wherein determining three-dimensional information of the first three-dimensional feature point corresponding to the first two-dimensional feature point in a world coordinate system according to the three-dimensional information of the three-dimensional point obtained at the first time comprises:
projecting the three-dimensional points obtained through the point cloud sensor at the first time into two-dimensional feature points;
determining a fifth two-dimensional feature point matched with the first two-dimensional feature point from the two-dimensional feature points obtained by projection;
determining the three-dimensional information of the three-dimensional point corresponding to the fifth two-dimensional feature point as the three-dimensional information of the first three-dimensional feature point corresponding to the first two-dimensional feature point in a world coordinate system;
the determining, according to the three-dimensional information of the three-dimensional point obtained at the second time, the three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in the world coordinate system includes:
projecting the three-dimensional points obtained by the point cloud sensor at the second time into two-dimensional feature points;
determining a sixth two-dimensional feature point matched with the second two-dimensional feature point from the two-dimensional feature points obtained by projection;
and determining the three-dimensional information of the three-dimensional point corresponding to the sixth two-dimensional feature point as the three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in a world coordinate system.
11. The method according to claim 1, wherein the determining the motion state of the target object according to the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system comprises:
projecting the three-dimensional information of the first three-dimensional characteristic point in a world coordinate system to a bird's-eye view angle to obtain a first bird's-eye view visual coordinate;
projecting the three-dimensional information of the second three-dimensional feature point in a world coordinate system to a bird's-eye view angle to obtain a second bird's-eye view visual coordinate;
and determining the motion state of the target object according to the first bird's-eye view visual coordinate and the second bird's-eye view visual coordinate.
12. The method of claim 11, wherein determining the state of motion of the target object from the first bird's-eye view visual coordinate and the second bird's-eye view visual coordinate comprises:
determining displacement information of the target object in a first direction and a second direction and/or rotation angle information around a first direction according to the first bird's-eye view visual coordinate and the second bird's-eye view visual coordinate;
and determining the motion state of the target object according to the displacement information and/or the rotation angle information of the target object.
13. The method according to claim 1, wherein the determining the motion state of the target object according to the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system comprises:
determining displacement information of the target object in a first direction and a second direction and/or rotation angle information around a first direction according to three-dimensional information of the first three-dimensional feature point in a world coordinate system and three-dimensional information of the second three-dimensional feature point in the world coordinate system;
and determining the motion state of the target object according to the displacement information and/or the rotation angle information of the target object.
14. The method according to claim 12 or 13, wherein the motion state comprises a moving speed or/and a rotating direction of the target object, and the determining the motion state of the target object according to the displacement information and/or the rotating angle information of the target object comprises:
determining the moving speed of the target object according to the displacement information of the target object; and/or determining the rotating direction of the target object according to the rotating angle information.
15. The method of claim 14, wherein the determining a moving speed of the target object according to the displacement information of the target object comprises:
determining the target moving speed according to the displacement information of the target object;
and determining the speed obtained after filtering the target moving speed as the moving speed of the target object.
16. The method according to any one of claims 3-6, wherein said extracting a first two-dimensional feature point from the first feature region comprises:
extracting a plurality of two-dimensional feature points from the first feature region;
and screening the plurality of two-dimensional characteristic points according to a preset algorithm to obtain a first two-dimensional characteristic point.
17. The method according to any one of claims 1 to 6, wherein the number of target objects is one or more.
18. A method for detecting a motion state of a target object, the method comprising:
acquiring a first driving environment image at a first time;
acquiring a second driving environment image at a second time, wherein the first time is a time before the second time, and the first driving environment image and the second driving environment image comprise target objects;
extracting at least one first two-dimensional feature point of a target object from the first driving environment image;
establishing an object model of the target object according to the at least one first two-dimensional feature point;
extracting at least one second two-dimensional feature point of the target object from the second driving environment image according to the object model of the target object;
updating the object model of the target object according to the second two-dimensional feature points to obtain an updated object model of the target object;
and determining the motion state of the target object according to the first two-dimensional feature point and the two-dimensional feature point matched with the first two-dimensional feature point in the second driving environment image.
19. The method of claim 18, wherein the updated object model of the target object comprises the at least one first two-dimensional feature point and the at least one second two-dimensional feature point.
20. The method of claim 18, further comprising:
acquiring a third driving environment image at a third time, wherein the second time is a time before the third time;
and updating the motion state of the target object according to the two-dimensional characteristic points of the target object matched in the second driving environment image and the third driving environment image.
21. The method according to any one of claims 18 to 20, wherein said extracting at least one first two-dimensional feature point of a target object from said first driving environment image comprises:
determining a first characteristic region of a target object in the first driving environment image;
extracting at least one first two-dimensional feature point of the target object from the first feature region;
extracting a second two-dimensional feature point of the target object from the second driving environment image according to the object model of the target object, including:
determining a second characteristic region of the target object in the second driving environment image according to the object model of the target object;
and extracting at least one second two-dimensional feature point of the target object from the second feature region.
22. The method according to claim 21, wherein the determining a second feature region of the target object in the second driving environment image based on the object model of the target object comprises:
acquiring at least one object characteristic region in the second driving environment image;
determining the number of feature points in each object feature region matched with the feature points in the object model of the target object;
and determining, as the second feature region of the target object, an object feature region, of the at least one object feature region, in which the number of matched feature points is greater than a target preset value.
23. A detection apparatus is characterized by comprising a memory, a processor and a camera device;
the camera device is used for acquiring a driving environment image;
the memory to store program instructions;
the processor calls the program instructions stored in the memory to execute the following steps:
shooting at a first time through the camera device to obtain a first driving environment image; shooting at a second time to obtain a second driving environment image, wherein the first time is a time before the second time, and the first driving environment image and the second driving environment image comprise target objects;
determining three-dimensional information of a first three-dimensional feature point of the target object in a world coordinate system according to the first driving environment image;
determining three-dimensional information of a second three-dimensional feature point of the target object in a world coordinate system according to the second driving environment image, wherein the second three-dimensional feature point is matched with the first three-dimensional feature point;
and determining the motion state of the target object according to the three-dimensional information of the first three-dimensional characteristic point in the world coordinate system and the three-dimensional information of the second three-dimensional characteristic point in the world coordinate system.
24. The apparatus of claim 23, wherein the processor, when invoking the program instructions stored in the memory, performs the steps of:
extracting a first two-dimensional feature point of a target object from the first driving environment image;
determining three-dimensional information of a first three-dimensional feature point corresponding to the first two-dimensional feature point in a world coordinate system;
the determining of the three-dimensional information of the second three-dimensional feature point of the target object in the world coordinate system according to the second driving environment image includes:
determining a second two-dimensional feature point matched with the first two-dimensional feature point from the second driving environment image;
and determining the three-dimensional information of the second three-dimensional characteristic point corresponding to the second two-dimensional characteristic point in a world coordinate system.
25. The apparatus of claim 24, wherein the processor, when invoking the program instructions stored in the memory, performs the steps of:
determining a first characteristic region of a target object in the first driving environment image;
and extracting a first two-dimensional feature point of the target object from the first feature area.
26. The apparatus of claim 25, wherein the processor, when invoking the program instructions stored in the memory, performs the steps of:
shooting a target image through a camera device;
determining a second characteristic region of the target object in the target image;
the determining of the first feature region of the target object in the first driving environment image includes:
and projecting the second characteristic region into the first driving environment image, and determining the obtained projection region as a first characteristic region of the target object.
27. The apparatus of claim 26, wherein the processor, when invoking the program instructions stored in the memory, performs the steps of:
determining whether a first parameter of the target object meets a preset condition, wherein the first parameter comprises at least one of an attribute corresponding to the second characteristic region, a lane where the target object is located and a driving direction of the target object, and the attribute comprises the size and/or the shape of the second characteristic region;
and if so, executing the step of projecting the second characteristic region into the first driving environment image, and determining the obtained projection region as the first characteristic region of the target object.
28. The apparatus of claim 26, wherein the processor, when invoking the program instructions stored in the memory, performs the steps of:
determining whether a second parameter of the target object meets a preset condition, wherein the second parameter comprises at least one of an attribute corresponding to the first characteristic region, a lane where the target object is located and a driving direction of the target object, and the attribute comprises the size and/or the shape of the first characteristic region;
and if so, executing the step of extracting the first two-dimensional feature point from the first feature area.
29. The apparatus according to any one of claims 24-28, wherein the processor, when invoking the program instructions stored in the memory, performs the steps of:
determining three-dimensional information of a first three-dimensional feature point corresponding to the first two-dimensional feature point in a camera coordinate system;
determining the three-dimensional information of the first three-dimensional feature point in a world coordinate system according to the three-dimensional information of the first three-dimensional feature point in a camera coordinate system;
the determining three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in a world coordinate system includes:
determining three-dimensional information of a second three-dimensional feature point corresponding to the second two-dimensional feature point in a camera coordinate system;
and determining the three-dimensional information of the second three-dimensional feature point in a world coordinate system according to the three-dimensional information of the second three-dimensional feature point in the camera coordinate system.
30. The apparatus of claim 29, wherein the processor, when invoking the program instructions stored in the memory, performs the steps of:
determining a third two-dimensional feature point matched with the first two-dimensional feature point from a third driving environment image, wherein the third driving environment image and the first driving environment image are two images shot by a binocular camera device at the same time;
determining first depth information according to the first two-dimensional feature point and the third two-dimensional feature point;
determining three-dimensional information of the first three-dimensional feature point in a camera coordinate system according to the first depth information;
the determining the three-dimensional information of the second three-dimensional feature point in the camera coordinate system includes:
determining a fourth two-dimensional feature point matched with the second two-dimensional feature point from a fourth driving environment image, wherein the fourth driving environment image and the second driving environment image are two images shot by a binocular camera device at the same time;
determining second depth information according to the second two-dimensional feature points and the fourth two-dimensional feature points;
and determining the three-dimensional information of the second three-dimensional feature point in a camera coordinate system according to the second depth information.
31. The apparatus according to any one of claims 24-28, wherein the processor, when invoking the program instructions stored in the memory, performs the steps of:
acquiring three-dimensional information of the three-dimensional point obtained by the point cloud sensor at the first time in a world coordinate system;
determining the three-dimensional information of the first three-dimensional characteristic point corresponding to the first two-dimensional characteristic point in a world coordinate system according to the three-dimensional information of the three-dimensional point obtained at the first time;
the determining three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in a world coordinate system includes:
acquiring three-dimensional information of the three-dimensional point obtained by the point cloud sensor at the second time in a world coordinate system;
and determining the three-dimensional information of the second three-dimensional characteristic point corresponding to the second two-dimensional characteristic point in a world coordinate system according to the three-dimensional information of the three-dimensional point acquired at the second time.
32. The apparatus of claim 31, wherein the processor, when invoking the program instructions stored in the memory, performs the steps of:
projecting the three-dimensional points obtained through the point cloud sensor at the first time into two-dimensional feature points;
determining a fifth two-dimensional feature point matched with the first two-dimensional feature point from the two-dimensional feature points obtained by projection;
determining the three-dimensional information of the three-dimensional point corresponding to the fifth two-dimensional feature point as the three-dimensional information of the first three-dimensional feature point corresponding to the first two-dimensional feature point in a world coordinate system;
the determining, according to the three-dimensional information of the three-dimensional point obtained at the second time, the three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in the world coordinate system includes:
projecting the three-dimensional points obtained by the point cloud sensor at the second time into two-dimensional feature points;
determining a sixth two-dimensional feature point matched with the second two-dimensional feature point from the two-dimensional feature points obtained by projection;
and determining the three-dimensional information of the three-dimensional point corresponding to the sixth two-dimensional feature point as the three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in a world coordinate system.
33. The apparatus of claim 23, wherein the processor, when invoking the program instructions stored in the memory, performs the steps of:
projecting the three-dimensional information of the first three-dimensional characteristic point in a world coordinate system to a bird's-eye view angle to obtain a first bird's-eye view visual coordinate;
projecting the three-dimensional information of the second three-dimensional feature point in a world coordinate system to a bird's-eye view angle to obtain a second bird's-eye view visual coordinate;
and determining the motion state of the target object according to the first bird's-eye view visual coordinate and the second bird's-eye view visual coordinate.
34. The apparatus of claim 33, wherein the processor, when invoking the program instructions stored in the memory, performs the steps of:
determining displacement information of the target object in a first direction and a second direction and/or rotation angle information around a first direction according to the first bird's-eye view visual coordinate and the second bird's-eye view visual coordinate;
and determining the motion state of the target object according to the displacement information and/or the rotation angle information of the target object.
35. The apparatus of claim 23, wherein the processor, when invoking the program instructions stored in the memory, performs the steps of:
determining displacement information of the target object in a first direction and a second direction and/or rotation angle information around a first direction according to three-dimensional information of the first three-dimensional feature point in a world coordinate system and three-dimensional information of the second three-dimensional feature point in the world coordinate system;
and determining the motion state of the target object according to the displacement information and/or the rotation angle information of the target object.
36. The apparatus of claim 34 or 35, wherein the processor, when invoking the program instructions stored in the memory, performs the steps of:
determining the moving speed of the target object according to the displacement information of the target object; and/or determining the rotating direction of the target object according to the rotating angle information.
37. The apparatus of claim 36, wherein the processor, when invoking the program instructions stored in the memory, performs the steps of:
determining the target moving speed according to the displacement information of the target object;
and determining the speed obtained after filtering the target moving speed as the moving speed of the target object.
38. The apparatus according to any one of claims 25-28, wherein the processor, when invoking the program instructions stored in the memory, performs the steps of:
extracting a plurality of two-dimensional feature points from the first feature region;
and screening the plurality of two-dimensional characteristic points according to a preset algorithm to obtain a first two-dimensional characteristic point.
39. The apparatus according to any one of claims 23-28, wherein the number of target objects is one or more.
40. A detection device, characterized in that the device comprises: a memory, a processor, and a camera device;
the camera device is used for acquiring a driving environment image;
the memory to store program instructions;
the processor calls the program instructions stored in the memory to execute the following steps:
acquiring a first driving environment image at a first time through the camera device; acquiring a second driving environment image at a second time, wherein the first time is a time before the second time, and the first driving environment image and the second driving environment image comprise target objects;
extracting at least one first two-dimensional feature point of a target object from the first driving environment image;
establishing an object model of the target object according to the at least one first two-dimensional feature point;
extracting at least one second two-dimensional feature point of the target object from the second driving environment image according to the object model of the target object;
updating the object model of the target object according to the second two-dimensional feature points to obtain an updated object model of the target object;
and determining the motion state of the target object according to the first two-dimensional feature point and the two-dimensional feature point matched with the first two-dimensional feature point in the second driving environment image.
41. The apparatus according to claim 40, wherein the updated object model of the target object comprises the at least one first two-dimensional feature point and the at least one second two-dimensional feature point.
42. The apparatus of claim 40, wherein the processor, when invoking the program instructions stored in the memory, performs the steps of:
acquiring a third driving environment image at a third time, wherein the second time is a time before the third time;
and updating the motion state of the target object according to the two-dimensional characteristic points of the target object matched in the second driving environment image and the third driving environment image.
43. The device of any one of claims 40-42, wherein the processor, when invoking the program instructions stored in the memory, performs the steps of:
determining a first characteristic region of a target object in the first driving environment image;
extracting at least one first two-dimensional feature point of the target object from the first feature region;
extracting a second two-dimensional feature point of the target object from the second driving environment image according to the object model of the target object, including:
determining a second characteristic region of the target object in the second driving environment image according to the object model of the target object;
and extracting at least one second two-dimensional feature point of the target object from the second feature region.
44. The apparatus of claim 43, wherein the processor, when invoking the program instructions stored in the memory, performs the steps of:
acquiring at least one object characteristic region in the second driving environment image;
determining the number of feature points in each object feature region matched with the feature points in the object model of the target object;
and determining, as the second feature region of the target object, an object feature region, of the at least one object feature region, in which the number of matched feature points is greater than a target preset value.
45. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when executed by a processor, implements the target object motion state detection method according to any one of claims 1 to 22.
CN201980004912.7A 2019-01-30 2019-01-30 Target object motion state detection method, device and storage medium Pending CN111213153A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/074014 WO2020154990A1 (en) 2019-01-30 2019-01-30 Target object motion state detection method and device, and storage medium

Publications (1)

Publication Number Publication Date
CN111213153A true CN111213153A (en) 2020-05-29

Family

ID=70790112

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980004912.7A Pending CN111213153A (en) 2019-01-30 2019-01-30 Target object motion state detection method, device and storage medium

Country Status (2)

Country Link
CN (1) CN111213153A (en)
WO (1) WO2020154990A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914857B (en) * 2020-08-11 2023-05-09 上海柏楚电子科技股份有限公司 Layout method, device and system for plate excess material, electronic equipment and storage medium
CN112926488B (en) * 2021-03-17 2023-05-30 国网安徽省电力有限公司铜陵供电公司 Electric power pole tower structure information-based operator violation identification method
CN115641359B (en) * 2022-10-17 2023-10-31 北京百度网讯科技有限公司 Method, device, electronic equipment and medium for determining movement track of object

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5233668A (en) * 1990-05-25 1993-08-03 Suzuki Motor Corporation Method and apparatus for discriminating aggregation pattern
CN101149803A (en) * 2007-11-09 2008-03-26 华中科技大学 Small false alarm rate test estimation method for point source target detection
CN106127137A (en) * 2016-06-21 2016-11-16 长安大学 A kind of target detection recognizer based on 3D trajectory analysis
CN108271408A (en) * 2015-04-01 2018-07-10 瓦亚视觉有限公司 Generating three-dimensional maps of scenes using passive and active measurements
CN110610127A (en) * 2019-08-01 2019-12-24 平安科技(深圳)有限公司 Face recognition method and device, storage medium and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5035284B2 (en) * 2009-03-25 2012-09-26 株式会社日本自動車部品総合研究所 Vehicle periphery display device
KR102366402B1 (en) * 2015-05-21 2022-02-22 엘지전자 주식회사 Driver assistance apparatus and control method for the same
DE102016202594A1 (en) * 2016-02-19 2017-08-24 Robert Bosch Gmbh Method and device for interpreting a vehicle environment of a vehicle and vehicle
CN108596116B (en) * 2018-04-27 2021-11-05 深圳市商汤科技有限公司 Distance measuring method, intelligent control method and device, electronic equipment and storage medium

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111829535A (en) * 2020-06-05 2020-10-27 北京百度网讯科技有限公司 Method and device for generating offline map, electronic equipment and storage medium
CN111709923A (en) * 2020-06-10 2020-09-25 中国第一汽车股份有限公司 Three-dimensional object detection method and device, computer equipment and storage medium
CN111709923B (en) * 2020-06-10 2023-08-04 中国第一汽车股份有限公司 Three-dimensional object detection method, three-dimensional object detection device, computer equipment and storage medium
CN112115820A (en) * 2020-09-03 2020-12-22 上海欧菲智能车联科技有限公司 Vehicle-mounted driving assisting method and device, computer device and readable storage medium
CN112115820B (en) * 2020-09-03 2024-06-21 上海欧菲智能车联科技有限公司 Vehicle-mounted driving assisting method and device, computer device and readable storage medium
CN113096151A (en) * 2021-04-07 2021-07-09 地平线征程(杭州)人工智能科技有限公司 Method and apparatus for detecting motion information of object, device and medium
CN113096151B (en) * 2021-04-07 2022-08-09 地平线征程(杭州)人工智能科技有限公司 Method and apparatus for detecting motion information of object, device and medium
CN116246235A (en) * 2023-01-06 2023-06-09 吉咖智能机器人有限公司 Target detection method and device based on traveling and parking integration, electronic equipment and medium
CN116246235B (en) * 2023-01-06 2024-06-11 吉咖智能机器人有限公司 Target detection method and device based on traveling and parking integration, electronic equipment and medium

Also Published As

Publication number Publication date
WO2020154990A1 (en) 2020-08-06

Similar Documents

Publication Publication Date Title
US11320833B2 (en) Data processing method, apparatus and terminal
CN111213153A (en) Target object motion state detection method, device and storage medium
CN112292711B (en) Associating LIDAR data and image data
US9846812B2 (en) Image recognition system for a vehicle and corresponding method
WO2022188663A1 (en) Target detection method and apparatus
JP2019096072A (en) Object detection device, object detection method and program
CN111091037B (en) Method and device for determining driving information
CN110858405A (en) Attitude estimation method, device and system of vehicle-mounted camera and electronic equipment
EP3324359B1 (en) Image processing device and image processing method
CN111738033B (en) Vehicle driving information determination method and device based on plane segmentation and vehicle-mounted terminal
CN108475471B (en) Vehicle determination device, vehicle determination method, and computer-readable recording medium
CN113297881A (en) Target detection method and related device
EP3364336A1 (en) A method and apparatus for estimating a range of a moving object
JP2018048949A (en) Object recognition device
JP5539250B2 (en) Approaching object detection device and approaching object detection method
US11281916B2 (en) Method of tracking objects in a scene
KR102003387B1 (en) Method for detecting and locating traffic participants using bird&#39;s-eye view image, computer-readerble recording medium storing traffic participants detecting and locating program
US11054245B2 (en) Image processing apparatus, device control system, imaging apparatus, image processing method, and recording medium
CN118311955A (en) Unmanned aerial vehicle control method, terminal, unmanned aerial vehicle and storage medium
CN112733678A (en) Ranging method, ranging device, computer equipment and storage medium
Oniga et al. A fast ransac based approach for computing the orientation of obstacles in traffic scenes
US20220410942A1 (en) Apparatus and method for determining lane change of surrounding objects
CN112400094B (en) Object detecting device
CN113011212B (en) Image recognition method and device and vehicle
CN108416305B (en) Pose estimation method and device for continuous road segmentation object and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200529