WO2020154990A1 - Method, device and storage medium for detecting the motion state of a target object (目标物体运动状态检测方法、设备及存储介质)


Info

Publication number
WO2020154990A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature point
dimensional
target object
dimensional feature
information
Prior art date
Application number
PCT/CN2019/074014
Other languages
English (en)
French (fr)
Inventor
周游
赵峰
杜劼熹
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201980004912.7A (published as CN111213153A)
Priority to PCT/CN2019/074014 (published as WO2020154990A1)
Publication of WO2020154990A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads, of vehicle lights or traffic lights
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models, related to ambient conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods

Definitions

  • the embodiment of the present invention relates to the field of intelligent control, in particular to a method for detecting the motion state of a target object, a detecting device and a storage medium.
  • FIG. 1 is a schematic diagram of a 3D scanning point of a vehicle-mounted lidar projected to a bird view perspective.
  • the target vehicle is at the front right of the vehicle, and the vehicle’s on-board Lidar can only scan to the left and rear of the target vehicle.
  • the dotted line in FIG. 1 is a schematic diagram of the projection range of the 3D scanning point of the target vehicle at the bird's-eye view angle, and the frame of the vehicle in FIG. 1 is artificially labeled.
  • if the center of mass is calculated by averaging these 3D scan points, the center of mass will be located at the lower-left corner of the vehicle rather than at the true center of the vehicle.
  • the embodiments of the present invention provide a method, a detection device and a storage medium for detecting the motion state of a target object, which can improve the accuracy of detecting the motion state of a vehicle.
  • an embodiment of the present invention provides a method for detecting the motion state of a target object, the method including:
  • the motion state of the target object is determined according to the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system.
  • embodiments of the present invention provide another method for detecting the motion state of a target object, the method including:
  • the motion state of the target object is determined according to the first two-dimensional feature point and the two-dimensional feature point in the second driving environment image that matches the first two-dimensional feature point.
  • an embodiment of the present invention provides a detection device, the detection device including a memory, a processor, and a camera;
  • the camera device is used to obtain an image of a driving environment
  • the memory is used to store program instructions
  • the processor calls the program instructions stored in the memory to execute the following steps:
  • a first driving environment image is captured at a first time by the camera device;
  • a second driving environment image is captured at a second time, the first time being a time before the second time, and both the first driving environment image and the second driving environment image include a target object;
  • the motion state of the target object is determined according to the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system.
  • an embodiment of the present invention provides a detection device, the detection device including: a memory, a processor, and a camera device;
  • the camera device is used to obtain an image of a driving environment
  • the memory is used to store program instructions
  • the processor calls the program instructions stored in the memory to execute the following steps:
  • the motion state of the target object is determined according to the first two-dimensional feature point and the two-dimensional feature point in the second driving environment image that matches the first two-dimensional feature point.
  • the detection device may determine the three-dimensional information of the first three-dimensional feature point of the target object in the world coordinate system according to the first driving environment image, determine the three-dimensional information of the second three-dimensional feature point of the target object in the world coordinate system according to the second driving environment image, and determine the motion state of the target object according to the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system.
  • the first three-dimensional feature point and the second three-dimensional feature point are matched, and the motion state of the target object can be determined according to the three-dimensional coordinate information of the matching three-dimensional feature points on the target object; the motion state does not need to be determined through the center of mass of the target object, which can improve the accuracy of detecting the motion state of the target object. It can improve the safety of vehicle driving and make vehicle driving more automated and intelligent.
  • FIG. 1 is a schematic diagram of projecting 3D scanning points of a vehicle-mounted lidar to a bird's-eye view angle according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of a method for detecting the movement state of a target object provided by an embodiment of the present invention
  • FIG. 3 is a schematic flowchart of another method for detecting the movement state of a target object provided by an embodiment of the present invention.
  • Figure 4 is a schematic diagram of a target image provided by an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a first driving environment image provided by an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a moving speed distribution of a target object according to an embodiment of the present invention.
  • Fig. 7 is a schematic diagram of fitting the moving speed of a target object according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a relationship between a two-dimensional feature point and a three-dimensional feature point according to an embodiment of the present invention.
  • FIG. 9 is a schematic flowchart of another method for detecting the movement state of a target object provided by an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of another relationship between two-dimensional feature points and three-dimensional feature points according to an embodiment of the present invention.
  • FIG. 11 is a schematic flowchart of another method for detecting the movement state of a target object according to an embodiment of the present invention.
  • FIG. 12 is a schematic flowchart of another method for detecting the movement state of a target object provided by an embodiment of the present invention.
  • FIG. 13 is a schematic diagram of another relationship between two-dimensional feature points and three-dimensional feature points according to an embodiment of the present invention.
  • FIG. 14A is a schematic diagram of a driving environment image provided by an embodiment of the present invention.
  • FIG. 14B is a schematic diagram of another driving environment image provided by an embodiment of the present invention.
  • FIG. 15 is a schematic structural diagram of a detection device provided by an embodiment of the present invention.
  • the method for detecting the motion state of the target object may be applied to a detection device, and the detection device may be a device deployed on the object, for example, a driving recorder. Alternatively, the detection device may be a device connected to and carried in the object, such as a mobile phone, a tablet computer, and the like.
  • the object where the detection device is located and the target object can each be a vehicle, a mobile robot, a drone, etc.
  • the vehicles can be smart electric vehicles, scooters, balance vehicles, cars, automobiles, trucks, or robotic vehicles, etc.
  • the object where the detection device is located and the target object may be the same, for example, both are vehicles.
  • the object where the detection device is located and the target object may be different.
  • the target object is a vehicle
  • the object where the detection device is located may be a mobile robot moving on the ground.
  • the object where the detection device is located and the target object are both vehicles for description.
  • the object where the detection device is located may be called the own vehicle, and the vehicle corresponding to the target object may be called the target vehicle.
  • FIG. 2 is a schematic flow chart of a method for detecting the motion state of a target object according to an embodiment of the present invention.
  • the method may be executed by a detection device.
  • the method for detecting the motion state of the target object may include the following steps.
  • the detection device acquires a first driving environment image at the first time.
  • the detection device acquires a second driving environment image at a second time.
  • the first time is a time before the second time
  • the first driving environment image and the second driving environment image include a target object.
  • the detection equipment may include a camera, and the driving environment can be captured by the camera to obtain a driving environment image.
  • the detection device may use the camera to photograph the driving environment at fixed time intervals to obtain the driving environment image.
  • the fixed time interval can be 0.1 s (seconds); the detection device can start timing when the vehicle is started, capture the first driving environment image with the camera device when the driving time of the vehicle is 0.1 s (that is, the first time), capture the second driving environment image with the camera device when the driving time of the vehicle is 0.2 s (that is, the second time), and so on.
  • the fixed time interval may be set by the user or automatically set by the detection device.
  • the detection equipment can also photograph the driving environment at random time intervals through the camera to obtain the driving environment image.
  • the random time interval can be 0.2s, 0.1s, etc.
  • for example, the first driving environment image is captured by the camera device when the driving time of the vehicle is 0.2 s (i.e., the first time), the second driving environment image is captured by the camera device when the driving time of the vehicle is 0.3 s (i.e., the second time), and so on.
  • the detection device can capture the driving environment through a camera to obtain a video stream, and select a specific frame in the video stream for detection.
  • the selection of specific frames in the video stream may be consecutive adjacent frames, or may be selected at a fixed frame number interval, or may be selected at a random frame number interval, which is similar to the aforementioned time interval and will not be repeated here.
  • the first driving environment image and the second driving environment image may include one or more objects.
  • the target object may be any object in the first driving environment image.
  • the target object can be a moving object in the driving environment image, such as surrounding moving vehicles, non-motorized vehicles, walking pedestrians, etc.
  • the target object can also be a non-moving object in the driving environment, such as surrounding stationary vehicles or pedestrians, fixed objects on the road surface, and so on.
  • the motion state of the target object detected by the detection device may include speed information; when the target object is a non-moving object, it can be considered that the motion state of the target object detected by the detection device includes speed information with zero speed.
  • the detection equipment obtains the driving environment image through the camera device, such as a captured image or a frame of the video stream, and visually recognizes the driving environment image (for example, using a convolutional neural network (CNN) to perform vehicle recognition on the image) to obtain the areas in the image that are considered to be vehicles, and marks a recognized bounding box for each of these areas.
  • the bounding boxes are filtered according to preset reference thresholds (such as size, aspect ratio, etc.) to remove bounding boxes that are obviously not vehicles, for example, to remove slender bounding boxes.
  • the larger the bounding box, the closer the corresponding vehicle is to the own vehicle or the larger that vehicle is, and the higher its weight; the closer the bounding box is to the lane of the own vehicle, or if it is in the lane where the own vehicle is located, the higher its weight.
  • the larger the weight, the higher the potential risk that the other vehicle identified by the bounding box poses to the own vehicle. Therefore, in the subsequent processing, a predetermined number of bounding box areas with the highest weights can be selected for subsequent processing, which reduces the amount of calculation.
  • the number of processes here is not limited, and all the identified areas can be processed, so that the selection process here is omitted.
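The following sketch illustrates the bounding-box filtering and weighting described above. It is only an illustrative sketch, not the patented implementation: the Box structure, the thresholds, the weight formula and the top-N value are assumptions chosen for demonstration.

```python
# Illustrative sketch (not the patented implementation): filtering CNN-detected
# bounding boxes and ranking them by a simple weight, as described above.
# Thresholds, weights and the Box structure are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Box:
    x: float          # top-left x in pixels
    y: float          # top-left y in pixels
    w: float          # width in pixels
    h: float          # height in pixels
    lane_offset: int  # 0 = same lane as the own vehicle, 1 = adjacent lane, ...

def plausible_vehicle(box: Box) -> bool:
    """Discard boxes that are obviously not vehicles, e.g. slender shapes."""
    aspect = box.w / box.h
    return box.w * box.h > 400 and 0.3 < aspect < 4.0  # assumed thresholds

def weight(box: Box) -> float:
    """Larger / closer boxes and boxes near the own lane get a higher weight."""
    size_term = box.w * box.h / 1000.0
    lane_term = 3.0 / (1.0 + box.lane_offset)
    return size_term + lane_term

def select_candidates(boxes, top_n=3):
    """Keep only the predetermined number of highest-weight candidate boxes."""
    kept = [b for b in boxes if plausible_vehicle(b)]
    return sorted(kept, key=weight, reverse=True)[:top_n]
```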
  • the detection device can detect the motion states of multiple objects, that is, the target object can refer to multiple objects.
  • the detection device determines the three-dimensional information of the first three-dimensional feature point of the target object in the world coordinate system according to the first driving environment image.
  • the first three-dimensional feature point may be a three-dimensional feature point corresponding to the first two-dimensional feature point of the target object in the first driving environment image, and the first two-dimensional feature point may be a two-dimensional feature point of the target object in the first driving environment image.
  • the detection device may determine the three-dimensional information of the first three-dimensional feature point of the target object in the world coordinate system according to a point cloud sensor, or determine the three-dimensional information of the first three-dimensional feature point of the target object in the world coordinate system according to a binocular image.
  • the binocular image refers to an image captured by a binocular camera device.
  • the binocular image may include a left vision image and a right vision image.
  • the three-dimensional feature point may be a feature point with three-dimensional information
  • the two-dimensional feature point may be a feature point with two-dimensional information.
  • the detection device determines the three-dimensional information of the second three-dimensional feature point of the target object in the world coordinate system according to the second driving environment image, and the second three-dimensional feature point matches the first three-dimensional feature point.
  • the second three-dimensional feature point may be a three-dimensional feature point corresponding to the second two-dimensional feature point of the target object in the second driving environment image.
  • the second two-dimensional feature point may be a two-dimensional feature point that matches the first two-dimensional feature point in the second driving environment image, so the second three-dimensional feature point matches the first three-dimensional feature point. That is, the second two-dimensional feature point and the first two-dimensional feature point may refer to the two-dimensional feature point at the same location on the target object, and the second three-dimensional feature point and the first three-dimensional feature point may refer to the three-dimensional feature point at the same location on the target object.
  • the detection device can determine the three-dimensional information of the second three-dimensional feature point of the target object in the world coordinate system through a point cloud sensor, or determine the three-dimensional information of the second three-dimensional feature point of the target object in the world coordinate system according to the binocular image.
  • the detection device determines the motion state of the target object according to the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system.
  • the detection device can determine the motion state of the target object according to the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system; the own vehicle can then control its own operating state according to the motion state of the target object, realize automatic driving, avoid traffic accidents, and improve the safety of vehicle driving.
  • the motion state of the target object includes the moving speed and/or rotation direction of the target object. If the target object is a target vehicle, the moving speed refers to the traveling speed, and the rotation direction refers to the traveling direction.
  • step S205 may include the following steps s11 to s13.
  • the detection device projects the three-dimensional information of the first three-dimensional feature point in the world coordinate system to the bird's-eye view angle to obtain the first bird's-eye view visual coordinates.
  • the detection device projects the three-dimensional information of the second three-dimensional feature point in the world coordinate system to the bird's-eye view angle to obtain the second bird's-eye view visual coordinates.
  • the detection device determines the motion state of the target object according to the first bird's-eye view visual coordinate and the second bird's-eye view visual coordinate.
  • the detection device can determine the motion state of the target object according to the bird's-eye view coordinates. Further, the detection device may determine the motion state of the target object according to the first bird's-eye view visual coordinate and the second bird's-eye view visual coordinate.
  • the first bird's-eye view visual coordinates and the second bird's-eye view visual coordinates are both two-dimensional coordinates, which only include the longitudinal and lateral components, whereas the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system both include three-dimensional coordinates, that is, height as well as the longitudinal and lateral components. Therefore, determining the motion state of the target object through the first bird's-eye view visual coordinates and the second bird's-eye view visual coordinates can save resources and improve calculation efficiency.
  • step s13 includes: the detection device determines, according to the first bird's-eye view visual coordinates and the second bird's-eye view visual coordinates, the displacement information of the target object in the first direction and the second direction, and/or the rotation angle information of the target object in the first winding direction, and determines the movement state of the target object according to the displacement information and/or the rotation angle information of the target object.
  • the detection device may determine the displacement information of the target object in the first direction and the second direction, and/or the rotation angle information in the first winding direction according to the first bird's-eye view coordinate and the second bird's-eye view coordinate.
  • the first direction and the second direction can be the horizontal directions of the target object, that is, the x-axis direction (such as the front or back of the target object, that is, the longitudinal direction) and the y-axis direction (the left or right of the target object, that is, the lateral direction), where the front of the target object refers to the heading of the target vehicle.
  • the first winding direction is a direction around the z-axis (that is, perpendicular to the ground), that is, it can be considered as the direction of the moving speed of the target object, that is, the heading of the target vehicle.
  • the detection device may determine the motion state of the target object according to the displacement information and/or the rotation angle information of the target object.
  • the movement state of the target object includes the movement speed and/or rotation angle of the target object
  • the detection device can determine the movement speed of the target object according to the displacement information, and determine the rotation direction of the target object according to the rotation angle information of the target object.
  • the relationship between the displacement information and the rotation angle information of the target object can be expressed by formula (1), and the detection device can solve formula (1) through an optimal solution method to obtain the optimal displacement information and the optimal rotation angle information, where P1i is the i-th first bird's-eye view visual coordinate, P2i is the i-th second bird's-eye view visual coordinate, t is the displacement information of the target object in the first direction and the second direction, and R is the rotation angle information of the target object in the first winding direction.
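Formula (1) itself is not reproduced in this text. A common way to express the relationship between matched bird's-eye view coordinates under a rotation R and a translation t is the least-squares objective min over R, t of the sum of ||R·P1i + t - P2i||^2; the sketch below solves that assumed form in closed form (Kabsch/Umeyama), which is one possible "optimal solution method".

```python
# Illustrative sketch only: formula (1) is assumed to be the least-squares objective
# min_{R,t} sum_i ||R @ P1[i] + t - P2[i]||^2 over matched bird's-eye view coordinates.
# The closed-form Kabsch/Umeyama solution below is one standard way to solve it.
import numpy as np

def fit_rotation_translation(P1: np.ndarray, P2: np.ndarray):
    """P1, P2: (N, 2) arrays of matched first/second bird's-eye view coordinates."""
    c1, c2 = P1.mean(axis=0), P2.mean(axis=0)                # centroids
    H = (P1 - c1).T @ (P2 - c2)                              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T                                       # optimal rotation
    t = c2 - R @ c1                                          # optimal displacement
    return R, t

# Usage: the rotation angle about the vertical axis and the displacement in the plane.
# R, t = fit_rotation_translation(P1, P2)
# yaw_change = np.arctan2(R[1, 0], R[0, 0])
```

The same closed-form solution applies to the three-dimensional case of formula (2) below by passing (N, 3) arrays of world-coordinate points and using a 3x3 correction matrix diag(1, 1, sign(det)).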
  • step S205 may include the following steps s21 to s23.
  • the detection device determines the displacement information of the target object in the first direction and the second direction, and/or the rotation angle information in the first winding direction, according to the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system.
  • the detection device determines the motion state of the target object according to the displacement information and/or the rotation angle information of the target object.
  • the detection device can directly determine, according to the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system, the displacement information of the target object in the first direction and the second direction, and/or the rotation angle information in the first winding direction; the movement state of the target object is then determined according to the displacement information and/or the rotation angle information of the target object.
  • the relationship between the displacement information and the rotation angle information of the target object can be expressed by formula (2), and the detection device can solve formula (2) through an optimal solution method to obtain the optimal displacement information and the optimal rotation angle information, where Q1i is the three-dimensional information of the i-th first three-dimensional feature point in the world coordinate system, Q2i is the three-dimensional information of the i-th second three-dimensional feature point in the world coordinate system, t is the displacement information of the target object in the first direction and the second direction, and R is the rotation angle information of the target object in the first winding direction.
  • the motion state includes the moving speed or/and the rotation direction of the target object
  • determining the motion state of the target object according to the displacement information and/or the rotation angle information of the target object includes: determining the moving speed of the target object according to the displacement information of the target object; and/or determining the rotation direction of the target object according to the rotation angle information.
  • the displacement information of the target object may include the movement distance of the target object from the first time to the second time.
  • the rotation angle information may include the rotation angle of the target object in the first winding direction from the first time to the second time, that is, the rotation angle of the target vehicle in the moving speed direction from the first time to the second time.
  • the detection device may determine the movement duration of the target object according to the first time and the second time, and divide the movement distance of the target object from the first time to the second time by the movement duration to obtain the movement speed. Further, the detection device may directly use the calculated moving speed as the moving speed of the target object, or filter the calculated moving speed to obtain the moving speed of the target object. For example, assuming that the target object is a target vehicle, the distance that the target vehicle moves from the first time to the second time is 1 m, the first time is 0.1 s of the travel time of the own vehicle, and the second time is 0.2 s of the travel time of the own vehicle; the movement duration is then 0.1 s, and the calculated moving speed is 1 m / 0.1 s = 10 m/s.
  • if the rotation angle of the target object in the first winding direction (that is, the direction of the moving speed) from the first time to the second time is smaller than a preset angle, it indicates that the moving speed direction of the target object has not changed, and the rotation direction of the target object can be determined according to the moving speed direction of the target object at the first time or at the second time.
  • if the rotation angle of the target object in the first winding direction from the first time to the second time is greater than or equal to the preset angle, it indicates that the moving speed direction of the target object has changed, and the rotation direction of the target object can be determined according to the moving speed direction of the target object at the second time.
  • the preset angle is 90 degrees
  • the moving speed of the target vehicle at the first time is the x-axis direction (ie, the direction of 90 degrees)
  • the moving speed of the target vehicle at the second time The direction is the direction deviated by 3 degrees to the left of the x-axis (ie, the 93-degree direction).
  • the rotation angle of the target vehicle in the moving speed direction from the first time to the second time is 3 degrees, and the rotation angle is less than the preset angle, indicating that the target vehicle has not changed its driving direction;
  • the traveling direction of the target vehicle can therefore be determined according to the moving speed direction at the first time or the moving speed direction at the second time.
  • the traveling direction of the target vehicle is the x-axis direction, that is, the target vehicle is traveling in the same direction as the host vehicle.
  • the moving speed direction of the target vehicle at the first time is the x-axis direction (that is, the direction of 90 degrees)
  • the moving speed direction of the target vehicle at the second time is the direction opposite to the x-axis (that is, behind the own vehicle).
  • the rotation angle of the target vehicle in the direction of the moving speed from the first time to the second time is 180 degrees, and the rotation angle is greater than the preset angle, indicating that the target vehicle changes the direction of travel;
  • the traveling direction of the target vehicle is then determined according to the moving speed direction at the second time. Therefore, it can be determined that the traveling direction of the target vehicle is toward the rear of the own vehicle, that is, the target vehicle is driving towards the host vehicle.
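A minimal sketch of this decision logic follows, assuming velocity vectors in a standard x/y convention (the text's example labels the x-axis as the 90-degree direction, which is only a naming difference); the helper name and the default preset angle of 90 degrees mirror the example above.

```python
# Illustrative sketch (assumed helper, not from the patent text): deciding whether
# the target's heading changed by comparing the rotation angle between the moving
# speed directions at the first and second times against a preset angle.
import math

def rotation_direction(v1, v2, preset_angle_deg=90.0):
    """v1, v2: (vx, vy) moving speed vectors at the first and second times."""
    a1 = math.degrees(math.atan2(v1[1], v1[0]))
    a2 = math.degrees(math.atan2(v2[1], v2[0]))
    rotation = abs((a2 - a1 + 180.0) % 360.0 - 180.0)   # smallest angle difference
    if rotation < preset_angle_deg:
        return "heading unchanged", a1    # use the direction at either time
    return "heading changed", a2          # use the direction at the second time

# e.g. first time along +x, second time reversed (driving towards the host vehicle):
# rotation_direction((1.0, 0.0), (-1.0, 0.0)) -> ("heading changed", 180.0)
```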
  • the foregoing determining of the moving speed of the target object according to the displacement information of the target object includes: determining the target moving speed according to the displacement information of the target object, and determining the speed obtained after filtering the target moving speed as the moving speed of the target object.
  • the displacement information of the target object includes the moving distance of the target object in the first direction from the first time to the second time, and the moving distance of the target object in the second direction from the first time to the second time. Therefore, the moving speed of the target object calculated according to the displacement information includes the moving speed in the first direction and the moving speed in the second direction.
  • the detection device can determine the movement duration of the target object according to the first time and the second time, and calculate according to the movement information and movement duration of the target object to obtain the target movement speed.
  • the target moving speed includes the speed in the first direction and the speed in the second direction. The target moving speed calculated in this way contains a lot of noise; therefore, the detection device can determine the speed obtained by filtering the target moving speed as the moving speed of the target object, so as to improve the accuracy of the moving speed of the target object.
  • the detection device can filter the target moving speed through a filter to obtain the moving speed of the target object.
  • the filter can be a Kalman filter or a wideband filter or the like.
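As an illustration of the filtering step, the sketch below applies a minimal scalar Kalman filter to the raw per-interval speed. The class name and the noise parameters q and r are assumptions, since the text does not specify them; the filter would be applied separately to the speed components in the first and second directions.

```python
# Illustrative sketch only: a minimal 1-D Kalman filter used to smooth the raw
# per-frame speed estimates. q and r are assumed example noise variances.
class ScalarKalman:
    def __init__(self, q=0.1, r=1.0):
        self.x = None   # filtered speed estimate
        self.p = 1.0    # estimate variance
        self.q = q      # process noise variance
        self.r = r      # measurement noise variance

    def update(self, measured_speed):
        if self.x is None:                        # initialise with the first measurement
            self.x = measured_speed
            return self.x
        self.p += self.q                          # predict
        k = self.p / (self.p + self.r)            # Kalman gain
        self.x += k * (measured_speed - self.x)   # correct
        self.p *= (1.0 - k)
        return self.x

# raw_speed = distance_moved / (second_time - first_time)
# smoothed = ScalarKalman().update(raw_speed)
```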
  • the detection device may determine the three-dimensional information of the first three-dimensional feature point of the target object in the world coordinate system according to the first driving environment image, determine the three-dimensional information of the second three-dimensional feature point of the target object in the world coordinate system according to the second driving environment image, and determine the motion state of the target object according to the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system.
  • the first three-dimensional feature point and the second three-dimensional feature point are matched.
  • the motion state of the target object can therefore be determined according to the three-dimensional information of the matching three-dimensional feature points on the target object, without determining the motion state through the center of mass of the target object, which can improve the accuracy of detecting the motion state of the target object. It can improve the safety of vehicle driving and make vehicle driving more automated and intelligent.
  • the embodiment of the present invention provides another method for detecting the motion state of the target object, please refer to FIG. 3.
  • the method may be executed by a detection device, which may include a camera device, and the camera device may include a main camera device and a binocular camera device, and the binocular camera device includes a left vision camera device and a right vision camera device.
  • the method for detecting the movement state of the target object may include the following steps.
  • the detection device acquires a first driving environment image at the first time.
  • the detection device acquires a second driving environment image at a second time, where the first time is a time before the second time.
  • the detection device may obtain the first driving environment image by the left vision camera at the first time, and obtain the second driving environment image by the left vision camera at the second time.
  • the detection device extracts a first two-dimensional feature point of the target object from the first driving environment image.
  • the detection device may extract all the feature points of the target object from the first driving environment image, or extract the key feature points of the target object.
  • the first two-dimensional feature point is any feature point among all feature points or key feature points of the target object.
  • the detection device may use a corner detection algorithm to extract the corner points of the first feature area.
  • the corner points may be the key feature points, and the key feature points are regarded as the first two-dimensional feature points.
  • the corner detection algorithm may be any one of features from accelerated segment test (FAST), smallest univalue segment assimilating nucleus (SUSAN) edge detection, or the Harris corner detection algorithm. Taking the Harris corner detection algorithm as an example, for any point (u, v) of the first driving environment image the detection device can obtain a construction tensor A, which can be expressed by formula (3):
  • A = [ ⟨Ix·Ix⟩  ⟨Ix·Iy⟩ ; ⟨Ix·Iy⟩  ⟨Iy·Iy⟩ ]
  • where A is the construction tensor, Ix and Iy are the gradient information of the point (u, v) on the first driving environment image in the x-axis and y-axis directions, w(u, v) denotes the window sliding over the first driving environment image, and the angle brackets ⟨ ⟩ indicate averaging over that window.
  • the function MC can be used to determine whether the point (u, v) is a key feature point: when MC > Mth, the point (u, v) is determined to be a key feature point; when MC ≤ Mth, the point (u, v) is determined not to be a key feature point.
  • MC can be calculated using formula (4), MC = det(A) - k·(trace(A))², and Mth is a set threshold value.
  • k in formula (4) represents a parameter for adjusting sensitivity;
  • k can be an empirical value, for example any value in the range [0.04, 0.15];
  • det(A) is the determinant of matrix A;
  • trace(A) is the trace of matrix A.
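The sketch below computes the Harris response of formulas (3) and (4) with NumPy/SciPy; the window size, k and the threshold Mth are assumed example values (OpenCV's cv2.cornerHarris provides an equivalent response).

```python
# Illustrative sketch: Harris corner response M_C = det(A) - k*trace(A)^2 per pixel,
# following formulas (3) and (4). Window size, k and m_th are example values only.
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def harris_key_points(gray: np.ndarray, k: float = 0.04, window: int = 5, m_th: float = 1e6):
    ix = sobel(gray.astype(np.float64), axis=1)   # gradient in the x direction
    iy = sobel(gray.astype(np.float64), axis=0)   # gradient in the y direction
    # formula (3): structure-tensor entries averaged over the sliding window w(u, v)
    ixx = uniform_filter(ix * ix, size=window)
    iyy = uniform_filter(iy * iy, size=window)
    ixy = uniform_filter(ix * iy, size=window)
    det_a = ixx * iyy - ixy * ixy
    trace_a = ixx + iyy
    m_c = det_a - k * trace_a ** 2                # formula (4)
    return np.argwhere(m_c > m_th)                # (row, col) of key feature points
```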
  • step S303 includes the following steps s31 and s32.
  • the detection device determines the first characteristic area of the target object in the first driving environment image.
  • the detection device extracts the first two-dimensional feature point of the target object from the first feature area.
  • the detection device may determine the first characteristic area of the target object in the first driving environment image, where the first characteristic area may include all or part of the characteristic information of the target object, and the first characteristic area can be obtained by projection. Further, the first two-dimensional feature point of the target object is extracted from the first feature region. The first two-dimensional feature point can be any feature point in the first feature region, or any key feature point in the first feature region.
  • in an embodiment, a target image is first captured by the camera device, and the second characteristic area of the target object is determined in the target image.
  • step s31 then includes: projecting the second characteristic area into the first driving environment image, and determining the obtained projection area as the first characteristic area of the target object.
  • the target image including the target object can be captured by the main camera first, and then the second characteristic region of the target object can be determined in the target image.
  • the second characteristic area may be a bounding box (BoundingBox), and then the second characteristic area is projected into the first driving environment image taken by the binocular camera device to obtain the first characteristic area.
  • the image captured by the binocular camera device is a grayscale image, which is helpful for extracting feature points. Therefore, the detection equipment subsequently extracts the first two-dimensional feature point of the target object from the first driving environment image and obtains the three-dimensional information, in the world coordinate system, of the first three-dimensional feature point corresponding to the first two-dimensional feature point.
  • the detection device can obtain the target image through the main camera at the first time.
  • the target image can be processed by a front-end detection algorithm to obtain the second feature regions of multiple objects, and at least one second feature region can be selected from the second feature regions of the multiple objects as the second feature region of the target object.
  • the front-end detection algorithm may be, for example, a detection algorithm based on a convolutional neural network (CNN).
  • the detection device projects the second characteristic area into the first driving environment image, and determines the obtained projection area as the first characteristic area of the target object.
  • the target image is shown in FIG. 4, and the area of the white frame in FIG. 4 is the characteristic area of multiple objects.
  • the feature area with the largest size in FIG. 4 is the second feature area of the target object.
  • the detection device projects the second characteristic area of the target object into the first driving environment image, and the first characteristic area obtained by the projection may be the area corresponding to the rightmost frame in FIG. 5.
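The text does not specify how the second characteristic area is projected between the two views. One simple approach, sketched below under the assumption that a 3x3 homography H between the main camera image and the left binocular image is known from calibration (a reasonable approximation for the road plane or for distant objects), warps the bounding-box corners and takes their axis-aligned envelope as the first characteristic area.

```python
# Illustrative sketch only: projecting a bounding box (second characteristic area)
# detected in the main-camera target image into the first driving environment image.
# H is an assumed pre-calibrated homography; the patent does not specify the method.
import numpy as np

def project_bounding_box(box_xyxy, H):
    """box_xyxy: (x1, y1, x2, y2) in the main-camera image; H: 3x3 homography."""
    x1, y1, x2, y2 = box_xyxy
    corners = np.array([[x1, y1, 1.0], [x2, y1, 1.0],
                        [x2, y2, 1.0], [x1, y2, 1.0]]).T
    warped = H @ corners
    warped = warped[:2] / warped[2]                  # back to pixel coordinates
    xs, ys = warped[0], warped[1]
    # axis-aligned first characteristic area enclosing the warped corners
    return float(xs.min()), float(ys.min()), float(xs.max()), float(ys.max())
```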
  • the detection device may filter the second feature region of the target object before projecting the second feature region of the target object in the target image onto the first driving environment image.
  • alternatively, the detection device may screen the projected first characteristic area of the target object after projecting the second characteristic area of the target object in the target image onto the first driving environment image. That is, the execution sequence of the step of projecting the second characteristic region in the target image and the step of screening the characteristic region is not limited.
  • the execution sequence of these two steps may be set by the user. In another embodiment, the execution sequence of the two steps may be set according to the characteristics of the imaging devices of the detection equipment.
  • for example, when the viewing angle difference between the main camera device and the binocular camera device is large, the projection step can be performed first and then the screening step; when the viewing angle difference between the main camera device and the binocular camera device is small, the execution order of the two steps may not be limited.
  • the detection device determines whether the first parameter of the target object satisfies a preset condition; if so, it executes the step of projecting the second characteristic area into the first driving environment image and determining the obtained projection area as the first feature region of the target object.
  • the first parameter includes at least one of an attribute corresponding to the second characteristic area, a lane where the target object is located, and a driving direction of the target object, and the attribute includes the size and/or shape of the second characteristic area.
  • before projecting the second characteristic area of the target object in the target image into the first driving environment image, in order to reduce the amount of calculation of the detection device and save resources, the detection device can filter the characteristic areas of the objects in the target image, so as to filter out wrong characteristic areas as well as the characteristic areas of objects that have little impact on the driving safety of the own vehicle. Specifically, the detection device can determine whether the first parameter of the target object meets a preset condition. If the preset condition is met, it indicates that the operating state of the target object has a greater impact on the driving safety of the own vehicle; the second characteristic area of the target object is used as an effective area, and the step of projecting the second characteristic area into the first driving environment image is executed. If the preset condition is not met, it indicates that the operating state of the target object has little impact on the driving safety of the own vehicle, or that the second feature area is not an area where an object is located, that is, the second feature area is a wrong feature area; the second feature area of the target object in the target image can then be regarded as an invalid area and screened out.
  • the target image includes the second feature region of object 1, the second feature region of object 2, and the second feature region of object 3.
  • if the shape of the second feature area of object 1 is a slender bar, it indicates that object 1 is not a vehicle; it is determined that the parameter of object 1 does not meet the preset condition, the second feature area of object 1 can be regarded as an invalid feature area, and the second feature area of object 1 is screened out.
  • if the shapes of the second feature regions of object 2 and object 3 are both rectangular, it indicates that object 2 and object 3 are vehicles, and it is determined that the parameters of object 2 and object 3 meet the preset conditions.
  • their second characteristic areas are used as effective characteristic areas, and the second characteristic area of the target object may be either of the second characteristic areas of object 2 and object 3.
  • the detection device can filter the characteristic regions of the objects in the target image according to the size of the characteristic region of each object, the lane where the object is located, and the driving direction of the object. Specifically, the detection device may set a first weight for the second characteristic area according to the size of the second characteristic area of the target object in the target image, set a second weight for the second characteristic area according to the lane where the target object in the target image is located, and set a third weight for the second characteristic area according to the driving direction of the target object in the target image. The first weight, the second weight, and the third weight are summed to obtain the total weight of the second characteristic area.
  • if the total weight of the second characteristic area is greater than a preset value, the second characteristic area is determined to be an effective feature area; if the total weight of the second feature area is less than or equal to the preset value, the second feature area is determined to be an invalid feature area, and the second feature area is filtered out.
  • the larger the size of the second feature area, the closer the distance between the target object and the own vehicle, that is, the higher the impact of the operating state of the target object on the driving safety of the own vehicle, and the first weight can be set to a larger value (such as 5); conversely, the smaller the size of the second feature area, the longer the distance between the target object and the own vehicle, that is, the lower the impact of the operating state of the target object on the driving safety of the own vehicle, and the first weight can be set to a smaller value (such as 2).
  • for example, the target object is the target vehicle. The closer the lane where the target vehicle is located is to the lane where the own vehicle is located, or if the target vehicle is in the lane where the own vehicle is located, the higher the impact of the operating state of the target vehicle on the driving safety of the own vehicle, and the second weight can be set to a larger value (such as 3).
  • conversely, the greater the distance between the lane where the target vehicle is located and the lane where the own vehicle is located, for example, if the target vehicle is located in the first lane and the own vehicle is located in the third lane, the lower the impact of the operating state of the target vehicle on the driving safety of the own vehicle, and the second weight can be set to a smaller value (such as 1).
  • again, the target object is the target vehicle. If the driving direction of the target vehicle in the target image is the same as the driving direction of the host vehicle, the probability of a rear-end collision or collision between the target vehicle and the host vehicle is greater, that is, the operating state of the target vehicle has a higher impact on the driving safety of the host vehicle, and the third weight can be set to a larger value (such as 3). Conversely, if the driving direction of the target vehicle in the target image is opposite to the driving direction of the host vehicle, the probability of a rear-end collision or collision between the target vehicle and the host vehicle is small, that is, the operating state of the target vehicle has a lower impact on the driving safety of the host vehicle, and the third weight can be set to a smaller value (such as 2).
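A small sketch of this weighting scheme follows; the example weight values (5/2, 3/1, 3/2) come from the text above, while the area threshold, the lane-distance test, the preset total and the function names are illustrative assumptions.

```python
# Illustrative sketch: summing the size, lane and driving-direction weights described
# above and keeping the characteristic area only if the total exceeds a preset value.
def total_weight(area_px, lane_distance, same_direction, large_area_px=20000):
    w_size = 5 if area_px >= large_area_px else 2       # first weight (size)
    w_lane = 3 if lane_distance <= 1 else 1             # second weight (lane)
    w_dir = 3 if same_direction else 2                  # third weight (direction)
    return w_size + w_lane + w_dir

def is_effective_area(area_px, lane_distance, same_direction, preset_value=8):
    return total_weight(area_px, lane_distance, same_direction) > preset_value
```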
  • the method further includes: determining whether the second parameter of the target object meets a preset condition, and if so, performing the step of extracting the first two-dimensional feature point from the first feature region.
  • the second parameter includes at least one of an attribute corresponding to the first characteristic area, a lane where the target object is located, and a driving direction of the target object, and the attribute includes the size and/or shape of the first characteristic area.
  • after the detection device projects the second characteristic region into the first driving environment image and obtains the first characteristic region of the target object, in order to reduce the calculation amount of the detection device and save resources, the detection device can filter the characteristic areas of the objects in the first driving environment image, so as to filter out wrong characteristic areas and the characteristic areas of objects that have a small impact on the driving safety of the own vehicle.
  • specifically, the detection device can determine whether the second parameter of the target object satisfies a preset condition.
  • if the preset condition is met, the first feature area of the target object in the first driving environment image is taken as an effective area, and step S304 is executed; if the preset condition is not met, it indicates that the operating state of the target object has little impact on the driving safety of the own vehicle, or it indicates that the first feature area is not the area where an object is located, that is, the first characteristic area is a wrong characteristic area; then the first characteristic area of the target object in the first driving environment image may be regarded as an invalid area, and the first characteristic area may be filtered out.
  • for example, the second parameter includes the size of the first characteristic area, and the first driving environment image includes the first characteristic area of object 1, the first characteristic area of object 2, and the first characteristic area of object 3 obtained by projection. If the size of the first feature area of object 1 is smaller than a preset size, that is, the parameter of object 1 does not meet the preset condition, it indicates that the distance between object 1 and the own vehicle is relatively long, that is, the running state of object 1 has a small impact on the driving safety of the own vehicle.
  • the first characteristic area of the object 1 in the first driving environment image may be regarded as an invalid characteristic area, and the first characteristic area of the object 1 may be filtered out.
  • if the sizes of the first feature areas of object 2 and object 3 are greater than or equal to the preset size, that is, the parameters of object 2 and object 3 meet the preset conditions, it indicates that the distances between object 2 and object 3 and the own vehicle are relatively close, which means that the operating states of object 2 and object 3 have a greater impact on the driving safety of the own vehicle.
  • the first feature regions of the object 2 and the object 3 in the first driving environment image can be used as effective feature regions, and the first feature region of the target object can be any region of the feature regions of the object 2 and the object 3.
  • s32 includes: extracting a plurality of two-dimensional feature points from the first feature region, and selecting the first two-dimensional feature points from the plurality of two-dimensional feature points according to a preset algorithm.
  • the detection device may screen the multiple two-dimensional feature points extracted from the first feature area. Specifically, the detection device may extract multiple two-dimensional feature points from the first feature area, and use a preset algorithm to exclude non-compliant feature points among the multiple two-dimensional feature points in the first feature area so as to obtain the compliant first two-dimensional feature points.
  • the preset algorithm may be random sample consensus (RANSAC), etc. If the position distribution of the multiple two-dimensional feature points in the first feature region is as shown in Figure 6, RANSAC fits the multiple two-dimensional feature points of the first feature region to obtain a line segment.
  • the line segment can be as shown in Figure 7.
  • the feature points on the line segment are compliant feature points and are retained, that is, the first two-dimensional feature point can be any feature point on the line segment; the feature points not on the line segment are non-compliant feature points and can be excluded.
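The sketch below illustrates this RANSAC screening step: it fits a line to the two-dimensional feature points of the first feature region and keeps the inliers as the compliant points. The iteration count and inlier tolerance are assumed example values.

```python
# Illustrative sketch of the RANSAC screening step: fit a line to the 2-D feature
# points of the first feature region and keep only the inliers (compliant points).
import random
import numpy as np

def ransac_line_inliers(points: np.ndarray, iters: int = 100, tol: float = 2.0):
    """points: (N, 2) pixel coordinates; returns a boolean inlier mask."""
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        i, j = random.sample(range(len(points)), 2)
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(*d)
        if norm < 1e-9:
            continue
        # perpendicular distance of every point to the line through p and q
        dist = np.abs(np.cross(d, points - p)) / norm
        mask = dist < tol
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# compliant_points = points[ransac_line_inliers(points)]
```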
  • the detection device determines the three-dimensional information of the first three-dimensional feature point corresponding to the first two-dimensional feature point in the world coordinate system.
  • the detection device may determine the three-dimensional information of the first three-dimensional feature point corresponding to the first two-dimensional feature point in the world coordinate system through a point cloud sensor or a binocular image.
  • the detection device determines a second two-dimensional feature point matching the first two-dimensional feature point from the second driving environment image.
  • the detection device may extract all feature points in the second driving environment image, or extract key feature points in the second driving environment image.
  • the first two-dimensional feature point is compared with the feature points extracted from the second driving environment image to determine the second two-dimensional feature point that matches the first two-dimensional feature point from the second driving environment image.
  • the second two-dimensional feature point matching the first two-dimensional feature point may mean that the pixel information of the second two-dimensional feature point is the same as the pixel information of the first two-dimensional feature point, or that their similarity is greater than a preset threshold; that is, the second two-dimensional feature point and the first two-dimensional feature point may refer to the feature point at the same position on the target object.
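One common way to perform this matching between the first and second driving environment images is pyramidal Lucas-Kanade (KLT) tracking; the sketch below uses OpenCV's calcOpticalFlowPyrLK as an example implementation (the KLT tracker is named later in this text), with assumed window and pyramid parameters.

```python
# Illustrative sketch: matching feature points of the first driving environment image
# to the second one with pyramidal Lucas-Kanade (KLT) optical flow.
import cv2

def match_feature_points(first_img, second_img, first_pts):
    """first_pts: (N, 1, 2) float32 pixel coordinates in the first image."""
    second_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        first_img, second_img, first_pts, None,
        winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1            # successfully tracked (matched) points
    return first_pts[ok], second_pts[ok]
```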
  • the detection device determines the three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in the world coordinate system.
  • the detection device may determine the three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in the world coordinate system through a point cloud sensor or a binocular image.
  • the detection device determines the motion state of the target object according to the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system.
  • for example, the detection device can determine the motion state of the target object according to the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system as follows.
  • the first two-dimensional feature point is p1
  • the second two-dimensional feature point is p2
  • the first three-dimensional feature point is D1
  • the second three-dimensional feature point is D2.
  • p1 and p2 are two-dimensional feature points that match each other
  • D1 is the three-dimensional feature point corresponding to p1
  • D2 is the three-dimensional feature point corresponding to p2. Therefore, D1 and D2 match each other, that is, D1 and D2 are 3D points at the same location on the target object at different moments.
  • the difference between the three-dimensional information of D1 in the world coordinate system and the three-dimensional information of D2 in the world coordinate system is caused by the translation or rotation of the target object during its movement. Therefore, the operating state of the target object can be determined based on the three-dimensional information of D1 in the world coordinate system and the three-dimensional information of D2 in the world coordinate system.
  • the detection device extracts the first two-dimensional feature point of the target object from the first driving environment image, determines the second two-dimensional feature point that matches the first two-dimensional feature point from the second driving environment image, determines the three-dimensional information of the first three-dimensional feature point corresponding to the first two-dimensional feature point in the world coordinate system, and determines the three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in the world coordinate system.
  • the detection device can determine the motion state of the target object according to the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system.
  • the matched three-dimensional feature points are determined by the matched two-dimensional feature points, and the operating state of the target object is determined according to the three-dimensional information of the matched three-dimensional feature points in the world coordinate system. It is not necessary to determine the motion state of the target object through its center of mass, which can improve the accuracy of detecting the motion state of the target object. It can improve the safety of vehicle driving and make vehicle driving more automated and intelligent.
  • an embodiment of the present invention provides yet another method for detecting the motion state of a target object, please refer to FIG. 9.
  • the method may be executed by a detection device, which may include a camera device, and the camera device may include a main camera device and a binocular camera device, and the binocular camera device includes a left vision camera device and a right vision camera device.
  • the method for detecting the motion state of the target object may include the following steps.
  • the detection device acquires a first driving environment image at the first time.
  • the detection device acquires a second driving environment image at a second time, where the first time is a time before the second time.
  • for steps S901 and S902, please refer to the explanation of steps S301 and S302 in FIG. 3; repeated details are not described again.
  • the detection device extracts the first two-dimensional feature point of the target object from the first driving environment image.
  • the detection device determines a second two-dimensional feature point that matches the first two-dimensional feature point from the second driving environment image.
  • the detection device determines the three-dimensional information of the first three-dimensional feature point corresponding to the first two-dimensional feature point in the camera coordinate system.
  • the detection device may determine the three-dimensional information of the first three-dimensional feature point in the camera coordinate system according to a tracking algorithm, such as the Kanade-Lucas-Tomasi (KLT) feature tracker, and a camera model.
  • step S905 includes: determining a third two-dimensional feature point matching the first two-dimensional feature point from a third driving environment image, where the third driving environment image and the first driving environment image are two images captured simultaneously by the binocular camera device.
  • the first depth information is determined according to the first two-dimensional feature point and the third two-dimensional feature point, and the three-dimensional information of the first three-dimensional feature point in the camera coordinate system is determined according to the first depth information.
  • the first driving environment image may be captured by the left vision camera of the binocular camera at the first time, that is, the first driving environment image may also be referred to as the first left image.
  • the third driving environment image is captured by the right vision camera of the binocular camera at the first time, that is, the third driving image may also be referred to as the first right image.
  • as shown in FIG. 10, the first two-dimensional feature point is p1
  • the first three-dimensional feature point is D1.
  • the detection device can determine the third two-dimensional feature point matching p1 from the first right image using a feature point matching algorithm, and the feature point matching algorithm can be a KLT algorithm or the like.
  • the first depth information is determined according to p1 and the third two-dimensional feature point, and the three-dimensional information of D1 in the camera coordinate system can be determined according to the first depth information and the camera model.
  • the camera model may be a model used to indicate the conversion relationship between depth information and the three-dimensional information of a three-dimensional feature point in the camera coordinate system; a minimal sketch of such a model is given below.
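As one concrete illustration of such a camera model, the sketch below assumes a rectified pinhole binocular pair with known focal length, principal point, and baseline, computes the first depth information from the disparity between the left-image feature point and its right-image match, and back-projects the pixel into the camera coordinate system; the intrinsic values are illustrative assumptions.

```python
import numpy as np

def stereo_point_to_camera_coords(u_left, v_left, u_right, f, cx, cy, baseline):
    """Back-project a matched left/right pixel pair into camera coordinates.

    Assumes a rectified pinhole stereo pair: disparity d = u_left - u_right,
    depth Z = f * baseline / d, and X, Y then follow from the pinhole model.
    """
    d = float(u_left - u_right)
    if d <= 0:
        raise ValueError("non-positive disparity; the match is invalid")
    Z = f * baseline / d                   # the first depth information
    X = (u_left - cx) * Z / f
    Y = (v_left - cy) * Z / f
    return np.array([X, Y, Z])             # 3D info of D1 in the camera coordinate system

# Illustrative numbers only: 720 px focal length, 0.12 m baseline.
D1_cam = stereo_point_to_camera_coords(u_left=660.0, v_left=370.0, u_right=652.0,
                                        f=720.0, cx=640.0, cy=360.0, baseline=0.12)
print(D1_cam)
```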
  • the detection device determines the three-dimensional information of the first three-dimensional feature point in the world coordinate system according to the three-dimensional information of the first three-dimensional feature point in the camera coordinate system.
  • the detection device can determine the three-dimensional information of the first three-dimensional feature point in the world coordinate system according to the conversion relationship between three-dimensional information in the camera coordinate system and three-dimensional information in the world coordinate system, together with the three-dimensional information of the first three-dimensional feature point in the camera coordinate system, as sketched below.
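A minimal sketch of that conversion follows. It assumes the pose of the camera in the world coordinate system at the first time (rotation R_wc and translation t_wc) is available, for example from the vehicle's own localization; the pose values used are illustrative.

```python
import numpy as np

def camera_to_world(point_cam, R_wc, t_wc):
    """Convert a 3D point from the camera coordinate system to the world coordinate system."""
    return R_wc @ np.asarray(point_cam, dtype=float) + t_wc

# Illustrative pose: camera 1.5 m above the ground, yawed 5 degrees in the world frame.
yaw = np.deg2rad(5.0)
R_wc = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                 [np.sin(yaw),  np.cos(yaw), 0.0],
                 [0.0,          0.0,         1.0]])
t_wc = np.array([0.0, 0.0, 1.5])

D1_world = camera_to_world([0.3, 0.15, 10.8], R_wc, t_wc)
print(D1_world)
```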
  • the detection device determines the three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in the camera coordinate system.
  • the detection device may determine the three-dimensional information of the second three-dimensional feature point in the camera coordinate system according to the tracking algorithm and the camera model.
  • step S907 includes: determining a fourth two-dimensional feature point matching the second two-dimensional feature point from a fourth driving environment image, where the fourth driving environment image and the second driving environment image are two images captured simultaneously by the binocular camera device.
  • the second depth information is determined according to the second two-dimensional feature point and the fourth two-dimensional feature point, and the three-dimensional information of the second three-dimensional feature point in the camera coordinate system is determined according to the second depth information.
  • the second driving environment image may be captured by the left vision camera of the binocular camera at the second time, that is, the second driving environment image may also be referred to as the second left image.
  • the fourth driving environment image is captured by the right vision camera of the binocular camera at the second time, that is, the fourth driving image may also be called the second right image.
  • the second two-dimensional feature point is p2
  • the second three-dimensional feature point is D2.
  • the detection device may determine a fourth two-dimensional feature point matching the p2 from the second right image by a feature point matching algorithm.
  • the second depth information is determined according to p2 and the fourth two-dimensional feature point, and the three-dimensional information of D2 in the camera coordinate system can be determined according to the second depth information and the camera model.
  • the detection device determines the three-dimensional information of the second three-dimensional feature point in the world coordinate system according to the three-dimensional information of the second three-dimensional feature point in the camera coordinate system.
  • the detection device can determine the three-dimensional information of the second three-dimensional feature point in the world coordinate system according to the conversion relationship between three-dimensional information in the camera coordinate system and three-dimensional information in the world coordinate system, together with the three-dimensional information of the second three-dimensional feature point in the camera coordinate system.
  • the detection device determines the motion state of the target object according to the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system.
  • the detection device can determine the three-dimensional information of the first three-dimensional feature point in world coordinates according to the three-dimensional information of the first three-dimensional feature point in the camera coordinate system, and determine the three-dimensional information of the second three-dimensional feature point in world coordinates according to the three-dimensional information of the second three-dimensional feature point in the camera coordinate system.
  • the motion state of the target object is then determined according to the three-dimensional information of the first three-dimensional feature point in world coordinates and the three-dimensional information of the second three-dimensional feature point in world coordinates. This can improve the accuracy of obtaining the motion state of the target object.
  • an embodiment of the present invention provides yet another method for detecting the motion state of a target object.
  • the method may be executed by a detection device.
  • the method for detecting the motion state of the target object may include the following steps.
  • the detection device acquires a first driving environment image at the first time.
  • the detection device acquires a second driving environment image at a second time, where the first time is a time before the second time.
  • the detection device extracts a first two-dimensional feature point of the target object from the first driving environment image.
  • the detection device determines a second two-dimensional feature point that matches the first two-dimensional feature point from the second driving environment image.
  • the detection device acquires the three-dimensional information of the three-dimensional point in the world coordinate system obtained by the point cloud sensor at the first time.
  • the detection device can obtain the three-dimensional information of the three-dimensional point in the world coordinate system obtained by the point cloud sensor at the first time.
  • the point cloud sensor can be a radar sensor, a binocular stereo vision sensor, a structured light, and so on.
  • the detection device determines the three-dimensional information of the first three-dimensional feature point corresponding to the first two-dimensional feature point in the world coordinate system according to the three-dimensional information of the three-dimensional point acquired at the first time.
  • the three-dimensional information of the first three-dimensional feature point in the world coordinate system can be determined according to the three-dimensional information of the three-dimensional point matching the first three-dimensional feature point among the three-dimensional points acquired at the first time.
  • step S115 includes: the detection device projects the three-dimensional points acquired by the point cloud sensor at the first time into two-dimensional feature points, determines, from the two-dimensional feature points obtained by the projection, a fifth two-dimensional feature point that matches the first two-dimensional feature point, and determines the three-dimensional information of the three-dimensional point corresponding to the fifth two-dimensional feature point as the three-dimensional information, in the world coordinate system, of the first three-dimensional feature point corresponding to the first two-dimensional feature point.
  • the detection device may project the three-dimensional points obtained by the point cloud sensor at the first time into two-dimensional feature points, and determine the fifth two-dimensional feature point matching the first two-dimensional feature point from the two-dimensional feature points obtained by the projection. Since the first two-dimensional feature point matches the fifth two-dimensional feature point, and both are two-dimensional feature points acquired at the first time, the three-dimensional point corresponding to the fifth two-dimensional feature point matches the first three-dimensional feature point corresponding to the first two-dimensional feature point.
  • the three-dimensional information of the three-dimensional point corresponding to the fifth two-dimensional feature point may be determined as the three-dimensional information of the first three-dimensional feature point corresponding to the first two-dimensional feature point in the world coordinate system.
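The projection-and-matching step described above can be illustrated with the following sketch: the three-dimensional points acquired by the point cloud sensor at the first time are projected into the image with assumed camera intrinsics and extrinsics, and the projected point closest to the first two-dimensional feature point (within a pixel threshold) is taken as the fifth two-dimensional feature point, whose source three-dimensional point then supplies the world-coordinate information. The matrices, threshold, and sample cloud are illustrative assumptions.

```python
import numpy as np

def project_points(points_world, K, R_cw, t_cw):
    """Project world-frame 3D points into pixel coordinates (pinhole model)."""
    pts_cam = (R_cw @ np.asarray(points_world, dtype=float).T).T + t_cw   # world -> camera
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3], pts_cam[:, 2]                          # pixels, depths

def match_feature_to_cloud(feature_uv, points_world, K, R_cw, t_cw, max_px=3.0):
    """Return the 3D point whose projection lies nearest to the 2D feature point."""
    uv, depth = project_points(points_world, K, R_cw, t_cw)
    valid = depth > 0                                                     # keep points in front of the camera
    dists = np.linalg.norm(uv[valid] - np.asarray(feature_uv, dtype=float), axis=1)
    if dists.size == 0 or dists.min() > max_px:
        return None                                                       # no fifth 2D feature point found
    return np.asarray(points_world, dtype=float)[valid][np.argmin(dists)]

# Illustrative intrinsics/extrinsics and a tiny point cloud in world coordinates.
K = np.array([[720.0, 0.0, 640.0], [0.0, 720.0, 360.0], [0.0, 0.0, 1.0]])
R_cw, t_cw = np.eye(3), np.zeros(3)
cloud = np.array([[0.3, 0.15, 10.8], [2.0, -1.0, 15.0], [-3.0, 0.5, 8.0]])
print(match_feature_to_cloud([660.0, 370.0], cloud, K, R_cw, t_cw))
```

The same routine can be reused at the second time with the point cloud acquired then, which yields the sixth two-dimensional feature point described below.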
  • the detection device obtains the three-dimensional information of the three-dimensional point in the world coordinate system obtained by the point cloud sensor at the second time.
  • the detection device can obtain the three-dimensional information of the three-dimensional point in the world coordinate system obtained by the point cloud sensor at the second time.
  • the detection device determines the three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in the world coordinate system according to the three-dimensional information of the three-dimensional point acquired at the second time.
  • for example, the three-dimensional information of the second three-dimensional feature point in the world coordinate system can be determined according to the three-dimensional information of the three-dimensional point, among the three-dimensional points acquired at the second time, that matches the second three-dimensional feature point.
  • step S117 includes: the detection device projects the three-dimensional points acquired by the point cloud sensor at the second time into two-dimensional feature points, determines, from the two-dimensional feature points obtained by the projection, a sixth two-dimensional feature point that matches the second two-dimensional feature point, and determines the three-dimensional information of the three-dimensional point corresponding to the sixth two-dimensional feature point as the three-dimensional information, in the world coordinate system, of the second three-dimensional feature point corresponding to the second two-dimensional feature point.
  • the detection device may project the three-dimensional points obtained by the point cloud sensor at the second time into two-dimensional feature points, and determine the sixth two-dimensional feature point that matches the second two-dimensional feature point from the two-dimensional feature points obtained by the projection. Since the second two-dimensional feature point matches the sixth two-dimensional feature point, and both are two-dimensional feature points acquired at the second time, the three-dimensional point corresponding to the sixth two-dimensional feature point matches the second three-dimensional feature point corresponding to the second two-dimensional feature point.
  • the three-dimensional information of the three-dimensional point corresponding to the sixth two-dimensional feature point may be determined as the three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in the world coordinate system.
  • the detection device determines the motion state of the target object according to the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system.
  • the detection device can obtain the three-dimensional information through the point cloud sensor to determine the three-dimensional information of the first three-dimensional feature point in world coordinates and the three-dimensional information of the second three-dimensional feature point in world coordinates.
  • the operating state of the target object is determined according to the three-dimensional information of the first three-dimensional feature point in the world coordinates and the three-dimensional information of the second three-dimensional feature point in the world coordinates, which can improve the accuracy of acquiring the motion state of the target object.
  • an embodiment of the present invention provides yet another method for detecting the motion state of a target object.
  • the method may be executed by a detection device.
  • the method for detecting the motion state of the target object may include the following steps.
  • the detection device acquires the first driving environment image at the first time.
  • the detection device acquires a second driving environment image at a second time, where the first time is a time before the second time.
  • for example, the detection device can obtain the Nth driving environment image (i.e., the Nth left image) captured by the left vision camera at the Nth time, and obtain the (N+1)th driving environment image (i.e., the (N+1)th left image) captured by the left vision camera at the (N+1)th time.
  • N can be a positive integer.
  • correspondingly, when N is one, the detection device can obtain the first driving environment image (i.e., the first left image) captured by the left vision camera at the first time, and obtain the second driving environment image (i.e., the second left image) captured by the left vision camera at the second time.
  • the detection device extracts at least one first two-dimensional feature point of the target object from the first driving environment image. For example, as shown in FIG. 13, the detection device may extract at least one Nth two-dimensional feature point of the target object from the Nth left image, and the at least one Nth two-dimensional feature point may be represented as p_n.
  • step S132 includes: determining a first characteristic area of the target object in the first driving environment image, and extracting at least one first two-dimensional characteristic point of the target object from the first characteristic area.
  • the detection device may use the preceding detection algorithm to determine the first characteristic area of the target object in the first driving environment image, and extract at least one first two-dimensional characteristic point of the target object from the first characteristic area.
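As one possible way to carry out this extraction, the sketch below restricts a corner detector to the first feature region; OpenCV's goodFeaturesToTrack with the Harris option is used here only as an illustrative choice, and the region coordinates, parameters, and synthetic image are assumptions rather than values from the embodiment.

```python
import cv2
import numpy as np

def extract_feature_points(gray_image, region, max_corners=50):
    """Extract corner feature points inside a rectangular feature region (x, y, w, h)."""
    x, y, w, h = region
    mask = np.zeros(gray_image.shape, dtype=np.uint8)
    mask[y:y + h, x:x + w] = 255                       # restrict detection to the feature region
    corners = cv2.goodFeaturesToTrack(gray_image, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=5,
                                      mask=mask, useHarrisDetector=True, k=0.04)
    return [] if corners is None else corners.reshape(-1, 2)   # (u, v) pixel coordinates

# Illustrative usage with a synthetic grayscale image and an assumed feature region.
img = np.zeros((720, 1280), dtype=np.uint8)
cv2.rectangle(img, (600, 300), (760, 420), 255, -1)    # a bright block standing in for the target
points = extract_feature_points(img, region=(580, 280, 220, 180))
print(points)
```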
  • the detection device establishes an object model of the target object according to the at least one first two-dimensional feature point, and the object model includes the at least one first two-dimensional feature point.
  • the detection device may establish an object model of the target object according to the at least one first two-dimensional feature point.
  • the object model includes the at least one first two-dimensional feature point, which means that the object model is generated based on the at least one first two-dimensional feature point, and the object model can match the first two-dimensional feature points.
  • for example, as shown in FIG. 14A, assuming that the range of the three-dimensional points in the first driving environment image is as shown by the dotted line in the figure, three first two-dimensional feature points of the target object can be extracted at this time, namely p1, p2, and p3.
  • an object model established from these three first two-dimensional feature points can match the two-dimensional feature points p1, p2, and p3, that is, the object model includes at least the two-dimensional feature points p1, p2, and p3.
  • the detection device extracts at least one second two-dimensional feature point of the target object from the second driving environment image according to the object model of the target object, where the second two-dimensional feature point is a two-dimensional feature point of the target object in the second driving environment image other than the two-dimensional feature points in the object model.
  • for example, as shown in FIG. 14B, the range of the three-dimensional points in the second driving environment image is shown by the dotted line in the figure.
  • the two-dimensional feature points of the target object at this time may include p2, p3, p4, p5, and p6, and the object model of the target object includes p1, p2, and p3.
  • p2 and p3 are repeated two-dimensional feature points. The repeated two-dimensional feature points are removed from the two-dimensional feature points of the target object in the second driving environment image to obtain at least one second two-dimensional feature point of the target object, including p4, p5, and p6.
  • returning to FIG. 13, the detection device can extract all the feature points of the target object from the (N+1)th left image, and the extracted feature points can be expressed as p'_{n+1}.
  • the feature points in p'_{n+1} are compared with the feature points in the object model of the target object, and the feature points in p'_{n+1} that duplicate those in the object model are removed, giving at least one (N+1)th two-dimensional feature point, which can be expressed as p''_{n+1}.
  • the feature points of p''_{n+1} are then added to the object model of the target object to update the object model of the target object.
  • accordingly, for the case shown in FIG. 14B, after the new second two-dimensional feature points p4, p5, and p6 are added, a new object model can be generated based on at least the five two-dimensional feature points p2 to p6, thereby updating the original object model of the target object. A minimal sketch of this kind of model maintenance is given below.
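The sketch keeps the object model as a growing set of two-dimensional feature points and adds candidate points from the new image only if they do not duplicate model points that have been tracked into the current frame. The duplicate test by pixel distance and the threshold value are illustrative assumptions.

```python
import numpy as np

class ObjectModel:
    """Feature-point model of a tracked target object (illustrative)."""

    def __init__(self, initial_points):
        self.points = [np.asarray(p, dtype=float) for p in initial_points]   # 2D feature points

    def update(self, candidate_points, matched_in_current_frame, dup_px=4.0):
        """Add candidate points that do not duplicate already-modelled points.

        `matched_in_current_frame` are the model's points as located in the
        current frame (e.g. via KLT tracking), used for the duplicate test.
        """
        kept = []
        for p in candidate_points:
            p = np.asarray(p, dtype=float)
            dists = [np.linalg.norm(p - q) for q in matched_in_current_frame]
            if not dists or min(dists) > dup_px:     # not a duplicate -> new second 2D feature point
                kept.append(p)
        self.points.extend(kept)
        return kept

# FIG. 14A/14B style example: the model starts with p1..p3; p4..p6 are new in the next frame.
model = ObjectModel(initial_points=[(100, 200), (120, 210), (110, 230)])              # p1, p2, p3
new_points = model.update(
    candidate_points=[(121, 211), (111, 229), (150, 240), (160, 250), (170, 235)],    # p2..p6 re-detected
    matched_in_current_frame=[(121, 211), (111, 229)])                                # p2, p3 tracked here
print(len(model.points), len(new_points))   # 6 points in the model after the update, 3 newly added
```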
  • step S134 includes: determining a second feature region of the target object in the second driving environment image according to the object model of the target object, and extracting at least one second two-dimensional feature point of the target object from the second feature region.
  • the detection device can determine the feature region in the second driving environment image that has the largest number of feature points matching the object model of the target object as the second feature region of the target object, and extract one or more second two-dimensional feature points of the target object from the second feature region.
  • determining the second feature region of the target object in the second driving environment image according to the object model of the target object includes: acquiring at least one object feature region in the second driving environment image, determining, for each object feature region, the number of its feature points that match feature points in the object model of the target object, and determining the object feature region, among the at least one object feature region, whose number of matching feature points is greater than a target preset value as the second feature region of the target object.
  • the target preset value can be set by the user or can be a default of the detection device system.
  • for example, the target preset value may be 3, and the second driving environment image may include the feature regions of object 1 and object 2, where the feature region of object 1 has 5 feature points that match feature points in the object model of the target object and the feature region of object 2 has 2.
  • in that case, the feature region of object 1 is determined as the second feature region of the target object. A minimal sketch of this selection rule is given below.
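The region-selection rule can be sketched as follows: each candidate object feature region in the second driving environment image is scored by how many of its feature points match feature points of the object model, and a region is accepted as the second feature region only if its match count exceeds the target preset value. The simple pixel-distance matching test and the coordinates reproduce the object 1 / object 2 example above and are otherwise illustrative.

```python
import numpy as np

def select_second_feature_region(candidate_regions, model_points, target_preset=3, match_px=4.0):
    """Pick the candidate region whose feature points best match the object model.

    candidate_regions: list of (region_id, [feature points]) for objects in the second image.
    model_points: 2D feature points of the target's object model (as located in this image).
    """
    best_id, best_count = None, target_preset
    for region_id, points in candidate_regions:
        count = sum(
            1 for p in points
            if any(np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(q, dtype=float)) <= match_px
                   for q in model_points))
        if count > best_count:                  # must exceed the target preset value
            best_id, best_count = region_id, count
    return best_id

# Object 1 has 5 matching feature points, object 2 has 2 -> object 1 is selected.
model = [(100, 200), (120, 210), (110, 230), (150, 240), (160, 250)]
regions = [("object 1", [(101, 201), (121, 209), (111, 231), (151, 241), (161, 249)]),
           ("object 2", [(400, 300), (402, 302), (101, 201), (121, 209)])]
print(select_second_feature_region(regions, model))   # -> "object 1"
```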
  • the detection device updates the object model of the target object according to the second two-dimensional feature point to obtain the updated object model of the target object.
  • updating the object model of the target object according to the second two-dimensional feature points can mean adding the second two-dimensional feature points to the object model of the target object; that is, the object model is regenerated based on the original two-dimensional feature points and the added two-dimensional feature points, and the newly generated object model (i.e., the updated object model) can match both the original two-dimensional feature points and the newly added two-dimensional feature points.
  • that is, the updated object model can match the first two-dimensional feature points and the second two-dimensional feature points. For example, as shown in FIGS. 14A and 14B, the second two-dimensional feature points include p4, p5, and p6, and the object model of the target object includes p1, p2, and p3.
  • after the object model of the target object is updated with p4, p5, and p6, the updated object model of the target object can match the original two-dimensional feature points p2 and p3 and the newly added two-dimensional feature points p4, p5, and p6, that is, the updated object model includes at least p2, p3, p4, p5, and p6.
  • the updated object model may also include the original two-dimensional feature points, that is, p1 at the same time; in other embodiments, the updated object model may not include the original two-dimensional feature points. That is, p1 is not included.
  • the detection device determines the motion state of the target object according to the first two-dimensional feature point and the two-dimensional feature point in the second driving environment image that matches the first two-dimensional feature point. For example, as shown in FIG. 13, if the two-dimensional feature point p_{n+1} in the (N+1)th left image matches the Nth two-dimensional feature point p_n in the Nth left image, the Nth depth information can be calculated from the Nth right image, where the Nth left image and the Nth right image are captured simultaneously by the binocular camera device.
  • the three-dimensional information, in world coordinates, of the three-dimensional feature point P_n corresponding to the two-dimensional feature point p_n is determined according to the Nth depth information.
  • the (N+1)th depth information can be calculated from the (N+1)th right image, where the (N+1)th left image and the (N+1)th right image are captured simultaneously by the binocular camera device.
  • the three-dimensional information, in world coordinates, of the three-dimensional feature point P_{n+1} corresponding to the two-dimensional feature point p_{n+1} is determined according to the (N+1)th depth information. Further, the motion state of the target object is determined according to the three-dimensional information of P_{n+1} in world coordinates and the three-dimensional information of P_n in world coordinates.
  • the detection device can determine the motion state of the target object according to the first two-dimensional feature point and the two-dimensional feature point in the second left image that matches the first two-dimensional feature point.
  • the detection device acquires a third driving environment image at a third time, the second time being a time before the third time, and updates the motion state of the target object according to the two-dimensional feature points of the target object that match between the second driving environment image and the third driving environment image.
  • the detection device can update the motion state of the target object in real time according to the driving environment images, so as to improve the accuracy of acquiring the motion state of the target object.
  • the detection device may update the motion state of the target object according to the two-dimensional feature points of the target object that match in the second driving environment image and the third driving environment image.
  • the detection device may establish an object model of the target object based on at least one first two-dimensional feature point of the target object in the first driving environment image, and update the object model of the target object based on at least one second two-dimensional feature point in the second driving environment image.
  • the motion state of the target object is determined according to the first two-dimensional feature point and the two-dimensional feature point in the second driving environment image that matches the first two-dimensional feature point.
  • the continuous motion state of the target object can be obtained, and the accuracy of obtaining the motion state of the target object can be improved.
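The per-frame motion estimates obtained this way are typically noisy, and the description elsewhere notes that the speed obtained after filtering the target moving speed is used as the moving speed of the target object. Below is a minimal sketch of such smoothing with a simple one-dimensional Kalman-style filter; a Kalman filter is only one possible choice, and the noise parameters and simulated measurements are illustrative assumptions.

```python
import numpy as np

class ScalarSpeedFilter:
    """Minimal 1D Kalman-style filter for smoothing per-frame speed estimates."""

    def __init__(self, process_var=0.5, measurement_var=4.0):
        self.x = None              # filtered speed (m/s)
        self.p = 1.0               # estimate variance
        self.q = process_var       # how much the true speed may drift between frames
        self.r = measurement_var   # noise of a single per-frame speed measurement

    def update(self, measured_speed):
        if self.x is None:
            self.x = measured_speed
            return self.x
        self.p += self.q                          # predict: speed assumed roughly constant
        k = self.p / (self.p + self.r)            # Kalman gain
        self.x += k * (measured_speed - self.x)   # correct with the new measurement
        self.p *= (1.0 - k)
        return self.x

# Noisy per-frame estimates of a target moving at about 10 m/s.
rng = np.random.default_rng(0)
f = ScalarSpeedFilter()
for z in 10.0 + rng.normal(0.0, 2.0, size=10):
    smoothed = f.update(z)
print(round(smoothed, 2))
```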
  • FIG. 15 is a schematic structural diagram of a detection device provided by an embodiment of the present invention.
  • the detection equipment includes: a processor 101, a memory 102, and a camera 103.
  • the memory 102 may include a volatile memory (volatile memory); the memory 102 may also include a non-volatile memory (non-volatile memory); the memory 102 may also include a combination of the foregoing types of memories.
  • the processor 101 may be a central processing unit (CPU).
  • the processor 101 may further include a hardware chip.
  • the aforementioned hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD) or a combination thereof.
  • the foregoing PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), or any combination thereof.
  • the camera 103 can be used to capture images or videos.
  • the camera device 103 mounted on the vehicle can capture images or videos of the surrounding environment while the vehicle is running.
  • the memory is used to store program instructions; the processor calls the program instructions stored in the memory to perform the following steps:
  • a first driving environment image is captured at a first time by the camera device;
  • a second driving environment image is captured at a second time, the first time being a time before the second time, and the first driving environment image and the second driving environment image include a target object;
  • the motion state of the target object is determined according to the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system.
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • the determining the three-dimensional information of the second three-dimensional feature point of the target object in the world coordinate system according to the second driving environment image includes:
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • the attribute includes the size and/or shape of the second characteristic area
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • the second parameter of the target object satisfies a preset condition, where the second parameter includes at least one of an attribute corresponding to the first feature region, a lane where the target object is located, and a driving direction of the target object ,
  • the attribute includes the size and/or shape of the first characteristic region
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • the determining the three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in the world coordinate system includes:
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • a third two-dimensional feature point that matches the first two-dimensional feature point is determined from a third traveling environment image, and the third traveling environment image and the first traveling environment image are two images simultaneously captured by a binocular camera.
  • the determining the three-dimensional information of the second three-dimensional feature point in the camera coordinate system includes:
  • a fourth two-dimensional feature point that matches the second two-dimensional feature point is determined from a fourth driving environment image, and the fourth driving environment image and the second driving environment image are two images simultaneously captured by a binocular camera.
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • the three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in the world coordinate system is determined according to the three-dimensional information of the three-dimensional point acquired at the second time.
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • the three-dimensional information of the three-dimensional point corresponding to the fifth two-dimensional feature point is determined as the three-dimensional information of the first three-dimensional feature point corresponding to the first two-dimensional feature point in the world coordinate system.
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • the three-dimensional information of the three-dimensional point corresponding to the sixth two-dimensional feature point is determined as the three-dimensional information of the second three-dimensional feature point corresponding to the second two-dimensional feature point in the world coordinate system.
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • the motion state of the target object is determined according to the first bird's-eye view visual coordinates and the second bird's-eye view visual coordinates.
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • the motion state of the target object is determined according to the displacement information and/or the rotation angle information of the target object.
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • the motion state of the target object is determined according to the displacement information and/or the rotation angle information of the target object.
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • the speed obtained after filtering the target moving speed is determined as the moving speed of the target object.
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • the first two-dimensional feature point is obtained by filtering from the plurality of two-dimensional feature points according to a preset algorithm.
  • the number of the target objects is one or more.
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • the updated object model of the target object includes the at least one first two-dimensional feature point and the at least one second two-dimensional feature point.
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • the motion state of the target object is updated according to the two-dimensional feature points of the target object that are matched in the second driving environment image and the third driving environment image.
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • Extracting the second two-dimensional feature point of the target object from the second driving environment image according to the object model of the target object includes:
  • At least one second two-dimensional feature point of the target object is extracted from the second feature region.
  • the processor invokes the program instructions stored in the memory to perform the following steps:
  • the feature region of the object whose number of feature point matches in the at least one feature region of the object is greater than the target preset value is determined as the second feature region of the target object.
  • an embodiment of the present invention further provides a computer-readable storage medium that stores a computer program. When the computer program is executed by a processor, the method for detecting the motion state of the target object described in the embodiments corresponding to FIG. 2, FIG. 3, FIG. 9, and FIG. 11 is implemented, and the detection device of the embodiment of the invention described in FIG. 15 can also be implemented; details are not described herein again.
  • the computer-readable storage medium may be the internal storage unit of the detection device described in any of the foregoing embodiments, such as the hard disk or memory of the device.
  • the computer-readable storage medium may also be an external storage device of the vehicle control device, such as a plug-in hard disk equipped on the device, a smart memory card (SMC), a secure digital (SD) card, a flash card, etc.
  • the computer-readable storage medium may also include both an internal storage unit of the device and an external storage device.
  • the computer-readable storage medium is used to store the computer program and other programs and data required by the detection device.
  • the computer-readable storage medium can also be used to temporarily store data that has been output or will be output.
  • the program can be stored in a computer readable storage medium.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.

Abstract

A target object motion state detection method, a detection device, and a storage medium are provided. The method includes: acquiring a first driving environment image at a first time (S201); acquiring a second driving environment image at a second time (S202); determining, according to the first driving environment image, three-dimensional information of a first three-dimensional feature point of the target object in a world coordinate system (S203); determining, according to the second driving environment image, three-dimensional information of a second three-dimensional feature point of the target object in the world coordinate system (S204); and determining the motion state of the target object according to the three-dimensional information of the first three-dimensional feature point in the world coordinate system and the three-dimensional information of the second three-dimensional feature point in the world coordinate system (S205). The accuracy of detecting the motion state of a vehicle can be improved.

Description

目标物体运动状态检测方法、设备及存储介质 技术领域
本发明实施例涉及智能控制领域,尤其涉及一种目标物体运动状态检测方法、检测设备及存储介质。
背景技术
随着智能控制技术的不断迭代发展,越来越多的车辆配置了自动驾驶系统或辅助驾驶系统,自动驾驶系统或辅助驾驶系统给驾驶员带来许多便利。在自动驾驶系统或辅助驾驶系统中,一个很重要的功能是预估周围车辆的运动状态。例如,车辆的运动状态可包括车辆的位置信息、速度信息和运动方向等。现有的自动驾驶系统或辅助驾驶系统是通过观测周围车辆的质心来预估周围车辆的运动状态。然而现有的自动驾驶系统或辅助驾驶系统通常不能准确地确定周围车辆的质心。例如,如图1所示,图1是一种车载激光雷达的3D扫描点投影到鸟瞰(bird view)视角的示意图。开始的时候,目标车辆在本车辆的右前方,本车辆的车载激光雷达只能扫描到目标车辆的左后侧。如图1所示,图1中虚线部分为目标车辆的3D扫描点在鸟瞰视角的投影范围的示意,其中图1中车辆的框为人为标注。如图1所示,此时如果根据3D扫描点求取平均值计算质心,则质心位于车辆的左下角。
如果不能准确地确定周围车辆的质心,会导致对周围车辆的运动状态预估不准确。因此,如何准确地预估周围车辆的运动状态是目前亟待解决的问题。
发明内容
本发明实施例提供了一种目标物体运动状态检测方法、检测设备及存储介质,可以提高对车辆的运动状态检测的准确度。
第一方面,本发明实施例提供了一种目标物体运动状态检测方法,所述方法包括:
在第一时间获取第一行驶环境图像;
在第二时间获取第二行驶环境图像,所述第一时间为所述第二时间之前的时间,所述第一行驶环境图像和所述第二行驶环境图像包括目标物体;
根据所述第一行驶环境图像确定所述目标物体的第一三维特征点在世界坐标系下的三维信息;
根据所述第二行驶环境图像确定所述目标物体的第二三维特征点在世界坐标系下的三维信息,所述第二三维特征点与所述第一三维特征点相匹配;
根据所述第一三维特征点在世界坐标系下的三维信息及所述第二三维特征点在世界坐标系下的三维信息确定所述目标物体的运动状态。
第二方面,本发明实施例提供了另一种目标物体运动状态检测方法,所述方法包括:
在第一时间获取第一行驶环境图像;
在第二时间获取第二行驶环境图像,所述第一时间为所述第二时间之前的时间,所述第一行驶环境图像和所述第二行驶环境图像中包括目标物体;
从所述第一行驶环境图像中提取目标物体的至少一个第一二维特征点;
根据所述至少一个第一二维特征点建立所述目标物体的物体模型物体;
根据所述目标物体的物体模型从所述第二行驶环境图像中提取所述目标物体的至少一个第二二维特征点;
根据所述第二二维特征点更新所述目标物体的物体模型,得到更新后的所述目标物体的物体模型;
根据所述第一二维特征点以及所述第二行驶环境图像中与所述第一二维特征点相匹配的二维特征点确定所述目标物体的运动状态。
第三方面,本发明实施例提供了一种检测设备,所述检测设备包括存储器、处理器、摄像装置;
所述摄像装置,用于获取行驶环境图像;
所述存储器,用于存储程序指令;
所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
通过所述摄像装置在第一时间拍摄得到第一行驶环境图像;在第二时间拍摄得到第二行驶环境图像,所述第一时间为所述第二时间之前的时间,所述第一行驶环境图像和所述第二行驶环境图像包括目标物体;
根据所述第一行驶环境图像确定所述目标物体的第一三维特征点在世界坐标系下的三维信息;
根据所述第二行驶环境图像确定所述目标物体的第二三维特征点在世界坐标系下的三维信息,所述第二三维特征点与所述第一三维特征点相匹配;
根据所述第一三维特征点在世界坐标系下的三维信息及所述第二三维特征点在世界坐标系下的三维信息确定所述目标物体的运动状态。
第四方面,本发明实施例提供了一种检测设备,所述检测设备包括:所述设备包括:存储器、处理器、摄像装置;
所述摄像装置,用于获取行驶环境图像;
所述存储器,用于存储程序指令;
所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
通过所述摄像装置在第一时间获取第一行驶环境图像;在第二时间获取第二行驶环境图像,所述第一时间为所述第二时间之前的时间,所述第一行驶环境图像和所述第二行驶环境图像中包括目标物体;
从所述第一行驶环境图像中提取目标物体的至少一个第一二维特征点;
根据所述至少一个第一二维特征点建立所述目标物体的物体模型物体;
根据所述目标物体的物体模型从所述第二行驶环境图像中提取所述目标物体的至少一个第二二维特征点;
根据所述第二二维特征点更新所述目标物体的物体模型,得到更新后的所述目标物体的物体模型;
根据所述第一二维特征点以及所述第二行驶环境图像中与所述第一二维特征点相匹配的二维特征点确定所述目标物体的运动状态。
本发明实施例中,检测设备可以根据该第一行驶环境图像确定该目标物体的第一三维特征点在世界坐标系下的三维信息,根据该第二行驶环境图像确定该目标物体的第二三维特征点在世界坐标系下的三维信息,根据该第一三维特征点在世界坐标系下的三维信息及该第二三维特征点在世界坐标系下的三维信息确定该目标物体的运动状态。第一三维特征点与第二三维特征点是相匹配的,即可根据目标物体上相匹配的三维特征点的三维坐标信息确定该目标物体的运动状态,不需要通过目标物体的质心确定该目标移动的运动状态,可以提高对目标物体的运动状态检测的准确度。可提高车辆行驶的安全性,使车辆驾驶更加自动化、智能化。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本发明实施例提供的一种车载激光雷达的3D扫描点投影到鸟瞰视角的示意图;
图2是本发明实施例提供的一种目标物体运动状态检测方法的流程示意图;
图3是本发明实施例提供的另一种目标物体运动状态检测方法的流程示意图;
图4是本发明实施例提供的一种目标图像的示意图;
图5是本发明实施例提供的一种第一行驶环境图像的示意图;
图6是本发明实施例提供的一种目标物体的移动速度分布的示意图;
图7是本发明实施例提供的一种对目标物体的移动速度进行拟 合的示意图;
图8是本发明实施例提供的一种二维特征点与三维特征点的关系示意图;
图9是本发明实施例提供的又一种目标物体运动状态检测方法的流程示意图;
图10是本发明实施例提供的另一种二维特征点与三维特征点的关系示意图;
图11是本发明实施例提供的又一种目标物体运动状态检测方法的流程示意图;
图12是本发明实施例提供的又一种目标物体运动状态检测方法的流程示意图;
图13是本发明实施例提供的又一种二维特征点与三维特征点的关系示意图;
图14A是本发明实施例提供的一种行驶环境图像的示意图;
图14B是本发明实施例提供的另一种行驶环境图像的示意图;
图15是本发明实施例提供的一种检测设备的结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
本发明实施例提供的目标物体运动状态检测方法可以应用于检测设备,该检测设备可以是部署于物体上的设备,例如,行车记录仪等。或者,该检测设备是与物体连接的且处于物体中的设备,例如手机、平板电脑等。检测设备所在物体周围存在目标物体。检测设备所在的物体及目标物体可以是指车辆、移动机器人、无人机等,车辆可以为智能电动车、滑板车、平衡车、轿车、汽车、货车或机器车等等。检测设备所在的物体及目标物体可以相同,例如,均为车辆。检测设 备所在的物体及目标物体可以不相同,如,目标物体为车辆,检测设备所在的物体可以为地面上移动的移动机器人。本发明实施例以检测设备所在的物体及目标物体均为车辆进行说明,为了便于区分,检测设备所在的物体可以称为本车辆,该目标物体对应的车辆可以称为目标车辆。
下面进一步对本申请所提供的目标物体运动状态检测方法及相关设备进行介绍。
请参见图2,图2是本发明实施例提供的一种目标物体运动状态检测方法的流程示意图,可选的,所述方法可以由检测设备执行。如图2所示,所述目标物体运动状态检测方法可以包括如下步骤。
S201、检测设备在第一时间获取第一行驶环境图像。
S202、检测设备在第二时间获取第二行驶环境图像。
其中,该第一时间为该第二时间之前的时间,第一行驶环境图像和该第二行驶环境图像包括目标物体。
在S201和S202中,检测设备可以包括摄像装置,可通过摄像装置对行驶环境进行拍摄得到行驶环境图像。在一个实施例中,检测设备可通过摄像装置按照固定时间间隔对行驶环境进行拍摄得到行驶环境图像。例如,该固定时间间隔可以为0.1s(秒),检测设备可以从本车辆启动时开始计时,通过摄像装置在本车辆的行驶时间为第0.1s(即第一时间)时拍摄得到第一行驶环境图像,并通过摄像装置在本车辆的行驶时间为第0.2s(即第二时间)时拍摄得到第二行驶环境图像,以此类推。其中,该固定时间间隔可以是用户设置的,也可以是检测设备自动设置的。检测设备还可通过摄像装置按照随机时间间隔对行驶环境进行拍摄得到行驶环境图像。例如,该随机时间间隔可为0.2s、0.1s等等,通过摄像装置在本车辆的行驶时间为第0.2s(即第一时间)时拍摄得到第一行驶环境图像,并通过摄像装置在本车辆的行驶时间为第0.3s(即第二时间)时拍摄得到第二行驶环境图像等等。
在另外一个实施例中,检测设备可通过摄像装置对行驶环境进行 拍摄得到视频流,并在视频流中选取特定帧进行检测。例如,视频流中特定帧的选取可以是连续相邻帧,也可以是按照固定帧数间隔进行选取,还可以是按照随机帧数间隔进行选取,此处与前述时间间隔类似,不再赘述。
其中,该第一行驶环境图像和第二行驶环境图像中可以包括一个或多个物体。目标物体可以是第一行驶环境图像中的任意一个物体。目标物体可以是行驶环境图像中的移动物,例如周围的行驶车辆、非机动车、行走行人等等,目标物体还可以是行驶环境中的非移动物,例如周围的静止车辆或行人、路面固定物等等。当目标物体是移动物时,检测设备检测得到的目标物体运动状态可以包含速度信息;当目标物体是非移动物时,可以认为检测设备检测得到的目标物体运动状态包含速度为零的速度信息。
以目标物体为路面其他车辆为例,检测设备通过摄像装置获得行驶环境图像,如拍摄得到的图像或视频流中的某一帧图像,对该行驶环境图像进行视觉识别(例如使用CNN神经网络对图像进行车辆识别)从而得到图像中的认为是车辆的区域,并且为这些每个区域标记一个识别后的边界框。在得到可能是车辆的边界框后,依据预设的参考阈值(如尺寸、大小、比例等阈值)对边界框进行筛选,去除明显不是车辆的边界框,例如去除细长条形的边界框。然后针对剩余的边界框,对于越大的边界框,认为其距离本车辆越近或车辆越巨大,赋予越高的权重;对于越靠近本车道的边界框,认为其越接近本车辆所处车道或位于本车辆所处车道,赋予越高的权重。权重越大则认为边界框对应的识别的其他车辆相对本车辆的潜在危险性越高,从而在后续处理中,可以有选择性地选取权重最高的预定数量的边界框的区域来进行后续处理,从而可以降低计算量。当然,一些其他实施例中,并不限制此处的处理数量,可以对识别后的区域全部都进行处理,从而略去此处的选择过程。
在一个实施例中,检测设备可以检测多个物体的运动状态,即目标物体可以是指多个物体。
S203、检测设备根据该第一行驶环境图像确定该目标物体的第一 三维特征点在世界坐标系下的三维信息。
本发明实施例中,该第一三维特征点可以是该目标物体在第一行驶环境图像中的第一二维特征点对应的三维特征点,第一二维特征点可以为该目标物体在第一行驶环境图像中的多个二维特征点中的任一特征点。检测设备可以根据点云传感器确定该目标物体的第一三维特征点在世界坐标系下的三维信息,或根据双目图像确定该目标物体的第一三维特征点在世界坐标。其中,双目图像是指通过双目摄像装置拍摄得到图像,如双目图像可以包括左视觉图像和右视觉图像。三维特征点可以为具有三维信息的特征点,二维特征点可以为具有二维信息的特征点。
S204、检测设备根据该第二行驶环境图像确定该目标物体的第二三维特征点在世界坐标系下的三维信息,该第二三维特征点与该第一三维特征点相匹配。
本发明实施例中,该第二三维特征点可以是该目标物体在第二行驶环境图像中的第二二维特征点对应的三维特征点。该第二二维特征点可以为在第二行驶环境图像中与第一二维特征点相匹配的二维特征点,所以该第二三维特征点与该第一三维特征点相匹配。即该第二二维特点与该第一二维特征点可以是指目标物体上同一位置的二维特征点,该第二三维特点与该第一三维特征点可以是指目标物体上同一位置的三维特征点。检测设备可以通过点云传感器确定该目标物体的第二三维特征点在世界坐标系下的三维信息,或根据双目图像确定该目标物体的第二三维特征点在世界坐标。
S205、检测设备根据该第一三维特征点在世界坐标系下的三维信息及该第二三维特征点在世界坐标系下的三维信息确定该目标物体的运动状态。
本发明实施例中,检测设备可以根据该第一三维特征点在世界坐标系下的三维信息及该第二三维特征点在世界坐标系下的三维信息确定该目标物体的运动状态;以便本车辆可以根据目标物体的运行状态控制自身的运行状态,可实现自动驾驶,并可避免交通事故发生,提高车辆驾驶的安全性。其中,该目标物体的运动状态包括目标物体 的移动速度和/或旋转方向,若该目标物体为目标车辆,则移动速度是指行驶速度,旋转方向是指行驶方向。
在一个实施例中,步骤S205可以包括如下步骤s11~s13。
s11、检测设备将该第一三维特征点在世界坐标系下的三维信息投影至鸟瞰视角下,得到第一鸟瞰视觉坐标。
s12、检测设备将该第二三维特征点在世界坐标系下的三维信息投影至鸟瞰视角下,得到第二鸟瞰视觉坐标。
s13、检测设备根据该第一鸟瞰视觉坐标和该第二鸟瞰视觉坐标确定该目标物体的运动状态。
在步骤s11~s13中,若直接根据该第一三维特征点在世界坐标系下的三维信息及该第二三维特征点在世界坐标系下的三维信息计算该目标物体的运动状态,则计算量比较大,需要消耗较多资源,且效率较低。因此,为了提高计算效率、节省资源,检测设备可以根据鸟瞰视觉坐标确定该目标物体的运动状态。进一步,检测设备可以根据该第一鸟瞰视觉坐标和该第二鸟瞰视觉坐标确定该目标物体的运动状态。由于第一鸟瞰视觉坐标和该第二鸟瞰视觉坐标均为二维坐标,即只包括经度和维度,而该第一三维特征点在世界坐标系下的三维信息及该第二三维特征点在世界坐标系下的三维信息均包括三维坐标,即包括高度、经度和维度。因此,通过该第一鸟瞰视觉坐标和该第二鸟瞰视觉坐标确定该目标物体的运动状态,可节省资源,提高计算效率。
在一个实施中,步骤s13包括:检测设备根据该第一鸟瞰视觉坐标和该第二鸟瞰视觉坐标确定该目标物体在第一方向和第二方向上的位移信息,和/或在第一绕向上的旋转角度信息,根据该目标物体的该位移信息和/或该旋转角度信息确定该目标物体的运动状态。
当该目标物体为在地面上移动的物体时,通常目标物体是与地面保持水平的,因此可以忽略目标物体在高度方向的位移,并忽略目标物体与地面的倾斜角度。检测设备可以根据该第一鸟瞰视觉坐标和该第二鸟瞰视觉坐标确定该目标物体在第一方向和第二方向上的位移信息,和/或在第一绕向上的旋转角度信息。这里最多只需要求解该 目标物体在三个方向上的信息,可以大大降低计算量。其中,第一方向和第二方向可以为该目标物体的水平方向,即x轴方向(如目标物体的前方或后方,即纵向方向)和y轴方向(目标物体的左方或右方,即横向方向),目标物体的前方是指目标车辆的车头朝向。第一绕向为绕z轴(即垂直于地面)的方向,即可以认为是该目标物体的移动速度方向,即目标车辆的车头朝向。进一步,检测设备可以根据该目标物体的该位移信息和/或该旋转角度信息确定该目标物体的运动状态。例如,该目标物体的运动状态包括该目标物体的移动速度和/或旋转角度,检测设备可以根据位移信息确定该目标物体的移动速度,并根据该目标物体的旋转角度信息确定该目标移动的旋转方向。
在一个实施例中,该目标物体的位移信息与旋转角度信息的关系可采用公式(1)表示,检测设备可以通过最优求解方法对公式(1)进行计算得到最优的位移信息和最优的旋转角度信息。
P 2i=R·P 1i+t    (1)
其中,在公式(1)中,P 1i为第i个该第一鸟瞰视觉坐标,该P 2i为第i个该第二鸟瞰视觉坐标,该t为该目标物体在第一方向和第二方向上的位移信息,R为在该目标物体第一绕向上的旋转角度信息。
在另一个实施例中,步骤S205可以包括如下步骤s21~s23。
s21、检测设备根据该第一三维特征点在世界坐标系下的三维信息及该第二三维特征点在世界坐标系下的三维信息确定该目标物体在第一方向和第二方向上的位移信息,和/或在第一绕向上的旋转角度信息。
s22、检测设备根据该目标物体的该位移信息和/或该旋转角度信息确定该目标物体的运动状态。
在步骤s21~s22中,检测设备可以直接根据该第一三维特征点在世界坐标系下的三维信息及该第二三维特征点在世界坐标系下的三维信息确定该目标物体在第一方向和第二方向上的位移信息,和/或在第一绕向上的旋转角度信息;并根据该目标物体的该位移信息和/或该旋转角度信息确定该目标物体的运动状态。
在一个实施例中,该目标物体的位移信息与旋转角度信息的关系 可采用公式(2)表示,检测设备可以通过最优求解方法对公式(2)进行计算得到最优的位移信息和最优的旋转角度信息。
Q 2i=R·Q 1i+t    (2)
其中,该Q 1i为第i个该第一三维特点在世界坐标系下的三维信息,该Q 2i为第i个该第二三维特征点在世界坐标系下的三维信息,该t为该目标物体在第一方向和第二方向上的位移信息,该R为在该目标物体第一绕向上的旋转角度信息。
在一个实施例中,该运动状态包括该目标物体的移动速度或/和旋转方向,上述根据该目标物体的该位移信息和/或该旋转角度信息确定该目标物体的运动状态,包括:根据该目标物体的该位移信息确定该目标物体的移动速度;和/或,根据该旋转角度信息确定该目标物体的旋转方向。
其中,该目标移动的位移信息可以包括该目标物体从第一时间到第二时间的移动距离。旋转角度信息可以包括目标物体从第一时间到第二时间在第一绕向上的旋转角度,即目标车辆从第一时间到第二时间的在移动速度方向上的旋转角度。
检测设备可以根据第一时间和第二时间确定该目标物体的移动时长,并对该目标物体从第一时间到第二时间的移动距离与该移动时长进行相除操作,得到移动速度。进一步,检测设备可以直接将该计算得到移动速度作为目标物体的移动速度,或者对该计算得到移动速度进行滤波得到该目标物体的移动速度。例如,假设该目标物体为目标车辆,目标车辆从第一时间到第二时间所移动的距离为1m。第一时间为本车辆的行驶时间的第0.1s,第二时间为本车辆的行驶时间的第0.2s。根据第一时间和第二时间计算得到该目标车辆的移动时长,为0.1s,根据移动距离和移动时长计算得到该目标车辆的移动速度,即为1m/0.1s=10m/s=36km/h。
若目标物体从第一时间到第二时间在第一绕向(即移动速度方向)的旋转角度小于预设角度,则表明目标物体的移动速度方向未改变,可以根据目标物体在第一时间的移动速度方向或目标物体在第二时间的移动速度方向确定该目标物体的旋转方向。若该目标物体从第一 时间到第二时间在第一绕向的旋转角度大于或等于预设角度,则表明目标物体的移动速度方向改变,可以根据目标物体在第二时间的移动速度方向确定该目标物体的旋转方向。
例如,假设该目标物体为目标车辆,该预设角度为90度,该目标车辆在第一时间的移动速度方向为x轴方向(即90度方向),该目标车辆在第二时间的移动速度方向为向x轴的左边偏离3度的方向(即93度方向)。则该目标车辆从第一时间到第二时间在移动速度方向的旋转角度为3度,该旋转角度小于预设角度,表明该目标车辆未改变行驶方向;则可以根据该目标车辆在第一时间的速度方向或在第二时间的移动速度方向确定该目标车辆的行驶方向。因此,可确定该目标车辆的行驶方向为x轴方向,即该目标车辆与本车辆同向行驶。假设该目标车辆在第一时间的移动速度方向为x轴方向(即90度方向),该目标车辆在第二时间的移动速度方向为与x轴向相反的方向(即本车辆的后方)。则该目标车辆从第一时间到第二时间在移动速度方向的旋转角度为180度,该旋转角度大于预设角度,表明该目标车辆改变行驶方向;则可以根据该目标车辆在第二时间的移动速度方向确定该目标车辆的行驶方向,因此,可确定该目标车辆的行驶方向为本车辆的后方。即该目标车辆与本车辆相向行驶。
在一个实施例中,上述根据该目标物体的该位移信息确定该目标物体的移动速度,包括:根据该目标物体的位移信息确定目标移动速度,对该目标移动速度滤波后得到的速度确定为该目标物体的移动速度。
其中,该目标物体的位移信息包括该目标物体从第一时间到第二时间在第一方向上的移动距离,以及该目标物体从第一时间到第二时间在第二方向上的移动距离。因此,根据该位移信息计算得到的目标物体的移动速度包括第一方向上的移动速度和第二方向上的移动速度。
检测设备可以根据第一时间和第二时间确定该目标物体的移动时长,根据该目标物体的移动信息及移动时长进行计算,得到目标移动速度,该目标移动速度包括第一方向上的速度及第二方向上的速度。 这样计算得到的目标速度中存在较大噪声,因此,检测设备可以对该目标移动速度滤波后得到的速度确定为该目标物体的移动速度,以提高目标物体的移动速度的准确度。例如,检测设备可以通过滤波器对该目标移动速度进行滤波,得到该目标物体的移动速度。该滤波器可以为卡尔曼滤波器或宽带滤波器等等。
本发明实施例中,检测设备可以根据该第一行驶环境图像确定该目标物体的第一三维特征点在世界坐标系下的三维信息,根据该第二行驶环境图像确定该目标物体的第二三维特征点在世界坐标系下的三维信息,根据该第一三维特征点在世界坐标系下的三维信息及该第二三维特征点在世界坐标系下的三维信息确定该目标物体的运动状态。第一三维特征点与第二三维特征点是相匹配的,即可根据目标物体上相匹配的三维特征点的三维信息确定该目标物体的运动状态,不需要通过目标物体的质心确定该目标移动的运动状态,可以提高对目标物体的运动状态检测的准确度。可提高车辆行驶的安全性,使车辆驾驶更加自动化、智能化。
基于上述对目标物体运动状态检测方法的描述,本发明实施例提高另一种目标物体运动状态检测方法,请参见图3。可选的,所述方法可以由检测设备执行,检测设备可以包括摄像装置,该摄像装置可以包括主摄像装置及双目摄像装置,双目摄像装置包括左视觉摄像装置和右视觉摄像装置。如图3所示,所述目标物体运动状态检测方法可以包括如下步骤。
S301、检测设备在第一时间获取第一行驶环境图像。
S302、检测设备在第二时间获取第二行驶环境图像,该第一时间为该第二时间之前的时间。
步骤S301~S302中,检测设备可以在第一时间通过左视觉摄像装置拍摄得到第一行驶环境图像,并在第二时间通过左视觉摄像装置拍摄得到第二行驶环境图像。
S303、检测设备从该第一行驶环境图像中提取目标物体的第一二维特征点。
本发明实施例中,检测设备可以从该第一行驶环境图像中提取目标物体的所有特征点,或者提取该目标物体的关键特征点。第一二维特征点为该目标物体的所有特征点或关键特征点中的任一特征点。
在另一个实施例中,为了降低检测设备的计算量,检测设备可以采用角点检测算法提取该第一特征区域的角点,角点可以为关键特征点,将该关键特征点作为第一二维特征点。其中,该角点检测算法可以为加速分段检测算法(features from accelerate segment test,FAST)、边缘检测算法(small univalue segment assimilating nucleus,SUSAN)或Harris角点检测算法中的任一种。以角点检测算法为Harris角点检测算法为例,检测设备可以获取该第一行驶环境图像任一二维特征点
Figure PCTCN2019074014-appb-000001
的构造张量,构造张量可以采用公式(3)表示。
Figure PCTCN2019074014-appb-000002
其中,公式(3)中A为构造张量,I x和I y分别为第一行驶环境图像上的点(u,v)在x轴和y轴方向上的梯度信息,w(u,v)表示在第一行驶环境图像上滑动的窗口类型,尖括号<>表示求平均值。
进一步,可采用函数M C判断点(u,v)是否为关键特征点,当M C>M th时,确定点(u,v)为关键特征点;当M C≤M th时,确定点(u,v)不为关键特征点。函数M C可以采用公式(4)表示,M th为设定的阈值。
M C=det(A)-k·trace 2(A)    (4)
其中,公式(4)中k表示调节灵敏度的参数,k可以为一个经验值,如k可以为范围[0.04,0.15]内的任一值,det(A)为矩阵A的行列式,trace(A)为矩阵A的迹。
在一个实施例中,步骤S303包括如下步骤s31和s32。
s31、检测设备在该第一行驶环境图像中确定目标物体的第一特征区域。
s32、检测设备从该第一特征区域中提取该目标物体的第一二维特征点。
在步骤s31和s32中,检测设备可以在该第一行驶环境图像中确定目标物体的第一特征区域,其中,该第一特征区域可以包括该目标 物体所有特征信息或部分特征信息,该第一特征区域可以是通过投影得到。进一步,从该第一特征区域中提取该目标物体的第一二维特征点。其中,该第一二维特征点可以为该第一特征区域的任一特征点,或为该第一特征区域的任一关键特征点。
在一个实施例中,通过摄像装置拍摄目标图像,在该目标图像中确定该目标物体的第二特征区域,步骤s32包括:将该第二特征区域投影到该第一行驶环境图像中,并将得到的投影区域确定为该目标物体的第一特征区域。
由于主摄像装置(即主相机)拍摄得到的图像像素较高,检测设备更容易从主摄像装置拍摄出的图像中识别出目标物体(如车辆)。因此,可先通过主摄像装置拍摄包括目标物体的目标图像,再在该目标图像中确定该目标物体的第二特征区域。该第二特征区域可以是边界框BandingBox,再将第二特征区域投影到双目摄像装置拍摄的第一行驶环境图像中,得到第一特征区域。双目摄像装置拍摄得到图像为灰度图像,灰度图像有利于提取特征点,因此后续检测设备从第一行驶环境图像中提取目标物体的第一二维特征点,以便得到第一二维特征点对应的第一三维特征点在世界坐标系下的三维信息。
具体的,检测设备可以在第一时间通过主摄像装置拍摄得到目标图像。可以根据前序的检测算法对目标图像进行处理,得到多个物体的第二特征区域,并从该多个物体的第二特征区域中选择至少一个第二特征区域作为目标物体的第二特征区域。其中,前序的检测算法可以为基于卷积神经网络(Convolutional Neural Network,CNN)的检测Detection算法等等。检测设备将该第二特征区域投影到该第一行驶环境图像中,将得到的投影区域确定为该目标物体的第一特征区域。例如,该目标图像如图4所示,图4中的白色框的区域为多个物体的特征区域。假设,图4中尺寸最大的特征区域为目标物体的第二特征区域。检测设备将目标物体的第二特征区域投影至第一行驶环境图像中,投影得到的第一特征区域可以是图5中最右侧框所对应的区域。
在一个实施例中,为了降低检测设备的计算量,节省资源,检测设备可以在将目标图像中目标物体的第二特征区域投影至第一行驶 环境图像之前,对目标物体的第二特征区域进行筛选。或者,检测设备可以在将目标图像中目标物体的第二特征区域投影至第一行驶环境图像之后,对投影得到的目标物体的第一特征区域进行筛选。即对目标图像中的第二特征区域进行投影的步骤,与对特征区域进行筛选的步骤的执行先后顺序可以不限定。在一个实施例中,这两个步骤的执行先后顺序可以是用户设置的。在另一个实施例中,这两个步骤的执行先后顺序可以是根据检测设备的摄像装置的特性设置的。例如,当主摄像装置与双目摄像装置的视角差异较大时,可以先执行投影的步骤,然后执行筛选的步骤;当主摄像装置与双目摄像装置的视角差异较小时,可以不限定两者的执行顺序。
下面对在将目标图像中目标物体的第二特征区域投影至第一行驶环境图像之前,对目标物体的第二特征区域进行筛选的方式进行介绍:
在一个实施例中,检测设备确定该目标物体的第一参数是否满足预设条件;若是,则执行该将该第二特征区域投影到该第一行驶环境图像中,并将得到的投影区域确定为该目标物体的第一特征区域的步骤。
其中,该第一参数包括该第二特征区域对应的属性、该目标物体所在车道和该目标物体的行驶方向中的至少一种,该属性包括该第二特征区域的尺寸和/或形状。
在将该目标图像中该目标物体的第二特征区域投影至第一行驶环境图像中之前,为了降低检测设备的计算量,节省资源,检测设备可以对该目标图像中物体的特征区域进行筛选,以便滤除错误的特征区域,以及滤除对本车辆的行驶安全性影响较小的物体的特征区域。具体的,检测设备可以判断该目标物体的第一参数是否满足预设条件,若满足预设条件,表明该目标物体的运行状态对本车辆的行驶安全性影响较大,则可以将该目标图像中该目标物体的第二特征区域作为有效区域,并执行将第二特征区域投影至第一行驶环境图像中的步骤;若不满足预设条件,表明该目标物体的运行状态对本车辆的行驶安全性影响较小,或者,表明第二特征区域不为物体所在的区域,即第二 特征区域为错误的特征区域,则可以将该目标图像中该目标物体的第二特征区域作为无效区域,将该第二特征区域筛除。
例如,假设属性包括第二特征区域的形状,该目标图像中包括物体1的第二特征区域、物体2的第二特征区域,及物体3的第二特征区域。若物体1的第二特征区域的形状为细长条形,则表明该物体1不为车辆,则确定物体1的参数不满足预设条件,可以将物体1的第二特征区域作为无效特征区域,将物体1的第二特征区域筛除。若物体2和物体3的第二特征区域的形状均为矩形,则表明该物体2和物体3为车辆,则确定物体2和物体3的参数满足预设条件,可以将物体2和物体3的第二特征区域作为有效特征区域,目标物体的第二特征区域可以为物体2和物体3的第二特征区域中的任一区域。
在一个实施例中,检测设备可以根据物体的特征区域的尺寸、该物体所在车道和该物体的行驶方向对目标图像中物体的特征区域进行筛选。具体的,检测设备可以根据目标图像中目标物体的第二特征区域的尺寸为第二特征区域设置第一权值,根据目标图像中目标物体所在车道为第二特征区域设置第二权值,根据目标图像中目标物体的行驶方向为第二特征区域设置第三权值。对第一权值及第二权值及第三权值进行求和,得到第二特征区域的总权值,若该第二特征区域的总权值大于预设值,则确定该第二特征区域为有效特征区域;若第二特征区域的总权值小于或等于预设值,则确定该第二特征区域为无效特征区域,将该第二特征区域筛除。
其中,第二特征区域的尺寸越大,表明目标物体与本车辆的距离越近,即该目标物体的运行状态对本车辆的行驶安全性影响较高,则可以将第一权值设置为一个较大值(如5);反之,第二特征区域的尺寸越小,表明目标物体与本车辆的距离越远,即该目标物体的运行状态对本车辆的行驶安全性影响较低,则可以将第一权值设置为一个较小值(如2)。
其中,目标物体为目标车辆,目标车辆所在车道与本车辆所在的车道之间的距离越近,如,目标车辆与本车辆位于同车道或位于相邻车道,则该目标车辆的运行状态对本车辆的行驶安全性影响较高,可 以将第二权值设置为一个较大值(如3)。反之,目标车辆所在车道与本车辆所在的车道之间的距离越远,例如,目标车辆位于第一车道,本车辆位于第三车道,则该目标车辆的运行状态对本车辆的行驶安全性影响较低,可以将第二权值设置为一个较小值(如1)。
其中,目标物体为目标车辆,若目标图像中目标车辆的行驶方向与本车辆的行驶方向相同,则目标车辆与本车辆发生追尾或碰撞的概率较大,即该目标车辆的运行状态对本车辆的行驶安全性影响较高,可以将第三权值设置为一个较大值(如3)。反之,若目标图像中目标车辆的行驶方向与本车辆的行驶方向相反,则目标车辆与本车辆发生追尾或碰撞的概率较小,即该目标车辆的运行状态对本车辆的行驶安全性影响较低,则可以将第三权值设置为一个较小值(如2)。
下面对在将目标图像中目标物体的第二特征区域投影至第一行驶环境图像之后,对目标物体的第一特征区域进行筛选的方式进行介绍:
在另一个实施例中,步骤s32之后,还包括:确定所述目标物体的第二参数是否满足预设条件,若是,则执行所述从所述第一特征区域中提取第一二维特征点的步骤。
其中,该第二参数包括该第一特征区域对应的属性、该目标物体所在车道和该目标物体的行驶方向中的至少一种,该属性包括该第一特征区域的尺寸和/或形状。
在检测设备将该第二特征区域投影到该第一行驶环境图像中,得到该目标物体的第一特征区域之后,为了降低检测设备的计算量,节省资源,检测设备可以对该第一行驶环境图像中物体的特征区域进行筛选,以便滤除错误的特征区域以及对本车辆的行驶安全性影响较小的物体的特征区域。具体的,检测设备可以判断该目标物体的第二参数是否满足预设条件,若满足预设条件,表明该目标物体的运行状态对本车辆的行驶安全性影响较大,则可以将该第一行驶环境图像中该目标物体的第一特征区域作为有效区域,并执行步骤S304;若不满足预设条件,表明该目标物体的运行状态对本车辆的行驶安全性影响较小,或者,表明第一特征区域不为移动所在的区域,即第一特征区 域为错误的特征区域,则可以将该第一行驶环境图像中该目标物体的第一特征区域作为无效区域,将该第一特征区域筛除。
例如,第二参数包括第一特征区域的尺寸,该第一行驶环境图像中包括投影得到的物体1的第一特征区域、物体2的第一特征区域,及物体3的第一特征区域。若物体1的第一特征区域的尺寸小于预设尺寸,即物体1的参数不满足预设条件,则表明物体1与本车辆的距离较远,即表明该物体1的运行状态对本车辆的行驶安全性影响较小。可以将该第一行驶环境图像中物体1的第一特征区域作为无效特征区域,将物体1的第一特征区域筛除。若物体1、物体2的第一特征区域的尺寸大于或等于预设尺寸,即物体2和物体3的参数满足预设条件,则表明物体2和物体3与本车辆的距离较近,即表明该物体2、及物体3的运行状态对本车辆的行驶安全性影响较小。可以将该第一行驶环境图像中物体2和物体3的第一特征区域作为有效特征区域,目标物体的第一特征区域可以为物体2和物体3的特征区域中的任一区域。
在一个实施例中,s32包括:从所述第一特征区域中提取多个二维特征点,根据预设算法从所述多个二维特征点中筛选得到第一二维特征点。
为了降低检测设备的计算量,节省资源,检测设备可以对从该第一特征区域中提取的多个二维特征点进行筛选。具体的,检测设备可以从该第一特征区域中提取多个二维特征点,采用预设算法来排除该第一特征区域的多个二维特征点中不合规的特征点,以得到合规的第一二维特征点。例如,预设算法可以为随机抽样一致算法(Random sample consensus,RANSAC)等等,若该第一特征区域的多个二维特征点的位置分布如图6所示,采用RANSAC对该第一特征区域的多个二维特征点进行拟合,得到一条线段。该线段可以如图7所示,位于该线段上的特征点为合规特征点,将其保留,即第一二维特征点为该线段上的任一特征点;未在线段上的特征点为不合规特征点,可以将不合规的特征点排除。
S304、检测设备确定该第一二维特征点对应的第一三维特征点在 世界坐标系下的三维信息。
本发明实施例中,检测设备可以通过点云传感器或双目图像确定该第一二维特征点对应的第一三维特征点在世界坐标系下的三维信息。
S305、检测设备从该第二行驶环境图像中确定与该第一二维特征点相匹配的第二二维特征点。
本发明实施例中,检测设备可以提取该第二行驶环境图像中的所有特征点,或者,提取该第二行驶环境图像中的关键特征点。将该第一二维特征点与该第二行驶环境图像中提取得到的特征点进行比对,以从该第二行驶环境图像中确定出与该第一二维特征点相匹配的第二二维特点。该第二二维特点与该第一二维特征点相匹配可以是指该第二二维特点的像素信息与该第一二维特征点的像素信息相同或相似大于预设阈值,即该第二二维特点与该第一二维特征点相匹配可以是指目标物体上同一位置处的特征点。
S306、检测设备确定该第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息。
本发明实施例中,检测设备可以通过点云传感器或双目图像确定该第一二维特征点对应的第二三维特征点在世界坐标系下的三维信息。
S307、检测设备根据该第一三维特征点在世界坐标系下的三维信息及该第二三维特征点在世界坐标系下的三维信息确定该目标物体的运动状态。
本发明实施例中,由于第一二维特征点与第二二维特征点相匹配,则第一三维特征点与第二三维特征点相匹配,则检测设备可以根据该第一三维特征点在世界坐标系下的三维信息及该第二三维特征点在世界坐标系下的三维信息确定该目标物体的运动状态。
例如,如图8所示,该第一二维特征点为p1,该第二二维特征点为p2,第一三维特征点为D1,第二三维特征点为D2。由于p1与p2为相互匹配的二维特征点,而D1是p1对应的三维特征点,D2为p2对应的三维特征点,因此,D1和D2是相互匹配的,即D1和D2 为目标物体上同一位置在不同时刻的3D点。D1在世界坐标系下的三维信息与D2在世界坐标系下的三维信息不相同是由于目标物体在移动过程中发生了平移或者旋转造成的,因此可以根据D1在世界坐标系下的三维信息和D2在世界坐标系下的三维信息确定该目标物体的运行状态。
本发明实施例中,检测设备从该第一行驶环境图像中提取目标物体的第一二维特征点,从该第二行驶环境图像中确定与该第一二维特征点相匹配的第二二维特征点。确定该第一二维特征点对应的第一三维特征点在世界坐标系下的三维信息,并确定该第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息。检测设备可以根据该第一三维特征点在世界坐标系下的三维信息及该第二三维特征点在世界坐标系下的三维信息确定该目标物体的运动状态。通过相匹配的二维特征点确定相匹配的三维特征点,并根据相匹配的三维特征点在世界坐标系下的三维信息确定目标物体的运行状态。不需要通过目标物体的质心确定该目标移动的运动状态,可以提高对目标物体的运动状态检测的准确度。可提高车辆行驶的安全性,使车辆驾驶更加自动化、智能化。
基于上述对目标物体运动状态检测方法的描述,本发明实施例提供又一种目标物体运动状态检测方法,请参见图9。可选的,所述方法可以由检测设备执行,检测设备可以包括摄像装置,该摄像装置可以包括主摄像装置及双目摄像装置,双目摄像装置包括左视觉摄像装置和右视觉摄像装置。本发明实施例如图9所示,所述目标物体运动状态检测方法可以包括如下步骤。
S901、检测设备在第一时间获取第一行驶环境图像。
S902、检测设备在第二时间获取第二行驶环境图像,该第一时间为该第二时间之前的时间。
本发明实施例中,对步骤S901和步骤S902的解释说明可以参见图3中对步骤S301和S302的解释说明,重复之处,不再赘述。
S903、检测设备从该第一行驶环境图像中提取目标物体的第一二 维特征点。
S904、检测设备从该第二行驶环境图像中确定与该第一二维特征点相匹配的第二二维特征点。
S905、检测设备确定该第一二维特征点对应的第一三维特征点在相机坐标系下的三维信息。
本发明实施例中,检测设备可以根据跟踪算法(Kanade–Lucas–Tomasi feature tracker,KLT)以及相机模型确定该第一三维特征点在相机坐标系下的三维信息。
在一个实施例中,步骤S905包括:从第三行驶环境图像中确定与该第一二维特征点匹配的第三二维特征点,该第三行驶环境图像和该第一行驶环境图像为双目摄像装置同时拍摄的两张图像。根据该第一二维特征点和该第三二维特征点确定第一深度信息,根据该第一深度信息确定该第一三维特征点在相机坐标系下的三维信息。
例如,假设第一行驶环境图像可以为双目摄像装置的左视觉摄像装置在第一时间拍摄得到,即第一行驶环境图像也可称为第一左图像。第三行驶环境图像为双目摄像装置的右视觉摄像装置在第一时间拍摄得到,即第三行驶图像也可称为第一右图像。如图10所示,第一二维特征点为p1,第一三维特征点为D1。检测设备可以特征点匹配算法从第一右图像中确定与该p1匹配的第三二维特征点,该特征点匹配算法可以为KLT算法等等。并根据p1和该第三二维特征点确定第一深度信息,可以根据该第一深度信息及相机模型确定D1在相机坐标系下的三维信息。其中,该相机模型可以是用于指示深度信息与三维特征点的相机坐标系下的三维信息的转换关系的模型。
S906、检测设备根据该第一三维特征点在相机坐标系下的三维信息确定该第一三维特征点在世界坐标系下的三维信息。
本发明实施例中,检测设备可以三维特征点在相机坐标系下的三维坐标信息与三维特征点在世界坐标系下的三维信息之间的转换关系,以及该第一三维特征点在相机坐标系下的三维信息确定该第一三维特征点在世界坐标系下的三维信息。
S907、检测设备确定该第二二维特征点对应的第二三维特征点在 相机坐标系下的三维信息。
本发明实施例中,检测设备可以根据跟踪算法以及相机模型确定该第二三维特征点在相机坐标系下的三维信息。
在一个实施例中,步骤S907包括:从第四行驶环境图像中确定与该第二二维特征点匹配的第四二维特征点,该第四行驶环境图像和该第二行驶环境图像为双目摄像装置同时拍摄的两张图像。根据该第二二维特征点和该第四二维特征点确定第二深度信息,根据该第二深度信息确定该第二三维特征点在相机坐标系下的三维信息。
例如,假设第二行驶环境图像可以为双目摄像装置的左视觉摄像装置在第二时间拍摄得到,即第二行驶环境图像也可称为第二左图像。第四行驶环境图像为双目摄像装置的右视觉摄像装置在第二时间拍摄得到,即第四行驶图像也可称为第二右图像。如图10所示,第二二维特征点为p2,第二三维特征点为D2。检测设备可以特征点匹配算法从第二右图像中确定与该p2匹配的第四二维特征点。并根据p2和该第四二维特征点确定第二深度信息,可以根据该第二深度信息及相机模型确定D2在相机坐标系下的三维信息。
S908、检测设备根据该第二三维特征点在相机坐标系下的三维信息确定该第二三维特征点在世界坐标系下的三维信息。
本发明实施例中,检测设备可以三维特征点在相机坐标系下的三维坐标信息与三维特征点在世界坐标系下的三维信息之间的转换关系,以及该第二三维特征点在相机坐标系下的三维信息确定该第二三维特征点在世界坐标系下的三维信息。
S909、检测设备根据该第一三维特征点在世界坐标系下的三维信息及该第二三维特征点在世界坐标系下的三维信息确定该目标物体的运动状态。
本发明实施例中,检测设备可以根据第一三维特征点在相机坐标系下的三维信息确定该第一三维特征点在世界坐标下的三维信息,并根据第二三维特征点在相机坐标系下的三维信息确定该第二三维特征点在世界坐标下的三维信息。进一步,根据该第一三维特征点在世界坐标下的三维信息及该第二三维特征点在世界坐标下的三维信息 确定目标物体的运动状态。可提高获取目标物体的运动状态的准确度。
基于上述对目标物体运动状态检测方法的描述,本发明实施例提供又一种目标物体运动状态检测方法,请参见图11,所述方法可以由检测设备执行。本发明实施例如图11所示,所述目标物体运动状态检测方法可以包括如下步骤。
S110、检测设备在第一时间获取第一行驶环境图像。
S111、检测设备在第二时间获取第二行驶环境图像,该第一时间为该第二时间之前的时间。
S112、检测设备从该第一行驶环境图像中提取目标物体的第一二维特征点。
S113、检测设备从该第二行驶环境图像中确定与该第一二维特征点相匹配的第二二维特征点。
S114、检测设备获取在该第一时间通过点云传感器得到的三维点在世界坐标系下的三维信息。
本发明实施例中,检测设备可以获取在该第一时间通过点云传感器得到的三维点在世界坐标系下的三维信息。其中,该点云传感器可以为雷达传感器、双目立体视觉传感器以及结构光等等。
S115、检测设备根据在该第一时间获取的三维点的三维信息确定该第一二维特征点对应的第一三维特征点在世界坐标系下的三维信息。例如,可以根据在该第一时间获取的三维点中与第一三维特征点相匹配的三维点的三维信息,确定该第一三维特征点在世界坐标系下的三维信息。
在一个实施例中,步骤S115包括:检测设备将在该第一时间通过点云传感器获取得到的三维点投影为二维特征点,从投影得到的二维特征点中确定与该第一二维特征点相匹配的第五二维特征点,将该第五二维特征点对应的三维点的三维信息确定为该第一二维特征点对应的第一三维特征点在世界坐标系下的三维信息。
检测设备可以将在该第一时间通过点云传感器获取得到的三维点投影为二维特征点,从投影得到的二维特征点中确定与该第一二维 特征点相匹配的第五二维特征点。由于第一二维特征点与第五二维特征点相匹配,且第一二维特征点与第五二维特征点均为第一时间获取的二维特征点,因此第五二维特征点对应的三维点与第一二维特征点对应的第一三维特征点相匹配。可以将该第五二维特征点对应的三维点的三维信息确定为该第一二维特征点对应的第一三维特征点在世界坐标系下的三维信息。
S116、检测设备获取在该第二时间通过该点云传感器得到的三维点在世界坐标系下的三维信息。
本发明实施例中,检测设备可以获取在该第二时间通过点云传感器得到的三维点在世界坐标系下的三维信息。
S117、检测设备根据在该第二时间获取的三维点的三维信息确定该第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息。例如，可以根据在该第二时间获取的三维点中与第二三维特征点相匹配的三维点的三维信息，确定该第二三维特征点在世界坐标系下的三维信息。
在一个实施例中,步骤S117包括:检测设备将在该第二时间通过点云传感器获取得到的三维点投影为二维特征点,从投影得到的二维特征点中确定与该第二二维特征点相匹配的第六二维特征点,将该第六二维特征点对应的三维点的三维信息确定为该第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息。
检测设备可以将在该第二时间通过点云传感器获取得到的三维点投影为二维特征点,从投影得到的二维特征点中确定与该第二二维特征点相匹配的第六二维特征点。由于第二二维特征点与第六二维特征点相匹配,且第二二维特征点与第六二维特征点均为第二时间获取的二维特征点,因此第六二维特征点对应的三维点与第二二维特征点对应的第二三维特征点相匹配。可以将该第六二维特征点对应的三维点的三维信息确定为该第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息。
S118、检测设备根据该第一三维特征点在世界坐标系下的三维信息及该第二三维特征点在世界坐标系下的三维信息确定该目标物体的运动状态。
本发明实施例中，检测设备可以通过点云传感器得到的三维信息确定第一三维特征点在世界坐标系下的三维信息及第二三维特征点在世界坐标系下的三维信息。根据第一三维特征点在世界坐标系下的三维信息及第二三维特征点在世界坐标系下的三维信息确定该目标物体的运动状态，可提高获取目标物体的运动状态的准确度。
基于上述对目标物体运动状态检测方法的描述,本发明实施例提供又一种目标物体运动状态检测方法,请参见图12,所述方法可以由检测设备执行。本发明实施例如图12所示,所述目标物体运动状态检测方法可以包括如下步骤。
S130、检测设备在第一时间获取第一行驶环境图像。
S131、检测设备在第二时间获取第二行驶环境图像，该第一时间为该第二时间之前的时间。例如，检测设备可以在第N时间通过左视觉摄像装置拍摄得到第N行驶环境图像(即第N左图像)，并在第N+1时间通过左视觉摄像装置拍摄得到第N+1行驶环境图像(即第N+1左图像)。其中，N可以为正整数。相应的，N可以为一，检测设备可以在第一时间通过左视觉摄像装置拍摄得到第一行驶环境图像(即第一左图像)，并在第二时间通过左视觉摄像装置拍摄得到第二行驶环境图像(即第二左图像)。
S132、检测设备从该第一行驶环境图像中提取目标物体的至少一个第一二维特征点。例如，如图13所示，检测设备可以从第N左图像中提取目标物体的至少一个第N二维特征点，该至少一个第N二维特征点可以表示为pn。
在一个实施例中，步骤S132包括：在该第一行驶环境图像中确定目标物体的第一特征区域，从该第一特征区域中提取该目标物体的至少一个第一二维特征点。
检测设备可以采用前述的检测算法在该第一行驶环境图像中确定目标物体的第一特征区域，从该第一特征区域中提取该目标物体的至少一个第一二维特征点。
S133、检测设备根据该至少一个第一二维特征点建立该目标物体的物体模型,该物体模型中包括该至少一个第一二维特征点。
本发明实施例中，为了持续更新目标物体的运动状态，检测设备可以根据该至少一个第一二维特征点建立该目标物体的物体模型。可以理解的是，该物体模型中包括该至少一个第一二维特征点，指的是该物体模型基于该至少一个第一二维特征点而生成，且该物体模型能够匹配该第一二维特征点。例如，如图14A所示，假设第一行驶环境图像中三维点的范围如图中虚线所示意，此时可以提取到目标物体的三个第一二维特征点，分别为p1、p2及p3。根据目标物体的三个第一二维特征点建立的物体模型能够匹配二维特征点p1、p2及p3，即该物体模型至少包括二维特征点p1、p2及p3。
S134、检测设备根据该目标物体的物体模型从该第二行驶环境图像中提取该目标物体的至少一个第二二维特征点，该第二二维特征点为该第二行驶环境图像中除该物体模型中的二维特征点之外的该目标物体的二维特征点。例如，如图14B所示，该第二行驶环境图像中三维点的范围如图中虚线所示意，由于在本车辆前方行驶的目标物体发生了移动，其虚线示意范围也发生了变化，更偏向于本车辆所在的一侧，此时该目标物体的二维特征点可以包括p2、p3、p4、p5及p6，该目标物体的物体模型包括p1、p2及p3。p2和p3为重复的二维特征点，从该第二行驶环境图像中该目标物体的二维特征点中去除重复的二维特征点，得到该目标物体的至少一个第二二维特征点，包括p4、p5及p6。
回到图13，检测设备可以从第N+1左图像中提取目标物体的所有特征点，提取的特征点可以表示为p'n+1。将p'n+1中的特征点与目标物体的物体模型中的特征点进行比对，移除p'n+1中与目标物体的物体模型中重复的特征点，得到至少一个第N+1二维特征点，可表示为p″n+1。并将p″n+1的特征点添加至目标物体的物体模型中，以更新目标物体的物体模型。因此，针对图14B所示的情形，目标物体的物体模型在添加了新的p4、p5和p6三个第二二维特征点后，可以至少基于p2至p6五个二维特征点而生成新的物体模型，从而更新原有的目标物体的物体模型。
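下面给出一段示意性的Python代码草图，以特征点集合的形式说明物体模型的建立与更新流程。草图中以像素距离阈值近似代替特征点匹配来判断"重复"，实际实现中可由KLT等特征点匹配算法给出匹配关系；类名与参数均为示例性假设：

    import numpy as np

    class ObjectModel:
        def __init__(self, initial_points, dup_dist=2.0):
            # initial_points: 目标物体的至少一个第一二维特征点(像素坐标)
            self.points = [np.asarray(p, dtype=float) for p in initial_points]
            self.dup_dist = dup_dist                     # 判断"重复"的像素距离阈值(示意性简化)

        def _is_duplicate(self, p):
            # 实际实现中可由特征点匹配算法判断是否为同一特征点, 此处以距离阈值近似
            return any(np.linalg.norm(p - q) < self.dup_dist for q in self.points)

        def update(self, new_frame_points):
            # new_frame_points: 新一帧中目标物体的全部二维特征点(相当于p'n+1)
            added = []                                   # 去重后新增的第二二维特征点(相当于p″n+1)
            for p in map(np.asarray, new_frame_points):
                if not self._is_duplicate(p):
                    self.points.append(p)
                    added.append(p)
            return added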
在一个实施例中,步骤S134包括:根据该目标物体的物体模型在该第二行驶环境图像中确定该目标物体的第二特征区域,从该第二特征区域中提取该目标物体的至少一个第二二维特征点。
检测设备可以将第二行驶环境图像中与该目标物体的物体模型相匹配的特征点数量最多的特征区域，确定为该目标物体的第二特征区域，从该第二特征区域中提取该目标物体的一个或多个第二二维特征点。
在一个实施例中，上述根据该目标物体的物体模型在该第二行驶环境图像中确定该目标物体的第二特征区域，包括：获取该第二行驶环境图像中的至少一个物体特征区域，确定每个物体特征区域中的特征点与该目标物体的物体模型中的特征点相匹配的数量，将该至少一个物体特征区域中特征点匹配的数量大于目标预设值的物体的特征区域确定为该目标物体的第二特征区域。该目标预设值可以是用户设置的，也可以是检测设备系统默认的。例如，目标预设值可以为3，该第二行驶环境图像中包括物体1的特征区域和物体2的特征区域。若物体1的特征区域中的特征点与该目标物体的物体模型中的特征点相匹配的数量为5，物体2的特征区域中的特征点与该目标物体的物体模型中的特征点相匹配的数量为2，则将物体1的特征区域确定为该目标物体的第二特征区域。
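以下为一段示意性的Python代码草图，说明按"匹配数量大于目标预设值"并在多个满足条件的区域中取匹配数量最多者的原则选取第二特征区域的一种可能实现；其中match_fn表示由特征点匹配算法给出的匹配判断，属于示例性假设：

    def select_second_feature_region(regions, model_points, match_fn, target_threshold=3):
        # regions: 列表, 每个元素为(区域标识, 该物体特征区域内的二维特征点列表)
        # model_points: 目标物体的物体模型中的特征点; match_fn(p, q): 判断两个特征点是否相匹配
        best = None
        for region_id, region_points in regions:
            count = sum(1 for p in region_points
                        if any(match_fn(p, q) for q in model_points))
            # 匹配数量须大于目标预设值; 若多个区域满足条件, 取匹配数量最多者
            if count > target_threshold and (best is None or count > best[1]):
                best = (region_id, count)
        return best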
S135、检测设备根据该第二二维特征点更新该目标物体的物体模型，得到更新后的该目标物体的物体模型。可理解的是，根据该第二二维特征点更新该目标物体的物体模型，可以是指将第二二维特征点添加至该目标物体的物体模型；即指的是基于原有的二维特征点和添加的二维特征点重新生成物体模型，并且新生成的物体模型(即更新后的物体模型)能够匹配原有二维特征点以及新添加的二维特征点。即更新后的物体模型能够匹配第一二维特征点及第二二维特征点。例如，如图14A和图14B所示，第二二维特征点包括p4、p5及p6，该目标物体的物体模型包括p1、p2及p3。采用p4、p5及p6更新该目标物体的物体模型，该更新后的目标物体的物体模型能够匹配原有二维特征点p2、p3，以及新添加的二维特征点p4、p5及p6，即更新后的物体模型至少包括p2、p3、p4、p5及p6。在一些实施例下，更新后的物体模型还可以包括原有的二维特征点，即同时包括p1；在另一些实施例下，更新后的物体模型可以不包括原有的二维特征点，即不包括p1。
S136、检测设备根据该第一二维特征点以及该第二行驶环境图像中与该第一二维特征点相匹配的二维特征点确定该目标物体的运动状态。例如，如图13所示，若第N+1左图像中的二维特征点pn+1与第N左图像中的第N二维特征点pn相匹配，可根据第N右图像计算第N深度信息，第N左图像与第N右图像为双目摄像装置同时拍摄得到的。根据第N深度信息确定二维特征点pn对应的三维特征点Pn在世界坐标系下的三维信息。可根据第N+1右图像计算第N+1深度信息，第N+1左图像与第N+1右图像为双目摄像装置同时拍摄得到的。根据第N+1深度信息确定二维特征点pn+1对应的三维特征点Pn+1在世界坐标系下的三维信息。进一步，根据Pn+1在世界坐标系下的三维信息及Pn在世界坐标系下的三维信息确定该目标物体的运动状态。相应的，检测设备可以根据该第一二维特征点以及该第二左图像中与该第一二维特征点相匹配的二维特征点确定该目标物体的运动状态。
在一个实施例中,检测设备在第三时间获取第三行驶环境图像,该第二时间为该第三时间之前的时间,根据该第二行驶环境图像和该第三行驶环境图像中相匹配的该目标物体的二维特征点更新该目标物体的运动状态。
检测设备可以根据行驶环境图像实时更新目标物体的运动状态，以便提高获取目标物体的运动状态的准确度。具体的，检测设备可以根据该第二行驶环境图像和该第三行驶环境图像中相匹配的该目标物体的二维特征点更新该目标物体的运动状态。
本发明实施例中，检测设备可以根据第一行驶环境图像中目标物体的至少一个第一二维特征点建立目标物体的物体模型，并根据第二行驶环境图像中的至少一个第二二维特征点更新目标物体的物体模型。并根据该第一二维特征点以及该第二行驶环境图像中与该第一二维特征点相匹配的二维特征点确定该目标物体的运动状态。可以得到该目标物体持续的运动状态，可提高获取目标物体的运动状态的准确度。
请参见图15,图15是本发明实施例提供的检测设备的结构示意图。具体的,所述检测设备包括:处理器101、存储器102及摄像装置103。
所述存储器102可以包括易失性存储器(volatile memory)；存储器102也可以包括非易失性存储器(non-volatile memory)；存储器102还可以包括上述种类的存储器的组合。所述处理器101可以是中央处理器(central processing unit，CPU)。所述处理器101还可以进一步包括硬件芯片。上述硬件芯片可以是专用集成电路(application-specific integrated circuit，ASIC)，可编程逻辑器件(programmable logic device，PLD)或其组合。上述PLD可以是复杂可编程逻辑器件(complex programmable logic device，CPLD)，现场可编程逻辑门阵列(field-programmable gate array，FPGA)或其任意组合。
摄像装置103可以用于拍摄图像或视频。例如,车辆上搭载的摄像装置103可以在车辆行驶过程中拍摄行驶周围环境图像或者视频。
在一个实施例中,所述存储器,用于存储程序指令;所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
通过所述摄像装置在第一时间拍摄得到第一行驶环境图像;在第二时间拍摄得到第二行驶环境图像,所述第一时间为所述第二时间之前的时间,所述第一行驶环境图像和所述第二行驶环境图像包括目标物体;
根据所述第一行驶环境图像确定所述目标物体的第一三维特征点在世界坐标系下的三维信息;
根据所述第二行驶环境图像确定所述目标物体的第二三维特征点在世界坐标系下的三维信息,所述第二三维特征点与所述第一三维特征点相匹配;
根据所述第一三维特征点在世界坐标系下的三维信息及所述第二三维特征点在世界坐标系下的三维信息确定所述目标物体的运动状态。
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
从所述第一行驶环境图像中提取目标物体的第一二维特征点;
确定所述第一二维特征点对应的第一三维特征点在世界坐标系下的三维信息;
所述根据所述第二行驶环境图像确定所述目标物体的第二三维特征点在世界坐标系下的三维信息,包括:
从所述第二行驶环境图像中确定与所述第一二维特征点相匹配的第二二维特征点;
确定所述第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息。
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
在所述第一行驶环境图像中确定目标物体的第一特征区域;
从所述第一特征区域中提取所述目标物体的第一二维特征点。
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
通过摄像装置拍摄目标图像;
在所述目标图像中确定所述目标物体的第二特征区域;
所述在所述第一行驶环境图像中确定目标物体的第一特征区域,包括:
将所述第二特征区域投影到所述第一行驶环境图像中,并将得到的投影区域确定为所述目标物体的第一特征区域。
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
确定所述目标物体的第一参数是否满足预设条件，所述第一参数包括所述第二特征区域对应的属性、所述目标物体所在车道和所述目标物体的行驶方向中的至少一种，所述属性包括所述第二特征区域的尺寸和/或形状；
若是,则执行所述将所述第二特征区域投影到所述第一行驶环境图像中,并将得到的投影区域确定为所述目标物体的第一特征区域的步骤。
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
确定所述目标物体的第二参数是否满足预设条件,所述第二参数包括所述第一特征区域对应的属性、所述目标物体所在车道和所述目标物体的行驶方向中的至少一种,所述属性包括所述第一特征区域的尺寸和/或形状;
若是,则执行所述从所述第一特征区域中提取第一二维特征点的步骤。
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
确定所述第一二维特征点对应的第一三维特征点在相机坐标系下的三维信息;
根据所述第一三维特征点在相机坐标系下的三维信息确定所述第一三维特征点在世界坐标系下的三维信息;
所述确定所述第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息,包括:
确定所述第二二维特征点对应的第二三维特征点在相机坐标系下的三维信息;
根据所述第二三维特征点在相机坐标系下的三维信息确定所述第二三维特征点在世界坐标系下的三维信息。
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
从第三行驶环境图像中确定与所述第一二维特征点匹配的第三二维特征点,所述第三行驶环境图像和所述第一行驶环境图像为双目摄像装置同时拍摄的两张图像;
根据所述第一二维特征点和所述第三二维特征点确定第一深度信息;
根据所述第一深度信息确定所述第一三维特征点在相机坐标系下的三维信息;
所述确定所述第二三维特征点在相机坐标系下的三维信息,包括:
从第四行驶环境图像中确定与所述第二二维特征点匹配的第四二维特征点,所述第四行驶环境图像和所述第二行驶环境图像为双目摄像装置同时拍摄的两张图像;
根据所述第二二维特征点和所述第四二维特征点确定第二深度信息;
根据所述第二深度信息确定所述第二三维特征点在相机坐标系下的三维信息。
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
获取在所述第一时间通过点云传感器得到的三维点在世界坐标系下的三维信息;
根据在所述第一时间获取的三维点的三维信息确定所述第一二维特征点对应的第一三维特征点在世界坐标系下的三维信息。
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
获取在所述第二时间通过所述点云传感器得到的三维点在世界坐标系下的三维信息;
根据在所述第二时间获取的三维点的三维信息确定所述第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息。
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
将在所述第一时间通过所述点云传感器获取得到的三维点投影为二维特征点;
从投影得到的二维特征点中确定与所述第一二维特征点相匹配的第五二维特征点;
将所述第五二维特征点对应的三维点的三维信息确定为所述第一二维特征点对应的第一三维特征点在世界坐标系下的三维信息。
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
将在所述第二时间通过所述点云传感器获取得到的三维点投影为二维特征点;
从投影得到的二维特征点中确定与所述第二二维特征点相匹配的第六二维特征点;
将所述第六二维特征点对应的三维点的三维信息确定为所述第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息。
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
将所述第一三维特征点在世界坐标系下的三维信息投影至鸟瞰视角下,得到第一鸟瞰视觉坐标;
将所述第二三维特征点在世界坐标系下的三维信息投影至鸟瞰视角下,得到第二鸟瞰视觉坐标;
根据所述第一鸟瞰视觉坐标和所述第二鸟瞰视觉坐标确定所述目标物体的运动状态。
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
根据所述第一鸟瞰视觉坐标和所述第二鸟瞰视觉坐标确定所述目标物体在第一方向和第二方向上的位移信息,和/或在第一绕向上的旋转角度信息;
根据所述目标物体的所述位移信息和/或所述旋转角度信息确定所述目标物体的运动状态。
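作为上述鸟瞰视角处理方式的示意性说明，下面给出一段Python代码草图，假设取世界坐标系的水平两维作为第一方向与第二方向，由目标物体上两个相互匹配的特征点在两个时刻的鸟瞰视觉坐标计算位移信息及第一绕向上的旋转角度信息；函数名与参数均为示例性假设：

    import numpy as np

    def bev_displacement_and_yaw(P1_t1, P2_t1, P1_t2, P2_t2, dt):
        # P1_t1、P2_t1: 目标物体上两个特征点在第一时间于世界坐标系下的三维信息
        # P1_t2、P2_t2: 同两个特征点在第二时间于世界坐标系下的三维信息; dt: 两个时间之间的间隔
        a1, b1 = np.asarray(P1_t1)[:2], np.asarray(P2_t1)[:2]   # 第一鸟瞰视觉坐标
        a2, b2 = np.asarray(P1_t2)[:2], np.asarray(P2_t2)[:2]   # 第二鸟瞰视觉坐标
        center1, center2 = (a1 + b1) / 2.0, (a2 + b2) / 2.0
        displacement = center2 - center1                         # 第一方向和第二方向上的位移信息
        speed = float(np.linalg.norm(displacement)) / dt         # 移动速度
        v1, v2 = b1 - a1, b2 - a2
        yaw1 = np.arctan2(v1[1], v1[0])                          # 两特征点连线在第一时间的朝向
        yaw2 = np.arctan2(v2[1], v2[0])                          # 两特征点连线在第二时间的朝向
        delta = yaw2 - yaw1
        yaw_change = float(np.arctan2(np.sin(delta), np.cos(delta)))  # 第一绕向上的旋转角度信息
        return displacement, speed, yaw_change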
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
根据所述第一三维特征点在世界坐标系下的三维信息及所述第二三维特征点在世界坐标系下的三维信息确定所述目标物体在第一方向和第二方向上的位移信息,和/或在第一绕向上的旋转角度信息;
根据所述目标物体的所述位移信息和/或所述旋转角度信息确定所述目标物体的运动状态。
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
根据所述目标物体的所述位移信息确定所述目标物体的移动速度;和/或,根据所述旋转角度信息确定所述目标物体的旋转方向。
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
根据所述目标物体的位移信息确定目标移动速度;
将对所述目标移动速度滤波后得到的速度确定为所述目标物体的移动速度。
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
从所述第一特征区域中提取多个二维特征点;
根据预设算法从所述多个二维特征点中筛选得到第一二维特征点。
可选的,所述目标物体的数量为一个或多个。
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
通过所述摄像装置在第一时间获取第一行驶环境图像;在第二时间获取第二行驶环境图像,所述第一时间为所述第二时间之前的时间,所述第一行驶环境图像和所述第二行驶环境图像中包括目标物体;
从所述第一行驶环境图像中提取目标物体的至少一个第一二维特征点;
根据所述至少一个第一二维特征点建立所述目标物体的物体模型；
根据所述目标物体的物体模型从所述第二行驶环境图像中提取所述目标物体的至少一个第二二维特征点;
根据所述第二二维特征点更新所述目标物体的物体模型,得到更新后的所述目标物体的物体模型;
根据所述第一二维特征点以及所述第二行驶环境图像中与所述第一二维特征点相匹配的二维特征点确定所述目标物体的运动状态。
可选的,更新后的所述目标物体的物体模型包括所述至少一个第一二维特征点和所述至少一个第二二维特征点。
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
在第三时间获取第三行驶环境图像,所述第二时间为所述第三时间之前的时间;
根据所述第二行驶环境图像和所述第三行驶环境图像中相匹配的所述目标物体的二维特征点更新所述目标物体的运动状态。
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
在所述第一行驶环境图像中确定目标物体的第一特征区域;
从所述第一特征区域中提取所述目标物体的至少一个第一二维特征点;
根据所述目标物体的物体模型从所述第二行驶环境图像中提取所述目标物体的第二二维特征点,包括:
根据所述目标物体的物体模型在所述第二行驶环境图像中确定所述目标物体的第二特征区域;
从所述第二特征区域中提取所述目标物体的至少一个第二二维特征点。
可选的,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
获取所述第二行驶环境图像中的至少一个物体特征区域;
确定每个物体特征区域中的特征点与所述目标物体的物体模型中的特征点相匹配的数量;
将所述至少一个物体特征区域中特征点匹配的数量大于目标预设值的物体的特征区域确定为所述目标物体的第二特征区域。
在本发明的实施例中还提供了一种计算机可读存储介质，所述计算机可读存储介质存储有计算机程序，所述计算机程序被处理器执行时实现本发明图2、图3、图9、图11和图12所对应实施例中描述的目标物体运动状态检测方法，也可实现图15所对应的本发明实施例所述的检测设备，在此不再赘述。
所述计算机可读存储介质可以是前述任一实施例所述的检测设备的内部存储单元，例如设备的硬盘或内存。所述计算机可读存储介质也可以是所述检测设备的外部存储设备，例如所述设备上配备的插接式硬盘，智能存储卡(Smart Media Card，SMC)，安全数字(Secure Digital，SD)卡，闪存卡(Flash Card)等。进一步地，所述计算机可读存储介质还可以既包括所述设备的内部存储单元也包括外部存储设备。所述计算机可读存储介质用于存储所述计算机程序以及所述检测设备所需的其他程序和数据。所述计算机可读存储介质还可以用于暂时地存储已经输出或者将要输出的数据。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程，是可以通过计算机程序来指令相关的硬件来完成，所述的程序可存储于一计算机可读取存储介质中，所述程序在执行时，可包括如上述各方法的实施例的流程。其中，所述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory，ROM)或随机存储记忆体(Random Access Memory，RAM)等。

以上所揭露的仅为本发明较佳实施例而已，当然不能以此来限定本发明之权利范围，因此依本发明权利要求所作的等同变化，仍属本发明所涵盖的范围。

Claims (45)

  1. 一种目标物体运动状态检测方法,其特征在于,所述方法包括:
    在第一时间获取第一行驶环境图像;
    在第二时间获取第二行驶环境图像,所述第一时间为所述第二时间之前的时间,所述第一行驶环境图像和所述第二行驶环境图像包括目标物体;
    根据所述第一行驶环境图像确定所述目标物体的第一三维特征点在世界坐标系下的三维信息;
    根据所述第二行驶环境图像确定所述目标物体的第二三维特征点在世界坐标系下的三维信息,所述第二三维特征点与所述第一三维特征点相匹配;
    根据所述第一三维特征点在世界坐标系下的三维信息及所述第二三维特征点在世界坐标系下的三维信息确定所述目标物体的运动状态。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述第一行驶环境图像确定所述目标物体的第一三维特征点在世界坐标系下的三维信息,包括:
    从所述第一行驶环境图像中提取目标物体的第一二维特征点;
    确定所述第一二维特征点对应的第一三维特征点在世界坐标系下的三维信息;
    所述根据所述第二行驶环境图像确定所述目标物体的第二三维特征点在世界坐标系下的三维信息,包括:
    从所述第二行驶环境图像中确定与所述第一二维特征点相匹配的第二二维特征点;
    确定所述第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息。
  3. 根据权利要求2所述的方法，其特征在于，所述从所述第一行驶环境图像中提取目标物体的第一二维特征点，包括：
    在所述第一行驶环境图像中确定目标物体的第一特征区域;
    从所述第一特征区域中提取所述目标物体的第一二维特征点。
  4. 根据权利要求3所述的方法,其特征在于,所述方法还包括:
    通过摄像装置拍摄目标图像;
    在所述目标图像中确定所述目标物体的第二特征区域;
    所述在所述第一行驶环境图像中确定目标物体的第一特征区域,包括:
    将所述第二特征区域投影到所述第一行驶环境图像中,并将得到的投影区域确定为所述目标物体的第一特征区域。
  5. 根据权利要求4所述的方法,其特征在于,所述方法还包括:
    确定所述目标物体的第一参数是否满足预设条件,所述第一参数包括所述第二特征区域对应的属性、所述目标物体所在车道和所述目标物体的行驶方向中的至少一种,所述属性包括所述第二特征区域的尺寸和/或形状;
    若是,则执行所述将所述第二特征区域投影到所述第一行驶环境图像中,并将得到的投影区域确定为所述目标物体的第一特征区域的步骤。
  6. 根据权利要求4所述的方法,其特征在于,所述将得到的投影区域确定为所述目标物体的第一特征区域之后,所述方法还包括:
    确定所述目标物体的第二参数是否满足预设条件,所述第二参数包括所述第一特征区域对应的属性、所述目标物体所在车道和所述目标物体的行驶方向中的至少一种,所述属性包括所述第一特征区域的尺寸和/或形状;
    若是,则执行所述从所述第一特征区域中提取第一二维特征点的步骤。
  7. 根据权利要求2-6中任一项所述的方法,其特征在于,所述确定所述第一二维特征点对应的第一三维特征点在世界坐标系下的三维信息,包括:
    确定所述第一二维特征点对应的第一三维特征点在相机坐标系下的三维信息;
    根据所述第一三维特征点在相机坐标系下的三维信息确定所述第一三维特征点在世界坐标系下的三维信息;
    所述确定所述第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息,包括:
    确定所述第二二维特征点对应的第二三维特征点在相机坐标系下的三维信息;
    根据所述第二三维特征点在相机坐标系下的三维信息确定所述第二三维特征点在世界坐标系下的三维信息。
  8. 根据权利要求7所述的方法,其特征在于,所述确定所述第一二维特征点对应的第一三维特征点在相机坐标系下的三维信息,包括:
    从第三行驶环境图像中确定与所述第一二维特征点匹配的第三二维特征点,所述第三行驶环境图像和所述第一行驶环境图像为双目摄像装置同时拍摄的两张图像;
    根据所述第一二维特征点和所述第三二维特征点确定第一深度信息;
    根据所述第一深度信息确定所述第一三维特征点在相机坐标系下的三维信息;
    所述确定所述第二二维特征点对应的第二三维特征点在相机坐标系下的三维信息,包括:
    从第四行驶环境图像中确定与所述第二二维特征点匹配的第四二维特征点,所述第四行驶环境图像和所述第二行驶环境图像为双目摄像装置同时拍摄的两张图像;
    根据所述第二二维特征点和所述第四二维特征点确定第二深度信息；
    根据所述第二深度信息确定所述第二三维特征点在相机坐标系下的三维信息。
  9. 根据权利要求2-6任一项所述的方法,其特征在于,所述确定所述第一二维特征点对应的第一三维特征点在世界坐标系下的三维信息,包括:
    获取在所述第一时间通过点云传感器得到的三维点在世界坐标系下的三维信息;
    根据在所述第一时间获取的三维点的三维信息确定所述第一二维特征点对应的第一三维特征点在世界坐标系下的三维信息;
    所述确定所述第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息,包括:
    获取在所述第二时间通过所述点云传感器得到的三维点在世界坐标系下的三维信息;
    根据在所述第二时间获取的三维点的三维信息确定所述第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息。
  10. 根据权利要求9所述的方法,其特征在于,所述根据在所述第一时间获取的三维点的三维信息确定所述第一二维特征点对应的第一三维特征点在世界坐标系下的三维信息,包括:
    将在所述第一时间通过所述点云传感器获取得到的三维点投影为二维特征点；
    从投影得到的二维特征点中确定与所述第一二维特征点相匹配的第五二维特征点;
    将所述第五二维特征点对应的三维点的三维信息确定为所述第一二维特征点对应的第一三维特征点在世界坐标系下的三维信息;
    所述根据在所述第二时间获取的三维点的三维信息确定所述第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息,包括:
    将在所述第二时间通过所述点云传感器获取得到的三维点投影为二维特征点；
    从投影得到的二维特征点中确定与所述第二二维特征点相匹配的第六二维特征点;
    将所述第六二维特征点对应的三维点的三维信息确定为所述第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息。
  11. 根据权利要求1所述的方法,其特征在于,所述根据所述第一三维特征点在世界坐标系下的三维信息及所述第二三维特征点在世界坐标系下的三维信息确定所述目标物体的运动状态,包括:
    将所述第一三维特征点在世界坐标系下的三维信息投影至鸟瞰视角下,得到第一鸟瞰视觉坐标;
    将所述第二三维特征点在世界坐标系下的三维信息投影至鸟瞰视角下,得到第二鸟瞰视觉坐标;
    根据所述第一鸟瞰视觉坐标和所述第二鸟瞰视觉坐标确定所述目标物体的运动状态。
  12. 根据权利要求11所述的方法,其特征在于,所述根据所述第一鸟瞰视觉坐标和所述第二鸟瞰视觉坐标确定所述目标物体的运动状态,包括:
    根据所述第一鸟瞰视觉坐标和所述第二鸟瞰视觉坐标确定所述目标物体在第一方向和第二方向上的位移信息,和/或在第一绕向上的旋转角度信息;
    根据所述目标物体的所述位移信息和/或所述旋转角度信息确定所述目标物体的运动状态。
  13. 根据权利要求1所述的方法,其特征在于,所述根据所述第一三维特征点在世界坐标系下的三维信息及所述第二三维特征点在世界坐标系下的三维信息确定所述目标物体的运动状态,包括:
    根据所述第一三维特征点在世界坐标系下的三维信息及所述第二三维特征点在世界坐标系下的三维信息确定所述目标物体在第一方向和第二方向上的位移信息，和/或在第一绕向上的旋转角度信息；
    根据所述目标物体的所述位移信息和/或所述旋转角度信息确定所述目标物体的运动状态。
  14. 根据权利要求12或13所述的方法,其特征在于,所述运动状态包括所述目标物体的移动速度或/和旋转方向,所述根据所述目标物体的所述位移信息和/或所述旋转角度信息确定所述目标物体的运动状态,包括:
    根据所述目标物体的所述位移信息确定所述目标物体的移动速度;和/或,根据所述旋转角度信息确定所述目标物体的旋转方向。
  15. 根据权利要求14所述的方法,所述根据所述目标物体的所述位移信息确定所述目标物体的移动速度,包括:
    根据所述目标物体的位移信息确定目标移动速度;
    将对所述目标移动速度滤波后得到的速度确定为所述目标物体的移动速度。
  16. 根据权利要求3-6任一项所述的方法,其特征在于,所述从所述第一特征区域中提取第一二维特征点,包括:
    从所述第一特征区域中提取多个二维特征点;
    根据预设算法从所述多个二维特征点中筛选得到第一二维特征点。
  17. 根据权利要求1-6任一项所述的方法,其特征在于,所述目标物体的数量为一个或多个。
  18. 一种目标物体运动状态检测方法,其特征在于,所述方法包括:
    在第一时间获取第一行驶环境图像;
    在第二时间获取第二行驶环境图像,所述第一时间为所述第二时间之前的时间,所述第一行驶环境图像和所述第二行驶环境图像中包括目标物体;
    从所述第一行驶环境图像中提取目标物体的至少一个第一二维特征点;
    根据所述至少一个第一二维特征点建立所述目标物体的物体模型；
    根据所述目标物体的物体模型从所述第二行驶环境图像中提取所述目标物体的至少一个第二二维特征点;
    根据所述第二二维特征点更新所述目标物体的物体模型,得到更新后的所述目标物体的物体模型;
    根据所述第一二维特征点以及所述第二行驶环境图像中与所述第一二维特征点相匹配的二维特征点确定所述目标物体的运动状态。
  19. 根据权利要求18所述的方法,其特征在于,更新后的所述目标物体的物体模型包括所述至少一个第一二维特征点和所述至少一个第二二维特征点。
  20. 根据权利要求18所述的方法,其特征在于,所述方法还包括:
    在第三时间获取第三行驶环境图像,所述第二时间为所述第三时间之前的时间;
    根据所述第二行驶环境图像和所述第三行驶环境图像中相匹配的所述目标物体的二维特征点更新所述目标物体的运动状态。
  21. 根据权利要求18-20任一项所述的方法,其特征在于,所述从所述第一行驶环境图像中提取目标物体的至少一个第一二维特征点,包括:
    在所述第一行驶环境图像中确定目标物体的第一特征区域;
    从所述第一特征区域中提取所述目标物体的至少一个第一二维特征点；
    根据所述目标物体的物体模型从所述第二行驶环境图像中提取所述目标物体的第二二维特征点,包括:
    根据所述目标物体的物体模型在所述第二行驶环境图像中确定所述目标物体的第二特征区域;
    从所述第二特征区域中提取所述目标物体的至少一个第二二维特征点。
  22. 根据权利要求21所述的方法,其特征在于,所述根据所述目标物体的物体模型在所述第二行驶环境图像中确定所述目标物体的第二特征区域,包括:
    获取所述第二行驶环境图像中的至少一个物体特征区域;
    确定每个物体特征区域中的特征点与所述目标物体的物体模型中的特征点相匹配的数量;
    将所述至少一个物体特征区域中特征点匹配的数量大于目标预设值的物体的特征区域确定为所述目标物体的第二特征区域。
  23. 一种检测设备,其特征在于,所述检测设备包括存储器、处理器、摄像装置;
    所述摄像装置,用于获取行驶环境图像;
    所述存储器,用于存储程序指令;
    所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
    通过所述摄像装置在第一时间拍摄得到第一行驶环境图像;在第二时间拍摄得到第二行驶环境图像,所述第一时间为所述第二时间之前的时间,所述第一行驶环境图像和所述第二行驶环境图像包括目标物体;
    根据所述第一行驶环境图像确定所述目标物体的第一三维特征点在世界坐标系下的三维信息;
    根据所述第二行驶环境图像确定所述目标物体的第二三维特征点在世界坐标系下的三维信息，所述第二三维特征点与所述第一三维特征点相匹配；
    根据所述第一三维特征点在世界坐标系下的三维信息及所述第二三维特征点在世界坐标系下的三维信息确定所述目标物体的运动状态。
  24. 根据权利要求23所述的设备,其特征在于,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
    从所述第一行驶环境图像中提取目标物体的第一二维特征点;
    确定所述第一二维特征点对应的第一三维特征点在世界坐标系下的三维信息;
    所述根据所述第二行驶环境图像确定所述目标物体的第二三维特征点在世界坐标系下的三维信息,包括:
    从所述第二行驶环境图像中确定与所述第一二维特征点相匹配的第二二维特征点;
    确定所述第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息。
  25. 根据权利要求24所述的设备,其特征在于,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
    在所述第一行驶环境图像中确定目标物体的第一特征区域;
    从所述第一特征区域中提取所述目标物体的第一二维特征点。
  26. 根据权利要求25所述的设备,其特征在于,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
    通过摄像装置拍摄目标图像;
    在所述目标图像中确定所述目标物体的第二特征区域;
    所述在所述第一行驶环境图像中确定目标物体的第一特征区域,包括:
    将所述第二特征区域投影到所述第一行驶环境图像中,并将得到的投影区域确定为所述目标物体的第一特征区域。
  27. 根据权利要求26所述的设备,其特征在于,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
    确定所述目标物体的第一参数是否满足预设条件,所述第一参数包括所述第二特征区域对应的属性、所述目标物体所在车道和所述目标物体的行驶方向中的至少一种,所述属性包括所述第二特征区域的尺寸和/或形状;
    若是,则执行所述将所述第二特征区域投影到所述第一行驶环境图像中,并将得到的投影区域确定为所述目标物体的第一特征区域的步骤。
  28. 根据权利要求26所述的设备,其特征在于,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
    确定所述目标物体的第二参数是否满足预设条件,所述第二参数包括所述第一特征区域对应的属性、所述目标物体所在车道和所述目标物体的行驶方向中的至少一种,所述属性包括所述第一特征区域的尺寸和/或形状;
    若是,则执行所述从所述第一特征区域中提取第一二维特征点的步骤。
  29. 根据权利要求24-28中任一项所述的设备,其特征在于,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
    确定所述第一二维特征点对应的第一三维特征点在相机坐标系下的三维信息;
    根据所述第一三维特征点在相机坐标系下的三维信息确定所述第一三维特征点在世界坐标系下的三维信息;
    所述确定所述第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息,包括:
    确定所述第二二维特征点对应的第二三维特征点在相机坐标系下的三维信息;
    根据所述第二三维特征点在相机坐标系下的三维信息确定所述第二三维特征点在世界坐标系下的三维信息。
  30. 根据权利要求29所述的设备,其特征在于,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
    从第三行驶环境图像中确定与所述第一二维特征点匹配的第三二维特征点,所述第三行驶环境图像和所述第一行驶环境图像为双目摄像装置同时拍摄的两张图像;
    根据所述第一二维特征点和所述第三二维特征点确定第一深度信息;
    根据所述第一深度信息确定所述第一三维特征点在相机坐标系下的三维信息;
    所述确定所述第二三维特征点在相机坐标系下的三维信息,包括:
    从第四行驶环境图像中确定与所述第二二维特征点匹配的第四二维特征点,所述第四行驶环境图像和所述第二行驶环境图像为双目摄像装置同时拍摄的两张图像;
    根据所述第二二维特征点和所述第四二维特征点确定第二深度信息;
    根据所述第二深度信息确定所述第二三维特征点在相机坐标系下的三维信息。
  31. 根据权利要求24-28任一项所述的设备,其特征在于,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
    获取在所述第一时间通过点云传感器得到的三维点在世界坐标系下的三维信息;
    根据在所述第一时间获取的三维点的三维信息确定所述第一二维特征点对应的第一三维特征点在世界坐标系下的三维信息;
    所述确定所述第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息,包括:
    获取在所述第二时间通过所述点云传感器得到的三维点在世界坐标系下的三维信息；
    根据在所述第二时间获取的三维点的三维信息确定所述第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息。
  32. 根据权利要求31所述的设备,其特征在于,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
    将在所述第一时间通过所述点云传感器获取得到的三维点投影为二维特征点；
    从投影得到的二维特征点中确定与所述第一二维特征点相匹配的第五二维特征点;
    将所述第五二维特征点对应的三维点的三维信息确定为所述第一二维特征点对应的第一三维特征点在世界坐标系下的三维信息;
    所述根据在所述第二时间获取的三维点的三维信息确定所述第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息,包括:
    将在所述第二时间通过所述点云传感器获取得到的三维点投影为二维特征点；
    从投影得到的二维特征点中确定与所述第二二维特征点相匹配的第六二维特征点;
    将所述第六二维特征点对应的三维点的三维信息确定为所述第二二维特征点对应的第二三维特征点在世界坐标系下的三维信息。
  33. 根据权利要求23所述的设备,其特征在于,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
    将所述第一三维特征点在世界坐标系下的三维信息投影至鸟瞰视角下,得到第一鸟瞰视觉坐标;
    将所述第二三维特征点在世界坐标系下的三维信息投影至鸟瞰视角下,得到第二鸟瞰视觉坐标;
    根据所述第一鸟瞰视觉坐标和所述第二鸟瞰视觉坐标确定所述目标物体的运动状态。
  34. 根据权利要求33所述的设备,其特征在于,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
    根据所述第一鸟瞰视觉坐标和所述第二鸟瞰视觉坐标确定所述目标物体在第一方向和第二方向上的位移信息,和/或在第一绕向上的旋转角度信息;
    根据所述目标物体的所述位移信息和/或所述旋转角度信息确定所述目标物体的运动状态。
  35. 根据权利要求23所述的设备,其特征在于,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
    根据所述第一三维特征点在世界坐标系下的三维信息及所述第二三维特征点在世界坐标系下的三维信息确定所述目标物体在第一方向和第二方向上的位移信息,和/或在第一绕向上的旋转角度信息;
    根据所述目标物体的所述位移信息和/或所述旋转角度信息确定所述目标物体的运动状态。
  36. 根据权利要求34或35所述的设备,其特征在于,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
    根据所述目标物体的所述位移信息确定所述目标物体的移动速度;和/或,根据所述旋转角度信息确定所述目标物体的旋转方向。
  37. 根据权利要求36所述的设备,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
    根据所述目标物体的位移信息确定目标移动速度;
    将对所述目标移动速度滤波后得到的速度确定为所述目标物体的移动速度。
  38. 根据权利要求25-28任一项所述的设备,其特征在于,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
    从所述第一特征区域中提取多个二维特征点;
    根据预设算法从所述多个二维特征点中筛选得到第一二维特征点。
  39. 根据权利要求23-28任一项所述的设备,其特征在于,所述目标物体的数量为一个或多个。
  40. 一种检测设备,其特征在于,所述设备包括:存储器、处理器、摄像装置;
    所述摄像装置,用于获取行驶环境图像;
    所述存储器,用于存储程序指令;
    所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
    通过所述摄像装置在第一时间获取第一行驶环境图像;在第二时间获取第二行驶环境图像,所述第一时间为所述第二时间之前的时间,所述第一行驶环境图像和所述第二行驶环境图像中包括目标物体;
    从所述第一行驶环境图像中提取目标物体的至少一个第一二维特征点;
    根据所述至少一个第一二维特征点建立所述目标物体的物体模型；
    根据所述目标物体的物体模型从所述第二行驶环境图像中提取所述目标物体的至少一个第二二维特征点;
    根据所述第二二维特征点更新所述目标物体的物体模型,得到更新后的所述目标物体的物体模型;
    根据所述第一二维特征点以及所述第二行驶环境图像中与所述第一二维特征点相匹配的二维特征点确定所述目标物体的运动状态。
  41. 根据权利要求40所述的设备,其特征在于,更新后的所述目标物体的物体模型包括所述至少一个第一二维特征点和所述至少一个第二二维特征点。
  42. 根据权利要求40所述的设备,其特征在于,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
    在第三时间获取第三行驶环境图像,所述第二时间为所述第三时间之前的时间;
    根据所述第二行驶环境图像和所述第三行驶环境图像中相匹配的所述目标物体的二维特征点更新所述目标物体的运动状态。
  43. 根据权利要求40-42任一项所述的设备,其特征在于,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
    在所述第一行驶环境图像中确定目标物体的第一特征区域;
    从所述第一特征区域中提取所述目标物体的至少一个第一二维特征点;
    根据所述目标物体的物体模型从所述第二行驶环境图像中提取所述目标物体的第二二维特征点,包括:
    根据所述目标物体的物体模型在所述第二行驶环境图像中确定所述目标物体的第二特征区域;
    从所述第二特征区域中提取所述目标物体的至少一个第二二维特征点。
  44. 根据权利要求43所述的设备,其特征在于,所述处理器,调用所述存储器存储的程序指令,执行以下步骤:
    获取所述第二行驶环境图像中的至少一个物体特征区域;
    确定每个物体特征区域中的特征点与所述目标物体的物体模型中的特征点相匹配的数量;
    将所述至少一个物体特征区域中特征点匹配的数量大于目标预设值的物体的特征区域确定为所述目标物体的第二特征区域。
  45. 一种计算机可读存储介质，其特征在于，包括：所述计算机可读存储介质存储有计算机程序，所述计算机程序被处理器执行时用于执行如权利要求1至22任一项所述的目标物体运动状态检测方法。
PCT/CN2019/074014 2019-01-30 2019-01-30 目标物体运动状态检测方法、设备及存储介质 WO2020154990A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980004912.7A CN111213153A (zh) 2019-01-30 2019-01-30 目标物体运动状态检测方法、设备及存储介质
PCT/CN2019/074014 WO2020154990A1 (zh) 2019-01-30 2019-01-30 目标物体运动状态检测方法、设备及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/074014 WO2020154990A1 (zh) 2019-01-30 2019-01-30 目标物体运动状态检测方法、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2020154990A1 true WO2020154990A1 (zh) 2020-08-06

Family

ID=70790112

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/074014 WO2020154990A1 (zh) 2019-01-30 2019-01-30 目标物体运动状态检测方法、设备及存储介质

Country Status (2)

Country Link
CN (1) CN111213153A (zh)
WO (1) WO2020154990A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914857A (zh) * 2020-08-11 2020-11-10 上海柏楚电子科技股份有限公司 板材余料的排样方法、装置、系统、电子设备及存储介质
CN112926488A (zh) * 2021-03-17 2021-06-08 国网安徽省电力有限公司铜陵供电公司 基于电力杆塔结构信息的作业人员违章识别方法
CN115641359A (zh) * 2022-10-17 2023-01-24 北京百度网讯科技有限公司 确定对象的运动轨迹的方法、装置、电子设备和介质

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111829535B (zh) * 2020-06-05 2022-05-03 阿波罗智能技术(北京)有限公司 生成离线地图的方法、装置、电子设备和存储介质
CN111709923B (zh) * 2020-06-10 2023-08-04 中国第一汽车股份有限公司 一种三维物体检测方法、装置、计算机设备和存储介质
CN112115820B (zh) * 2020-09-03 2024-06-21 上海欧菲智能车联科技有限公司 车载辅助驾驶方法及装置、计算机装置及可读存储介质
CN113096151B (zh) * 2021-04-07 2022-08-09 地平线征程(杭州)人工智能科技有限公司 对目标的运动信息进行检测的方法和装置、设备和介质
CN116246235B (zh) * 2023-01-06 2024-06-11 吉咖智能机器人有限公司 基于行泊一体的目标检测方法、装置、电子设备和介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101844545A (zh) * 2009-03-25 2010-09-29 株式会社电装 车辆周边显示装置和用于车辆周边图像的方法
US20160339959A1 (en) * 2015-05-21 2016-11-24 Lg Electronics Inc. Driver Assistance Apparatus And Control Method For The Same
CN107097790A (zh) * 2016-02-19 2017-08-29 罗伯特·博世有限公司 用于阐明车辆的车辆周围环境的方法和设备以及车辆
CN108271408A (zh) * 2015-04-01 2018-07-10 瓦亚视觉有限公司 使用被动和主动测量生成场景的三维地图
CN108596116A (zh) * 2018-04-27 2018-09-28 深圳市商汤科技有限公司 测距方法、智能控制方法及装置、电子设备和存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2886947B2 (ja) * 1990-05-25 1999-04-26 スズキ株式会社 凝集パターン判定方法およびその装置
CN100476866C (zh) * 2007-11-09 2009-04-08 华中科技大学 点源目标检测的小虚警率的试验估计方法
CN106127137A (zh) * 2016-06-21 2016-11-16 长安大学 一种基于3d轨迹分析的目标检测识别算法
CN110610127B (zh) * 2019-08-01 2023-10-27 平安科技(深圳)有限公司 人脸识别方法、装置、存储介质及电子设备

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101844545A (zh) * 2009-03-25 2010-09-29 株式会社电装 车辆周边显示装置和用于车辆周边图像的方法
CN108271408A (zh) * 2015-04-01 2018-07-10 瓦亚视觉有限公司 使用被动和主动测量生成场景的三维地图
US20160339959A1 (en) * 2015-05-21 2016-11-24 Lg Electronics Inc. Driver Assistance Apparatus And Control Method For The Same
CN107097790A (zh) * 2016-02-19 2017-08-29 罗伯特·博世有限公司 用于阐明车辆的车辆周围环境的方法和设备以及车辆
CN108596116A (zh) * 2018-04-27 2018-09-28 深圳市商汤科技有限公司 测距方法、智能控制方法及装置、电子设备和存储介质

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914857A (zh) * 2020-08-11 2020-11-10 上海柏楚电子科技股份有限公司 板材余料的排样方法、装置、系统、电子设备及存储介质
CN111914857B (zh) * 2020-08-11 2023-05-09 上海柏楚电子科技股份有限公司 板材余料的排样方法、装置、系统、电子设备及存储介质
CN112926488A (zh) * 2021-03-17 2021-06-08 国网安徽省电力有限公司铜陵供电公司 基于电力杆塔结构信息的作业人员违章识别方法
CN112926488B (zh) * 2021-03-17 2023-05-30 国网安徽省电力有限公司铜陵供电公司 基于电力杆塔结构信息的作业人员违章识别方法
CN115641359A (zh) * 2022-10-17 2023-01-24 北京百度网讯科技有限公司 确定对象的运动轨迹的方法、装置、电子设备和介质
CN115641359B (zh) * 2022-10-17 2023-10-31 北京百度网讯科技有限公司 确定对象的运动轨迹的方法、装置、电子设备和介质

Also Published As

Publication number Publication date
CN111213153A (zh) 2020-05-29

Similar Documents

Publication Publication Date Title
WO2020154990A1 (zh) 目标物体运动状态检测方法、设备及存储介质
EP3732657B1 (en) Vehicle localization
WO2021259344A1 (zh) 车辆检测方法、装置、车辆和存储介质
US12036979B2 (en) Dynamic distance estimation output generation based on monocular video
JP6442834B2 (ja) 路面高度形状推定方法とシステム
JP2019096072A (ja) 物体検出装置、物体検出方法およびプログラム
Nedevschi et al. A sensor for urban driving assistance systems based on dense stereovision
JP7206583B2 (ja) 情報処理装置、撮像装置、機器制御システム、移動体、情報処理方法およびプログラム
JP2004112144A (ja) 前方車両追跡システムおよび前方車両追跡方法
KR20160062880A (ko) 카메라 및 레이더를 이용한 교통정보 관리시스템
JP7209115B2 (ja) 複数の相対的に接近する様に動いているリジッドなオブジェクトの検出、3d再現および追跡
JP2013109760A (ja) 対象検知方法及び対象検知システム
JP2012185011A (ja) 移動体位置測定装置
CN111937036A (zh) 用于处理传感器数据的方法、设备和具有指令的计算机可读存储介质
US10984263B2 (en) Detection and validation of objects from sequential images of a camera by using homographies
CN108645375B (zh) 一种用于车载双目系统快速车辆测距优化方法
US10984264B2 (en) Detection and validation of objects from sequential images of a camera
Liu et al. Vehicle detection and ranging using two different focal length cameras
CN111967396A (zh) 障碍物检测的处理方法、装置、设备及存储介质
CN112947419A (zh) 避障方法、装置及设备
Petrovai et al. A stereovision based approach for detecting and tracking lane and forward obstacles on mobile devices
JP2005217883A (ja) ステレオ画像を用いた道路平面領域並びに障害物検出方法
Hultqvist et al. Detecting and positioning overtaking vehicles using 1D optical flow
KR102003387B1 (ko) 조감도 이미지를 이용한 교통 장애물의 검출 및 거리 측정 방법, 교통 장애물을 검출하고 거리를 측정하는 프로그램을 저장한 컴퓨터 판독가능 기록매체
JP2018073275A (ja) 画像認識装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19914164

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19914164

Country of ref document: EP

Kind code of ref document: A1