CN112634320A - Method and system for identifying object motion direction at intersection

Method and system for identifying object motion direction at intersection

Info

Publication number
CN112634320A
Authority
CN
China
Prior art keywords
track
points
target object
images
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910907356.6A
Other languages
Chinese (zh)
Inventor
石永禄
尹科才
毛河
朱彬
高枫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Topplusvision Science & Technology Co ltd
Original Assignee
Chengdu Topplusvision Science & Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Topplusvision Science & Technology Co ltd filed Critical Chengdu Topplusvision Science & Technology Co ltd
Priority to CN201910907356.6A
Publication of CN112634320A
Legal status: Pending (current)


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/60 - Analysis of geometric attributes
    • G06T 7/64 - Analysis of geometric attributes of convexity or concavity
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/01 - Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 - Measuring and analyzing of parameters relative to traffic conditions
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; image sequence
    • G06T 2207/30 - Subject of image; context of image processing
    • G06T 2207/30241 - Trajectory

Abstract

The invention discloses a method for identifying the motion direction of an object at an intersection, belonging to the technical field of image processing. The method comprises the following steps: acquiring a plurality of images of an intersection, at least one of which contains a target object; obtaining a plurality of track points on the motion track of the target object based on the coordinate positions of the target object in the plurality of images; and analyzing the position relation of the plurality of track points according to a preset algorithm to determine the motion direction of the target object. With this method, the motion track of a target object can be followed, and object motion information in different motion directions can be conveniently counted.

Description

Method and system for identifying object motion direction at intersection
Technical Field
The invention relates to the technical field of image processing, in particular to a method for identifying the motion direction of an object at an intersection.
Background
Traffic information about an intersection can be used to formulate the signal light strategy of the intersection; a reasonable signal light strategy can reduce congestion, keep urban roads unobstructed, help avoid traffic accidents, and so on. With the continuous development of computer technology, image processing, pattern recognition and other technologies, traffic flow detection methods based on video image processing are more and more widely applied. The present application provides a method for identifying the motion direction of objects such as vehicles and pedestrians based on video image processing, so that the motion direction of objects such as vehicles at an intersection can be identified, the motion track of a target object can be followed, and motion information in different motion directions can be conveniently counted.
Disclosure of Invention
One aspect of the present invention provides a method for identifying a direction of motion of an object at an intersection, the method comprising: acquiring a plurality of images of an intersection, wherein at least one of the plurality of images comprises a target object; obtaining a plurality of track points on the motion track of the target object based on the coordinate positions of the target object in the plurality of images; and analyzing the position relation of the plurality of track points according to a preset algorithm, and determining the motion direction of the target object.
In some embodiments, the plurality of images are taken from video data of the intersection.
In some embodiments, the direction of motion comprises at least one of straight, left turn, or right turn.
In some embodiments, analyzing the position relationship of the plurality of track points according to a preset algorithm, and determining the motion direction of the target object includes: generating one or more trajectory vectors based on the plurality of trajectory points; determining at least one frame of reference; and determining the motion direction of the target object based on the relative position relation of one or more track vectors and at least one reference frame.
In some embodiments, the frame of reference comprises a reference vector, and analyzing the position relation of the plurality of track points according to a preset algorithm to determine the driving direction of the target vehicle comprises the following steps: generating one or more track point vectors based on the plurality of track points; calculating included angles relating the one or more track point vectors to the reference vector to obtain one or more track point vector included angles, where the one or more track point vector included angles comprise the included angles between the one or more track point vectors and the reference vector, or the difference between the maximum and the minimum of those included angles; comparing the one or more track point vector included angles with a preset angle threshold; and determining that the target object is moving straight in response to the one or more track point vector included angles all being smaller than the preset angle threshold.
In some embodiments, the reference vector reflects an extension direction of at least one entry road of the target object in the plurality of images; the target object enters the intersection through the entrance road.
In some embodiments, the one or more track point vectors include vectors pointing from a specified track point of the plurality of track points to the other one or more track points.
In some embodiments, the specified trajectory point is a start trajectory point.
In some embodiments, the frame of reference comprises a first reference coordinate system; in response to the one or more track point vector included angles not being less than the preset angle threshold, the following steps are executed: generating a start and end point vector based on the plurality of track points; the quadrant of the start and end point vector in the first reference coordinate system is the quadrant where the start and end point vector is located when it is translated so that its start point coincides with the origin of the first reference coordinate system; determining the concavity and convexity of the motion track of the target object relative to the straight line on which the start and end point vector lies according to the plurality of track points; and determining that the target object turns left or right based on the quadrant of the start and end point vector and the concavity and convexity of the motion track.
In some embodiments, the frame of reference comprises a first reference coordinate system; analyzing the position relationship of the plurality of track points according to a preset algorithm and determining the motion direction of the target object comprises: generating a start and end point vector based on the plurality of track points; determining the quadrant of the start and end point vector in the first reference coordinate system, which is the quadrant where the start and end point vector is located when it is translated so that its start point coincides with the origin of the first reference coordinate system; determining the concavity and convexity of the motion track of the target object relative to the straight line on which the start and end point vector lies according to the plurality of track points; and determining that the target object turns left or right based on the quadrant of the start and end point vector and the concavity and convexity of the motion track.
In some embodiments, at least one coordinate axis of the first reference coordinate system reflects an extension direction of at least one road in the plurality of images.
In some embodiments, at least one coordinate axis of the first reference coordinate system reflects an extension direction of at least one entry road of the target object in the plurality of images; the target object enters the intersection through the entrance road.
In some embodiments, the obtaining a plurality of trajectory points on the motion trajectory of the target object based on the coordinate positions of the target object in the plurality of images further includes: mapping the coordinate positions of the target object in the multiple images to a target coordinate system to obtain a plurality of track points; the extending direction of at least one road in the plurality of images is parallel to a certain coordinate axis thereof in the target coordinate system.
In some embodiments, the obtaining a plurality of trajectory points on the motion trajectory of the target object based on the coordinate positions of the target object in the plurality of images further includes: acquiring a plurality of sampling points and coordinate data thereof in at least one image of a plurality of images; acquiring a plurality of reference points in a target coordinate system and coordinate data thereof; establishing a coordinate transformation matrix based on the coordinate data of the plurality of sampling points and the coordinate data of the plurality of reference points; and calculating the coordinate positions of the target object in the plurality of images and the coordinate transformation matrix to obtain the plurality of track points.
In some embodiments, the acquiring a plurality of sample points and coordinate data thereof in at least one of the plurality of images comprises: receiving a plurality of annotation points depicted by a user in at least one of the plurality of images; the plurality of annotation points are not collinear; generating a canonical rectangle based on the plurality of annotation points; determining vertices of the canonical rectangle and their coordinate data in at least one of the plurality of images as the plurality of sample points and their coordinate data.
In some embodiments, the plurality of annotation points are located on a road marking in at least one of the plurality of images, and the plurality of annotation points enclose a quadrilateral.
Another aspect of the present invention provides a system for identifying a direction of motion of a vehicle at an intersection, the system comprising: an acquisition module for acquiring a plurality of images of an intersection, at least one of the plurality of images including a target object; the track point determining module is used for obtaining a plurality of track points on the target object driving track based on the coordinate positions of the target object in the plurality of images; and the processing module is used for analyzing the position relation of the plurality of track points according to a preset algorithm and determining the motion direction of the target object.
Another aspect of the present invention provides an apparatus for identifying a direction of motion of a vehicle at an intersection, the apparatus comprising at least one processor and at least one memory; the at least one memory is for storing computer instructions; the at least one processor is configured to execute at least a portion of the computer instructions to implement the operations of any of the methods of any of the embodiments in this specification.
Another aspect of the present invention provides a computer-readable storage medium storing computer instructions, at least some of which, when executed by a processor, implement the operations of any one of the methods of any one of the embodiments in this specification.
Drawings
The invention will be further elucidated by means of exemplary embodiments, which will be described in detail with reference to the drawings. These embodiments are not intended to be limiting; in these embodiments, like numerals indicate like structures, wherein:
FIG. 1 is a block diagram of an exemplary system according to some embodiments of the invention.
FIG. 2 is an exemplary flow chart illustrating a method of identifying a direction of travel of a vehicle at an intersection according to some embodiments of the invention.
FIG. 3 is an exemplary sub-flow diagram of coordinate transformation shown in accordance with some embodiments of the invention.
FIG. 4 is a schematic diagram of acquiring sample points and their coordinate data according to some embodiments of the invention.
Fig. 5 is an exemplary sub-flow diagram illustrating the determination of straight-going according to some embodiments of the invention.
FIG. 6 is an exemplary sub-flow diagram illustrating the determination of a steering direction according to some embodiments of the invention.
FIG. 7 is a schematic diagram illustrating a decision rule for determining a steering direction according to some embodiments of the present invention.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below. It is obvious that the drawings in the following description are only examples or embodiments of the invention, from which it is possible for a person skilled in the art, without inventive effort, to apply the invention to other similar contexts. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "device", "unit" and/or "module" as used herein is a method for distinguishing different components, elements, parts, portions or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used herein and in the claims, unless the context clearly dictates otherwise, the terms "a", "an" and/or "the" do not refer specifically to the singular and may include the plural. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; these steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in the present invention to illustrate the operations performed by a system according to embodiments of the present invention. It should be understood that the preceding or following operations are not necessarily performed in the exact order shown. Rather, the various steps may be processed in reverse order or simultaneously. Meanwhile, other operations may be added to the processes, or a certain step or several steps may be removed from them.
One or more embodiments of the present application may be applied to target object motion direction detection in different venues. The sites to which the present application can be applied include, but are not limited to, road intersections, specific road areas, specific site entrances, and the like, for example crossroads, T-junctions, forked roads, multi-way intersections, main/side road gates, parking lot gates, service area gates, and so on. The target object in the present application may be a vehicle, for example a motor vehicle or a non-motor vehicle. The motor vehicles include, but are not limited to, private cars, taxis, vans, buses, and the like. The non-motor vehicles include, but are not limited to, electric vehicles for the elderly, two-wheeled battery cars, tricycles, bicycles, and the like. In some embodiments, the target object may also be other devices capable of traveling on a road, such as an unmanned vehicle or a mobile robot that performs a particular task (inspection, survey, etc.). In some embodiments, the target object may also be a pedestrian, an animal, or another movable object. One or more embodiments of the present application may determine traffic information based on the driving direction of vehicles, where the traffic information includes, but is not limited to, the driving direction of vehicles at each intersection of the site, the number of vehicles entering at each intersection of the site, the number of vehicles per driving direction at each intersection of the site, time information related to the traffic flow at the site, and the like. It should be understood that the application scenarios of the system and method of one or more embodiments of the present specification are only examples, and it will be apparent to those of ordinary skill in the art that one or more embodiments of the present specification can also be applied to other similar scenarios without inventive effort, for example regional intersections within a parking lot, or detecting and counting the trajectories of pedestrians at intersections.
FIG. 1 is a block diagram of an exemplary system according to some embodiments of the invention. In some embodiments, the system 100 for identifying the direction of motion of an object at an intersection may include an acquisition module 102, a track point determination module 104, and a processing module 106.
The acquisition module 102 may be used to acquire multiple images of an intersection. In some embodiments, at least one of the plurality of images of the intersection includes a target object.
The track point determining module 104 may be configured to determine coordinate positions of the target object in the multiple images, so as to obtain multiple track points on the motion track of the target object.
The processing module 106 may be configured to analyze the position relationship of the multiple track points according to a preset algorithm, and determine the motion direction of the target object. In some embodiments, the direction of motion comprises at least one of straight, left turn, or right turn. In some embodiments, analyzing the position relationship of the plurality of track points according to a preset algorithm, and determining the motion direction of the target object includes: generating one or more trajectory vectors based on the plurality of trajectory points; determining at least one frame of reference; and determining the motion direction of the target object based on the relative position relation of one or more track vectors and at least one reference frame.
It should be understood that the system and its modules shown in FIG. 1 may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. Wherein the hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory for execution by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer executable instructions and/or embodied in processor control code, such code being provided, for example, on a carrier medium such as a diskette, CD-or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system and its modules of one or more embodiments of the present specification may be implemented not only by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., but also by software executed by various types of processors, for example, or by a combination of hardware circuits and software (e.g., firmware).
The following detailed description will be given taking the detection of the driving direction of the target vehicle as an example, but it should not be construed as limiting the present application. For example, in some embodiments, the target vehicle may be replaced with a pedestrian, a mobile device, or the like.
FIG. 2 is an exemplary flow chart illustrating a method of identifying a direction of travel of a vehicle at an intersection according to some embodiments of the invention. As shown in fig. 2, a method of identifying a driving direction of a vehicle at an intersection may include:
step 202, obtaining a plurality of images of an intersection, wherein at least one of the images comprises a target vehicle; in some embodiments, step 202 may be implemented by acquisition module 102.
In some embodiments, the intersection may be an intersection of roads, such as a crossroads, a T-junction, a forked road, etc. In some embodiments, the images may be pictures taken at regular intervals by an image capturing device, or pictures obtained by filtering and cropping videos taken by an image capturing device. The image capturing device includes, but is not limited to, a general video camera, a 360° panoramic camera, a fisheye camera, an afocal camera, a 3D fixed-focus camera, a light field zoom camera, and the like. There may be one or more image acquisition devices.
In some embodiments, an image capturing device may be disposed in each road direction of the intersection, with each device capturing an image of the road entrance where it is located. Each image acquisition device can be arranged directly above its road entrance so as to acquire an image in which the lanes appear undistorted; it can also be arranged at one side of each road entrance, at a position from which a complete image of the entrance can be captured, in which case the acquired image may have some deformation, such as oblique deformation.
In some embodiments, a single image capture device that can capture a complete image of the entire intersection may be provided. This device can be a 360° panoramic camera arranged at any position of the intersection, or any camera arranged above the area near the intersection that can capture a complete image of the intersection, such as a general video camera. In some embodiments, the image captured by the device may have some distortion, such as oblique distortion or perspective ("near objects appear larger, far objects smaller") distortion. In some embodiments, the captured image may be free of distortion: for example, when the device is located directly above or nearly directly above the intersection with the camera facing vertically downward, it captures the image from a top view or near-top view, and the lanes in the image are consistent with or close to the real lanes. The rotation angle of the camera can be adjusted while keeping it facing vertically downward, so that the lanes in the acquired image are parallel or perpendicular to the edges of the image.
In some embodiments, the acquisition module may acquire, through the image acquisition device, a plurality of images of the intersection. The plurality of images may be photographs taken by the image acquisition device at certain time intervals, or image frames taken at certain frame intervals from a video. At least one of the plurality of images includes a target object. In some embodiments, the plurality of images may be selected from video data of the intersection, captured by an image capture device at the intersection. The plurality of images may be image frames in the video, for example every frame, or every 5th frame. In some embodiments, a certain number of starting and ending frames may be filtered out, for example 50 frames at each of the start and the end.
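As an illustration of frame selection only (the patent does not prescribe an implementation), a minimal Python sketch of sampling every 5th frame from intersection video with OpenCV; the frame interval and file name are assumed example values:

```python
import cv2

def sample_frames(video_path, frame_interval=5):
    """Read a video and keep every `frame_interval`-th frame as one of the images."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        if index % frame_interval == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames

# Example (hypothetical file name): images = sample_frames("intersection.mp4")
```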
Step 204, determining the coordinate positions of the target vehicle in the plurality of images, and further obtaining a plurality of track points on the running track of the target vehicle; in some embodiments, step 204 may be implemented by the trace point determination module 104.
In some embodiments, the coordinate location may be a location at which the target vehicle is located in the initial coordinate system. The initial coordinate system may be a rectangular coordinate system determined based on the image. The origin of the initial coordinate system can be the center of an intersection in the image, can be any vertex of the image, and can be any point in the image; the x-axis and the y-axis of the initial coordinate system can be respectively along one road direction of the intersection in the image, and can also be respectively parallel to any two crossed frames of the image.
In some embodiments, a machine learning model may be used to identify the coordinate position of the target vehicle in each image. In some embodiments, image processing means may also be adopted to identify the target vehicle in the image, for example using a threshold segmentation algorithm, and then determine the coordinate position of at least one pixel of the area where the target vehicle is located. For example, the center pixel of the area where the target vehicle is located may be determined and its coordinate position in the image taken as the coordinate position of the target vehicle; alternatively, the coordinate position of the target vehicle may be determined based on a plurality of pixels in that area.
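For example, if a detector returns a bounding box for the target vehicle, its center pixel can be taken as the coordinate position; a minimal sketch (the bounding-box format is an assumption, and the detection step itself is not shown):

```python
def vehicle_coordinate(bbox):
    """Center pixel of a detected bounding box (x_min, y_min, x_max, y_max),
    taken as the coordinate position of the target vehicle in the image."""
    x_min, y_min, x_max, y_max = bbox
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)
```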
In some embodiments, the travel trajectory of the target vehicle may be a trajectory directly acquired from images of successive frames in the video data. In some embodiments, the driving track of the target vehicle may also be fitted from the target vehicle position points in the plurality of images. The fitting may yield a trajectory equation f(x) of the target vehicle travel trajectory. The fitting includes linear fitting and non-linear fitting, and the fitting algorithm includes but is not limited to least-squares algorithms, image thinning algorithms, parametric curve model algorithms, and the like. Specifically, when the fitting is linear, the fitting algorithm may be a linear least-squares algorithm; when the fitting is non-linear, it may be a non-linear least-squares algorithm, such as the Gauss-Newton algorithm or the Levenberg-Marquardt algorithm.
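As one of the least-squares options mentioned above, a polynomial fit of the trajectory equation f(x) can be sketched as follows; the polynomial degree of 2 is an assumed example, not specified by the patent:

```python
import numpy as np

def fit_trajectory(position_points, degree=2):
    """Least-squares polynomial fit of a trajectory equation f(x)
    through the target vehicle's position points."""
    xs = np.array([p[0] for p in position_points], dtype=float)
    ys = np.array([p[1] for p in position_points], dtype=float)
    coeffs = np.polyfit(xs, ys, degree)  # linear least squares on the coefficients
    return np.poly1d(coeffs)             # callable trajectory equation f(x)
```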
In some embodiments, the trajectory points may be location points of the target vehicle in the plurality of images. For example, the coordinate positions of the target object in the plurality of images may be directly determined as the plurality of trajectory points. In some embodiments, the trajectory point may also be any point on the trajectory equation f (x) of the target vehicle travel trajectory.
In some embodiments, the track points are the position points of the target vehicle in the plurality of images after filtering. The filtering includes, but is not limited to, filtering image-edge track points and filtering track points that deviate far from the driving track. For example, the last few frames before the target vehicle leaves the image (or the shooting area) may be of poor quality, and the corresponding points can be rejected directly.
In some embodiments, filtering image-edge track points may mean removing the track points of a preset number of starting and ending frames, for example removing those of the first 50 and last 50 frames of the plurality of images. In some embodiments, low-quality track points at the image edge may also be filtered out based on an algorithm. In some embodiments, the coordinate position of the target vehicle in the plurality of images may be determined, and when the distance from the abscissa or ordinate of a coordinate position to a boundary of the image (e.g., the first row, the last row, the first column or the last column) is smaller than a set threshold, that coordinate position is deleted and not used as one of the track points. The set threshold may be 30 pixels, 50 pixels, 60 pixels, etc.
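A minimal sketch of the boundary-distance filter described above, assuming the image size is known and using the 50-pixel example threshold:

```python
def filter_edge_points(points, width, height, margin=50):
    """Drop coordinate positions whose abscissa or ordinate is within `margin`
    pixels of the first/last row or column of the image."""
    kept = []
    for x, y in points:
        if x < margin or y < margin or x > width - margin or y > height - margin:
            continue  # too close to an image boundary
        kept.append((x, y))
    return kept
```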
In some embodiments, filtering the track points that deviate far from the driving track may be done manually, deleting the points with larger deviation; the number of deletions may be set manually, for example deleting the 3 points with the largest deviation. In some embodiments, the track points that deviate far may also be filtered automatically based on an algorithm. The algorithm may be a least-squares-based data smoothing algorithm, such as a three-point, five-point or seven-point linear smoothing algorithm. In some embodiments, the algorithm may also be a preset filtering algorithm. Specifically, in some embodiments, the filtering algorithm filters out the N track point vectors having the largest included angle with the reference vector, where N ≥ 1; in some embodiments, the N track point vectors with the largest included angle and the M track point vectors with the smallest included angle with respect to the reference vector may be filtered out simultaneously, where N ≥ 1 and M ≥ 1, for example N = 3 and M = 3. The reference vector may be a vector established in the initial coordinate system and used for determining the direction or angle of a track point vector; for example, it may be a vector along the extending direction of the road on which the vehicle travels. A track point vector may be any of the vectors formed by connecting a designated track point to the other track points. The designated track point may be the starting track point, i.e., the earliest track point in time order among the plurality of track points of the vehicle.
In some embodiments, the coordinate positions of the target vehicle in the plurality of images may be mapped into a target coordinate system to obtain the plurality of track points; the extending direction of at least one road in the plurality of images is parallel to a certain coordinate axis thereof in the target coordinate system.
In some embodiments, the mapping may be an affine transformation of the plurality of images. An affine transformation is a linear transformation from two-dimensional coordinates to two-dimensional coordinates, and may include translation, scaling, flipping, rotation and shearing. An affine transformation preserves the "straightness" of a two-dimensional image and can be used to rectify a skewed original image, improving the accuracy of determining the track points in some embodiments of this specification. For detailed steps of the coordinate position mapping, reference may be made to FIGS. 3 and 4 of this specification.
Step 206, analyzing the position relation of the plurality of track points according to a preset algorithm, and determining the driving direction of the vehicle. In some embodiments, step 206 may be implemented by the processing module 106.
The preset algorithm is an algorithm used for determining the vehicle running direction based on the position relation of a plurality of track points, and comprises but is not limited to a track point fitting algorithm, a track point filtering algorithm, a coordinate conversion algorithm, a vector translation algorithm, an algorithm for judging the running direction of a target vehicle, an algorithm for judging whether the target vehicle runs straight, an algorithm for judging whether the target vehicle turns left or right and a threshold value related to each algorithm.
In some embodiments, the preset algorithm may be adjusted according to the determination result. For example, the actual trajectory of some vehicles may be obtained, and the algorithm for determining the driving direction of the target vehicle and the threshold thereof may be adjusted according to the difference between the driving direction determination result obtained by using the preset algorithm and the actual driving direction of the vehicle, so as to improve the accuracy of the algorithm.
In some embodiments, the direction of travel includes at least one of straight travel, left turn, or right turn. In some embodiments, the driving direction further includes a specific coming direction and a specific going direction, and whether the vehicle is going straight, turning left or turning right may be determined from the coming and going directions. For example, if the going direction is to the right of the coming direction, the driving direction of the vehicle is a right turn; if the going direction is to the left of the coming direction, the driving direction is a left turn; and if the going direction is the same as the coming direction, the vehicle is going straight.
In some embodiments, analyzing the position relationship of the plurality of track points according to a preset algorithm and determining the driving direction of the vehicle includes: generating one or more track vectors based on the plurality of track points; determining at least one frame of reference; and determining the driving direction of the target vehicle based on the relative positional relationship of the one or more track vectors to the at least one frame of reference.
In some embodiments, the plurality of track points includes at least a starting track point and a track end point. The starting track point may be the earliest point in time order among the plurality of track points, and the track end point the latest. The time order of the track points corresponds to the acquisition times of the images they come from. The track vectors may include track point vectors and/or a start and end point vector. A track point vector may be any of the vectors formed by connecting a designated track point to the other track points; in some embodiments, the designated track point may be the starting track point. The start and end point vector may be the vector pointing from the starting track point to the track end point.
In some embodiments, the frame of reference may be a reference coordinate system or a reference vector, determined based on reference objects that include, but are not limited to, the roads of the intersection, the intersection center, lane lines, and the like. For example, a first reference coordinate system may be determined based on a road and the center of the intersection; specifically, the origin of the first reference coordinate system may be determined based on the intersection center (e.g., the two coincide), and its x-axis and y-axis based on the road directions of the intersection (e.g., parallel to them). As another example, the direction of a reference vector may be determined based on lane lines (e.g., parallel or perpendicular to them). In some embodiments, the first reference coordinate system may be established based on the roads and intersection center as they appear in the initial coordinate system, or as they appear in the target coordinate system.
In some embodiments, the direction of travel of the target vehicle includes at least one of straight travel, left turn, or right turn. The driving direction also comprises a specific coming direction and a specific going direction, and the driving direction of the vehicle can be determined to be straight, left-turning or right-turning according to the coming direction and the going direction.
In some embodiments, analyzing the position relationship of the plurality of track points according to a preset algorithm, and determining the driving direction of the vehicle may further include: dividing n intersection areas in advance, wherein n is more than or equal to 1; detecting an intersection area where a starting track point of a target vehicle is located, marking the intersection area as 1, and marking other intersection areas as 2, 3, …, n according to a counterclockwise sequence; detecting a mark value of an intersection area where a track terminal of a target vehicle is located; and calculating the mark value of the intersection where the track end point is located and the mark value of the intersection area where the initial track point is located, and determining the driving direction of the target vehicle.
Specifically, taking a crossroads as an example, the 4 roads of the intersection are divided into 4 intersection areas in advance; the intersection area where the starting track point of the target vehicle is located is detected and marked ID0 = 1, and the other intersection areas are marked 2, 3 and 4 in counterclockwise order; the marker value ID1 of the intersection area where the track end point of the target vehicle is located is then detected. The following operation is performed on ID1 and ID0, and the driving direction of the target vehicle is determined according to the corresponding rule:
ID1 - ID0 = 1: the vehicle is determined to turn right;
ID1 - ID0 = 2: the vehicle is determined to go straight;
ID1 - ID0 = 3: the vehicle is determined to turn left.
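The rule reduces to a small lookup; in the sketch below, the modulo-4 guard is an added robustness assumption beyond the literal rule (which assumes the starting area is always marked 1):

```python
def direction_from_regions(id_start, id_end):
    """Map the marker-value difference ID1 - ID0 to a driving direction.
    Areas are numbered 1..4 counterclockwise, starting from the entrance area."""
    diff = (id_end - id_start) % 4  # assumption: wrap-around guard, not in the patent
    if diff == 1:
        return "right turn"
    if diff == 2:
        return "straight"
    if diff == 3:
        return "left turn"
    return "same area (u-turn or undetermined)"
```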
in some embodiments, different preset algorithms may be employed to determine straight ahead and turning of the vehicle.
In some embodiments, in determining whether the vehicle is moving straight, the frame of reference comprises a reference vector; analyzing the position relation of the plurality of track points according to a preset algorithm, and determining the driving direction of the target vehicle, wherein the method comprises the following steps: generating one or more trajectory point vectors based on the plurality of trajectory points; calculating included angles related to one or more tracing point vectors to obtain one or more tracing point vector included angles; comparing the included angle of the one or more tracing point vectors with a preset angle threshold value; and determining that the target vehicle is in a straight line in response to the fact that the included angles of the one or more track points are smaller than a preset angle threshold value.
In some embodiments, in determining whether the vehicle is traveling straight, the frame of reference comprises a first frame of reference; analyzing the position relationship of the plurality of track points according to a preset algorithm to determine the driving direction of the target vehicle, which may also include: generating coordinates of a starting track point and a track end point; respectively calculating the difference value of the x and y coordinate values of the starting track point and the track end point to obtain two difference values; calculating the two difference values, and comparing the calculation result with a preset threshold value; and in response to the calculation result being larger than a preset threshold value, determining that the target vehicle is moving straight.
Specifically, the origin of the first reference coordinate system may be the intersection center, and its x-axis and y-axis may lie along the two intersecting road directions of the intersection. Let the coordinates of the starting track point be (x0, y0) and the coordinates of the track end point be (x1, y1). The two differences are the difference Δx of the x coordinates and the difference Δy of the y coordinates of the starting track point and the track end point:
Δx = x1 - x0
Δy = y1 - y0
The operation on the two differences may be performed according to the following formula:
Δ = |Δx| - |Δy|
where |Δx| is the absolute value of the difference Δx of the x coordinates of the starting track point and the track end point, and |Δy| is the absolute value of the difference Δy of the y coordinates.
The preset threshold may be a numerical value determined based on the intersection width. For example, if the width of the intersection in the first reference coordinate system is 5, the preset threshold may be 5; when the calculated result Δ is larger than 5, the target vehicle is determined to be going straight.
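A sketch of this start/end-difference test, under the assumption (as in the example above) that the entry road runs along the x-axis of the first reference coordinate system:

```python
def is_straight_by_endpoints(start, end, threshold):
    """Delta = |dx| - |dy| compared against a threshold tied to intersection width."""
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    return abs(dx) - abs(dy) > threshold

# e.g. is_straight_by_endpoints((0, 0), (40, 2), threshold=5) -> True
```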
Further description of the algorithm for determining whether the target vehicle is traveling straight may be found in relation to the description of fig. 5.
In some embodiments, counting the number of straight-going vehicles is further included after determining that the target vehicle is going straight. The count may cover a certain duration, for example the number of straight-going vehicles within 1 hour, or a certain time period, for example the number of straight-going vehicles from 7 to 9 a.m.
In some embodiments, when determining whether the vehicle is turning left or right, the frame of reference comprises a first reference coordinate system. The steps include: generating a start and end point vector based on the plurality of track points; translating the start and end point vector so that its start point coincides with the origin of the first reference coordinate system; determining the quadrant of the start and end point vector in the first reference coordinate system; determining the concavity and convexity of the driving track of the target vehicle relative to the straight line on which the start and end point vector lies according to the plurality of track points; and determining that the target vehicle turns left or right based on the quadrant of the start and end point vector and the concavity and convexity of the driving track.
In some embodiments, the determination of the vehicle turning direction may be performed after the vehicle straight-ahead determination is completed. If it is determined that the target vehicle is not traveling straight, it is further determined whether the target vehicle is turning left or turning right.
In some embodiments, the start and end point vector may be the vector connecting the starting track point and the track end point, directed from the former to the latter. The translation may be any suitable method of vector translation. The first reference coordinate system is a rectangular coordinate system whose quadrants comprise a first, second, third and fourth quadrant. The concavity and convexity of the track mean the following: if every arc segment of the track lies below its chord, the track is concave; if every arc segment of the track lies above its chord, the track is convex.
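The patent does not spell out how concavity is computed; one assumed implementation checks on which side of the chord (the start-to-end line) the intermediate track points lie, via the sign of a cross product. The mapping of sign to "concave"/"convex" depends on the axis convention and is an assumption here:

```python
def track_convexity(points):
    """Classify a track as 'convex' or 'concave' relative to the chord through
    its first and last track points, by summing cross-product signs."""
    if len(points) < 3:
        raise ValueError("need at least 3 track points")
    (x0, y0), (x1, y1) = points[0], points[-1]
    side = 0.0
    for x, y in points[1:-1]:
        # z-component of (end - start) x (point - start)
        side += (x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)
    return "convex" if side < 0 else "concave"  # sign convention is an assumption
```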
In some embodiments, the determining that the target vehicle turns left or right based on the quadrant of the start and end point vectors and the concavity and convexity of the driving track specifically includes:
when the starting point vector and the ending point vector are positioned in a first quadrant of a first reference coordinate system, if the track is convex, the target vehicle is determined to turn right; if the trajectory is concave, the target vehicle is determined to be turning left.
When the starting point vector and the ending point vector are located in a second quadrant of the first reference coordinate system, if the track is convex, the target vehicle is determined to turn left; if the trajectory is concave, the target vehicle is determined to be turning right.
When the starting point vector and the ending point vector are located in a third quadrant of the first reference coordinate system, if the track is convex, the target vehicle is determined to turn left; if the trajectory is concave, the target vehicle is determined to be turning right.
When the starting point vector and the ending point vector are located in a fourth quadrant of the first reference coordinate system, if the track is convex, the target vehicle is determined to turn right; if the trajectory is concave, the target vehicle is determined to be turning left.
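These four rules reduce to a small lookup table; a sketch (quadrant boundaries, i.e. vectors lying exactly on an axis, are handled simplistically here and would in practice be covered by the straight-going test above):

```python
TURN_RULES = {
    # (quadrant, convexity) -> turn direction, per the four rules above
    (1, "convex"): "right", (1, "concave"): "left",
    (2, "convex"): "left",  (2, "concave"): "right",
    (3, "convex"): "left",  (3, "concave"): "right",
    (4, "convex"): "right", (4, "concave"): "left",
}

def quadrant(vec):
    """Quadrant of the start and end point vector once its start point is
    translated to the origin of the first reference coordinate system."""
    x, y = vec
    if x > 0 and y > 0:
        return 1
    if x < 0 and y > 0:
        return 2
    if x < 0 and y < 0:
        return 3
    return 4  # simplification: axis-aligned vectors also fall here

def turn_direction(start, end, convexity):
    vec = (end[0] - start[0], end[1] - start[1])
    return TURN_RULES[(quadrant(vec), convexity)]
```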
More specific judgment processes and rules can be referred to in the relevant contents of fig. 6 and fig. 7 in the present specification.
In some embodiments, the number of left-turn or right-turn vehicles may be counted after the target vehicle is determined to be left-turn or right-turn. The number may be a number over a period of time, for example, a number of left or right turn vehicles within 1 hour. The number may also be a number within a certain time period, for example, a number of left or right turning vehicles at 7-9 am; also for example, the number of left-turn or right-turn vehicles at 2 pm to 6 pm.
FIG. 3 is an exemplary sub-flow diagram illustrating the coordinate transformation in step 204 according to some embodiments of the invention.
FIG. 4 is a schematic diagram illustrating the acquisition of sampling points and their coordinate data in step 204 according to some embodiments of the invention.
Step 302, acquiring a plurality of sampling points and coordinate data thereof in at least one image of a plurality of images.
In some embodiments, the plurality of images may comprise images that are distorted, including but not limited to one of oblique distortion, rotational distortion, shear distortion, and the like. The sample points may be a plurality of points selected by a user in at least one of the plurality of images and their coordinate data in the image (e.g., the initial coordinate system).
With further reference to fig. 4, the acquiring a plurality of sampling points and coordinate data thereof in at least one of the plurality of images specifically includes: receiving a plurality of annotation points 2 depicted by a user in at least one of the plurality of images; the plurality of annotation points 2 are not collinear; generating a canonical rectangle 3 based on the plurality of annotation points 2; the vertices 4 of the canonical rectangle 3 and their coordinate data in at least one of the multiple images are determined as the multiple sampling points and their coordinate data.
In some embodiments, the plurality of annotation points are located on a road marking in at least one of the plurality of images, and the plurality of annotation points enclose a quadrilateral.
The road marking may be a previously marked road boundary line or lane line. The quadrilateral includes, but is not limited to, rectangles, squares, diamonds, parallelograms, trapezoids, and the like.
In some embodiments, the plurality of annotation points 2 may be annotated manually by a user, for example drawn in one of the plurality of images through a user terminal. In some embodiments, the range available to the user for annotation may be constrained: for example, the user may be restricted to drawing points on the road markings, or the number of annotation points may be restricted to 4.
In some embodiments, the multiple edges of the canonical rectangle 3 may contain all of the user annotation points 2. The canonical rectangle 3 may be a minimum bounding rectangle of the plurality of user annotation points 2. At least one vertex 4 of the canonical rectangle 3 coincides with at least one of the plurality of annotation points 2.
In some embodiments, a user may draw a plurality of labeled points for each road region of the intersection, obtain a canonical rectangle for each road region, and further obtain a plurality of sampling points and coordinate data thereof for each road.
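If the canonical rectangle is taken to be the axis-aligned minimum bounding rectangle of the annotation points (one reading of the description above), it can be sketched as:

```python
def canonical_rectangle(annotation_points):
    """Axis-aligned minimum bounding rectangle of the user annotation points;
    its four vertices serve as the sampling points."""
    xs = [p[0] for p in annotation_points]
    ys = [p[1] for p in annotation_points]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    return [(x_min, y_min), (x_max, y_min), (x_max, y_max), (x_min, y_max)]
```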
Step 304, acquiring a plurality of reference points and coordinate data thereof in the target coordinate system.
In some embodiments, the reference point may be a corresponding point of the sample point in the target coordinate system. The coordinate data of the reference point is coordinate data of the reference point in a target coordinate system.
Step 306, establishing a coordinate transformation matrix based on the coordinate data of the plurality of sampling points and the coordinate data of the plurality of reference points.
In some embodiments, the coordinate transformation matrix may be an affine matrix, and the establishing of the coordinate transformation matrix may be based on affine transformation calculation performed on the plurality of sampling points and the plurality of reference points to obtain the affine matrix of the affine transformation. The affine transformations may include Translation (Translation), scaling (Scale), Flip (Flip), Rotation (Rotation), and clipping (Shear).
Specifically, the affine matrix may take the form

A = \begin{pmatrix} a_1 & a_2 & t_x \\ a_3 & a_4 & t_y \\ 0 & 0 & 1 \end{pmatrix}

where (t_x, t_y) indicates the amount of translation and the a_i (i = 1, ..., 4) represent changes such as rotation and scaling.

The sampling points and their coordinate data and the reference points and their coordinate data satisfy

\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = A \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}

where (x, y) is the coordinate position of a sampling point and (x', y') is the coordinate position of the corresponding reference point. When enough sampling points and reference points are taken, the coordinate transformation matrix can be calculated. In some embodiments, the translation (t_x, t_y) can be taken as (0, 0), so that the coordinate transformation matrix can be solved from 4 sampling points and their coordinate data together with the corresponding 4 reference points and their coordinate data.
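A minimal sketch of solving the linear part with the translation fixed to (0, 0), using a least-squares solve over the sampling point / reference point pairs; this is an assumed numerical route, and libraries such as OpenCV offer equivalent estimators (e.g. cv2.getAffineTransform from 3 point pairs):

```python
import numpy as np

def solve_transform(sample_points, reference_points):
    """Solve a1..a4 of the affine matrix with (tx, ty) fixed to (0, 0),
    from corresponding sample (image) / reference (target) point pairs."""
    S = np.asarray(sample_points, dtype=float)     # (n, 2) image coordinates
    R = np.asarray(reference_points, dtype=float)  # (n, 2) target coordinates
    # R = S @ L.T for the 2x2 linear part L, so lstsq returns L transposed.
    Lt, *_ = np.linalg.lstsq(S, R, rcond=None)
    A = np.eye(3)
    A[:2, :2] = Lt.T
    return A  # 3x3 affine matrix with zero translation
```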
In some embodiments, an affine transformation matrix may be generated for each road and pre-stored based on a plurality of sampling points of each road and coordinate data thereof, and the pre-stored affine transformation matrix may be applied to coordinate transformation of the corresponding road region. For example, if the intersection is an intersection, 4 affine transformation matrices generated for 4 roads may be generated and pre-stored, and corresponding affine transformation matrices are selected according to the driving roads of the vehicle for processing, thereby improving the detection accuracy.
Step 308, calculating the coordinate positions of the target vehicle in the plurality of images with the coordinate transformation matrix to obtain the plurality of track points.
In some embodiments, operating on the coordinate positions of the target vehicle in the plurality of images with the coordinate transformation matrix may mean multiplying each coordinate position of the target vehicle by the coordinate transformation matrix; the specific calculation may take the form

\begin{pmatrix} x_t \\ y_t \\ 1 \end{pmatrix} = A \begin{pmatrix} x_s \\ y_s \\ 1 \end{pmatrix}

where (x_s, y_s) is the coordinate position of the target vehicle in the initial coordinate system and (x_t, y_t) is the coordinate position of the target vehicle in the target coordinate system.
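Applying the matrix to all coordinate positions at once, continuing the sketch above (homogeneous coordinates with a trailing 1):

```python
import numpy as np

def map_track_points(image_points, A):
    """Map coordinate positions from the initial (image) coordinate system
    into the target coordinate system using the 3x3 affine matrix A."""
    pts = np.asarray(image_points, dtype=float)              # (n, 2)
    homogeneous = np.hstack([pts, np.ones((len(pts), 1))])   # (n, 3)
    mapped = homogeneous @ A.T                               # apply A row-wise
    return mapped[:, :2]                                     # drop the trailing 1
```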
Fig. 5 is an exemplary sub-flow diagram illustrating the determination of straight-going in step 206 according to some embodiments of the invention.
Step 502, generating one or more trajectory point vectors based on the plurality of trajectory points.
In some embodiments, the trace point vector may be a plurality of vectors formed by connecting the designated trace point with all other trace points. The specified trajectory point may be a starting trajectory point.
Step 504, calculating included angles related to the one or more track point vectors to obtain one or more track point vector included angles.
In some embodiments, the reference vector may be a vector established in the initial coordinate system or the target coordinate system for determining the direction or angle of the track point vector. The reference vector can also be a vector established in the first reference coordinate system for judging the vector direction or angle of the track point. For example, the reference vector may be a vector in the initial coordinate system or the target coordinate system or the first reference coordinate system that is parallel to the extending direction along the driving road of the vehicle. The reference vector may be a vector along the x-axis of the certain coordinate system, or may be a vector along the y-axis of the certain coordinate system.
In some embodiments, the included angle of the trace point vector may be an included angle between the trace point vector and a reference vector, an included angle between the trace point vector and a unit vector along an x axis, or an included angle between the trace point vector and a unit vector along a y axis. In some embodiments, the trace point vector included angle may also be an included angle between two trace point vectors of the trace point vector group, which are the maximum included angle and the minimum included angle with respect to the reference vector. The included angle between the two track point vectors with the maximum included angle and the minimum included angle of the reference vector in the track point vectors can be calculated by the included angle difference between the two track point vectors and the reference vector. In some embodiments, in order to improve the detection accuracy, N track point vectors having the largest included angle with the reference vector and M track point vectors having the smallest included angle with the reference vector may be filtered simultaneously in the step of obtaining track points (e.g., step 204), where N is greater than or equal to 1, and M is greater than or equal to 1.
Step 506, comparing the one or more track point vector included angles with a preset angle threshold.
In some embodiments, the preset angle threshold may be a manually set angle value, for example 5°, 7°, 9°, or 10°. In some embodiments, a single angle threshold is suitable for judging the track point vector included angles of target vehicles in all road areas of the intersection. In some embodiments, whether the angle threshold needs adjusting may be determined by whether the judgment result matches the actual situation: if it matches, no adjustment is needed; if not, the threshold is increased or decreased accordingly. For example, if a turning vehicle is judged to be going straight, the angle threshold should be decreased; conversely, if a straight-moving vehicle is judged to be turning, the angle threshold should be increased.
Step 508, in response to the one or more track point vector included angles all being smaller than the preset angle threshold, determining that the target vehicle is going straight.
In some embodiments, each of the plurality of trajectory point vector angles may be compared with an angle threshold, and if each of the trajectory point vector angles is smaller than the angle threshold, it is determined that the target vehicle is moving straight; and if any one of the track point vector included angles is larger than the angle threshold value, determining that the target vehicle does not move straight. In some embodiments, a maximum included angle among the included angles of the plurality of trace point vectors may also be compared with an angle threshold, and if the maximum included angle is smaller than the angle threshold, it is determined that the target vehicle is moving straight; and if the maximum included angle is larger than the angle threshold value, determining that the target vehicle does not move straight.
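As an illustration only, a minimal Python sketch of this straight-travel test; the reference vector and the 7° threshold are example choices, not values prescribed above:

import numpy as np

def is_going_straight(track_pts, reference, angle_threshold_deg=7.0):
    """Connect the starting track point to every later track point and report
    straight travel when every vector's angle to the reference vector stays
    below the threshold."""
    track_pts = np.asarray(track_pts, dtype=float)
    start = track_pts[0]
    ref = np.asarray(reference, dtype=float)
    ref = ref / np.linalg.norm(ref)
    for p in track_pts[1:]:
        v = p - start
        n = np.linalg.norm(v)
        if n == 0:                                # coincident points carry no angle
            continue
        cos_a = np.clip(np.dot(v / n, ref), -1.0, 1.0)
        if np.degrees(np.arccos(cos_a)) >= angle_threshold_deg:
            return False
    return True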
FIG. 6 is an exemplary sub-flow diagram illustrating the determination of a steering direction according to some embodiments of the invention.
FIG. 7 is a schematic diagram illustrating a decision rule for determining a steering direction according to some embodiments of the present invention.
Step 602, generating a start-and-end-point vector based on the plurality of track points.
In some embodiments, the start-and-end-point vector may be a vector connecting the starting track point and the track end point, directed from the starting track point to the track end point. Specifically, if the coordinates of the starting track point are (x0, y0) and the coordinates of the track end point are (x1, y1), the coordinate representation of the start-and-end-point vector may take the form (x1 - x0, y1 - y0).
Step 604, determining a quadrant of the start and end point vector in a first reference coordinate system.
The first reference coordinate system is a rectangular coordinate system, and the quadrants are a first quadrant, a second quadrant, a third quadrant and a fourth quadrant respectively. In some embodiments, the quadrant of the start and end point vector in the first reference coordinate system may be understood as the quadrant in which the start and end point vector is located when the start point of the start and end point vector is translated to coincide with the origin of the first reference coordinate system.
In some embodiments, determining the quadrant of the start-and-end-point vector in the first reference coordinate system may be: compute the products of the start-and-end-point vector with unit vectors along the two coordinate axes, and determine the quadrant based on the signs of the products. The product may be a vector dot product.

More specifically, in some embodiments, let value_x be the dot product of the start-and-end-point vector with the x-axis unit vector, and value_y its dot product with the y-axis unit vector. The specific rule for determining the quadrant based on the signs of the products is as follows:

value_x > 0 and value_y < 0: the first quadrant.

value_x < 0 and value_y < 0: the second quadrant.

value_x < 0 and value_y > 0: the third quadrant.

value_x > 0 and value_y > 0: the fourth quadrant.
In some embodiments, determining the quadrant of the start-and-end-point vector in the first reference coordinate system may also be: output the coordinate representation (x, y) of the start-and-end-point vector, where x = x1 - x0 and y = y1 - y0, with (x0, y0) the coordinates of the starting track point and (x1, y1) the coordinates of the track end point, and judge the quadrant from the signs of x and y. The judgment rule is as follows:

x > 0 and y < 0: the first quadrant.

x < 0 and y < 0: the second quadrant.

x < 0 and y > 0: the third quadrant.

x > 0 and y > 0: the fourth quadrant.
In some embodiments, determining the quadrant of the start-and-end-point vector in the first reference coordinate system may further be: translate the start-and-end-point vector in the first reference coordinate system so that its start point coincides with the origin of the first reference coordinate system; output the end point coordinates (x'1, y'1) of the translated vector; and judge the quadrant from the signs of x'1 and y'1. The judgment rule is as follows:

x'1 > 0 and y'1 < 0: the first quadrant.

x'1 < 0 and y'1 < 0: the second quadrant.

x'1 < 0 and y'1 > 0: the third quadrant.

x'1 > 0 and y'1 > 0: the fourth quadrant.
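As an illustration, a minimal sketch of the sign-based quadrant rule above. Note that the stated convention (x > 0 with y < 0 giving the first quadrant) matches an image coordinate system whose y-axis points downward; the axis-aligned case is not covered by the rules above, so it is flagged explicitly:

def quadrant_of(start, end):
    """Quadrant of the start-and-end-point vector under the sign rules stated
    above: x>0,y<0 -> 1; x<0,y<0 -> 2; x<0,y>0 -> 3; x>0,y>0 -> 4."""
    x = end[0] - start[0]
    y = end[1] - start[1]
    if x > 0 and y < 0:
        return 1
    if x < 0 and y < 0:
        return 2
    if x < 0 and y > 0:
        return 3
    if x > 0 and y > 0:
        return 4
    # A vector lying exactly on a coordinate axis is not covered by the rules.
    raise ValueError("start-and-end-point vector lies on a coordinate axis")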
Step 606, determining, according to the plurality of track points, the concavity and convexity of the driving track of the target vehicle relative to the straight line on which the start-and-end-point vector lies.

The concavity and convexity of a track mean: if any arc segment of the track lies below the chord it subtends, the track is concave; if any arc segment of the track lies above the chord it subtends, the track is convex.
In some embodiments, determining the concavity and convexity of the vehicle driving track from the track points may specifically be: generate a linear equation y = ax + b based on the starting track point and the track end point, giving the function f(x) = ax + b; select the horizontal-axis coordinate value of any track point other than the starting and ending track points, substitute it into the function, and take the resulting function value as the reference vertical-axis coordinate value of that track point; compare the reference vertical-axis coordinate value with the track point's vertical-axis coordinate value; and determine the concavity and convexity of the driving track based on the comparison result.
More specifically, the values of y and y' can be compared, where y' is the reference vertical-axis coordinate value and y is the vertical-axis coordinate value of the track point, and the concavity and convexity of the track are determined based on the comparison result. The specific rule is as follows:

if y > y', the trajectory is convex;

if y < y', the trajectory is concave.
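As an illustration, a minimal sketch of this chord comparison. A majority vote over the intermediate track points is used here for robustness, which goes slightly beyond the single-point check described above, and x1 != x0 is assumed:

def convexity_by_chord(track_pts):
    """Compare each intermediate track point's y with the reference value
    y' = a*x + b on the start-to-end chord; per the rule above, y > y' votes
    convex and y < y' votes concave."""
    (x0, y0), (x1, y1) = track_pts[0], track_pts[-1]
    a = (y1 - y0) / (x1 - x0)                     # chord slope, assumes x1 != x0
    b = y0 - a * x0
    # Assumes at least one intermediate track point exists.
    votes = sum(1 if y > a * x + b else -1 for x, y in track_pts[1:-1])
    return "convex" if votes > 0 else "concave"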
In some embodiments, determining the concavity and convexity of the vehicle driving track may also be: fit the track points to generate a continuous curve; compute function values at points on the curve; and determine the concavity and convexity of the track based on a comparison of the results.

More specifically, in some embodiments, the fitting may yield a trajectory equation y = f(x) for the target vehicle's track. The fitting includes linear fitting and non-linear fitting, and fitting algorithms include but are not limited to least squares algorithms, image thinning algorithms, parametric curve model algorithms, and the like. The trajectory equation y = f(x) is continuous. Let the abscissas of any two points on the curve be x1 and x2. The operation may be to compare the function value at the midpoint, f((x1 + x2) / 2), with the average of the function values, (f(x1) + f(x2)) / 2. The specific rule for determining the concavity and convexity of the track based on the comparison result is as follows:

if f((x1 + x2) / 2) < (f(x1) + f(x2)) / 2, the trajectory f(x) is concave;

if f((x1 + x2) / 2) > (f(x1) + f(x2)) / 2, the trajectory f(x) is convex.
In some embodiments, determining the concavity and convexity of the vehicle driving track may also be: fit the track points to generate a continuous curve with curve equation y = f(x); take the second derivative f''(x) of the function f(x); and determine the concavity and convexity of the track based on the sign of f''(x).

More specifically, the rule for determining the concavity and convexity of the track based on the sign of the second derivative f''(x) is:

if f''(x) > 0, the trajectory is concave;

if f''(x) < 0, the trajectory is convex.
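As an illustration, a minimal sketch of this second-derivative variant; fitting y = f(x) with a quadratic is one possible fitting choice assumed here, not one prescribed above:

import numpy as np

def convexity_by_second_derivative(track_pts):
    """Least-squares fit y = f(x) with a quadratic, take f''(x), and read
    concave for f'' > 0 and convex for f'' < 0, per the rule above."""
    pts = np.asarray(track_pts, dtype=float)
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], deg=2)   # a*x^2 + b*x + c
    second = 2.0 * coeffs[0]                           # f''(x) = 2a, constant for a quadratic
    return "concave" if second > 0 else "convex"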
Step 608, determining that the target vehicle turns left or turns right based on the quadrant in which the start-and-end-point vector is located and the concavity and convexity of the driving track.
Referring further to fig. 7, determining that the target vehicle turns left or right based on the quadrant of the start-and-end-point vector and the concavity and convexity of the driving track is specifically:

when the start-and-end-point vector is located in the first quadrant of the first reference coordinate system, the target vehicle is determined to turn right if the track is convex, and to turn left if the track is concave;

when the start-and-end-point vector is located in the second quadrant of the first reference coordinate system, the target vehicle is determined to turn left if the track is convex, and to turn right if the track is concave;

when the start-and-end-point vector is located in the third quadrant of the first reference coordinate system, the target vehicle is determined to turn left if the track is convex, and to turn right if the track is concave;

when the start-and-end-point vector is located in the fourth quadrant of the first reference coordinate system, the target vehicle is determined to turn right if the track is convex, and to turn left if the track is concave.

It should be noted that the above description of flow 600 is for purposes of example and illustration only and does not limit the scope of application of one or more embodiments of the present description. Various modifications and alterations to flow 600 may occur to those skilled in the art under the guidance of one or more embodiments of this description; such modifications and variations remain within the scope of one or more embodiments of the present description. For example, in some embodiments, the order of steps 604 and 606 may be interchanged.
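As an illustration, the decision rule of fig. 7 condenses to a few lines of Python; this sketch combines the quadrant and convexity results from the helpers sketched above:

def turn_direction(quadrant, shape):
    """Per the rule above: in quadrants 1 and 4 a convex track means a right
    turn; in quadrants 2 and 3 a convex track means a left turn; a concave
    track gives the opposite answer."""
    convex_means_right = quadrant in (1, 4)
    if shape == "convex":
        return "right" if convex_means_right else "left"
    return "left" if convex_means_right else "right"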
The invention also provides a device for identifying the driving direction of the vehicle at the intersection, which comprises at least one processor and at least one memory;
the at least one memory is for storing computer instructions;
the at least one processor is configured to execute at least a portion of the computer instructions to implement the method for identifying a driving direction of a vehicle at an intersection as described in any of the above embodiments.
The at least one memory is for storing computer instructions. The memory may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), the like, or any combination thereof. Exemplary mass storage devices may include magnetic disks, optical disks, solid state disks, and the like. Exemplary removable memory may include flash drives, floppy disks, optical disks, memory cards, compact disks, magnetic tape, and the like. Exemplary volatile read-write memories may include random access memory (RAM). Exemplary RAM may include dynamic random access memory (DRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), static random access memory (SRAM), thyristor random access memory (T-RAM), zero-capacitance random access memory (Z-RAM), and the like. Exemplary ROM may include mask read-only memory (MROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), compact disk read-only memory (CD-ROM), digital versatile disk read-only memory, and the like.
The at least one processor is configured to execute at least a portion of the computer instructions to implement the method for identifying a driving direction of a vehicle at an intersection as described in any of the above embodiments. The processor may include at least one hardware processor, such as a microcontroller, microprocessor, reduced instruction set computer (RISC), application specific integrated circuit (ASIC), application specific instruction set processor (ASIP), central processing unit (CPU), graphics processing unit (GPU), physical processing unit (PPU), microcontroller unit, digital signal processor (DSP), field programmable gate array (FPGA), advanced RISC machine (ARM), programmable logic device (PLD), any circuit or processor capable of performing at least one function, and the like, or any combination thereof.
Based on the method for identifying the driving direction of the vehicle at the intersection, the invention also provides a computer-readable storage medium, wherein the storage medium stores computer instructions, and after the computer reads the computer instructions in the storage medium, the computer executes the method for identifying the driving direction of the vehicle at the intersection according to any one of the embodiments.
The beneficial effects that embodiments of the invention may bring include but are not limited to: (1) the processing flow is simple and the computation is lightweight; (2) the method adapts to different camera angles and can handle non-straight vehicle driving tracks, giving it wide applicability; (3) the method can count all directions of the intersection simultaneously with low complexity. It should be noted that different embodiments may produce different advantages, and in different embodiments the advantages may be any one or a combination of the above, or any other advantage that may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting. Various modifications, improvements and adaptations of the present invention may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are suggested by the present invention and fall within the spirit and scope of its exemplary embodiments.
Also, specific terms are used to describe embodiments of the invention. Terms such as "one embodiment," "an embodiment," and/or "some embodiments" mean that a feature, structure, or characteristic is described in connection with at least one embodiment of the invention. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, features, structures, or characteristics of one or more embodiments of the present invention may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of the invention may be illustrated and described in terms of several patentable species or situations, including any new and useful process, machine, product, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present invention may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present invention may take the form of a computer product embodied in one or more computer-readable media and including computer-readable program code.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of the present invention may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python; conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic programming languages such as Python, Ruby, and Groovy; or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or processing device. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or the code may run in a cloud computing environment or be provided as a service, such as software as a service (SaaS).
Additionally, the order in which process elements and sequences are described, and the use of letters or other designations herein, are not intended to limit the order of the processes and methods of the invention unless otherwise indicated by the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it should be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments of the invention. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing processing device or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the invention, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in the claims. Indeed, claimed subject matter may lie in less than all features of a single embodiment disclosed above.
Some embodiments use numbers to describe quantities of components, attributes, and the like; it should be understood that such numbers are, in some instances, qualified by the modifiers "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending on the desired properties of individual embodiments. In some embodiments, numerical parameters should take the specified significant digits into account and be rounded in a general manner. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the invention are approximations, in specific examples such numerical values are set as precisely as practicable.
Each patent, patent application, patent application publication, and other material, such as articles, books, specifications, publications, and documents, cited in connection with the present invention is hereby incorporated by reference in its entirety. Excluded are any application history documents inconsistent with or conflicting with the content of this description, and any documents, currently or subsequently attached, that limit the broadest scope of the claims to which the invention pertains. It should be noted that where the description, definition, and/or use of a term in the materials appended to this description is inconsistent or in conflict with the content described herein, the description, definition, and/or use of the term in this description shall control.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of embodiments of the present invention. Other variations are possible within the scope of the invention. Thus, by way of example, and not limitation, alternative configurations of embodiments of the present invention can be viewed as being consistent with the teachings of the present invention. Accordingly, the embodiments of the invention are not limited to only those explicitly described and depicted.

Claims (19)

1. A method for identifying a direction of motion of an object at an intersection, the method comprising:
acquiring a plurality of images of an intersection, wherein at least one of the plurality of images comprises a target object;
obtaining a plurality of track points on the motion track of the target object based on the coordinate positions of the target object in the plurality of images;
and analyzing the position relation of the plurality of track points according to a preset algorithm, and determining the motion direction of the target object.
2. The method of claim 1, wherein the plurality of images are taken from video data of the intersection.
3. The method of claim 1, wherein the direction of motion comprises at least one of going straight, turning left, or turning right.
4. The method according to claim 1, wherein analyzing the position relationship of the plurality of track points according to a preset algorithm, and determining the motion direction of the target object comprises:
generating one or more trajectory vectors based on the plurality of trajectory points;
determining at least one frame of reference;
and determining the motion direction of the target object based on the relative position relation of one or more track vectors and at least one reference frame.
5. The method of claim 4, wherein the frame of reference comprises a reference vector;
analyzing the position relation of the plurality of track points according to a preset algorithm to determine the motion direction of the target object, wherein the method comprises the following steps:
generating one or more trajectory point vectors based on the plurality of trajectory points;
calculating included angles related to one or more tracing point vectors to obtain one or more tracing point vector included angles; the one or more trace point vector included angles comprise included angles between one or more trace point vectors and the reference vector or comprise a difference value between a maximum included angle and a minimum included angle in included angles between one or more trace point vectors and the reference vector;
comparing the included angle of the one or more tracing point vectors with a preset angle threshold value;
and determining that the target object is going straight in response to the one or more track point vector included angles all being smaller than a preset angle threshold.
6. The method of claim 5, wherein the reference vector reflects an extension direction of at least one entry road of the target object in the plurality of images; the target object enters the intersection through the entrance road.
7. The method of claim 5, wherein the one or more track point vectors include vectors pointing from a specified track point of the plurality of track points to one or more other track points.
8. The method of claim 7, wherein the designated track point is a start track point.
9. The method of claim 5, wherein the frame of reference comprises a first frame of reference;
in response to the one or more track point vector included angles not all being less than a preset angle threshold, executing the following steps:
generating starting and ending point vectors based on the plurality of track points;
determining a quadrant of the starting and ending point vector in a first reference coordinate system; a quadrant of the start and end point vector in a first reference coordinate system is the quadrant where the start and end point vector is located when the start and end point vector is translated to enable the start point of the start and end point vector to coincide with the origin of the first reference coordinate system;
and,
determining the concavity and convexity of the motion trail of the target object relative to the straight line where the start and end point vectors are located according to the plurality of track points;
and determining that the target object turns left or right based on the quadrant in which the start and end point vector is located and the concavity and convexity of the motion trail.
10. The method of claim 4, wherein the frame of reference comprises a first frame of reference;
analyzing the position relationship of the plurality of track points according to a preset algorithm, and determining the motion direction of the target object comprises:
generating starting and ending point vectors based on the plurality of track points;
determining a quadrant of the start and end point vector in a first reference coordinate system; a quadrant of the start and end point vector in a first reference coordinate system is the quadrant where the start and end point vector is located when the start and end point vector is translated to enable the start point of the start and end point vector to coincide with the origin of the first reference coordinate system;
and,
determining the concavity and convexity of the motion trail of the target object relative to the straight line where the start and end point vectors are located according to the plurality of track points;
and determining that the target object turns left or right based on the quadrant in which the start and end point vector is located and the concavity and convexity of the motion trail.
11. The method according to claim 9 or 10, characterized in that at least one coordinate axis of the first reference coordinate system reflects an extension direction of at least one road in the plurality of images.
12. The method according to claim 11, wherein at least one coordinate axis of the first reference coordinate system reflects an extension direction of at least one approach road of the target object in the plurality of images; the target object enters the intersection through the entrance road.
13. The method of claim 1, wherein obtaining a plurality of trajectory points on the motion trajectory of the target object based on the coordinate positions of the target object in the plurality of images further comprises:
mapping the coordinate positions of the target object in the plurality of images to a target coordinate system to obtain the plurality of track points; wherein, in the target coordinate system, the extension direction of at least one road in the plurality of images is parallel to one of the coordinate axes.
14. The method of claim 13, wherein obtaining a plurality of trajectory points on the motion trajectory of the target object based on the coordinate positions of the target object in the plurality of images further comprises:
acquiring a plurality of sampling points and coordinate data thereof in at least one image of a plurality of images;
acquiring a plurality of reference points in a target coordinate system and coordinate data thereof;
establishing a coordinate transformation matrix based on the coordinate data of the plurality of sampling points and the coordinate data of the plurality of reference points;
and calculating the coordinate positions of the target object in the plurality of images and the coordinate transformation matrix to obtain the plurality of track points.
15. The method of claim 14, wherein the acquiring of the plurality of sample points and their coordinate data in at least one of the plurality of images comprises:
receiving a plurality of annotation points depicted by a user in at least one of the plurality of images; the plurality of annotation points are not collinear;
generating a canonical rectangle based on the plurality of annotation points;
determining vertices of the canonical rectangle and their coordinate data in at least one of the plurality of images as the plurality of sample points and their coordinate data.
16. The method of claim 15, wherein the plurality of annotation points are located on road markings in at least one of the plurality of images, and the plurality of annotation points form a quadrilateral.
17. A system for identifying a direction of motion of a vehicle at an intersection, the system comprising:
an acquisition module for acquiring a plurality of images of an intersection, at least one of the plurality of images including a target object;
the track point determining module is used for obtaining a plurality of track points on the motion track of the target object based on the coordinate positions of the target object in the plurality of images;
and the processing module is used for analyzing the position relation of the plurality of track points according to a preset algorithm and determining the motion direction of the target object.
18. An apparatus for identifying a direction of motion of a vehicle at an intersection, the apparatus comprising at least one processor and at least one memory;
the at least one memory is for storing computer instructions;
the at least one processor is configured to execute at least some of the computer instructions to implement the operations of any of claims 1-16.
19. A computer-readable storage medium storing computer instructions, wherein when at least some of the computer instructions are executed by a processor, the operations of any one of claims 1-16 are performed.