CN113075716A - Image-based vehicle positioning method and device, storage medium and electronic equipment - Google Patents

Image-based vehicle positioning method and device, storage medium and electronic equipment

Info

Publication number
CN113075716A
Authority
CN
China
Prior art keywords
vehicle
current
determining
reference point
detection frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110300526.1A
Other languages
Chinese (zh)
Inventor
孔兰芳
孙韶言
赵澄玥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Horizon Shanghai Artificial Intelligence Technology Co Ltd
Original Assignee
Horizon Shanghai Artificial Intelligence Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Horizon Shanghai Artificial Intelligence Technology Co Ltd
Priority to CN202110300526.1A
Publication of CN113075716A

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/45Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/46Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being of a radio-wave signal type
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/12Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Traffic Control Systems (AREA)

Abstract

Disclosed are an image-based vehicle positioning method and apparatus, a storage medium, and an electronic device, the method including: determining at least one history detection frame about the vehicle in at least one history frame image shot by the road monitoring equipment; determining a current detection frame of the vehicle in the current frame image; determining a current driving direction of the vehicle based on at least one history detection frame and the current detection frame; determining a first reference point on the current detection frame based on the current driving direction and the relative position relationship between the vehicle and the road monitoring equipment; determining a positioning reference point in a target coordinate system based on the first reference point; and determining the current position of the vehicle in the target coordinate system according to the positioning reference point, the current driving direction and the size parameter of the vehicle. The driving direction of the vehicle is determined based on the multi-frame images, the positioning reference points are determined based on a preset strategy, and the position of the vehicle is determined by combining the size parameters of the vehicle, so that the vehicle can be positioned without using sensors such as radars and the like, and the cost is reduced.

Description

Image-based vehicle positioning method and device, storage medium and electronic equipment
Technical Field
The application relates to the technical field of intelligent driving, in particular to a vehicle positioning method and device based on images, a storage medium and electronic equipment.
Background
With the advance of technology, intelligent transportation has become an inevitable trend of future development, and vehicle-road cooperation, one of its core technologies, is receiving much attention. In vehicle-road cooperation, accurately locating the position of a vehicle on the map can eliminate blind spots in the vehicle's driving field of view, enable real-time dynamic information interaction between vehicle and road, vehicle and vehicle, and vehicle and person, and effectively avoid traffic accidents such as collisions at crowded intersections. In the prior art, sensors such as radar are used to obtain depth information of a vehicle, and the vehicle is then modeled in three dimensions based on the depth information to locate it, which is costly. How to accurately locate a vehicle on the map at low cost has therefore become an urgent problem in vehicle-road cooperation technology.
Disclosure of Invention
The present application is proposed to solve the above-mentioned technical problems. Embodiments of the present application provide an image-based vehicle positioning method and apparatus, a storage medium, and an electronic device.
According to a first aspect of the present application, there is provided an image-based vehicle localization method, comprising: determining at least one history detection frame about the vehicle in at least one history frame image shot by the road monitoring equipment; determining a current detection frame of the vehicle in the current frame image; determining a current driving direction of the vehicle based on at least one history detection frame and the current detection frame; determining a first reference point on the current detection frame based on the current driving direction and the relative position relationship between the vehicle and the road monitoring equipment; determining a positioning reference point in a target coordinate system based on the first reference point; and determining the current position of the vehicle in the target coordinate system according to the positioning reference point, the current driving direction and the size parameter of the vehicle.
According to a second aspect of the present application, there is provided a vehicle positioning device comprising: a first determining module, configured to determine at least one history detection frame of a vehicle in at least one history frame image captured by a road monitoring device; a second determining module, configured to determine a current detection frame of the vehicle in the current frame image; a direction determining module, configured to determine a current driving direction of the vehicle based on the at least one history detection frame and the current detection frame; a reference point determining module, configured to determine a first reference point on the current detection frame based on the current driving direction and the relative positional relationship between the vehicle and the road monitoring device; a datum point determining module, configured to determine a positioning reference point in a target coordinate system based on the first reference point; and a positioning module, configured to determine the current position of the vehicle in the target coordinate system according to the positioning reference point, the current driving direction, and the size parameter of the vehicle.
According to a third aspect of the present application, there is provided a computer readable storage medium storing a computer program for executing the image based vehicle localization method provided herein.
According to a fourth aspect of the present application, there is provided an electronic device comprising: a processor and a memory for storing processor-executable instructions. The processor is used for executing the image-based vehicle positioning method provided by the application.
According to the image-based vehicle positioning method and device, the storage medium, and the electronic device provided by the present application, the current driving direction of the vehicle is determined based on multi-frame images captured by the road monitoring device; then, a first reference point is selected on the current detection frame in the current frame image according to a preset strategy to approximate a point on the vehicle, namely, a positioning reference point; and the current position of the vehicle is then determined according to the current driving direction, the positioning reference point, and the size parameter of the vehicle. Because the first reference point can be selected based on the current driving direction and the relative positional relationship between the vehicle and the road monitoring device and mapped into the target coordinate system to obtain the positioning reference point, and the size of the vehicle can be determined from the vehicle's parameters, the positioning accuracy of the vehicle in the target coordinate system can be ensured. In addition, because the vehicle is positioned from images, it can be positioned without using sensors such as radar, which reduces cost.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a scene diagram to which the image-based vehicle positioning method provided in the present application is applied.
FIG. 2 is a schematic flow chart diagram illustrating an image-based vehicle localization method according to an exemplary embodiment of the present application.
Fig. 3 is a schematic diagram illustrating a superimposed multi-frame image according to an exemplary embodiment of the present application.
FIG. 4 is a schematic flow chart diagram illustrating an image-based vehicle localization method according to another exemplary embodiment of the present application.
Fig. 5a is a schematic diagram of a current frame image according to an exemplary embodiment of the present application.
Fig. 5b is a schematic diagram of a current frame image according to another exemplary embodiment of the present application.
Fig. 5c is a schematic diagram of a current frame image according to another exemplary embodiment of the present application.
Fig. 6 is a schematic diagram of the position relationship of the second reference points of the multi-frame image in fig. 3 in the target coordinate system.
FIG. 7 is a schematic flow chart diagram illustrating an image-based vehicle localization method according to another exemplary embodiment of the present application.
Fig. 8 is a schematic diagram of the relative positional relationship, in the target coordinate system, between the vehicle and the road monitoring device corresponding to the current frame image in fig. 3, according to an exemplary embodiment of the present application.
Fig. 9 is a flowchart illustrating an image-based vehicle positioning method according to still another exemplary embodiment of the present application.
Fig. 10 is a schematic diagram of a position of a vehicle in a target coordinate system according to an exemplary embodiment of the present application.
Fig. 11 is a block diagram of a vehicle positioning device according to an exemplary embodiment of the present application.
Fig. 12 is a block diagram of a vehicle positioning device according to another exemplary embodiment of the present application.
FIG. 13 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Summary of the application
Currently, in the field of vehicle-road coordination, in order to realize vehicle positioning, a vehicle positioning technology based on three-dimensional bounding Box (3D Box for short) detection is generally adopted. However, the vehicle positioning technology based on 3D Box detection needs to acquire depth information of a vehicle by using a laser radar, and then three-dimensionally model the vehicle based on the depth information, which has the problems of high cost and large calculation amount. Based on the current situation, the application provides an image-based vehicle positioning method, which comprises the steps of analyzing and processing images shot by a road monitoring device, and calculating the current driving direction of a vehicle by using the displacement of a detection frame surrounding the same vehicle in a current frame image and at least one historical frame image; selecting a point on the current detection frame according to a preset strategy to approximate a point on the vehicle, namely positioning a reference point; and finally, determining the current position of the vehicle by combining the current driving direction, the positioning datum point and the size parameter of the vehicle. Therefore, according to the image-based vehicle positioning method provided by the embodiment of the application, the first reference point can be selected based on the current driving direction and the relative position relationship between the vehicle and the road monitoring equipment, and is mapped to the target coordinate system to obtain the positioning reference point, and the size of the vehicle can be determined according to the parameters of the vehicle, so that the positioning precision of the vehicle in the target coordinate system can be ensured.
Exemplary System
Fig. 1 is a scene diagram to which the image-based vehicle positioning method provided in the present application is applied. The image-based vehicle positioning method is suitable for the field of automatic driving, as shown in fig. 1, an unmanned vehicle 1 runs on a preset road 2, and road monitoring equipment 3 is arranged around the road 2. The road monitoring device 3 can take a picture or record a video of objects passing within its field of view. The image-based vehicle positioning method provided according to the present application may be performed by a positioning device, which may be integrated in the road monitoring apparatus 3, or integrated in a server (not shown in the figure) communicatively connected to the road monitoring apparatus 3, or may be disposed in the vehicle 1 as an on-board device.
After the road monitoring device 3 captures an image of the vehicle 1, the specific implementation process of the image-based vehicle positioning method includes: firstly, analyzing and processing the images captured by the road monitoring device 3, and calculating the current driving direction of the vehicle from the displacement of the detection frames surrounding the same vehicle in the current frame image and at least one historical frame image; then, selecting a reference point on the current detection frame according to a preset strategy to approximate a point on the vehicle, namely, a positioning reference point; and finally, determining the current position of the vehicle by combining the current driving direction, the positioning reference point, and the size parameter of the vehicle, thereby positioning the vehicle. The obtained vehicle position information can subsequently be used to control the vehicle to drive autonomously, and can also be used for road management, such as vehicle diversion management and control, so as to realize vehicle-road cooperation.
Exemplary method
FIG. 2 is a schematic flow chart diagram illustrating an image-based vehicle localization method according to an exemplary embodiment of the present application. The present embodiment is exemplarily described with reference to fig. 1, and as shown in fig. 2, the image-based vehicle localization method 100 includes the following steps:
in step S110, at least one history detection frame about the vehicle in at least one history frame image captured by the road monitoring device is determined.
Specifically, the road monitoring device is, for example, the road monitoring device 3 shown in fig. 1, and the road monitoring device 3 may be, for example, a camera. The road monitoring device can take pictures or record video of objects passing through its field of view, so as to obtain at least one historical frame image and, subsequently, the current frame image.
The at least one history detection frame is obtained by utilizing a target detection technology and a target tracking technology and stored in a storage medium when the at least one history frame image is taken as a current frame image, and the at least one history detection frame surrounds the same vehicle in the at least one history frame image. In this embodiment, at least one history detection frame may be retrieved directly from the storage medium for vehicle localization in the current frame image.
Step S120, determining a current detection frame of the vehicle in the current frame image captured by the road monitoring device.
Specifically, after the current frame image is captured by the road monitoring device, objects of interest in the current frame image are first detected using a target detection algorithm to obtain their corresponding position coordinates, classification, confidence, and the like; then, feature matching is performed between the detected objects of interest and the objects of interest in the at least one stored historical frame image, and similarity is calculated, so as to identify the vehicle among the objects of interest in the current frame image and thereby obtain the current detection frame surrounding the vehicle.
In step S130, a current driving direction of the vehicle is determined based on at least one of the history detection frame and the current detection frame.
At least one history detection frame and a current detection frame respectively surround the same vehicle, and the arrangement sequence of the plurality of detection frames in the image coordinate system records the driving track of the vehicle, so that the driving direction of the vehicle can be determined based on the plurality of detection frames.
In one embodiment, the movement track of the vehicle 1 is taken to be the movement track of the same predetermined point on the current detection frame and the at least one historical detection frame; accordingly, the direction of movement of that predetermined point at the current detection frame is the current driving direction of the vehicle 1.
Specifically, fig. 3 is a schematic diagram illustrating a superimposed multi-frame image according to an exemplary embodiment of the present application. The superimposed image shown in fig. 3 includes a current frame image and at least one historical frame image preceding it, overlaid in sequence; it clearly shows the current detection frame S5 and the four historical detection frames S4-S1. The arrangement order of the 5 detection frames in the superimposed image indicates the travel locus L of the vehicle formed across the five frames, as indicated by the arrow line in fig. 3, in which the arrow direction indicates the travel direction of the vehicle 1 on the image plane. Referring to fig. 3, after the current detection frame S5 and the four historical detection frames S4-S1 are determined, first the center point P0 of the current detection frame S5 is selected, i.e., the intersection point of the two lines m connecting its opposite corners; then the center points of the four historical detection frames S4-S1 are retrieved from the storage medium, giving five center points in total. The current traveling direction of the vehicle 1 is then calculated from the movement trajectory of these five center points. Alternatively, the lower-left corner point P1 of the current detection frame S5 may be selected, the lower-left corner points of the four historical detection frames S4-S1 retrieved from the storage medium to obtain five lower-left corner points, and the current driving direction of the vehicle 1 calculated from the movement trajectory of these five points. Similarly, the lower-right corner point P2, the upper-left corner point P3, or the upper-right corner point P4 may be used to calculate the current driving direction of the vehicle 1 from the corresponding movement trajectory, which will not be described in detail here.
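As a concrete illustration of this step, the following Python sketch (function and variable names are illustrative, not taken from the patent) estimates the driving direction on the image plane from the trajectory of the detection-frame center points; note that it uses the conventional (x, y) component order, whereas the formulas later in the description write the components as (y, x).

```python
import numpy as np

def box_center(box):
    """Center of an axis-aligned detection box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])

def driving_direction(history_boxes, current_box):
    """Estimate the driving direction on the image plane from the trajectory
    of the same predetermined point (here: the center point) of the detection
    boxes S1..S5, ordered from oldest to newest."""
    points = [box_center(b) for b in history_boxes] + [box_center(current_box)]
    # Simplest estimate: displacement from the oldest to the newest point.
    direction = points[-1] - points[0]
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 0 else direction

# Example with five detection boxes (four historical + one current).
history = [(100, 200, 180, 260), (130, 210, 210, 270),
           (160, 220, 240, 280), (190, 230, 270, 290)]
current = (220, 240, 300, 300)
print(driving_direction(history, current))  # unit vector of the travel direction
```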
Step S140, determining a first reference point on the current detection frame based on the current driving direction and the relative position relationship between the vehicle and the road monitoring device.
Since the detection frame closely wraps the vehicle, a point can be selected on the detection frame to approximate a point on the vehicle for vehicle positioning. Theoretically, the closer the selected first reference point on the detection frame is to the vehicle itself, the higher the positioning accuracy. The inventors have found that different positions of the vehicle relative to the road monitoring device result in different imaging angles, and accordingly the point on the detection frame closest to the vehicle itself changes. Therefore, in the present embodiment, the first reference point is determined according to the current driving direction of the vehicle 1 and the relative positional relationship between the vehicle 1 and the road monitoring device 3; that is, the selection strategy of the first reference point changes with the current driving direction of the vehicle 1 and the relative positional relationship between the vehicle 1 and the road monitoring device 3, so as to ensure that the first reference point selected for each current detection frame is as close as possible to the vehicle itself, thereby improving the positioning accuracy.
And step S150, determining a positioning reference point in the target coordinate system based on the first reference point.
The first reference point is a point on the current detection frame, i.e., a point in the image coordinate system. In order to realize the positioning of the vehicle 1, the first reference point needs to be mapped into the target coordinate system as a positioning reference point of the vehicle 1. The target coordinate system may be, for example, a map coordinate system or a GPS coordinate system.
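The description does not fix how a point in the image coordinate system is mapped into the target coordinate system. For a stationary road camera looking at a locally planar road surface, one common choice is a planar homography obtained from an offline calibration; the sketch below assumes such a homography, and the matrix values and names are purely illustrative.

```python
import numpy as np

def image_to_target(point_uv, H):
    """Map an image point (u, v) into the target coordinate system (e.g. a map
    coordinate system) with a 3x3 homography H calibrated offline for the road
    plane. Using a homography here is an assumption, not part of the patent."""
    u, v = point_uv
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

# Hypothetical calibrated homography and first reference point.
H = np.array([[0.05, 0.0, -10.0],
              [0.0, 0.07, -25.0],
              [0.0, 0.0001, 1.0]])
positioning_reference_point = image_to_target((640.0, 480.0), H)
```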
And step S160, determining the current position of the vehicle in the target coordinate system according to the positioning reference point, the current driving direction and the size parameter of the vehicle.
The dimensional parameters of the vehicle 1 include the length and width of the vehicle. In one embodiment, the length and width of the vehicle 1 are represented by the length and width of the vehicle chassis, respectively. The current position of the vehicle 1 can be determined by performing proportional transformation on the size parameters of the vehicle 1 and converting the size parameters into a target coordinate system to obtain a rectangular frame obtained by mapping the vehicle 1 into the target coordinate system, and then combining the current driving direction of the vehicle 1 and one point on the vehicle, namely a positioning reference point. In one embodiment, the center point coordinates of the vehicle 1 are selected as the current position information of the vehicle.
According to the image-based vehicle positioning method provided by this embodiment, the current driving direction of the vehicle is determined based on the multi-frame images captured by the road monitoring device; then, a first reference point is selected on the current detection frame in the current frame image according to a preset strategy to approximate a point on the vehicle, namely, a positioning reference point; and the current position of the vehicle is then determined according to the current driving direction, the positioning reference point, and the size parameter of the vehicle. Because the first reference point can be selected based on the current driving direction and the relative positional relationship between the vehicle and the road monitoring device and mapped into the target coordinate system to obtain the positioning reference point, and the size of the vehicle can be determined from the vehicle's parameters, the positioning accuracy of the vehicle in the target coordinate system can be ensured. In addition, because the vehicle is positioned from images, accurate positioning can be achieved without using sensors such as radar, which reduces cost.
FIG. 4 is a schematic flow chart diagram illustrating an image-based vehicle localization method according to another exemplary embodiment of the present application. The difference between the image-based vehicle positioning method 200 provided in the present embodiment and the image-based vehicle positioning method 100 shown in fig. 2 is that, in the process of determining the current driving direction of the vehicle according to step S130, the current second reference point is determined according to the position of the current detection frame in the current frame image, and then the current driving direction of the vehicle is determined by combining with at least one historical second reference point of at least one historical detection frame. Specifically, continuing with the exemplary description of fig. 1, as shown in fig. 4, in the image-based vehicle positioning method 200, step S130 specifically includes:
step S131, determining a current second reference point on the current detection frame according to the position of the current detection frame in the current frame image.
In this embodiment, a predetermined point on the current detection frame is selected as the current second reference point according to the position of the current detection frame in the current frame image, that is, the selection policy of the current second reference point is adaptively changed according to the position of the current detection frame in the current frame image.
In one embodiment, step S131 includes: determining the number of pixel points enclosed by the current detection frame; when the number of pixel points enclosed by the current detection frame is smaller than a first number threshold, filtering out the current detection frame; when the number of pixel points enclosed by the current detection frame is greater than or equal to the first number threshold and smaller than a second number threshold, determining the current second reference point based on the two adjacent corner points of the current detection frame whose abscissa and ordinate both change; and when the number of pixel points enclosed by the current detection frame is greater than or equal to the second number threshold, determining the current second reference point based on the center point of the current detection frame. The first number threshold is smaller than the second number threshold, and both thresholds are set reasonably according to actual conditions.
Since departments such as road supervision and the like have certain requirements on the installation position of the monitoring equipment, in the case of meeting the relevant requirements, a vehicle which normally runs usually enters the image from four edges of the image, in this case, step S131 may be specifically executed as:
firstly, the number of pixel points surrounded by the current detection frame is determined. The detection frame is determined by four pixel points, and after the detection frame is determined, the number of the pixel points surrounded by the detection frame can be determined according to two opposite angular points of the four pixel points and by combining image resolution.
And secondly, selecting a current second reference point according to different conditions of the number of the pixel points surrounded by the current detection frame.
For example, fig. 5a is a schematic diagram of a current frame image according to an exemplary embodiment of the present application. As shown in fig. 5a, less than one third of the body of the vehicle 1 has entered the image, and at this time the number of pixel points enclosed by the current detection frame is smaller than the first number threshold. In this case, the current frame image does not sufficiently reflect the traveling direction of the vehicle 1, so the current detection frame in the current frame image is filtered out.
As another example, fig. 5b is a schematic diagram of a current frame image according to another exemplary embodiment of the present application. As shown in fig. 5b, at least one third but less than the entire body of the vehicle 1 has entered the image, and at this time the number of pixel points enclosed by the current detection frame is greater than or equal to the first number threshold and smaller than the second number threshold. In this case, the midpoint of the line connecting the two adjacent corner points of the current detection frame whose abscissa and ordinate both change is selected as the current second reference point.
As shown in fig. 5b, when at least one third but less than the entire body of the vehicle 1 has entered the image, two of the four corner points forming the detection frame keep one of their coordinates unchanged, i.e., they always lie on the image edge, such as the upper-right corner point p4 and the lower-right corner point p2; the abscissa and ordinate of the other two adjacent corner points both change gradually, e.g., the upper-left corner point p3 and the lower-left corner point p1. In this case, the midpoint Q of the line connecting the upper-left corner point p3 and the lower-left corner point p1 is selected as the current second reference point.
For another example, fig. 5c is a schematic diagram of a current frame image according to another exemplary embodiment of the present application. As shown in fig. 5c, the vehicle 1 has completely entered the image, and at this time the number of pixel points enclosed by the current detection frame is greater than or equal to the second number threshold. In this case, the center point p0 of the current detection frame is selected as the current second reference point.
It can be seen that, depending on the position of the current detection frame in the image, the current second reference point may be either the center point p0 of the current detection frame or the midpoint Q of two adjacent corner points of the current detection frame.
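The case analysis of step S131 can be summarized in a short sketch; the box representation (x1, y1, x2, y2), the way the image-edge test is written, and the thresholds t1 and t2 are illustrative assumptions rather than values fixed by the description.

```python
def current_second_reference_point(box, img_w, img_h, t1, t2):
    """Select the current second reference point of a detection box
    (x1, y1, x2, y2) depending on how many pixels it encloses."""
    x1, y1, x2, y2 = box
    pixel_count = max(0, x2 - x1) * max(0, y2 - y1)

    if pixel_count < t1:
        return None  # too little of the vehicle is visible: filter this frame out

    if pixel_count < t2:
        # Part of the vehicle is still outside the image: two corners stick to
        # the image edge, and the other two corners (whose x and y both change)
        # define the reference point as the midpoint of their connecting line.
        if x2 >= img_w - 1:          # vehicle entering from the right edge
            moving = [(x1, y1), (x1, y2)]
        elif x1 <= 0:                # entering from the left edge
            moving = [(x2, y1), (x2, y2)]
        elif y2 >= img_h - 1:        # entering from the bottom edge
            moving = [(x1, y1), (x2, y1)]
        else:                        # entering from the top edge
            moving = [(x1, y2), (x2, y2)]
        (ax, ay), (bx, by) = moving
        return ((ax + bx) / 2.0, (ay + by) / 2.0)

    # Vehicle fully inside the image: use the center point of the box.
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
```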
And step S132, determining the current driving direction of the vehicle according to the current second reference point and by combining at least one historical second reference point on at least one historical detection frame.
Specifically, taking the five-frame image shown in fig. 3 as an example: as shown in fig. 3, all five detection frames lie completely within the image, so the second reference points of the five detection frames are all the center points of the detection frames. In this case, after the second reference points of the five detection frames shown in fig. 3 are mapped into the target coordinate system, the schematic positional-relationship diagram of the five second reference points in the target coordinate system shown in fig. 6 is obtained. For convenience of description, the second reference point is denoted ai(xi, yi), where i is the frame index, i = 1, 2, 3, 4, 5, and a5(x5, y5) is the current second reference point. The process of calculating the current driving direction of the vehicle in the current frame image according to step S132 includes:
first, a rough traveling direction of the vehicle in the current frame image is determined based on the current second reference point and a predetermined history second reference point among the at least one history second reference point.
Taking the historical second reference point a1(x1, y1) as the predetermined historical second reference point, for example, the direction vector b5 of the rough driving direction of the vehicle in the current frame image is (y5 - y1, x5 - x1).
Secondly, determining a weighted average value of the direction vector of the rough driving direction and the direction vector of at least one historical driving direction of the vehicle in at least one historical frame image to obtain the direction vector of the current driving direction.
With continuing reference to fig. 3 and 6, assume that the direction vectors of the vehicle in the four historical frame images before the current frame image are B4, B3, B2 and B1, respectively. The direction vector of the current driving direction of the vehicle is then:

B5 = p1*b5 + p2*B4 = p1*b5 + p2*(p1*b4 + p2*B3) = p1*b5 + p2*(p1*b4 + p2*(p1*b3 + p2*B2)) = ...... = (Y1, X1); p1 = 1 - p2

where p1 and p2 are preset weights.
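The recursion above is an exponentially weighted average of the per-frame rough direction vectors, with the weight p2 controlling how much the historical directions contribute. The sketch below shows one way to evaluate it, assuming the rough direction vector of each frame is already available; p2 = 0.3 is only an illustrative weight, not a value from the patent.

```python
def smoothed_direction(rough_directions, p2=0.3):
    """Exponentially weighted average of rough direction vectors b1..b5
    (oldest to newest): B_i = p1*b_i + p2*B_(i-1), with p1 = 1 - p2."""
    p1 = 1.0 - p2
    B = rough_directions[0]
    for b in rough_directions[1:]:
        B = (p1 * b[0] + p2 * B[0], p1 * b[1] + p2 * B[1])
    return B  # direction vector (Y1, X1) of the current driving direction

# Example: rough directions estimated in five consecutive frames.
print(smoothed_direction([(1.0, 0.2), (1.1, 0.1), (0.9, 0.15), (1.0, 0.1), (1.05, 0.12)]))
```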
Subsequently, steps S140, S150 and S160 are performed, so that the positioning of the vehicle can be achieved.
According to the image-based vehicle positioning method provided by this embodiment, the second reference point is determined according to the position of the detection frame in the image. Compared with selecting the same point as the second reference point for the detection frames in all frames, the second reference point selected for each current detection frame can represent the direction information of the vehicle as well as possible, so that the obtained current driving direction is more accurate and the positioning accuracy is improved.
In one embodiment, as shown in fig. 4, step S130 further includes the following processing performed before step S131: determining the running track of the center point according to the center point of the current detection frame in combination with the center points of the at least one historical detection frame; smoothing the running track of the center point; and updating the current detection frame based on the smoothed running track of the center point. Specifically, after the current detection frame is determined using target detection and target tracking techniques, the center point of the current detection frame is first calculated. Secondly, the center points of the at least one historical detection frame are retrieved, and the running track of the center points is calculated with a fitting algorithm in combination with the center point of the current detection frame. Then, the running track of the center points is smoothed with a track smoothing algorithm. In one embodiment, a five-point first-order track smoothing algorithm is used to smooth the running track of the center points. Finally, the center point of the current detection frame is updated according to the smoothed running track, and the corner coordinates of the current detection frame are re-determined from the new center point, so as to obtain a new current detection frame.
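To make this preprocessing concrete, the hedged sketch below smooths the center-point track with a simple five-point sliding average and re-centers the current detection frame on the smoothed center; the description mentions a five-point first-order smoothing algorithm, for which the plain average used here is only a stand-in.

```python
import numpy as np

def smooth_last_center(centers, window=5):
    """Smooth the running track of the box centers; returns the smoothed
    position of the most recent (current) center using up to `window` points."""
    pts = np.array(centers[-window:], dtype=float)
    return pts.mean(axis=0)   # simple stand-in for five-point first-order smoothing

def update_current_box(box, centers):
    """Re-center the current detection box (x1, y1, x2, y2) on the smoothed center."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    sx, sy = smooth_last_center(centers)
    dx, dy = sx - cx, sy - cy
    return (x1 + dx, y1 + dy, x2 + dx, y2 + dy)
```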
According to the image-based vehicle positioning method provided by the embodiment, the position jitter can be reduced or even eliminated by preprocessing the current detection frame, so that the error caused by estimating the current driving direction of the vehicle based on the current detection frame and at least one history detection frame is reduced, and the positioning accuracy is further improved.
FIG. 7 is a schematic flow chart diagram illustrating an image-based vehicle localization method according to another exemplary embodiment of the present application. The difference between the image-based vehicle positioning method 300 provided in this embodiment and the image-based vehicle positioning method 100 shown in fig. 2 is that, when the first reference point is determined according to step S140, the surroundings of the vehicle are divided into a plurality of angle intervals centered on the vehicle, and the corresponding corner point of the current detection frame is selected as the first reference point according to the interval in which the monitoring device lies. Specifically, continuing with the exemplary description of fig. 1, as shown in fig. 7, step S140 includes:
step S141, calculating an included angle between a normal of a connecting line between a third reference point on the current detection frame and the road monitoring equipment and the current driving direction in the target coordinate system.
The third reference point is a point selected on the current detection frame according to a predetermined strategy. In one embodiment, the third reference point is the same as the second reference point in the image-based vehicle localization method 200. For example, it can be seen from step S131 that the second reference point of the current detection frame S5 shown in fig. 3 is the center point P0; the third reference point of the current detection frame S5 is then also the center point P0. Accordingly, the selection strategy of the third reference point is the same as that of the second reference point, for which reference may be made to step S131 of the image-based vehicle positioning method 200; it will not be described again here.
Fig. 8 shows the relative positional relationship, in the target coordinate system, between the vehicle and the road monitoring device corresponding to the current frame image in fig. 3, according to an exemplary embodiment of the present application. As shown in fig. 8, the center point P0 of the current detection frame S5 shown in fig. 3 is mapped into the target coordinate system to obtain point A. The position of the road monitoring device 3 in the target coordinate system is shown as point B in fig. 8, and the relative positional relationship between the vehicle and the road monitoring device can be represented by the included angle θ, i.e., the angle between the normal L of the line connecting point A and the road monitoring device at point B and the current driving direction B5.
Step S142, determining a first reference point according to the corresponding relation between the included angle and the corner of the current detection frame, wherein the first reference point is the corner of the current detection frame corresponding to the included angle.
With reference to fig. 8, vehicles whose driving directions differ by 180° yield detection frames at the same position in the target coordinate system, and the calculated included angle θ is the same. Therefore, to reduce the amount of calculation, the 360° range around the vehicle is constrained to the interval [-90°, 90°]. In this case, in one embodiment, the preset correspondence between angle intervals and corner points of the current detection frame is as follows: when the included angle θ lies in [-90°, -30°] or [0°, 30°], the lower-right corner point of the current detection frame in the image coordinate system is used as the first reference point; when the included angle θ lies in [-30°, 0°] or [30°, 90°], the lower-left corner point of the current detection frame in the image coordinate system is used as the first reference point.
Subsequently, steps S150 and S160 are executed, so that the vehicle can be positioned.
According to the image-based vehicle positioning method provided by this embodiment, the relative positional relationship between the road monitoring device and the vehicle is measured by the included angle θ between the normal of the line connecting the third reference point and the road monitoring device and the current driving direction, and the correspondence between the included angle θ and the corner points of the current detection frame is preset, thereby ensuring that the selected first reference point is as close to the vehicle as possible and further improving the positioning accuracy.
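Steps S141 and S142 can be combined into the following hedged sketch: it computes the included angle θ between the normal of the line from the mapped third reference point A to the road monitoring device B and the current driving direction, folds it into [-90°, 90°], and picks the corner by the interval rule above. How the fold and the interval boundaries are handled here is an assumption, not something the description pins down.

```python
import math

def included_angle_deg(point_a, camera_b, direction):
    """Angle between the normal of line A->B and the driving direction, in degrees,
    folded into [-90, 90] because directions differing by 180 deg are equivalent."""
    ax, ay = point_a
    bx, by = camera_b
    # Normal of the connecting line A-B (the line vector rotated by 90 degrees).
    nx, ny = -(by - ay), (bx - ax)
    ang_n = math.atan2(ny, nx)
    ang_d = math.atan2(direction[1], direction[0])
    theta = math.degrees(ang_d - ang_n)
    while theta > 90:
        theta -= 180
    while theta < -90:
        theta += 180
    return theta

def first_reference_corner(theta):
    """Map the included angle to the corner of the current detection box."""
    if -90 <= theta < -30 or 0 <= theta < 30:
        return "lower_right"
    return "lower_left"   # theta in [-30, 0) or [30, 90]
```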
In one embodiment, the method 300 further comprises, after step S160: step S370, when the difference between the included angle and a boundary angle of the preset angle interval is smaller than a preset threshold, calculating the confidence of the current position of the vehicle according to the ratio of that difference to the length of the angle interval; and smoothing the current position of the vehicle according to the confidence.
For example, the current position of the vehicle is represented by the center-point coordinates of the vehicle. When the included angle θ is within 15° of an interval boundary (-30°, 0° or 30°), the confidence of the center point at that moment is calculated as:

confidence = |θ - interval boundary| / interval length

Pcenter = Pcenter1 * confidence + Pcenter2 * (1 - confidence)

where Pcenter denotes the smoothed current position, Pcenter1 denotes the current position before smoothing, and Pcenter2 denotes the smoothed position in the previous historical frame.
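A minimal sketch of this position smoothing, assuming the 15° threshold and the 30° interval length from the example above (the helper name and signature are illustrative):

```python
def smooth_position(theta, p_center1, p_center2,
                    boundaries=(-30.0, 0.0, 30.0), threshold=15.0, interval_len=30.0):
    """Blend the current position with the previous smoothed position when the
    included angle is close to an interval boundary (where the chosen corner flips)."""
    dist = min(abs(theta - b) for b in boundaries)
    if dist >= threshold:
        return p_center1                      # far from a boundary: keep as-is
    confidence = dist / interval_len          # confidence of the current position
    return (p_center1[0] * confidence + p_center2[0] * (1 - confidence),
            p_center1[1] * confidence + p_center2[1] * (1 - confidence))
```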
According to the image-based vehicle positioning method provided by the embodiment, the fluctuation of the motion trail of the vehicle is reduced through the smoothing processing of the vehicle position, and the positioning precision is further improved.
Fig. 9 is a flowchart illustrating an image-based vehicle positioning method according to still another exemplary embodiment of the present application. As shown in fig. 9, the difference between the image-based vehicle localization method 400 and the image-based vehicle localization method 100 shown in fig. 2 is that, in the present embodiment, the process of determining the position of the vehicle in the target coordinate system according to step S160 specifically includes:
step S161, determining that the vehicle is mapped to a corresponding rectangular frame in the target coordinate system according to the acquired length and width of the vehicle.
And carrying out proportional transformation on the length and the width of the acquired vehicle according to a scale between the actual size of the vehicle and the size in the target coordinate system to obtain the mapping of the vehicle in the target coordinate system, and expressing the mapping by using a rectangular frame.
And step S162, determining the pose of the rectangular frame in the target coordinate system according to the positioning reference point, the current driving direction and the direction of the road monitoring equipment relative to the road monitored by the road monitoring equipment. And determining the pose of the rectangular frame in the target coordinate system, namely determining the coordinates of each vertex of the rectangular frame in the target coordinate system.
Depending on the orientation of the road monitoring device relative to the road it monitors, the same corner point of a detection frame surrounding a vehicle in the captured image maps to a different vertex of the rectangular frame in the target coordinate system. For example, for a road monitoring device located at the north side of an intersection, the lower-left corner of the detection frame in its captured image corresponds to the upper-right corner of the rectangular frame in the target coordinate system, while for a road monitoring device located at the south side of an intersection, the lower-right corner of the detection frame in its captured image corresponds to the lower-right corner of the rectangular frame in the target coordinate system. Therefore, the correspondence between the positioning reference point and the vertices of the rectangular frame needs to be determined according to the orientation of the road monitoring device relative to the road it monitors.
For example, fig. 10 is a schematic diagram of a position of a vehicle in a target coordinate system according to an exemplary embodiment of the present application. As shown in fig. 10, assume that the current driving direction of the vehicle is B1(Y1, X1) and that the selected first reference point is the lower-right corner point P2 of the current detection frame. In this case, taking the road monitoring device at the south-side intersection as an example, the first reference point is mapped to the lower-right corner point q1 of the corresponding rectangular frame in the target coordinate system. The other three corner points are then obtained by trigonometric functions and geometric operations; the specific calculation is as follows:
q2x = q1x - l*cosβ
q2y = q1y - l*sinβ
q3x = q2x + w*sinβ
q3y = q2y - w*cosβ
where l and w are the length and width of the rectangular frame, respectively, and β = arctan(Y1/X1).
Thus, the pose of the rectangular frame in the target coordinate system is determined.
Step S163, determining the center point q0 of the rectangular frame in the current pose as the current position of the vehicle in the target coordinate system.
Continuing to refer to fig. 10, when the rectangular frame is in the current pose, the coordinates of its center point are: q0x = (q1x + q3x)/2; q0y = (q1y + q3y)/2.
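Steps S162 and S163 can be written compactly as below. This is a hedged sketch that follows the south-side-intersection example, i.e., it assumes the positioning reference point maps to the lower-right corner q1 of the rectangle and that the current driving direction is given as (Y1, X1); atan2 is used in place of arctan(Y1/X1) only to avoid division by zero.

```python
import math

def rectangle_pose_and_center(q1, length, width, Y1, X1):
    """Given the mapped positioning reference point q1 (lower-right corner of the
    rectangle), the vehicle length/width in target-coordinate units, and the current
    driving direction (Y1, X1), return the remaining corners and the center point."""
    beta = math.atan2(Y1, X1)        # beta = arctan(Y1 / X1)
    q1x, q1y = q1
    q2x, q2y = q1x - length * math.cos(beta), q1y - length * math.sin(beta)
    q3x, q3y = q2x + width * math.sin(beta), q2y - width * math.cos(beta)
    q0 = ((q1x + q3x) / 2.0, (q1y + q3y) / 2.0)   # current position of the vehicle
    return (q2x, q2y), (q3x, q3y), q0
```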
According to the image-based vehicle positioning method provided by the embodiment, the pose of the rectangular frame in the target coordinate system is determined according to the positioning reference point, the current driving direction and the direction of the road monitoring equipment relative to the road monitored by the road monitoring equipment, and the algorithm is simple and easy to implement.
Exemplary devices
The present application further provides a vehicle localization apparatus that may be used to perform the image-based vehicle localization method provided in any of the embodiments of the present application. Fig. 11 is a block diagram of a vehicle positioning device according to an exemplary embodiment of the present application. As shown in fig. 11, the vehicle positioning device 90 includes: a first determining module 91, a second determining module 92, a direction determining module 93, a reference point determining module 94, a datum point determining module 95, and a positioning module 96. The first determining module 91 is used for determining at least one historical detection frame of the vehicle in at least one historical frame image captured by the road monitoring device. The second determining module 92 is used for determining the current detection frame of the vehicle in the current frame image. The direction determining module 93 is configured to determine the current driving direction of the vehicle based on the at least one historical detection frame and the current detection frame. The reference point determining module 94 is configured to determine a first reference point on the current detection frame based on the current driving direction and the relative positional relationship between the vehicle and the road monitoring device. The datum point determining module 95 is configured to determine a positioning reference point in the target coordinate system based on the first reference point. The positioning module 96 is configured to determine the current position of the vehicle in the target coordinate system according to the positioning reference point, the current driving direction, and the size parameter of the vehicle.
The vehicle positioning device 90 performs a process including: firstly, a current frame image is obtained according to road monitoring equipment, a current detection frame surrounding a vehicle in the current frame image is determined by utilizing a target tracking technology, and the current driving direction of the vehicle is determined according to the current detection frame and at least one stored historical detection frame. Then, according to the current driving direction and the relative position relationship between the vehicle and the road monitoring device, a first reference point on the current detection frame is selected to approximate a point on the vehicle, namely, a positioning reference point. Then, according to the positioning reference point, the current driving direction and the size parameters of the vehicle, the current position of the vehicle in the target coordinate system can be determined, and therefore vehicle positioning is achieved.
Fig. 12 is a block diagram of a vehicle positioning device according to another exemplary embodiment of the present application. As shown in fig. 12, in this embodiment, the direction determining module 93 in the vehicle positioning device 90 specifically includes: a third determining module 931 and a fourth determining module 932.
The third determining module 931 is configured to determine a current second reference point on the current detection frame according to the position of the current detection frame in the current frame image. Specifically, in one embodiment, the third determining module 931 is configured to determine the number of pixel points surrounded by the current detection frame; when the number is less than a first number threshold, filtering out the current detection frame. And when the number is greater than or equal to the first number threshold and less than the second number threshold, determining a current second reference point based on two adjacent pixel points used for determining that the abscissa and the ordinate of the current detection frame are changed. When the number is greater than or equal to a second number threshold, a current second reference point is determined based on the center point of the current detection box.
The fourth determining module 932 is configured to determine a current driving direction of the vehicle according to the current second reference point and by combining with the at least one historical second reference point on the at least one historical detection frame. Specifically, in one embodiment, the rough travel direction of the vehicle is determined based on the current second reference point and a predetermined historical second reference point of the at least one historical second reference point; and determining a weighted average value of the direction vector of the rough driving direction and the direction vector of at least one historical driving direction of the vehicle in at least one historical frame image to obtain the current driving direction.
In one embodiment, the direction determining module 93 further includes a track smoothing module 933, configured to determine, according to the center point of the current detection box, a running track of the center point in combination with the center point of the at least one historical detection box; carrying out smoothing processing on the running track of the central point; and updating the current detection frame based on the smoothed running track of the central point.
In one embodiment, as shown in FIG. 12, the reference point determination module 94 includes: a calculation module 941 and a fifth determination module 942. The calculating module 941 is configured to calculate an included angle between a normal of a connection line between the third reference point on the current detection frame and the road monitoring device and the current driving direction in the target coordinate system. In one embodiment, the third reference point and the second reference point are the same. The fifth determining module 942 is configured to determine a first reference point according to the corresponding relationship between the included angle and the corner of the current detection frame, where the first reference point is the corner of the current detection frame corresponding to the included angle.
In this case, in one embodiment, the vehicle positioning device 90 further includes a position smoothing module 97, configured to calculate a confidence of the current position of the vehicle according to a ratio of a difference between the included angle and the boundary angle of the preset angle interval to the angle interval when the difference between the included angle and the boundary angle of the preset angle interval is smaller than a preset threshold; and smoothing the current position of the vehicle according to the confidence coefficient.
In one embodiment, the positioning module 96 specifically includes: a sixth determination module 961, a seventh determination module 962, and an eighth determination module 963. The sixth determining module 961 is configured to determine, according to the obtained length and width of the vehicle, that the vehicle is mapped to a corresponding rectangular frame in the target coordinate system. The seventh determining module 962 is configured to determine the pose of the rectangular frame in the target coordinate system according to the positioning reference point, the current driving direction, and the position of the road monitoring device relative to the road monitored by the road monitoring device. The eighth determining module 963 is configured to determine the central point of the rectangular frame in the pose as the current position of the vehicle in the target coordinate system.
The vehicle positioning device provided in any embodiment of the present application and the image-based vehicle positioning method provided in any embodiment of the present application belong to the same inventive concept, and details that are not described in the embodiment of the vehicle positioning device may refer to the embodiment of the image-based vehicle positioning method, and may obtain the same beneficial effects as the image-based vehicle positioning method, and are not described herein again.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present application is described with reference to fig. 13. The electronic device may be any one of the in-vehicle device, the road monitoring device, and the server in fig. 1, or a combination thereof.
FIG. 13 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
As shown in fig. 13, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or another form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
The memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or a cache. The non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 11 to implement the image-based vehicle positioning methods of the various embodiments of the present application described above and/or other desired functionality. Various contents such as an input signal, a signal component, and a noise component may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, when the electronic device is a road monitoring device, the input device 13 may be a camera for capturing an image signal. When the electronic device is a vehicle-mounted device, the input device 13 may be a communication network connector for transmitting signals between the road monitoring device and the server.
The output device 14 may output various information to the outside, including the determined distance information, direction information, and the like. The output device 14 may include, for example, a display, a speaker, a printer, a communication network, and remote output devices connected thereto.
Of course, for simplicity, only some of the components of the electronic device 10 relevant to the present application are shown in fig. 13, and components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps of the image-based vehicle localization method according to various embodiments of the present application described above in the "exemplary methods" section of this specification.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps of the image-based vehicle localization method according to various embodiments of the present application described above in the "exemplary methods" section of this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments; however, it should be noted that the advantages, effects, and the like mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and ease of understanding only, and is not intended to be exhaustive or to limit the present application to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are given as illustrative examples only and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and may be used interchangeably therewith. The words "or" and "and" as used herein mean, and are used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. An image-based vehicle localization method, comprising:
determining at least one historical detection frame of a vehicle in at least one historical frame image captured by a road monitoring device;
determining a current detection frame of the vehicle in a current frame image captured by the road monitoring device;
determining a current driving direction of the vehicle based on the at least one historical detection frame and the current detection frame;
determining a first reference point on the current detection frame based on the current driving direction and a relative positional relationship between the vehicle and the road monitoring device;
determining a positioning reference point in a target coordinate system based on the first reference point;
and determining the current position of the vehicle in the target coordinate system according to the positioning reference point, the current driving direction and the size parameter of the vehicle.
2. The image-based vehicle localization method of claim 1, wherein the determining of the current driving direction of the vehicle based on the at least one historical detection frame and the current detection frame comprises:
determining a current second reference point on the current detection frame according to the position of the current detection frame in the current frame image;
and determining the current driving direction of the vehicle according to the current second reference point in combination with at least one acquired historical second reference point on the at least one historical detection frame.
3. The image-based vehicle localization method of claim 2, wherein the determining of the current second reference point on the current detection frame according to the position of the current detection frame in the current frame image comprises:
determining the number of pixel points surrounded by the current detection frame;
when the number is greater than or equal to a first number threshold and less than a second number threshold, determining the current second reference point based on two adjacent pixel points of the current detection frame at which both the abscissa and the ordinate change;
and determining the current second reference point based on a center point of the current detection frame when the number is greater than or equal to the second number threshold.
4. The image-based vehicle localization method of claim 3, further comprising:
filtering out the current detection frame when the number is less than the first number threshold.
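The selection rule of claims 3 and 4 above can be sketched, for illustration only, as the following Python function; the box representation, the reading of the "two adjacent pixel points" as a frame corner, and the threshold handling are assumptions and do not limit the claims.

```python
def current_second_reference_point(box, first_threshold, second_threshold):
    # Sketch only: box = (x_min, y_min, x_max, y_max) in image coordinates.
    x_min, y_min, x_max, y_max = box
    pixel_count = (x_max - x_min) * (y_max - y_min)
    if pixel_count < first_threshold:
        return None                         # claim 4: filter out the detection frame
    if pixel_count < second_threshold:
        # Assumed reading of claim 3: a corner where both coordinates change.
        return (x_min, y_max)
    # Large detection frame: use the centre point.
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)
```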
5. The image-based vehicle localization method according to claim 2, wherein the determining of the current driving direction of the vehicle according to the current second reference point in combination with the at least one acquired historical second reference point on the at least one historical detection frame comprises:
determining a rough driving direction of the vehicle based on the current second reference point and a predetermined historical second reference point of the at least one historical second reference point;
and determining a weighted average value of the direction vector of the rough driving direction and the direction vector of at least one historical driving direction of the vehicle in the at least one historical frame image to obtain the current driving direction.
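For illustration only, claim 5 can be read as the following Python sketch, in which a rough direction obtained from two second reference points is averaged with historical driving directions; the weighting scheme and the normalisation are assumptions introduced for this example.

```python
import math

def weighted_current_direction(curr_ref, hist_ref, hist_dirs, weights):
    # Sketch only: curr_ref and hist_ref are second reference points (x, y);
    # hist_dirs are historical direction vectors; weights has one entry per
    # vector, with the rough direction first.
    rough = (curr_ref[0] - hist_ref[0], curr_ref[1] - hist_ref[1])
    vectors = [rough] + list(hist_dirs)
    vx = sum(w * v[0] for w, v in zip(weights, vectors)) / sum(weights)
    vy = sum(w * v[1] for w, v in zip(weights, vectors)) / sum(weights)
    n = math.hypot(vx, vy)
    return (vx / n, vy / n) if n > 0.0 else rough
```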
6. The image-based vehicle localization method of claim 2, wherein, before the determining of the current second reference point on the current detection frame according to the position of the current detection frame in the current frame image, the method further comprises:
determining a running track of a central point according to the central point of the current detection frame and the central point of the at least one historical detection frame;
smoothing the running track of the central point;
and updating the current detection frame based on the smoothed running track of the central point.
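As a purely illustrative sketch of claim 6, the following Python function smooths the track of detection-frame centre points with a moving average and shifts the current detection frame so that its centre lies on the smoothed track; the moving-average window is an assumed choice of smoothing method, not one specified by this application.

```python
def update_frame_from_smoothed_track(hist_centres, current_box, window=3):
    # Sketch only: current_box = (x_min, y_min, x_max, y_max); hist_centres is
    # the list of centre points of the historical detection frames.
    x_min, y_min, x_max, y_max = current_box
    centre = ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)
    track = list(hist_centres) + [centre]          # running track of centre points
    recent = track[-window:]
    sx = sum(p[0] for p in recent) / len(recent)   # smoothed centre coordinates
    sy = sum(p[1] for p in recent) / len(recent)
    dx, dy = sx - centre[0], sy - centre[1]
    # Shift the current detection frame so its centre lies on the smoothed track.
    return (x_min + dx, y_min + dy, x_max + dx, y_max + dy)
```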
7. The image-based vehicle localization method according to any one of claims 1 to 6, wherein the determining of the first reference point on the current detection frame based on the current driving direction and the relative positional relationship between the vehicle and the road monitoring device comprises:
calculating, in the target coordinate system, an included angle between the current driving direction and a normal of a connecting line between a third reference point on the current detection frame and the road monitoring device;
and determining the first reference point according to a correspondence between preset angle intervals and corners of the current detection frame, wherein the first reference point is the corner of the current detection frame corresponding to the preset angle interval within which the included angle falls.
8. A vehicle positioning device, comprising:
the road monitoring device comprises a first determining module, a second determining module and a judging module, wherein the first determining module is used for determining at least one history detection frame about a vehicle in at least one history frame image shot by the road monitoring device;
a second determining module, configured to determine a current detection frame of the vehicle in a current frame image;
a direction determination module, configured to determine a current driving direction of the vehicle based on the at least one historical detection frame and the current detection frame;
a reference point determining module, configured to determine a first reference point on the current detection frame based on the current driving direction and a relative position relationship between the vehicle and the road monitoring device;
a positioning reference point determining module, configured to determine a positioning reference point in a target coordinate system based on the first reference point;
and a positioning module, configured to determine the current position of the vehicle in the target coordinate system according to the positioning reference point, the current driving direction, and a size parameter of the vehicle.
9. A computer-readable storage medium, storing a computer program for executing the image-based vehicle localization method of any one of claims 1 to 7.
10. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to perform the image-based vehicle localization method of any one of claims 1 to 7.
CN202110300526.1A 2021-03-19 2021-03-19 Image-based vehicle positioning method and device, storage medium and electronic equipment Pending CN113075716A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110300526.1A CN113075716A (en) 2021-03-19 2021-03-19 Image-based vehicle positioning method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN113075716A (en) 2021-07-06

Family

ID=76613928

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110300526.1A Pending CN113075716A (en) 2021-03-19 2021-03-19 Image-based vehicle positioning method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113075716A (en)

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH085388A (en) * 1994-06-21 1996-01-12 Nissan Motor Co Ltd Running road detecting device
JP2003099762A (en) * 2001-09-25 2003-04-04 Daihatsu Motor Co Ltd Precedent vehicle recognition device and recognition method
JP2003099784A (en) * 2001-09-25 2003-04-04 Daihatsu Motor Co Ltd Recognition method for moving object
JP2016148883A (en) * 2015-02-10 2016-08-18 オムロン株式会社 Image processor, vehicle dimension calculation method, and vehicle dimension calculation program
US20160363647A1 (en) * 2015-06-15 2016-12-15 GM Global Technology Operations LLC Vehicle positioning in intersection using visual cues, stationary objects, and gps
CN105760442A (en) * 2016-02-01 2016-07-13 中国科学技术大学 Image feature enhancing method based on database neighborhood relation
US20200090366A1 (en) * 2017-05-24 2020-03-19 Starship Technologies Oü Device and method for detection and localization of vehicles
US20200348408A1 (en) * 2018-01-16 2020-11-05 Huawei Technologies Co., Ltd. Vehicle Positioning Method and Vehicle Positioning Apparatus
US20200326179A1 (en) * 2018-04-27 2020-10-15 Shenzhen Sensetime Technology Co., Ltd. Distance Measurement Method, Intelligent Control Method, Electronic Device, and Storage Medium
JP2020034471A (en) * 2018-08-31 2020-03-05 株式会社デンソー Map system, method and storage medium for autonomous navigation
WO2020045323A1 (en) * 2018-08-31 2020-03-05 株式会社デンソー Map generation system, server, vehicle-side device, method, and storage medium
US20200005052A1 (en) * 2018-12-29 2020-01-02 Baidu Online Network Technology (Beijing) Co., Ltd. Processing method and apparatus for vehicle scene sequence tracking, and vehicle
CN111538322A (en) * 2019-01-18 2020-08-14 驭势科技(北京)有限公司 Sensor data selection method and device for automatic driving vehicle and vehicle-mounted equipment
CN111462503A (en) * 2019-01-22 2020-07-28 杭州海康威视数字技术股份有限公司 Vehicle speed measuring method and device and computer readable storage medium
CN112348874A (en) * 2019-08-08 2021-02-09 北京地平线机器人技术研发有限公司 Method and device for determining structural parameter representation of lane line
CN110988912A (en) * 2019-12-06 2020-04-10 中国科学院自动化研究所 Road target and distance detection method, system and device for automatic driving vehicle
CN111540023A (en) * 2020-05-15 2020-08-14 百度在线网络技术(北京)有限公司 Monitoring method and device of image acquisition equipment, electronic equipment and storage medium
CN112216102A (en) * 2020-09-11 2021-01-12 恒大新能源汽车投资控股集团有限公司 Method, device and equipment for determining road surface information and storage medium
CN112417953A (en) * 2020-10-12 2021-02-26 腾讯科技(深圳)有限公司 Road condition detection and map data updating method, device, system and equipment
CN112365741A (en) * 2020-10-23 2021-02-12 淮阴工学院 Safety early warning method and system based on multilane vehicle distance detection
CN112419385A (en) * 2021-01-25 2021-02-26 国汽智控(北京)科技有限公司 3D depth information estimation method and device and computer equipment

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
LU, GUANGQUAN et al.: "Vehicle trajectory extraction by simple two-dimensional model matching at low camera angles in intersection", IET INTELLIGENT TRANSPORT SYSTEMS, vol. 8, no. 7, 3 December 2014 (2014-12-03) *
MENG, KE: "Vehicle Behavior Recognition Method Based on Surveillance Video", China Masters' Theses Full-text Database, Engineering Science and Technology II, 15 July 2020 (2020-07-15) *
PENG, KEJU; CHEN, XIN; ZHOU, DONGXIANG; LIU, YUNHUI: "Vehicle Driving Direction Detection and Error Analysis Based on Monocular Vision", Signal Processing, no. 10, 25 October 2010 (2010-10-25) *
HU, YUANZHI; LIU, JUNSHENG; XIAO, ZUOREN; GENG, ZHUANGCHENG: "Research on a Target Ranging Method Based on Data Fusion", Journal of Chongqing University of Technology (Natural Science), no. 12, 15 December 2019 (2019-12-15) *
GUO, FAN; CHEN, QIMEI; LI, BO: "Real-time Vehicle Speed Detection Algorithm Based on Pan-Tilt Camera", Journal of Computer-Aided Design & Computer Graphics, no. 09, 20 September 2006 (2006-09-20) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113613176A (en) * 2021-07-27 2021-11-05 青岛海信网络科技股份有限公司 Positioning method and device for public transport vehicle
CN113613176B (en) * 2021-07-27 2024-05-17 青岛海信网络科技股份有限公司 Positioning method and equipment for bus
CN116503482A (en) * 2023-06-26 2023-07-28 小米汽车科技有限公司 Vehicle position acquisition method and device and electronic equipment
CN116503482B (en) * 2023-06-26 2023-10-20 小米汽车科技有限公司 Vehicle position acquisition method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN112292711B (en) Associating LIDAR data and image data
WO2021004077A1 (en) Method and apparatus for detecting blind areas of vehicle
US10354151B2 (en) Method of detecting obstacle around vehicle
CN106952308B (en) Method and system for determining position of moving object
CN109872366B (en) Method and device for detecting three-dimensional position of object
US8564657B2 (en) Object motion detection system based on combining 3D warping techniques and a proper object motion detection
JP5089545B2 (en) Road boundary detection and judgment device
JP4919036B2 (en) Moving object recognition device
US9076047B2 (en) System and method for recognizing parking space line markings for vehicle
CN111046743B (en) Barrier information labeling method and device, electronic equipment and storage medium
EP3602396A1 (en) Embedded automotive perception with machine learning classification of sensor data
US20190387209A1 (en) Deep Virtual Stereo Odometry
KR20160123668A (en) Device and method for recognition of obstacles and parking slots for unmanned autonomous parking
US20210158544A1 (en) Method, Device and Computer-Readable Storage Medium with Instructions for Processing Sensor Data
CN110794406A (en) Multi-source sensor data fusion system and method
CN113075716A (en) Image-based vehicle positioning method and device, storage medium and electronic equipment
CN114495064A (en) Monocular depth estimation-based vehicle surrounding obstacle early warning method
CN117015792A (en) System and method for generating object detection tags for automated driving with concave image magnification
Kim et al. Multi-sensor-based detection and tracking of moving objects for relative position estimation in autonomous driving conditions
US20210089791A1 (en) Vehicle lane mapping
CN115187941A (en) Target detection positioning method, system, equipment and storage medium
KR102270827B1 (en) Generating Joint Cameraand LiDAR Features Using Cross-View Spatial Feature Mapping for 3D Object Detection
CN116189150B (en) Monocular 3D target detection method, device, equipment and medium based on fusion output
US20230109473A1 (en) Vehicle, electronic apparatus, and control method thereof
KR102003387B1 (en) Method for detecting and locating traffic participants using bird's-eye view image, computer-readerble recording medium storing traffic participants detecting and locating program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination