CN117112713A - V2X-based station vehicle perception auxiliary positioning method - Google Patents

V2X-based station vehicle perception auxiliary positioning method

Info

Publication number
CN117112713A
CN117112713A (application number CN202311111384.XA)
Authority
CN
China
Prior art keywords
vehicle
markers
parking lot
map
marking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311111384.XA
Other languages
Chinese (zh)
Inventor
秦建波
郭贤生
周国人
杨涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Ebo Information Technology Co ltd
Chengdu Jiaotou Smart Parking Industry Development Co ltd
Original Assignee
Chengdu Ebo Information Technology Co ltd
Chengdu Jiaotou Smart Parking Industry Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Ebo Information Technology Co ltd and Chengdu Jiaotou Smart Parking Industry Development Co ltd
Priority to CN202311111384.XA
Publication of CN117112713A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 - Geographical information databases
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 - Querying
    • G06F 16/248 - Presentation of query results
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 - Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 - Services making use of location information
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 - Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 - Services making use of location information
    • H04W 4/021 - Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 - Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 - Services making use of location information
    • H04W 4/029 - Location-based management or tracking services
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 - Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 - Services specially adapted for particular environments, situations or purposes
    • H04W 4/33 - Services specially adapted for particular environments, situations or purposes for indoor environments, e.g. buildings
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 - Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 - Services specially adapted for particular environments, situations or purposes
    • H04W 4/40 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 64/00 - Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 - Reducing energy consumption in communication networks
    • Y02D 30/70 - Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a V2X-based station vehicle perception auxiliary positioning method. The method comprises: deploying an auxiliary positioning system that includes a plurality of RSUs, a plurality of in-field cameras and a plurality of MEC nodes; obtaining a high-precision map of the parking lot, selecting objects in the parking lot as markers, marking the markers, the RSUs, the acquisition units and the field-end MEC nodes on the high-precision map according to their actual positions to obtain a marked map, and storing the marked map in the cloud; when a vehicle enters, the vehicle obtains the marked map through an RSU; locating the sub-area in which the vehicle is travelling; selecting a reference point within the shooting range; and calculating the relative distance between the reference point and the vehicle and mapping it onto the marked map to obtain the actual position of the vehicle on the marked map. The invention provides a new real-time vehicle positioning method that accurately assists in positioning a vehicle in an underground parking lot when GPS signals are weak, and whose positioning accuracy is not affected by displacement of the reference points.

Description

V2X-based station vehicle perception auxiliary positioning method
Technical Field
The invention relates to a vehicle positioning method for underground parking garages, and in particular to a V2X-based station vehicle perception auxiliary positioning method.
Background
With the development of society, indoor parking lots are becoming more common, and a vehicle travelling in an underground parking lot needs to be positioned for path planning or real-time map updating. Commonly used positioning methods include GPS positioning, in-lot video positioning, Bluetooth positioning and UWB positioning.
GPS positioning is the most widely used method in vehicle positioning equipment, but it has certain environmental requirements: in open outdoor places the GPS satellite signal is good and positioning is accurate, whereas an underground parking lot contains various kinds of shielding, so satellite positioning performs poorly there.
In-lot video positioning generally relies on cameras installed in the parking lot. To ensure positioning accuracy, the camera coordinates usually have to be marked on a high-precision map of the parking lot, but the cameras may shift during long-term use, making the video positioning inaccurate.
Bluetooth positioning accuracy is worse than 1 meter, which is relatively low; UWB positioning accuracy can reach about 40 cm, but the deployment cost is high.
At present, with the continuous development and maturation of automatic driving technology, more and more vehicles support V2X communication. V2X stands for Vehicle-to-Everything communication, i.e. communication between the vehicle and everything around it. The invention uses V2X communication to position the vehicle.
Explanation of terms:
RSU is the abbreviation of Road Side Unit. It is installed in ETC systems and uses DSRC (Dedicated Short Range Communication) technology to communicate with the On Board Unit (OBU), realizing vehicle identification and electronic toll deduction. The detection range of an RSU is a roughly circular area with a radius of about 800 meters centered on its installation point.
OBU is the abbreviation of On Board Unit. The OBU is a microwave device placed on the vehicle that communicates with the RSU using DSRC technology.
MEC is the abbreviation of Mobile Edge Computing. By sinking computing power to mobile edge nodes, this technology improves user experience and saves bandwidth resources on the one hand, and enables third-party application integration on the other, opening up possibilities for service innovation at the mobile edge. It effectively merges wireless network technology with Internet technology and adds computing, storage and processing capabilities on the wireless network side. In the invention, MEC nodes are deployed in the parking lot to compute and process the data collected in the parking lot.
Disclosure of Invention
The object of the invention is to provide a V2X-based station vehicle perception auxiliary positioning method that can accurately position a vehicle when GPS signals are weak.
To achieve the above object, the technical scheme adopted by the invention is as follows: a V2X-based station vehicle perception auxiliary positioning method, applied to a parking lot capable of V2X communication with the in-vehicle system of a vehicle, comprising the following steps:
(1) Deploy an auxiliary positioning system comprising a plurality of RSUs, a plurality of in-field cameras and a plurality of MEC nodes;
the RSUs and in-field cameras are arranged in the parking lot according to its size; the detection ranges of the RSUs completely cover the parking lot; the imaging range of each in-field camera corresponds to a sub-area of the parking lot, all sub-areas together completely cover the parking lot, and the in-field cameras are numbered;
a vehicle-mounted camera is provided on the vehicle;
the MEC nodes are arranged in the parking lot or on vehicles; they acquire video data from the in-field cameras or the vehicle-mounted camera for processing and communicate with the cloud through the RSUs;
(2) Manual marking;
obtain a high-precision map of the parking lot, select objects in the parking lot as markers, mark the markers, the RSUs, the acquisition units (cameras) and the field-end MEC nodes on the high-precision map according to their actual positions to obtain a marked map, and store the marked map in the cloud;
(3) A vehicle enters the parking lot, and the in-vehicle system obtains the marked map through an RSU;
(4) Sub-area positioning;
the in-field cameras operate; when a vehicle is recognized as entering the sub-area of an in-field camera, the video information is sent to an MEC node, and the MEC node determines the sub-area of the vehicle in the marked map according to the number of that in-field camera;
(5) Select a reference point within the shooting range;
(51) Acquire the video information collected by a camera, the camera being the in-field camera corresponding to the sub-area or the vehicle-mounted camera of the vehicle;
(52) Perform image recognition on the video information to recognize the vehicle and a plurality of markers; take the recognized markers as markers to be measured, obtain the coordinates assigned to the markers to be measured during manual marking, and calculate the theoretical relative distance between each marker to be measured and every other marker to be measured;
(53) Establish a sub-coordinate system with the camera as the origin, apply a video positioning method to the video information to obtain the positions of the vehicle and of each marker to be measured in the sub-coordinate system, and calculate the actual relative distance between each marker to be measured and every other marker to be measured in the sub-coordinate system;
(54) Calculate the sum of distance changes of each marker to be measured from the theoretical and actual relative distances, and take the marker to be measured with the smallest sum of distance changes as the reference point within the shooting range;
(6) Calculate the relative distance between the reference point and the vehicle and map it onto the marked map to obtain the actual position of the vehicle on the marked map.
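Purely as an illustration (the invention does not prescribe any data structure, coordinate convention or programming language, and all names below are hypothetical), the marked map used in steps (2), (4) and (6) could be represented and queried as follows:

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# Hypothetical marked-map structures; the invention only requires that markers,
# RSUs, cameras and field-end MEC nodes be stored with their map positions.
@dataclass
class MarkedMap:
    marker_coords: Dict[str, Tuple[float, float]]   # marker id -> (x, y) on the marked map
    camera_subarea: Dict[int, str]                   # in-field camera number -> sub-area id

def locate_subarea(marked_map: MarkedMap, camera_number: int) -> str:
    """Step (4): the MEC node derives the sub-area from the number of the
    in-field camera that recognized the vehicle."""
    return marked_map.camera_subarea[camera_number]

def vehicle_map_position(marked_map: MarkedMap,
                         reference_marker_id: str,
                         offset_ref_to_vehicle: Tuple[float, float]) -> Tuple[float, float]:
    """Step (6): the reference point is known on the marked map, so adding the
    relative offset (reference point -> vehicle), expressed in map axes,
    yields the vehicle's actual position on the marked map."""
    rx, ry = marked_map.marker_coords[reference_marker_id]
    dx, dy = offset_ref_to_vehicle
    return (rx + dx, ry + dy)
```

The offset from the reference point to the vehicle must already be expressed in the map's axes; how it is rotated out of the camera's sub-coordinate system is left open here and would follow from the camera pose recorded on the marked map.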
Preferably, the method further comprises step (7): sending the actual position of the vehicle to the cloud through an RSU.
Preferably, in step (1), if the MEC nodes are arranged in the parking lot, each MEC node corresponds to 1-3 in-field cameras according to its computing power, and the MEC nodes together cover all in-field cameras; if the MEC nodes are arranged on vehicles, one MEC node is arranged on each vehicle.
Preferably, in step (2), the objects used as markers include marking lines, stop lines, speed bumps, pillars and/or other objects that do not move without active interference.
Preferably, step (52) of performing image recognition on the video information comprises:
if the video information is collected by an in-field camera, recognizing the vehicle directly from the video information;
if the video information is collected by the vehicle-mounted camera, marking the position of the vehicle-mounted camera as the vehicle.
Preferably, in step (53), the video positioning method includes a video ranging and direction positioning method for objects in front of a travelling motor vehicle.
Preferably, in step (54), the sum of distance changes of the markers to be measured is calculated as follows:
(a1) Let there be m markers to be measured, denoted D1 to Dm in order;
(a2) For D1, let the theoretical relative distance between D1 and D2 be L12 and the actual relative distance between D1 and D2 be S12; the change amount between D1 and D2 is then B12 = |L12 - S12|;
(a3) Calculate in turn the change amounts B13 to B1m between D1 and D3 to Dm;
(a4) Add B12 to B1m to obtain the sum of distance changes B1 of D1;
(a5) Calculate in turn the sums of distance changes B2 to Bm of D2 to Dm according to steps (a2)-(a4).
Preferably, the method further comprises step (55): if at least two markers to be measured have the same, minimum sum of distance changes, select one of them as the reference point.
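The following minimal sketch (Python is used only for illustration; the marker identifiers and input dictionaries are hypothetical) implements steps (a1)-(a5) and the tie-break of step (55) as described above: for every marker to be measured it sums |L_ij - S_ij| over all other markers and returns the marker with the smallest sum.

```python
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def choose_reference_point(theoretical: Dict[str, Point],
                           actual: Dict[str, Point]) -> str:
    """Steps (a1)-(a5): for each marker to be measured, sum the absolute
    differences between its theoretical and actual distances to every other
    marker; step (55): on a tie, pick one of the tied markers (here, the first)."""
    ids: List[str] = list(theoretical.keys())        # D1 ... Dm
    sums: Dict[str, float] = {}
    for i in ids:
        total = 0.0
        for j in ids:
            if i == j:
                continue
            l_ij = math.dist(theoretical[i], theoretical[j])   # theoretical relative distance L_ij
            s_ij = math.dist(actual[i], actual[j])             # actual relative distance S_ij
            total += abs(l_ij - s_ij)                          # B_ij = |L_ij - S_ij|
        sums[i] = total                                        # B_i
    best = min(sums.values())
    return [k for k in ids if sums[k] == best][0]
```

In practice the theoretical positions would come from the manual marking of step (2) and the actual positions from the video positioning of step (53); as long as both sets of pairwise distances use the same units, the two coordinate frames do not need to coincide.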
The overall idea of the invention is as follows:
After the auxiliary positioning system is deployed, markers are selected in the parking lot, and the markers, the RSUs, the acquisition units and the field-end MEC nodes are marked on the high-precision map according to their actual positions to obtain the marked map.
A vehicle entering the lot acquires the marked map, and the approximate area of the vehicle in the parking lot, i.e. the sub-area positioning of the invention, is then determined from the position of the in-field camera that captures the vehicle.
A reference point is selected within the sub-area, the relative distance between the reference point and the vehicle is calculated and mapped onto the marked map, and because the reference point is known on the marked map, the position of the vehicle on the marked map is obtained accurately.
For selecting the reference point, the invention proposes a new method according to steps (51)-(54). Only the vehicle and the markers within the shooting range need to be identified by image recognition, and because the vehicle has already been located within a sub-area, the markers can be identified quickly: if the camera is an in-field camera, the actual shooting range is the sub-area itself, and if the camera is a vehicle-mounted camera, the actual shooting range is the part overlapping the sub-area, so the search range is small. After a marker is identified, its coordinates are known from the manual marking, but the marker may have moved, for example a left-turn marking line may have been moved forward by 10 meters because of re-planning of the lot, or an in-field camera may have shifted after being struck by a vehicle, so the actual position can differ from the theoretical position. The marker whose distances to the other markers have changed the least is the one least likely to have moved, and it is therefore taken as the reference point.
Compared with the prior art, the invention has the following advantages:
The invention provides a new real-time vehicle positioning method based on V2X communication and video positioning, which accurately assists in positioning a vehicle in an underground parking lot when GPS signals are weak.
In the method, markers are first selected in the parking lot and marked on a high-precision map to obtain a marked map; after a vehicle enters, the cameras determine which sub-area the vehicle is in, one of the markers in that sub-area is selected as a reference point, and the relative distance between the reference point and the vehicle is calculated and mapped onto the marked map.
The method is not affected by changes in the positions of the markers. Without the reference-point selection method, i.e. if a marker were used directly to position the vehicle by calculating the relative distance between that marker and the vehicle, then whenever the marker had shifted but the map had not been updated, the computed vehicle position would be offset and the positioning inaccurate. By selecting as the reference point the marker least likely to have changed, the invention improves positioning accuracy.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of an assisted positioning system according to the present invention.
Description of the embodiments
The invention will be further described with reference to the accompanying drawings.
Example 1: referring to fig. 1 and 2, a V2X-based station vehicle perception aided positioning method applied to a parking lot capable of V2X communication with a vehicle system of a vehicle, includes the steps of;
(1) An auxiliary positioning system is arranged, and the auxiliary positioning system comprises a plurality of RSUs, a plurality of in-field cameras and a plurality of MEC nodes;
the RSU and the in-field cameras are arranged in the parking lot according to the size of the parking lot, the detection ranges of the RSU completely cover the parking lot, the imaging range of each in-field camera corresponds to a subarea in the parking lot, all subareas completely cover the parking lot, and the in-field cameras are numbered;
the vehicle is provided with a vehicle-mounted camera;
the MEC nodes are arranged in a parking lot or on a vehicle, and are used for acquiring video data of an in-field camera or a vehicle-mounted camera for processing and communicating with a cloud through an RSU;
(2) Manually marking;
obtaining a high-precision map of a parking lot, selecting objects in the parking lot as markers, marking the markers, the RSU, the acquisition unit and the field end MEC nodes on the high-precision map according to actual positions to obtain a marked map, and storing the marked map into a cloud;
(3) The method comprises the steps that a vehicle enters a ground, and a vehicle machine system obtains a marking map through an RSU;
(4) Positioning the subareas;
the method comprises the steps that an in-field camera works, when a vehicle is identified to enter a subarea of the in-field camera, video information is sent to MEC nodes, and the MEC nodes obtain the subarea of the vehicle in a marking map according to the number of the in-field camera;
(5) Selecting a reference point in a shooting range;
(51) Acquiring video information acquired by a camera, wherein the camera is an in-field camera corresponding to the subarea or a vehicle-mounted camera of the vehicle;
(52) Carrying out image recognition on the video information, recognizing a vehicle and a plurality of markers, taking the recognized markers as markers to be detected, acquiring coordinates of the markers to be detected when the markers to be detected are marked manually, and calculating theoretical relative distances between each marker to be detected and other markers to be detected;
(53) Establishing a sub-coordinate system by taking a camera as a round dot, adopting a video positioning method for video information to obtain the positions of the vehicle and each to-be-detected marker in the sub-coordinate system, and calculating the actual relative distance between each to-be-detected marker and other to-be-detected markers in the sub-coordinate system;
(54) Calculating the sum of the distance changes of each to-be-measured marker according to the theoretical relative distance and the actual relative distance, and taking the to-be-measured marker with the minimum sum of the distance changes as a reference point in the shooting range;
(6) And calculating the relative distance between the reference point and the vehicle, and mapping the relative distance in the marking map to obtain the actual position of the vehicle on the marking map.
In this embodiment, the MEC nodes are arranged in the parking lot; each MEC node corresponds to 1-3 in-field cameras according to its computing power, and the MEC nodes together cover all in-field cameras.
In step (2), the objects used as markers include marking lines, stop lines, speed bumps, pillars and/or other objects that do not move without active interference. The marking lines may be turn lines, prohibition lines, and the like.
Step (52) of performing image recognition on the video information comprises:
if the video information is collected by an in-field camera, the vehicle is recognized directly from the video information;
if the video information is collected by the vehicle-mounted camera, the position of the vehicle-mounted camera is marked as the vehicle.
In step (53), the video positioning method includes a video ranging and direction positioning method for objects in front of a travelling motor vehicle.
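The invention does not fix a particular video positioning algorithm for step (53). One common way to obtain ground-plane positions from a fixed in-field camera is a plane homography; the sketch below (the calibration points, coordinates and use of OpenCV are assumptions for illustration only, not part of the invention) maps an image detection into the camera-centered sub-coordinate system:

```python
import cv2
import numpy as np

# Hypothetical calibration: pixel coordinates of at least four ground points whose
# positions in the camera's sub-coordinate system (camera at the origin) are known,
# e.g. corners of a parking-slot marking measured once at installation time.
pixel_pts = np.array([[102, 540], [598, 536], [640, 715], [60, 720]], dtype=np.float32)
ground_pts = np.array([[-2.5, 8.0], [2.5, 8.0], [2.7, 5.0], [-2.7, 5.0]], dtype=np.float32)

H, _ = cv2.findHomography(pixel_pts, ground_pts)   # image plane -> ground plane

def pixel_to_ground(u: float, v: float) -> tuple:
    """Map an image detection (e.g. the foot point of the vehicle or a marker)
    to (x, y) in the camera-centered sub-coordinate system."""
    src = np.array([[[u, v]]], dtype=np.float32)
    x, y = cv2.perspectiveTransform(src, H)[0, 0]
    return float(x), float(y)

# Example: ground position of a detection at pixel (350, 600)
print(pixel_to_ground(350, 600))
```

For the vehicle-mounted camera, a ranging and direction positioning method for objects ahead of the moving vehicle, as named above, would be used instead; the homography approach assumes a static camera viewing a planar floor.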
In step (54), the sum of distance changes of the markers to be measured is calculated as follows:
(a1) Let there be m markers to be measured, denoted D1 to Dm in order;
(a2) For D1, let the theoretical relative distance between D1 and D2 be L12 and the actual relative distance between D1 and D2 be S12; the change amount between D1 and D2 is then B12 = |L12 - S12|;
(a3) Calculate in turn the change amounts B13 to B1m between D1 and D3 to Dm;
(a4) Add B12 to B1m to obtain the sum of distance changes B1 of D1;
(a5) Calculate in turn the sums of distance changes B2 to Bm of D2 to Dm according to steps (a2)-(a4).
Step (55) further provides that, if at least two markers to be measured have the same, minimum sum of distance changes, one of them is selected as the reference point.
Example 2: referring to fig. 1 and 2, in the present embodiment, MEC nodes are laid out on vehicles, one on each vehicle. The remainder was the same as in example 1. In this embodiment, the video information is sent to the vehicle-mounted MEC node, and after being processed, sent to the cloud via the RSU.
Example 3: referring to fig. 1 and 2, the present embodiment includes steps (7) of transmitting the actual position of the vehicle to the cloud via the RSU, in addition to steps (1) - (6), on the basis of embodiment 1 or embodiment 2. The cloud end is mainly used for recording the vehicle track and providing a basis for subsequent data analysis.
In addition, in the step (4) of the invention, the vehicle subarea is positioned, and the BSM information of the vehicle can be directly utilized instead of relying on a camera. BSM message: is an abbreviation of Basic Safety Message, a basic message type in the internet of vehicles. BSM is the most widely used message in V2X communication, and all V2V applications are currently implemented based on BSM messages. The BSM message is the basis of inter-vehicle communication, and contains basic information of the position, speed, direction and the like of the vehicles, and is used for traffic safety and traffic flow optimization among the vehicles. The BSM information can realize real-time communication among vehicles through the Internet of vehicles technology, and traffic safety and efficiency are improved.
The BSM message of the vehicle is sent to the MEC node, which is compared with the marking map, and the rough area of the vehicle can be obtained.
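As a sketch only (the sub-area polygons, the conversion of the BSM position into the marked map's frame, and all names below are assumptions), the comparison of a BSM position against the marked map could be a simple point-in-polygon test over the sub-areas:

```python
from typing import Dict, List, Optional, Tuple

Point = Tuple[float, float]

def point_in_polygon(p: Point, polygon: List[Point]) -> bool:
    """Ray-casting test; polygon vertices are given in order (convex or concave)."""
    x, y = p
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def subarea_from_bsm(bsm_position: Point,
                     subarea_polygons: Dict[str, List[Point]]) -> Optional[str]:
    """Rough area of the vehicle: the first sub-area whose polygon (drawn on the
    marked map) contains the position reported in the BSM; None if outside all."""
    for subarea_id, polygon in subarea_polygons.items():
        if point_in_polygon(bsm_position, polygon):
            return subarea_id
    return None
```

The position carried in a BSM is normally a GNSS-derived fix, so in an underground lot it would typically be the last known or dead-reckoned position and would first have to be converted into the marked map's coordinate frame.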
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (8)

1. A V2X-based station vehicle perception auxiliary positioning method, applied to a parking lot capable of V2X communication with the in-vehicle system of a vehicle, characterized by comprising the following steps:
(1) Deploying an auxiliary positioning system comprising a plurality of RSUs, a plurality of in-field cameras and a plurality of MEC nodes;
the RSUs and in-field cameras being arranged in the parking lot according to its size, the detection ranges of the RSUs completely covering the parking lot, the imaging range of each in-field camera corresponding to a sub-area of the parking lot, all sub-areas together completely covering the parking lot, and the in-field cameras being numbered;
a vehicle-mounted camera being provided on the vehicle;
the MEC nodes being arranged in the parking lot or on vehicles, acquiring video data from the in-field cameras or the vehicle-mounted camera for processing, and communicating with the cloud through the RSUs;
(2) Manual marking:
obtaining a high-precision map of the parking lot, selecting objects in the parking lot as markers, marking the markers, the RSUs, the acquisition units and the field-end MEC nodes on the high-precision map according to their actual positions to obtain a marked map, and storing the marked map in the cloud;
(3) A vehicle entering the parking lot, and the in-vehicle system obtaining the marked map through an RSU;
(4) Sub-area positioning:
the in-field cameras operating, and when a vehicle is recognized as entering the sub-area of an in-field camera, sending the video information to an MEC node, the MEC node determining the sub-area of the vehicle in the marked map according to the number of that in-field camera;
(5) Selecting a reference point within the shooting range:
(51) acquiring the video information collected by a camera, the camera being the in-field camera corresponding to the sub-area or the vehicle-mounted camera of the vehicle;
(52) performing image recognition on the video information to recognize the vehicle and a plurality of markers, taking the recognized markers as markers to be measured, obtaining the coordinates assigned to the markers to be measured during manual marking, and calculating the theoretical relative distance between each marker to be measured and every other marker to be measured;
(53) establishing a sub-coordinate system with the camera as the origin, applying a video positioning method to the video information to obtain the positions of the vehicle and of each marker to be measured in the sub-coordinate system, and calculating the actual relative distance between each marker to be measured and every other marker to be measured in the sub-coordinate system;
(54) calculating the sum of distance changes of each marker to be measured from the theoretical and actual relative distances, and taking the marker to be measured with the smallest sum of distance changes as the reference point within the shooting range;
(6) Calculating the relative distance between the reference point and the vehicle and mapping it onto the marked map to obtain the actual position of the vehicle on the marked map.
2. The V2X-based station vehicle perception auxiliary positioning method according to claim 1, characterized in that the method further comprises step (7): sending the actual position of the vehicle to the cloud through an RSU.
3. The V2X-based station vehicle perception auxiliary positioning method according to claim 1, characterized in that in step (1), if the MEC nodes are arranged in the parking lot, each MEC node corresponds to 1-3 in-field cameras according to its computing power, and the MEC nodes together cover all in-field cameras; if the MEC nodes are arranged on vehicles, one MEC node is arranged on each vehicle.
4. The V2X-based station vehicle perception auxiliary positioning method according to claim 1, characterized in that in step (2), the objects used as markers include marking lines, stop lines, speed bumps, pillars and/or other objects that do not move without active interference.
5. The V2X-based station vehicle perception auxiliary positioning method according to claim 1, characterized in that step (52) of performing image recognition on the video information comprises:
if the video information is collected by an in-field camera, recognizing the vehicle directly from the video information;
if the video information is collected by the vehicle-mounted camera, marking the position of the vehicle-mounted camera as the vehicle.
6. The V2X-based station vehicle perception auxiliary positioning method according to claim 1, characterized in that in step (53), the video positioning method includes a video ranging and direction positioning method for objects in front of a travelling motor vehicle.
7. The V2X-based station vehicle perception auxiliary positioning method according to claim 1, characterized in that in step (54), the sum of distance changes of the markers to be measured is calculated as follows:
(a1) letting there be m markers to be measured, denoted D1 to Dm in order;
(a2) for D1, letting the theoretical relative distance between D1 and D2 be L12 and the actual relative distance between D1 and D2 be S12, the change amount between D1 and D2 being B12 = |L12 - S12|;
(a3) calculating in turn the change amounts B13 to B1m between D1 and D3 to Dm;
(a4) adding B12 to B1m to obtain the sum of distance changes B1 of D1;
(a5) calculating in turn the sums of distance changes B2 to Bm of D2 to Dm according to steps (a2)-(a4).
8. The V2X-based station vehicle perception auxiliary positioning method according to claim 1, characterized in that step (54) further includes: if at least two of the markers to be measured have the same, minimum sum of distance changes, selecting one of them as the reference point.
CN202311111384.XA 2023-08-31 2023-08-31 V2X-based station vehicle perception auxiliary positioning method Pending CN117112713A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311111384.XA CN117112713A (en) 2023-08-31 2023-08-31 V2X-based station vehicle perception auxiliary positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311111384.XA CN117112713A (en) 2023-08-31 2023-08-31 V2X-based station vehicle perception auxiliary positioning method

Publications (1)

Publication Number Publication Date
CN117112713A 2023-11-24

Family

ID=88808896

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311111384.XA Pending CN117112713A (en) 2023-08-31 2023-08-31 V2X-based station vehicle perception auxiliary positioning method

Country Status (1)

Country Link
CN (1) CN117112713A (en)

Similar Documents

Publication Publication Date Title
US11663916B2 (en) Vehicular information systems and methods
CN108230734B (en) Outdoor wisdom parking system based on V2X communication
CN102889892B (en) The method of real scene navigation and navigation terminal
CN105793669B (en) Vehicle position estimation system, device, method, and camera device
CN110208739B (en) Method, device and equipment for assisting vehicle positioning by using V2X based on road side equipment
KR100469714B1 (en) Method and apparatus for collecting traffic information in realtime
TWI537580B (en) Positioning control method
CA2794990C (en) Method for determining traffic flow data in a road network
CN109931927B (en) Track recording method, indoor map drawing method, device, equipment and system
KR101446546B1 (en) Display system of vehicle information based on the position
CN101673460B (en) Traffic information quality evaluation method, device and system therefor
EP4336864A1 (en) Vehicle-road cooperative positioning method and apparatus, vehicle-mounted positioning system, and roadside unit
CN112712697B (en) Lane-level traffic state discrimination method oriented to vehicle-road cooperative application
CN102324182A (en) Traffic road information detection system based on cellular network and detection method thereof
CN112396856A (en) Road condition information acquisition method, traffic signboard and intelligent internet traffic system
CN111757288A (en) Perception base station in road traffic environment and message sending method and device thereof
CN111885524A (en) Indoor positioning method based on V2X technology
CN113029187A (en) Lane-level navigation method and system fusing ADAS fine perception data
CN111212375B (en) Positioning position adjusting method and device
CN112639404A (en) Position determination device for vehicle visual position determination
CN117112713A (en) V2X-based station vehicle perception auxiliary positioning method
CN209842070U (en) Indoor positioning system based on intelligent equipment
Mishra et al. A Novel and Cost Effective Approach to Public Vehicle Tracking System
US11371853B2 (en) Information processing device, information processing method and program
CN109658707A (en) The overhead recording method violating the regulations of electric vehicle and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination