WO2022012316A1 - Control method, vehicle, and server - Google Patents

Control method, vehicle, and server

Info

Publication number
WO2022012316A1
Authority
WO
WIPO (PCT)
Prior art keywords
window
road segment
mode
vehicle
signal
Prior art date
Application number
PCT/CN2021/102846
Other languages
English (en)
French (fr)
Inventor
柴文楠
刘中元
广学令
蒋少峰
赖健明
Original Assignee
广州小鹏自动驾驶科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州小鹏自动驾驶科技有限公司 filed Critical 广州小鹏自动驾驶科技有限公司
Priority to EP21790066.1A priority Critical patent/EP3968609B1/en
Publication of WO2022012316A1 publication Critical patent/WO2022012316A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3807 Creation or updating of map data characterised by the type of data
    • G01C21/3815 Road data
    • G01C21/3819 Road shape data, e.g. outline of a route
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3841 Data obtained from two or more sources, e.g. probe vehicles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3844 Data obtained from position sensors only, e.g. from inertial navigation
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 Registering or indicating the working of vehicles
    • G07C5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C5/0841 Registering performance data
    • G07C5/085 Registering performance data using electronic data carriers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/024 Guidance services
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/029 Location-based management or tracking services
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30 Services specially adapted for particular environments, situations or purposes
    • H04W4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W4/44 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Definitions

  • the present invention relates to the technical field of intelligent assisted driving of vehicles, in particular to a control method, a vehicle and a server.
  • cars now have the function of intelligent network connectivity.
  • the car is equipped with various sensors and high-performance processors, and is connected to the cloud through the Internet to exchange various kinds of information.
  • intelligent navigation and autonomous parking in parking lots have become new features of intelligent-car autonomous driving; how to use the car's various sensors to realize these functions has become a technical problem to be solved.
  • Embodiments of the present invention provide a control method, a vehicle, and a server.
  • control method of the embodiment of the present invention is applied to a vehicle, and the control method includes:
  • the sensor signal comprises an odometer signal
  • identifying the road segment window and collecting sensor signals of the vehicle including:
  • the set driving distance is taken as the size of the road segment window
  • the positioning track corresponding to the driving distance is taken as the road segment window
  • the driving distance of the vehicle is recorded.
  • the date and time when the vehicle exits the road segment window is used as the update time of the road segment window.
  • processing the sensor signal to identify the road segment pattern corresponding to the road segment window comprises:
  • the sensor signal includes a visual odometry signal and an attitude signal
  • the road section mode includes a flat mode or a slope mode
  • the sensor signal is processed to extract the visual odometer signal and the attitude signal, so as to identify whether the road section mode is the flat ground mode or the slope mode; or,
  • the sensor signal includes an odometer signal, an attitude signal and a turning signal
  • the road section mode includes a straight mode or a turning mode
  • the sensor signal is processed to extract the odometer signal, the attitude signal and the turning signal, so as to identify whether the road section mode is the straight mode or the turning mode; or,
  • the sensor signal includes a visual odometry signal and an attitude signal
  • the road section mode includes a good road condition mode or a poor road condition mode
  • the sensor signal is processed to extract the visual odometer signal and the attitude signal, so as to identify whether the road section mode is the good road condition mode or the poor road condition mode; or,
  • the sensor signal includes a position signal and a light intensity signal
  • the road section mode includes an indoor mode or an outdoor mode
  • the sensor signal is processed to extract the position signal and the light intensity signal, so as to identify whether the road section mode is the indoor mode or the outdoor mode.
  • processing the sensor signal to identify the road segment pattern corresponding to the road segment window includes:
  • the corresponding relationship includes a corresponding relationship between the confidence level of the road segment pattern and the positioning track.
  • control method of the embodiment of the present invention is used for a server, and the control method includes:
  • the first road segment window and the second road segment window are fused to update the second road segment window.
  • the scene includes a plurality of the second road segment windows, each of the second road segment windows corresponds to a maturity level, and the control method includes:
  • the current second road segment window and the second road segment window matching it are fused to update the scene.
  • each of the second road segment windows corresponds to an update time
  • the control method includes:
  • control method includes:
  • a collection module, which is configured to collect sensor signals of the vehicle in each road segment window when the vehicle is close to the parking lot;
  • a processing module configured to process the sensor signal to identify the road segment mode corresponding to the road segment window
  • the uploading module is configured to upload the corresponding relationship between the road segment mode corresponding to each road segment window of the vehicle driving track and the positioning track of the vehicle in the parking lot map to the server.
  • the receiving module is configured to receive and save a plurality of correspondences between the road section mode uploaded by the vehicle and the positioning trajectory of the vehicle in the parking lot map, and form a scene road section library;
  • an acquisition module which is used to acquire the current positioning track input by the vehicle
  • a first matching module which is configured to match the current positioning track with each scene in the scene road segment library according to the position signal of the current positioning track
  • the second matching module is configured to match the first road segment window of the current positioning track with the second road segment window in the scene when the current positioning track matches the scene;
  • a fusion module, configured to fuse the first road segment window and the second road segment window to update the second road segment window when the first road segment window matches the second road segment window.
  • in this way, the vehicle can identify the road section modes of the parking lot through its sensor signals and upload to the server the correspondence between the road section mode corresponding to each road section window of the vehicle's driving track and the positioning trajectory of the vehicle in the parking lot map, so that the server can form an effective database to provide support for the intelligent navigation and autonomous parking of the vehicle.
  • FIGS. 1-4 are schematic flowcharts of a vehicle control method according to an embodiment of the present invention.
  • FIG. 5 is a diagram illustrating an example of a control method of a vehicle according to an embodiment of the present invention.
  • FIGS. 6-10 are schematic flowcharts of a control method for a server according to an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of a module of a vehicle according to an embodiment of the present invention.
  • FIG. 12 is a schematic block diagram of a server according to an embodiment of the present invention.
  • first and second are only used for description purposes, and cannot be understood as indicating or implying relative importance or implying the number of indicated technical features. Thus, features defined as “first”, “second” may expressly or implicitly include one or more of said features.
  • “plurality” means two or more, unless otherwise expressly and specifically defined.
  • control method of the embodiment of the present invention is applied to a vehicle, and the control method includes:
  • Step S12 when the vehicle is close to the parking lot, identify the road segment window and collect the sensor signal of the vehicle;
  • Step S14 processing the sensor signal to identify the road segment mode corresponding to the road segment window
  • Step S16 upload the corresponding relationship between the road segment mode corresponding to each road segment window of the vehicle's driving trajectory and the vehicle's positioning trajectory in the parking lot map to the server.
  • the vehicle can identify the road section mode of the parking lot through the sensor signal of the vehicle, and upload the corresponding relationship between the road section mode corresponding to each road section window of the vehicle driving track and the positioning trajectory of the vehicle in the parking lot map to the server,
  • the server can form an effective database to provide support for the intelligent navigation and autonomous parking of the vehicle.
  • in the related art, the positioning trajectory of the vehicle is identified using only a real-time identification algorithm or a machine-vision-based identification algorithm.
  • the real-time identification algorithm requires a large amount of computation, occupies a large amount of memory, and places high demands on the processor.
  • the machine-vision-based recognition algorithm mainly relies on the camera to obtain images; due to the diversity and complexity of the images, it is difficult to achieve efficient and accurate recognition. That is to say, both real-time recognition algorithms and those based only on machine vision have shortcomings, and their accuracy and reliability are relatively low.
  • the control method of the embodiment of the present invention is based on the road segment window and a multi-sensor recognition algorithm: it divides the positioning trajectory of the vehicle into multiple road segment windows and uses the various sensors and high-performance processors on the vehicle to collect and process the sensor signals of the vehicle in each road segment window.
  • in this way, the road section patterns of the parking lot can be identified efficiently and accurately, and an effective database is formed on the server.
  • in some embodiments, the sensor signal includes an odometer signal, and step S12 includes: Step S122: according to the positioning track of the vehicle and the odometer signal, take the set driving distance as the size of the road segment window, take the positioning track corresponding to the driving distance as the road segment window, and record the date and time when the vehicle exits the road segment window as the update time of the road segment window.
  • the positioning trajectory of the vehicle is divided into multiple road segment windows, which facilitates the collection of sensor signals of the vehicle. Further, for each road segment window, corresponding to the time when the vehicle enters the road segment window and the time when the vehicle exits the road segment window, the date and time when the vehicle exits the road segment window is recorded, which is beneficial to the subsequent management of the road segment window.
  • processing the sensor signal includes extracting the sensor signal that is helpful for identifying the road segment mode corresponding to the road segment window, calculating the characteristic value corresponding to the sensor signal, and determining the road segment mode corresponding to the road segment window according to the characteristic value.
  • the processing of the sensor signals can occur after the vehicle exits each road segment window, or after the vehicle has exited all of the road segment windows. It should be noted that each road segment window includes a plurality of independent road segment modes, and step S14 includes processing the sensor signals to identify each road segment mode corresponding to the road segment window.
  • the data uploaded to the server includes the date and time when the vehicle leaves each road segment window, that is, the update time of the road segment window, the latitude and longitude information of each road segment window, the road segment mode corresponding to each road segment window, and the vehicle in the parking lot map positioning trajectory.
  • the uploading method may be to upload the entire vehicle driving trajectory, or upload the vehicle driving trajectory separately in the form of multiple road segment windows.
  • for example, a driving distance of 10 meters is set as the size of the road segment window; that is, according to the odometer signal, every 10 meters of the corresponding positioning trajectory of the vehicle is identified as a road segment window, and the sensor signals of the vehicle within those 10 meters are collected.
  • the collected signals are processed to identify the road segment mode corresponding to each road segment window, and then the correspondence between the road segment mode corresponding to each road segment window of the overall driving track and the vehicle's positioning trajectory in the parking lot map is uploaded to the server.
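The windowing step above can be sketched as follows. This is an illustrative reading of the method, not the patent's implementation; the sample layout and all names are hypothetical, and the 10-meter window size is taken from the example above.

```python
from dataclasses import dataclass, field

WINDOW_SIZE_M = 10.0  # set driving distance used as the window size (from the example)

@dataclass
class SegmentWindow:
    points: list = field(default_factory=list)          # positioning-track samples
    sensor_samples: list = field(default_factory=list)  # raw sensor readings
    update_time: str = ""                               # time the vehicle exits the window

def split_into_windows(samples):
    """Group (odometer_m, point, sensors, timestamp) samples into windows.

    A new window starts whenever the odometer has advanced by the set
    driving distance; the timestamp of the last sample in a window is
    recorded as that window's update time (the exit time).
    """
    windows = []
    current, start = SegmentWindow(), None
    for odo, point, sensors, ts in samples:
        if start is None:
            start = odo
        current.points.append(point)
        current.sensor_samples.append(sensors)
        current.update_time = ts
        if odo - start >= WINDOW_SIZE_M:  # one window length driven
            windows.append(current)
            current, start = SegmentWindow(), None
    if current.points:                    # keep a partial trailing window
        windows.append(current)
    return windows
```

A 30-meter trajectory would thus yield three windows, each carrying its own sensor samples for the mode-identification step.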
  • step S14 includes:
  • Step S142 the sensor signal includes a visual odometry signal and an attitude signal, the road section mode includes a flat mode or a slope mode, and the sensor signal is processed to extract the visual odometer signal and the attitude signal, so as to identify whether the road section mode is a flat mode or a slope mode; or,
  • Step S144 The sensor signal includes an odometer signal, an attitude signal and a turn signal, and the road section mode includes a straight mode or a turning mode, and the sensor signal is processed to extract the odometer signal, the attitude signal and the turning signal, so as to identify whether the road section mode is a straight mode or a turning mode; or,
  • Step S146 the sensor signal includes a visual odometer signal and an attitude signal, the road section mode includes a good road condition mode or a bad road condition mode, and the sensor signal is processed to extract the visual odometer signal and the attitude signal, so as to identify whether the road section mode is a good road condition mode or a bad road condition mode; or,
  • Step S148 The sensor signal includes a position signal and a light intensity signal, the road section mode includes an indoor mode or an outdoor mode, and the sensor signal is processed to extract the position signal and light intensity signal to identify whether the road section mode is an indoor mode or an outdoor mode.
  • the visual odometry signal can be obtained by a visual sensor, and the mean value of the visual odometry yaw angle and the standard deviation of the visual odometry pitch/roll angle can be obtained by calculation.
  • the attitude signal can be obtained by gyroscope and accelerometer, and the mean value of pitch angle, mean value of yaw angular velocity, standard deviation of pitch/roll angular velocity, mean value of downward acceleration, and standard deviation of acceleration can be obtained by calculation.
  • the position signal can be obtained by a satellite positioning sensor (such as a GPS sensor), and the mean value of the satellite signal strength and the mean value of the satellite positioning accuracy (such as the mean value of the GPS positioning accuracy) can be obtained by calculation.
  • the steering signal can be obtained through the wheel speedometer and the Electric Power Steering (EPS) sensor, and the standard deviation of the speed and the mean value of the EPS corner can be obtained by calculation.
  • the light intensity signal can be obtained by the light sensor, and the standard deviation of the light intensity can be obtained by calculation.
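The per-window statistics listed above (means and standard deviations of the raw signals) could be computed as below. The dictionary keys are hypothetical names for the signals described in the text, and `pstdev` (population standard deviation) is only one possible reading of "standard deviation" here.

```python
import statistics

def window_features(samples):
    """Compute per-window statistics from raw sensor samples.

    `samples` is a list of dicts; the keys used here ('pitch', 'yaw_rate',
    'down_accel', 'gps_accuracy', 'light') are illustrative, not
    identifiers from the patent.
    """
    def col(key):
        return [s[key] for s in samples]

    return {
        "pitch_mean": statistics.fmean(col("pitch")),
        "yaw_rate_mean": statistics.fmean(col("yaw_rate")),
        "down_accel_mean": statistics.fmean(col("down_accel")),
        "gps_accuracy_mean": statistics.fmean(col("gps_accuracy")),
        "light_std": statistics.pstdev(col("light")),
    }
```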
  • step S142 it is possible to identify whether the road segment mode is the flat mode or the slope mode through the average value of the yaw angle, the average value of the pitch angle and the average value of the downward acceleration of the visual odometer.
  • Each sensor signal corresponding to the flat mode is weaker than each sensor signal corresponding to the slope mode.
  • step S144 whether the road segment mode is the straight mode or the turning mode can be identified by the visual odometer yaw angle mean value, yaw angle speed mean value, speed standard deviation and EPS corner mean value.
  • Each sensor signal corresponding to the straight mode is weaker than each sensor signal corresponding to the turning mode.
  • step S146 whether the road segment mode is a good road condition mode or a bad road condition mode can be identified by using the visual odometer pitch/roll angle standard deviation, pitch/roll angular velocity standard deviation, and acceleration standard deviation.
  • Each sensor signal corresponding to the good road condition mode is weaker than each sensor signal corresponding to the bad road condition mode.
  • step S148 it is possible to identify whether the road segment mode is the indoor mode or the outdoor mode through the mean value of the satellite signal strength, the mean value of the satellite positioning accuracy (eg, the mean value of the GPS positioning accuracy) and the standard deviation of the light intensity.
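As one hedged illustration of the indoor/outdoor decision in step S148: weak satellite signal, poor positioning accuracy, and little light variation all suggest being indoors. The thresholds and the two-of-three vote below are assumptions, not values from the patent.

```python
def classify_indoor_outdoor(sat_strength_mean, sat_accuracy_mean, light_std,
                            strength_thresh=20.0, accuracy_thresh=5.0,
                            light_thresh=100.0):
    """Vote on indoor vs. outdoor from the three statistics named in the text.

    All thresholds are illustrative; a larger accuracy value is taken to
    mean a larger positioning error (i.e., worse accuracy).
    """
    votes = [
        sat_strength_mean < strength_thresh,  # weak satellite signal
        sat_accuracy_mean > accuracy_thresh,  # poor positioning accuracy
        light_std < light_thresh,             # little light variation
    ]
    return "indoor" if sum(votes) >= 2 else "outdoor"
```

The flat/slope, straight/turning, and road-condition decisions of steps S142-S146 could follow the same pattern over their respective feature sets.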
  • each road section window includes four independent road section modes.
  • each road segment window may also include three, five, six, or another number of independent road segment modes.
  • step S142 is performed first, then step S144 is performed, then step S146 is performed, and finally step S148 is performed.
  • step S144 may also be performed first, then step S142, then step S148, and finally step S146; the execution sequence of steps S142, S144, S146 and S148 may likewise follow other orders, which is not specifically limited here.
  • step S14 includes:
  • Step S141 Calculate the confidence of each identified road segment pattern
  • the corresponding relationship includes the corresponding relationship between the confidence of the road segment pattern and the positioning trajectory.
  • the confidence level of the pattern recognition result of each road segment can be obtained.
  • the confidence is calculated based on the features used for road segment pattern recognition; for example, the confidence of the flat-ground mode is calculated based on the mean value of the visual odometry yaw angle, the mean value of the pitch angle, and the mean value of the downward acceleration.
  • the confidence of the straight mode is calculated based on the mean value of the visual odometry yaw angle, the mean yaw angular velocity, the standard deviation of the speed, and the mean value of the EPS steering angle; the confidence of the good road condition mode is calculated based on the standard deviation of the visual odometry pitch/roll angle, the standard deviation of the pitch/roll angular velocity, and the standard deviation of the acceleration.
  • the confidence of the indoor mode is calculated based on the mean satellite signal strength, the mean GPS positioning accuracy, and the standard deviation of the light intensity.
  • the confidence value is greater than 0 and less than 1; that is, the confidence value can be 0.1, 0.5, 0.9, or any other value between 0 and 1.
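The patent leaves the confidence formula open (an empirical threshold method or supervised learning, as discussed later); one plausible sketch maps a feature's distance from a mode's typical value to a score in (0, 1]:

```python
import math

def mode_confidence(feature_value, mode_center, spread):
    """Squared-exponential score: near the mode's typical value -> close to 1.

    `mode_center` and `spread` are hypothetical calibration constants; the
    patent does not specify this formula, so this is only one plausible choice.
    """
    return math.exp(-((feature_value - mode_center) / spread) ** 2)
```

For several features, the per-feature scores could then be averaged into a single confidence for the mode.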
  • the positioning track of the vehicle includes parts S1, S2, S3, S4, S5, S6 and S7; each part of the positioning track is composed of one or more road segment windows, and each partial positioning trajectory includes one or more of the four independent road segment patterns.
  • both trajectory S1 and trajectory S4 include the flat-ground mode, with corresponding confidence levels of 0.9 and 0.8, respectively. That is, compared with trajectory S4, the mean visual odometry yaw angle, mean pitch angle, and mean downward acceleration of trajectory S1 are more consistent with the description of the flat-ground mode; in other words, the recognition result that track S1 is in the flat-ground mode is more confident than the recognition result that track S4 is.
  • in this way, the confidence level of each road segment pattern can be calculated. After the confidence levels are calculated, the correspondence between the confidence level of the road segment pattern corresponding to each road segment window of the vehicle's driving track and the vehicle's positioning track in the parking lot map is uploaded to the server.
  • the confidence level of the flat ground mode can be calculated using an empirical threshold method or a supervised machine learning method.
  • control method of the embodiment of the present invention is used for the server, and the control method includes:
  • Step S21 Receive and save multiple correspondences between the road section mode uploaded by the vehicle and the positioning trajectory of the vehicle in the parking lot map, and form a scene road section library;
  • Step S23 obtaining the current positioning track input by the vehicle
  • Step S25 matching the current positioning track with each scene in the scene road segment library according to the position signal of the current positioning track;
  • Step S27 in the case that the current positioning track matches the scene, match the first road segment window of the current positioning track with the second road segment window in the scene;
  • Step S29 in the case that the first road segment window matches the second road segment window, the first road segment window and the second road segment window are fused to update the second road segment window.
  • in this way, the server can receive the correspondence, uploaded by the vehicle, between the road section modes and the positioning trajectory of the vehicle in the parking lot map, so that the server can form an effective database to support the intelligent navigation and autonomous parking of the vehicle.
  • in some embodiments, the scene road segment library includes multiple scenes, and each scene includes the positioning track of the vehicle in the parking lot map, the update time of each road segment window, the position signal of each road segment window, and the correspondence between the road segment pattern and the confidence level corresponding to each road segment window.
  • in step S23, the current positioning track input by the vehicle is the most recently received positioning track of the vehicle in the parking lot map, together with its multiple correspondences.
  • the position signal of the current positioning track includes latitude, longitude and altitude information; by comparing this information, it can be determined whether there is a scene in the scene road segment library that matches the current positioning track.
  • the current positioning track includes a plurality of first road segment windows
  • the scene also includes a plurality of second road segment windows
  • matching the first road segment window of the current positioning track with the second road segment window in the scene includes: based on the latitude and longitude information, sequentially matching each first road segment window in the current positioning track with each second road segment window in the scene.
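The sequential window matching could look as follows. The 15-meter tolerance and the equirectangular distance approximation are assumptions (altitude is ignored for brevity); the patent only specifies that matching is based on latitude and longitude.

```python
import math

EARTH_RADIUS_M = 6371000.0

def windows_match(first, second, tol_m=15.0):
    """Match two windows given as (lat, lon) pairs by a simple distance test.

    The equirectangular approximation is adequate at parking-lot scale;
    tol_m is an assumed tolerance, not a value from the patent.
    """
    lat1, lon1 = first
    lat2, lon2 = second
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return EARTH_RADIUS_M * math.hypot(x, y) <= tol_m

def match_trajectory(first_windows, second_windows):
    """Sequentially pair each first window with the first matching second window."""
    pairs = []
    for fw in first_windows:
        for i, sw in enumerate(second_windows):
            if windows_match(fw, sw):
                pairs.append((fw, i))
                break
    return pairs
```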
  • fusing the first road segment window and the second road segment window includes: performing, based on maturity, a weighted average fusion of the position signal, road segment mode, and road-segment-mode confidence of the first and second road segment windows, where the weight is the maturity. The maturity is positively correlated with the number of times the window has been fused and is an integer greater than 0. Specifically, the maturity of every first road segment window is 1, and the maturity of a second road segment window is an integer greater than or equal to 1. Each time a fusion occurs, the maturity of the second road segment window is increased by 1; that is, the maturity of the second road segment window exceeds its fusion count by 1. The more times the second road segment window has been fused, the greater its maturity.
  • the fusion of the position signals of the first road segment window and the second road segment window is a weighted average based on maturity
  • the fusion of the road segment patterns of the first road segment window and the second road segment window is a weighted average based on maturity
  • the fusion of the confidence levels of the road segment patterns of the first road segment window and the second road segment window is a weighted average based on maturity.
  • the weight of the first road segment window is 1, and the weight of the second road segment window is the maturity of the second road segment window.
  • updating the second road segment window in the scene includes updating the update time of the second road segment window and the position signal, road segment mode and confidence level corresponding to the second road segment window, wherein the update time of the updated second road segment window is changed to the update time of the first road segment window in the current positioning track.
  • for example, the confidence level of the flat mode of the first road segment window is 0.9;
  • the maturity of the first road segment window, that is, its weight, is 1;
  • the confidence level of the flat mode of the second road segment window is 0.7.
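The maturity-weighted fusion of these two confidences follows directly from the rules above (first window weight 1, second window weight equal to its maturity, maturity incremented by 1 per fusion). The second window's maturity of 1 in the worked numbers below is an assumption, since the example does not state it.

```python
def fuse_confidence(first_conf, second_conf, maturity):
    """Maturity-weighted average fusion of one confidence value.

    The first road segment window always has weight 1; the second window's
    weight is its maturity, which grows by 1 on every fusion.
    """
    fused = (first_conf * 1 + second_conf * maturity) / (1 + maturity)
    return fused, maturity + 1

# With the values above and an assumed maturity of 1:
# (0.9 * 1 + 0.7 * 1) / (1 + 1) = 0.8, and the maturity becomes 2.
```

The position signal and road segment mode would be fused with the same weights.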
  • the scene includes a plurality of second road segment windows, and each second road segment window corresponds to a maturity level
  • the control method includes:
  • Step S22 when the sum of the maturities in the scene reaches the first preset value, sequentially match the current second road segment window with the remaining second road segment windows in the scene according to the position signal;
  • Step S24 in the case that the current second road segment window matches one of the remaining second road segment windows, the current second road segment window and the matching second road segment window are merged to update the scene.
  • in step S24, based on the maturity, the position signals, road segment modes, and road-segment-mode confidences of the current second road segment window and of the second road segment window matching it are fused to update the scene. It should be noted that when the maturities of the current second road segment window and of its matching second road segment window are not equal to 1, the update time of the fused second road segment window is the later of the update times of the current second road segment window and of the second road segment window matching it.
  • each second road segment window corresponds to an update time
  • the control method includes:
  • Step S26: delete the second road segment window whose update time exceeds the second preset value; or
  • control method includes:
  • Step S28: delete the second road segment window whose maturity is below the third preset value and whose update time exceeds the fourth preset value, wherein the second preset value is greater than the fourth preset value and the third preset value is less than the first preset value.
  • the storage space can be saved, the computation amount can be reduced, and the computation speed can be improved.
  • a road segment window occupies a certain amount of storage space, and matching a road segment window requires traversing all the windows in the scene; the more windows the scene contains, the more computation matching requires.
  • when the update time, or the maturity and update time, of a second road segment window meets a certain condition, the second road segment window is deleted.
  • the update time of the second road segment window may come to exceed the second preset value, or the maturity of the second road segment window may fall below the third preset value while its update time exceeds the fourth preset value.
  • the first preset value is 20
  • the second preset value is 15 days
  • the third preset value is 3
  • the fourth preset value is 10 days.
  • control method includes:
  • Step S31: when the current trajectory does not match any scene, generate a new scene and save it to the scene road segment library;
  • Step S33: when the first road segment window cannot be matched to any second road segment window, generate a new road segment window and save it to the scene.
  • the maturity of the generated new link window is 1.
  • the vehicle 10 includes a collection module 12 , a processing module 14 and an uploading module 16 .
  • the collection module 12 is used for collecting the sensor signals of the vehicle 10 by road segment window when the vehicle 10 approaches the parking lot.
  • the processing module 14 is used for processing the sensor signal to identify the road segment mode corresponding to the road segment window.
  • the uploading module 16 is configured to upload the corresponding relationship between the road segment mode corresponding to each road segment window of the driving trajectory of the vehicle 10 and the positioning trajectory of the vehicle 10 in the parking lot map to the server.
  • the vehicle 10 can identify the road segment modes of the parking lot from its sensor signals and upload the correspondence between the road segment mode of each road segment window of its driving trajectory and its positioning trajectory in the parking lot map to the server, so that the server can build an effective database supporting intelligent navigation and autonomous parking of the vehicle 10.
  • the server 20 includes a receiving module 21 , an obtaining module 23 , a first matching module 25 , a second matching module 27 and a fusion module 29 .
  • the receiving module 21 is configured to receive and save multiple correspondences between the road segment mode uploaded by the vehicle 10 and the positioning track of the vehicle 10 in the parking lot map, and form a scene road segment library.
  • the obtaining module 23 is used to obtain the current positioning track input by the vehicle.
  • the first matching module 25 is configured to match the current positioning track with each scene in the scene road segment library according to the position signal of the current positioning track.
  • the second matching module 27 is configured to match the first link window of the current positioning track with the second link window in the scene when the current positioning track matches the scene.
  • the fusion module 29 is configured to fuse the first link window and the second link window to update the second link window when the first link window matches the second link window.
  • the server 20 can receive the correspondence, uploaded by the vehicle 10, between the road segment modes and the positioning trajectory of the vehicle 10 in the parking lot map, so that the server 20 can build an effective database supporting intelligent navigation and autonomous parking of the vehicle 10.
  • the vehicle 10 may connect to the server 20 through wireless communication (eg, WIFI, mobile communication network, etc.).
  • the vehicle can upload the positioning track with the road section mode to the server 20 for building a parking lot map, and can also obtain the parking lot map with the road section mode in the server 20 for intelligent navigation of the parking lot and autonomous parking.
  • the vehicle 10 obtains the parking lot map with the road section mode in the server 20 for long-distance autonomous parking.
  • when the vehicle 10 approaches a turning, downhill, or poor-condition road segment, the vehicle controller of the vehicle 10 can control the vehicle 10 to decelerate; when the road ahead is a straight or good-condition segment, it can control the vehicle 10 to accelerate; on outdoor road segments, GPS positioning is mainly used, while on indoor road segments, visual odometry positioning is mainly used.
  • first and second are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implying the number of indicated technical features. Thus, a feature delimited with “first”, “second” may expressly or implicitly include at least one of that feature.
  • plurality means at least two, such as two, three, etc., unless otherwise expressly and specifically defined.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

Disclosed are a control method, a vehicle, and a server. The control method is applied to a vehicle and comprises: when the vehicle approaches a parking lot, identifying road segment windows and collecting sensor signals of the vehicle; processing the sensor signals to identify the road segment modes of the parking lot; and uploading to a server the correspondence between the road segment mode corresponding to each road segment window of the vehicle's driving trajectory and the positioning trajectory of the vehicle in the parking lot map. With this control method, the vehicle can identify the road segment modes of the parking lot from its sensor signals and upload the correspondence between the road segment mode of each road segment window of its driving trajectory and its positioning trajectory in the parking lot map to the server, enabling the server to build an effective database that supports intelligent navigation and autonomous parking of the vehicle.

Description

Control method, vehicle, and server
Cross-reference
This application claims priority to the earlier Chinese application No. 202010685584.6, entitled "Control method, vehicle, and server", filed on July 16, 2020, the content of which is incorporated herein by reference.
Technical field
The present invention relates to the technical field of intelligent driver assistance for vehicles, and in particular to a control method, a vehicle, and a server.
Background
In the related art, vehicles feature intelligent connectivity: they are equipped with a variety of sensors and high-performance processors and exchange information with the cloud over the Internet. Intelligent navigation and autonomous parking inside parking lots have become new features of intelligent autonomous driving. How to use the vehicle's sensors to realize these functions is a technical problem to be solved.
Summary
Embodiments of the present invention provide a control method, a vehicle, and a server.
The control method of an embodiment of the present invention is applied to a vehicle, and the control method comprises:
when the vehicle approaches a parking lot, identifying road segment windows and collecting sensor signals of the vehicle;
processing the sensor signals to identify the road segment mode corresponding to each road segment window; and
uploading to a server the correspondence between the road segment mode corresponding to each road segment window of the vehicle's driving trajectory and the positioning trajectory of the vehicle in the parking lot map.
In some embodiments, the sensor signals include an odometer signal, and
identifying road segment windows and collecting sensor signals of the vehicle when the vehicle approaches a parking lot comprises:
according to the positioning trajectory of the vehicle and the odometer signal, taking a preset driving distance as the size of a road segment window, taking the portion of the positioning trajectory corresponding to the driving distance as the road segment window, and recording the date and time at which the vehicle exits the road segment window as the update time of the road segment window.
In some embodiments, processing the sensor signals to identify the road segment mode corresponding to a road segment window comprises:
the sensor signals include a visual odometry signal and an attitude signal, the road segment modes include a flat mode or a slope mode, and the sensor signals are processed to extract the visual odometry signal and the attitude signal so as to identify whether the road segment mode is the flat mode or the slope mode; or,
the sensor signals include an odometer signal, an attitude signal, and a steering signal, the road segment modes include a straight mode or a turning mode, and the sensor signals are processed to extract the odometer signal, the attitude signal, and the steering signal so as to identify whether the road segment mode is the straight mode or the turning mode; or,
the sensor signals include a visual odometry signal and an attitude signal, the road segment modes include a good-road-condition mode or a poor-road-condition mode, and the sensor signals are processed to extract the visual odometry signal and the attitude signal so as to identify whether the road segment mode is the good-road-condition mode or the poor-road-condition mode; or,
the sensor signals include a position signal and a light intensity signal, the road segment modes include an indoor mode or an outdoor mode, and the sensor signals are processed to extract the position signal and the light intensity signal so as to identify whether the road segment mode is the indoor mode or the outdoor mode.
In some embodiments, processing the sensor signals to identify the road segment mode corresponding to a road segment window comprises:
calculating a confidence level for each identified road segment mode;
the correspondence includes the correspondence between the confidence levels of the road segment modes and the positioning trajectory.
The control method of an embodiment of the present invention is applied to a server, and the control method comprises:
receiving and saving multiple correspondences, uploaded by vehicles, between road segment modes and the vehicles' positioning trajectories in the parking lot map, and forming a scene road segment library;
obtaining a current positioning trajectory input by the vehicle;
matching the current positioning trajectory against each scene in the scene road segment library according to the position signal of the current positioning trajectory;
when the current positioning trajectory matches a scene, matching the first road segment windows of the current positioning trajectory against the second road segment windows in the scene; and
when a first road segment window matches a second road segment window, fusing the first road segment window with the second road segment window to update the second road segment window.
In some embodiments, the scene includes a plurality of second road segment windows, each second road segment window has a corresponding maturity, and the control method comprises:
when the sum of the maturities of the scene reaches a first preset value, sequentially matching, according to the position signals, the current second road segment window against the remaining second road segment windows in the scene; and
when the current second road segment window matches one of the remaining second road segment windows, fusing the current second road segment window with the matching second road segment window to update the scene.
In some embodiments, each second road segment window has a corresponding update time, and the control method comprises:
deleting a second road segment window whose update time exceeds a second preset value; or
deleting a second road segment window whose maturity is below a third preset value and whose update time exceeds a fourth preset value, wherein the second preset value is greater than the fourth preset value and the third preset value is less than the first preset value.
In some embodiments, the control method comprises:
when the current trajectory does not match any scene, generating a new scene and saving it to the scene road segment library; and
when a first road segment window does not match any second road segment window, generating a new road segment window and saving it to the scene.
A vehicle according to an embodiment of the present invention comprises:
a collection module, configured to collect sensor signals of the vehicle by road segment window when the vehicle approaches a parking lot;
a processing module, configured to process the sensor signals to identify the road segment mode corresponding to each road segment window; and
an uploading module, configured to upload to a server the correspondence between the road segment mode corresponding to each road segment window of the vehicle's driving trajectory and the positioning trajectory of the vehicle in the parking lot map.
A server according to an embodiment of the present invention comprises:
a receiving module, configured to receive and save multiple correspondences, uploaded by vehicles, between road segment modes and the vehicles' positioning trajectories in the parking lot map, and to form a scene road segment library;
an obtaining module, configured to obtain a current positioning trajectory input by the vehicle;
a first matching module, configured to match the current positioning trajectory against each scene in the scene road segment library according to the position signal of the current positioning trajectory;
a second matching module, configured to match, when the current positioning trajectory matches a scene, the first road segment windows of the current positioning trajectory against the second road segment windows in the scene; and
a fusion module, configured to fuse, when a first road segment window matches a second road segment window, the first road segment window with the second road segment window to update the second road segment window.
With the above control methods, vehicle, and server, the vehicle can identify the road segment modes of the parking lot from its sensor signals and upload the correspondence between the road segment mode of each road segment window of its driving trajectory and its positioning trajectory in the parking lot map to the server, enabling the server to build an effective database that supports intelligent navigation and autonomous parking of the vehicle.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and in part will become apparent from the description or be learned by practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of embodiments in conjunction with the accompanying drawings, in which:
Figs. 1-4 are schematic flowcharts of a vehicle control method according to embodiments of the present invention;
Fig. 5 is an illustrative diagram of a vehicle control method according to an embodiment of the present invention;
Figs. 6-10 are schematic flowcharts of a server control method according to embodiments of the present invention;
Fig. 11 is a schematic block diagram of a vehicle according to an embodiment of the present invention;
Fig. 12 is a schematic block diagram of a server according to an embodiment of the present invention.
Detailed description
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the present invention and should not be construed as limiting it.
In the description of the embodiments of the present invention, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Thus, a feature qualified with "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of the present invention, "a plurality of" means two or more, unless expressly and specifically defined otherwise.
Referring to Fig. 1, a control method according to an embodiment of the present invention is applied to a vehicle and comprises:
Step S12: when the vehicle approaches a parking lot, identifying road segment windows and collecting sensor signals of the vehicle;
Step S14: processing the sensor signals to identify the road segment mode corresponding to each road segment window;
Step S16: uploading to a server the correspondence between the road segment mode corresponding to each road segment window of the vehicle's driving trajectory and the positioning trajectory of the vehicle in the parking lot map.
With the above control method, the vehicle can identify the road segment modes of the parking lot from its sensor signals and upload the correspondence between the road segment mode of each road segment window of its driving trajectory and its positioning trajectory in the parking lot map to the server, enabling the server to build an effective database that supports intelligent navigation and autonomous parking of the vehicle.
In the related art, when building such a database, the vehicle's positioning trajectory is recognized either by real-time recognition algorithms or by purely machine-vision-based recognition algorithms. Real-time algorithms are computationally heavy, memory-hungry, and demand powerful processors; machine-vision-based algorithms rely mainly on camera images and, given the diversity and complexity of images, struggle to achieve efficient and accurate recognition. In other words, both real-time recognition and purely machine-vision-based recognition have shortcomings, offering relatively low accuracy and reliability.
By contrast, the control method of this embodiment uses a recognition algorithm based on road segment windows and multiple sensors: the vehicle's positioning trajectory is divided into multiple road segment windows, and the vehicle's various sensors and high-performance processor are used to collect and process the sensor signals within each window, so that the road segment modes of the parking lot are identified efficiently and accurately and an effective database is formed on the server.
Specifically, referring to Fig. 2, in some embodiments the sensor signals include an odometer signal, and step S12 comprises step S122: according to the vehicle's positioning trajectory and the odometer signal, taking a preset driving distance as the size of a road segment window, taking the portion of the positioning trajectory corresponding to that driving distance as the road segment window, and recording the date and time at which the vehicle exits the road segment window as the window's update time.
In this way, the vehicle's positioning trajectory is divided into multiple road segment windows, which facilitates collecting the vehicle's sensor signals. Further, each road segment window has a corresponding entry time and exit time; recording the date and time at which the vehicle exits the window facilitates subsequent management of the windows.
In step S14, processing the sensor signals includes extracting the signals that help identify the road segment mode of the window, computing feature values from those signals, and determining the window's road segment mode from the feature values. The processing may take place after the vehicle exits one road segment window or after it exits all road segment windows. It should be pointed out that each road segment window includes multiple mutually independent road segment modes, and step S14 includes processing the sensor signals to identify each road segment mode of the window.
In step S16, the data uploaded to the server includes the date and time at which the vehicle exits each road segment window (i.e., the window's update time), the latitude, longitude, and altitude of each window, the road segment modes of each window, and the vehicle's positioning trajectory in the parking lot map. The driving trajectory may be uploaded as a whole or uploaded separately in the form of multiple road segment windows.
In one example, a driving distance of 10 meters is set as the window size: according to the odometer signal, every 10 meters the vehicle travels, the corresponding portion of the positioning trajectory is identified as one road segment window; the vehicle's sensor signals within those 10 meters are collected and processed to identify the window's road segment modes, and then the correspondence between the road segment modes of all windows of the driving trajectory and the vehicle's positioning trajectory in the parking lot map is uploaded to the server.
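As a rough illustration of how a positioning trajectory might be divided into 10-meter road segment windows using the odometer signal, consider the sketch below. It is a minimal reading of the example above, not the patented implementation; the tuple layout of the track samples and the handling of the final partial window are assumptions.

```python
from dataclasses import dataclass, field

WINDOW_SIZE_M = 10.0  # example window size from the text


@dataclass
class Window:
    points: list = field(default_factory=list)  # (x, y, odom_dist) samples
    update_time: str = ""  # date/time at which the vehicle exits the window


def split_into_windows(track, size=WINDOW_SIZE_M):
    """Split a positioning track into road segment windows of `size` meters.

    `track` is a list of (x, y, odom_dist, timestamp) tuples, where
    odom_dist is the cumulative odometer reading in meters.
    """
    windows, current, start = [], Window(), None
    for x, y, d, t in track:
        if start is None:
            start = d  # odometer reading when the window opens
        current.points.append((x, y, d))
        if d - start >= size:  # the vehicle has driven out of the window
            current.update_time = t  # exit date/time = window update time
            windows.append(current)
            current, start = Window(), None
    return windows
```

A trailing portion shorter than the window size is simply left open here; the patent does not specify how such a remainder is treated.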
Referring to Fig. 3, in some embodiments, step S14 comprises:
Step S142: the sensor signals include a visual odometry signal and an attitude signal, and the road segment modes include a flat mode or a slope mode; the sensor signals are processed to extract the visual odometry signal and the attitude signal so as to identify whether the road segment mode is the flat mode or the slope mode; or,
Step S144: the sensor signals include an odometer signal, an attitude signal, and a steering signal, and the road segment modes include a straight mode or a turning mode; the sensor signals are processed to extract the odometer signal, the attitude signal, and the steering signal so as to identify whether the road segment mode is the straight mode or the turning mode; or,
Step S146: the sensor signals include a visual odometry signal and an attitude signal, and the road segment modes include a good-road-condition mode or a poor-road-condition mode; the sensor signals are processed to extract the visual odometry signal and the attitude signal so as to identify whether the road segment mode is the good-road-condition mode or the poor-road-condition mode; or,
Step S148: the sensor signals include a position signal and a light intensity signal, and the road segment modes include an indoor mode or an outdoor mode; the sensor signals are processed to extract the position signal and the light intensity signal so as to identify whether the road segment mode is the indoor mode or the outdoor mode.
In this way, the road segment modes of the parking lot can be accurately identified by extracting different sensor signals. Specifically, the visual odometry signal can be obtained from a vision sensor, from which the mean visual-odometry yaw angle and the standard deviation of the visual-odometry pitch/roll angles can be computed. The attitude signal can be obtained from a gyroscope and an accelerometer, from which the mean pitch angle, mean yaw rate, pitch/roll rate standard deviation, mean downward acceleration, and acceleration standard deviation can be computed. The position signal can be obtained from a satellite positioning sensor (e.g., a GPS sensor), from which the mean satellite signal strength and the mean satellite positioning accuracy (e.g., mean GPS positioning accuracy) can be computed. The steering signal can be obtained from wheel-speed sensors and the electric power steering (EPS) sensor, from which the speed standard deviation and the mean EPS steering angle can be computed. The light intensity signal can be obtained from a light sensor, from which the light intensity standard deviation can be computed.
Further, referring to Table 1, in step S142 the road segment mode can be identified as the flat mode or the slope mode from the mean visual-odometry yaw angle, the mean pitch angle, and the mean downward acceleration. The sensor signals corresponding to the flat mode are weaker than those corresponding to the slope mode.
In step S144, the road segment mode can be identified as the straight mode or the turning mode from the mean visual-odometry yaw angle, the mean yaw rate, the speed standard deviation, and the mean EPS steering angle. The sensor signals corresponding to the straight mode are weaker than those corresponding to the turning mode.
In step S146, the road segment mode can be identified as the good-road-condition mode or the poor-road-condition mode from the visual-odometry pitch/roll standard deviation, the pitch/roll rate standard deviation, and the acceleration standard deviation. The sensor signals corresponding to the good-road-condition mode are weaker than those corresponding to the poor-road-condition mode.
In step S148, the road segment mode can be identified as the indoor mode or the outdoor mode from the mean satellite signal strength, the mean satellite positioning accuracy (e.g., mean GPS positioning accuracy), and the light intensity standard deviation. The sensor signals corresponding to the indoor mode are weaker than those corresponding to the outdoor mode.
Table 1
Figure PCTCN2021102846-appb-000001
It should be noted that the flat/slope, straight/turning, good/poor road condition, and indoor/outdoor pairs above are four mutually independent road segment modes; that is, each road segment window includes four independent road segment modes. In other embodiments, each road segment window may include three, five, six, or another number of independent road segment modes. In addition, in the embodiment of Fig. 3, step S142 is performed first, then step S144, then step S146, and finally step S148; in other embodiments, step S144 may be performed first, then step S142, then step S148, and finally step S146, and the execution order of steps S142, S144, S146, and S148 may also be any other order, which is not specifically limited here.
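To make the per-window decision logic of steps S142-S148 concrete, the sketch below classifies one window as flat or slope by voting over the three features named for step S142 (mean visual-odometry yaw angle, mean pitch angle, mean downward acceleration). The threshold values and the majority-vote rule are illustrative assumptions; the patent states only that flat-mode signals are weaker than slope-mode signals.

```python
# Illustrative thresholds only -- the patent does not disclose concrete values.
VO_YAW_MEAN_THRESH = 2.0       # degrees
PITCH_MEAN_THRESH = 3.0        # degrees
DOWN_ACCEL_MEAN_THRESH = 0.5   # m/s^2 deviation from gravity


def classify_flat_or_slope(vo_yaw_mean, pitch_mean, down_accel_mean):
    """Flat vs. slope decision for one road segment window.

    Signals on a flat segment are weaker than on a slope (Table 1),
    so a feature exceeding its threshold votes for the slope mode.
    """
    votes = [
        abs(vo_yaw_mean) > VO_YAW_MEAN_THRESH,
        abs(pitch_mean) > PITCH_MEAN_THRESH,
        abs(down_accel_mean) > DOWN_ACCEL_MEAN_THRESH,
    ]
    return "slope" if sum(votes) >= 2 else "flat"
```

The other three mode pairs (straight/turning, good/poor road condition, indoor/outdoor) would follow the same pattern with their own feature sets from Table 1.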
Referring to Fig. 4, in some embodiments, step S14 comprises:
Step S141: calculating a confidence level for each identified road segment mode;
the correspondence includes the correspondence between the confidence levels of the road segment modes and the positioning trajectory.
In this way, the degree of confidence of each mode recognition result can be obtained. Specifically, the confidence level is calculated from the features used for mode recognition: for example, the confidence of the flat mode is calculated from the mean visual-odometry yaw angle, the mean pitch angle, and the mean downward acceleration; the confidence of the straight mode from the mean visual-odometry yaw angle, the mean yaw rate, the speed standard deviation, and the mean EPS steering angle; the confidence of the good-road-condition mode from the visual-odometry pitch/roll standard deviation, the pitch/roll rate standard deviation, and the acceleration standard deviation; and the confidence of the indoor mode from the mean satellite signal strength, the mean GPS positioning accuracy, and the light intensity standard deviation.
The confidence level is a value greater than 0 and less than 1; for example, it may be 0.1, 0.5, 0.9, or any other value between 0 and 1. During mode recognition, the better the features fit the corresponding mode, the larger the confidence value. Referring to Fig. 5, in the illustrated embodiment the vehicle's positioning trajectory comprises parts S1, S2, S3, S4, S5, S6, and S7; each part consists of one or more road segment windows and includes one or more of the four independent road segment modes. Trajectories S1 and S4 both include the flat mode, with confidence levels of 0.9 and 0.8 respectively; that is, compared with trajectory S4, the mean visual-odometry yaw angle, mean pitch angle, and mean downward acceleration of trajectory S1 better fit the description of the flat mode, so the flat-mode recognition result of S1 is more trustworthy than that of S4. In addition, the confidence of each road segment mode can be calculated while the mode is being recognized; once the confidences are calculated, the correspondence between the confidence levels of the road segment modes of each window of the driving trajectory and the vehicle's positioning trajectory in the parking lot map is uploaded to the server.
In one example, the confidence of the flat mode is calculated from the mean visual-odometry yaw angle, the mean pitch angle, and the mean downward acceleration using an empirical threshold method and supervised machine learning.
Referring to Fig. 6, a control method according to an embodiment of the present invention is applied to a server and comprises:
Step S21: receiving and saving multiple correspondences, uploaded by vehicles, between road segment modes and the vehicles' positioning trajectories in the parking lot map, and forming a scene road segment library;
Step S23: obtaining a current positioning trajectory input by the vehicle;
Step S25: matching the current positioning trajectory against each scene in the scene road segment library according to the position signal of the current positioning trajectory;
Step S27: when the current positioning trajectory matches a scene, matching the first road segment windows of the current positioning trajectory against the second road segment windows in the scene;
Step S29: when a first road segment window matches a second road segment window, fusing the first road segment window with the second road segment window to update the second road segment window.
With the above control method, the server can receive the correspondence, uploaded by vehicles, between the road segment mode of each window of a vehicle's driving trajectory and the vehicle's positioning trajectory in the parking lot map, enabling the server to build an effective database that supports intelligent navigation and autonomous parking of vehicles.
Specifically, in step S21, the scene road segment library contains multiple scenes, and each scene contains the correspondence between the vehicle's positioning trajectory in the parking lot map and the update time of each road segment window, the position signal of each window, and the road segment modes and confidence levels corresponding to each window.
In step S23, the current positioning trajectory input by the vehicle is the newly received positioning trajectory of the vehicle in the parking lot map carrying multiple such correspondences.
In step S25, the position signal of the current positioning trajectory includes latitude, longitude, and altitude; by comparing these, it can be determined whether the scene road segment library contains a scene matching the current positioning trajectory.
In step S27, the current positioning trajectory contains multiple first road segment windows, and the scene likewise contains multiple second road segment windows; matching them includes matching, based on latitude, longitude, and altitude, each first window of the current trajectory against each second window in the scene in turn.
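A hedged sketch of the window matching in step S27: each first window is compared against the scene's second windows by position, and the nearest one within a tolerance is taken as the match. The patent compares latitude/longitude/altitude; this sketch uses a local metric (x, y) frame, and the 5-meter tolerance is an invented illustrative value.

```python
import math

MATCH_TOL_M = 5.0  # illustrative matching tolerance, not from the patent


def window_distance(p1, p2):
    """Euclidean distance between two window positions given as (x, y)
    in meters (a local metric frame standing in for lat/lon/alt)."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])


def match_window(first_window_pos, second_window_positions, tol=MATCH_TOL_M):
    """Return the index of the closest second window within `tol`, else None."""
    best_i, best_d = None, tol
    for i, pos in enumerate(second_window_positions):
        d = window_distance(first_window_pos, pos)
        if d <= best_d:
            best_i, best_d = i, d
    return best_i
```

A `None` result corresponds to the "first window cannot be matched" branch, in which case step S33 below would save the window into the scene as a new one.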
In step S29, fusing the first road segment window with the second road segment window comprises: based on maturity, fusing by weighted average the position signals, road segment modes, and mode confidence levels of the first and second windows, where the weight is the maturity. Maturity is positively correlated with the number of fusions a window has undergone and is an integer greater than 0. Specifically, all first road segment windows have a maturity of 1, and a second road segment window's maturity is an integer equal to or greater than 1. Each fusion increments the second window's maturity by 1, i.e., the difference between a second window's maturity and its fusion count is 1: the more fusions a second window has undergone, the larger its maturity.
Further, the fusion of the position signals of the first and second windows is a maturity-weighted average, as are the fusions of their road segment modes and of their mode confidence levels. The weight of the first window is 1, and the weight of the second window is the second window's maturity. The confidence of a road segment mode of the fused second window is computed as: confidence = (confidence of the first window's mode × 1 + confidence of the second window's mode × maturity of the second window) / (1 + maturity of the second window).
In addition, updating the second road segment window in the scene includes updating its update time, position signal, and the road segment modes and confidence levels corresponding to it, where the updated second window's update time is changed to the update time of the first window in the current positioning trajectory.
In one example, the confidence of the flat mode of the first window is 0.9 and its maturity, i.e. weight, is 1; the confidence of the flat mode of the second window is 0.7 and its maturity, i.e. weight, is 3. The confidence of the flat mode of the fused second window is then (0.9 × 1 + 0.7 × 3) / (1 + 3) = 0.75.
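The maturity-weighted fusion and the worked example above can be reproduced directly. This is a minimal sketch under the stated weights (first window weight 1, second window weight equal to its maturity); the dictionary-based window representation is an assumption.

```python
def fuse_confidence(first_conf, second_conf, second_maturity):
    """Maturity-weighted average of one mode's confidence levels.

    The first window always has weight 1; the second window's weight
    is its maturity.
    """
    return (first_conf * 1 + second_conf * second_maturity) / (1 + second_maturity)


def fuse_windows(first, second):
    """Fuse two matching windows: weighted-average each mode confidence
    and increment the second window's maturity by one."""
    m = second["maturity"]
    fused_conf = {
        mode: fuse_confidence(c, second["conf"][mode], m)
        for mode, c in first["conf"].items()
    }
    return {"conf": fused_conf, "maturity": m + 1}
```

For the example values, `fuse_confidence(0.9, 0.7, 3)` yields (0.9 + 2.1) / 4 = 0.75, and the fused window's maturity becomes 4.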
Referring to Fig. 7, in some embodiments, the scene includes multiple second road segment windows, each second road segment window has a corresponding maturity, and the control method comprises:
Step S22: when the sum of the maturities of the scene reaches a first preset value, sequentially matching, according to the position signals, the current second road segment window against the remaining second road segment windows in the scene;
Step S24: when the current second road segment window matches one of the remaining second road segment windows, fusing the current second window with the matching second window to update the scene.
In this way, new road segment windows that were added to the scene because they could not be matched can be re-matched and fused. Consider a road segment window A in a positioning trajectory Z: when Z is first input to the server, Z matches scene Y in the scene road segment library based on the position signal, but no window in Y matches A based on position, so A is saved in Y as a new window. As A's maturity increases, its position signal may change; that is, A may gradually approach window B in scene Y and become able to match and fuse with B. Therefore, after the maturity sum of the second windows reaches the first preset value, the second windows are re-matched and fused.
Specifically, in step S24, the position signals, road segment modes, and mode confidence levels of the current second window and the matching second window are fused based on maturity to update the scene. It should be noted that when the maturities of both windows are unequal to 1, the update time of the fused window is the later of the two windows' update times.
Referring to Fig. 8, in some embodiments, each second road segment window has a corresponding update time, and the control method comprises:
Step S26: deleting a second road segment window whose update time exceeds a second preset value; or,
referring to Fig. 9, the control method comprises:
Step S28: deleting a second road segment window whose maturity is below a third preset value and whose update time exceeds a fourth preset value, wherein the second preset value is greater than the fourth preset value and the third preset value is less than the first preset value.
In this way, storage space is saved, the amount of computation is reduced, and computation speed is improved. A road segment window occupies a certain amount of storage space, and matching a window requires traversing all windows in the scene: the more windows a scene contains, the more computation matching requires. Therefore, a second window is deleted when its update time, or its maturity and update time, meet certain conditions.
Specifically, when the scene environment changes and the window's information becomes outdated, or when a window's positioning error is so large that it cannot be matched and fused with other windows, the window's update time may come to exceed the second preset value, or its maturity may remain below the third preset value while its update time exceeds the fourth preset value.
In one example, the first preset value is 20, the second preset value is 15 days, the third preset value is 3, and the fourth preset value is 10 days. Once a scene's maturity sum reaches 20, the windows in the scene are re-matched and fused to update the scene. When a window's update time exceeds 15 days, it is deleted from the scene to save storage space. When a window's maturity is below 3 and its update time exceeds 10 days, it is likewise deleted from the scene to save storage space.
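The maintenance rules of this example (re-match once a scene's maturity sum reaches 20; delete windows older than 15 days, or older than 10 days when their maturity is below 3) can be sketched as follows. The preset values are the ones given in the text, but the dictionary-based window representation is an assumption.

```python
from datetime import datetime, timedelta

FIRST_PRESET = 20                     # maturity sum that triggers re-matching
SECOND_PRESET = timedelta(days=15)    # absolute staleness limit (step S26)
THIRD_PRESET = 3                      # low-maturity threshold (step S28)
FOURTH_PRESET = timedelta(days=10)    # staleness limit for immature windows


def needs_rematch(windows):
    """Re-match and fuse the scene once its maturity sum reaches the preset."""
    return sum(w["maturity"] for w in windows) >= FIRST_PRESET


def prune(windows, now):
    """Drop windows that are stale (S26) or both immature and stale (S28)."""
    kept = []
    for w in windows:
        age = now - w["update_time"]
        if age > SECOND_PRESET:
            continue  # outdated regardless of maturity
        if w["maturity"] < THIRD_PRESET and age > FOURTH_PRESET:
            continue  # immature and not refreshed recently
        kept.append(w)
    return kept
```
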
It should be pointed out that the specific values mentioned above serve only as examples to explain the implementation of the present invention in detail and should not be construed as limiting it. In other examples, embodiments, or implementations, other values may be chosen in accordance with the present invention, which is not specifically limited here.
Referring to Fig. 10, in some embodiments, the control method comprises:
Step S31: when the current trajectory does not match any scene, generating a new scene and saving it to the scene road segment library;
Step S33: when a first road segment window does not match any second road segment window, generating a new road segment window and saving it to the scene.
This helps refine the scenes and build an effective scene road segment library. Specifically, a newly generated road segment window has a maturity of 1.
Referring to Fig. 11, a vehicle 10 according to an embodiment of the present invention comprises a collection module 12, a processing module 14, and an uploading module 16. The collection module 12 is configured to collect the sensor signals of the vehicle 10 by road segment window when the vehicle 10 approaches a parking lot. The processing module 14 is configured to process the sensor signals to identify the road segment mode corresponding to each road segment window. The uploading module 16 is configured to upload to a server the correspondence between the road segment mode corresponding to each road segment window of the driving trajectory of the vehicle 10 and the positioning trajectory of the vehicle 10 in the parking lot map.
With the above vehicle 10, the vehicle 10 can identify the road segment modes of the parking lot from its sensor signals and upload the correspondence between the road segment mode of each window of its driving trajectory and its positioning trajectory in the parking lot map to the server, enabling the server to build an effective database that supports intelligent navigation and autonomous parking of the vehicle 10.
It should be pointed out that the above explanations of the beneficial effects of the control method for the vehicle and the control method for the server also apply to the vehicle 10 and to the server of the following embodiment; to avoid redundancy, they are not elaborated here.
Referring to Fig. 12, a server 20 according to an embodiment of the present invention comprises a receiving module 21, an obtaining module 23, a first matching module 25, a second matching module 27, and a fusion module 29. The receiving module 21 is configured to receive and save multiple correspondences, uploaded by the vehicle 10, between road segment modes and the positioning trajectory of the vehicle 10 in the parking lot map, and to form a scene road segment library. The obtaining module 23 is configured to obtain a current positioning trajectory input by the vehicle. The first matching module 25 is configured to match the current positioning trajectory against each scene in the scene road segment library according to the position signal of the current positioning trajectory. The second matching module 27 is configured to match, when the current positioning trajectory matches a scene, the first road segment windows of the current positioning trajectory against the second road segment windows in the scene. The fusion module 29 is configured to fuse, when a first road segment window matches a second road segment window, the first window with the second window to update the second window.
With the above server 20, the server 20 can receive the correspondence, uploaded by the vehicle 10, between the road segment mode of each window of the driving trajectory of the vehicle 10 and the positioning trajectory of the vehicle 10 in the parking lot map, enabling the server 20 to build an effective database that supports intelligent navigation and autonomous parking of the vehicle 10.
Specifically, the vehicle 10 may connect to the server 20 by wireless communication (e.g., WIFI, a mobile communication network, etc.). In some embodiments, the vehicle can upload positioning trajectories carrying road segment modes to the server 20 for building the parking lot map, and can also obtain from the server 20 the parking lot map carrying road segment modes for intelligent navigation in the parking lot and autonomous parking. In one example, the vehicle 10 obtains the parking lot map with road segment modes from the server 20 for long-distance autonomous parking: when the vehicle 10 approaches a turning, downhill, or poor-condition road segment, the vehicle controller of the vehicle 10 can control the vehicle 10 to decelerate; when the road ahead is a straight or good-condition segment, it can control the vehicle 10 to accelerate; on outdoor road segments, GPS positioning is mainly used, while on indoor road segments, visual odometry positioning is mainly used.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", "some examples", and the like means that the specific features, structures, materials, or characteristics described in connection with that embodiment or example are included in at least one embodiment or example of the present invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples, and those skilled in the art may combine different embodiments or examples described in this specification, and features thereof, provided they do not contradict one another.
In addition, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Thus, a feature qualified with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality of" means at least two, e.g., two, three, etc., unless expressly and specifically defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing specific logical functions or steps of the process; and the scope of preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
Although embodiments of the present invention have been shown and described above, it is to be understood that the above embodiments are exemplary and should not be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.

Claims (10)

  1. A control method for a vehicle, wherein the control method comprises:
    when the vehicle approaches a parking lot, identifying road segment windows and collecting sensor signals of the vehicle;
    processing the sensor signals to identify the road segment mode corresponding to each road segment window; and
    uploading to a server the correspondence between the road segment mode corresponding to each road segment window of the vehicle's driving trajectory and the positioning trajectory of the vehicle in the parking lot map.
  2. The control method according to claim 1, wherein the sensor signals include an odometer signal, and
    identifying road segment windows and collecting sensor signals of the vehicle when the vehicle approaches a parking lot comprises:
    according to the positioning trajectory of the vehicle and the odometer signal, taking a preset driving distance as the size of a road segment window, taking the portion of the positioning trajectory corresponding to the driving distance as the road segment window, and recording the date and time at which the vehicle exits the road segment window as the update time of the road segment window.
  3. The control method according to claim 1, wherein processing the sensor signals to identify the road segment mode corresponding to a road segment window comprises:
    the sensor signals include a visual odometry signal and an attitude signal, the road segment modes include a flat mode or a slope mode, and the sensor signals are processed to extract the visual odometry signal and the attitude signal so as to identify whether the road segment mode is the flat mode or the slope mode; or,
    the sensor signals include an odometer signal, an attitude signal, and a steering signal, the road segment modes include a straight mode or a turning mode, and the sensor signals are processed to extract the odometer signal, the attitude signal, and the steering signal so as to identify whether the road segment mode is the straight mode or the turning mode; or,
    the sensor signals include a visual odometry signal and an attitude signal, the road segment modes include a good-road-condition mode or a poor-road-condition mode, and the sensor signals are processed to extract the visual odometry signal and the attitude signal so as to identify whether the road segment mode is the good-road-condition mode or the poor-road-condition mode; or,
    the sensor signals include a position signal and a light intensity signal, the road segment modes include an indoor mode or an outdoor mode, and the sensor signals are processed to extract the position signal and the light intensity signal so as to identify whether the road segment mode is the indoor mode or the outdoor mode.
  4. The control method according to claim 1, wherein processing the sensor signals to identify the road segment mode corresponding to a road segment window comprises:
    calculating a confidence level for each identified road segment mode;
    the correspondence includes the correspondence between the confidence levels of the road segment modes and the positioning trajectory.
  5. A control method for a server, wherein the control method comprises:
    receiving and saving multiple correspondences, uploaded by vehicles, between road segment modes and the vehicles' positioning trajectories in the parking lot map, and forming a scene road segment library;
    obtaining a current positioning trajectory input by the vehicle;
    matching the current positioning trajectory against each scene in the scene road segment library according to the position signal of the current positioning trajectory;
    when the current positioning trajectory matches a scene, matching the first road segment windows of the current positioning trajectory against the second road segment windows in the scene; and
    when a first road segment window matches a second road segment window, fusing the first road segment window with the second road segment window to update the second road segment window.
  6. The control method according to claim 5, wherein the scene includes a plurality of second road segment windows, each second road segment window has a corresponding maturity, and the control method comprises:
    when the sum of the maturities of the scene reaches a first preset value, sequentially matching, according to the position signals, the current second road segment window against the remaining second road segment windows in the scene; and
    when the current second road segment window matches one of the remaining second road segment windows, fusing the current second road segment window with the matching second road segment window to update the scene.
  7. The control method according to claim 6, wherein each second road segment window has a corresponding update time, and the control method comprises:
    deleting a second road segment window whose update time exceeds a second preset value; or
    deleting a second road segment window whose maturity is below a third preset value and whose update time exceeds a fourth preset value, wherein the second preset value is greater than the fourth preset value and the third preset value is less than the first preset value.
  8. The control method according to claim 5, wherein the control method comprises:
    when the current trajectory does not match any scene, generating a new scene and saving it to the scene road segment library; and
    when a first road segment window does not match any second road segment window, generating a new road segment window and saving it to the scene.
  9. A vehicle, wherein the vehicle comprises:
    a collection module, configured to collect sensor signals of the vehicle by road segment window when the vehicle approaches a parking lot;
    a processing module, configured to process the sensor signals to identify the road segment mode corresponding to each road segment window; and
    an uploading module, configured to upload to a server the correspondence between the road segment mode corresponding to each road segment window of the vehicle's driving trajectory and the positioning trajectory of the vehicle in the parking lot map.
  10. A server, wherein the server comprises:
    a receiving module, configured to receive and save multiple correspondences, uploaded by vehicles, between road segment modes and the vehicles' positioning trajectories in the parking lot map, and to form a scene road segment library;
    an obtaining module, configured to obtain a current positioning trajectory input by the vehicle;
    a first matching module, configured to match the current positioning trajectory against each scene in the scene road segment library according to the position signal of the current positioning trajectory;
    a second matching module, configured to match, when the current positioning trajectory matches a scene, the first road segment windows of the current positioning trajectory against the second road segment windows in the scene; and
    a fusion module, configured to fuse, when a first road segment window matches a second road segment window, the first road segment window with the second road segment window to update the second road segment window.
PCT/CN2021/102846 2020-07-16 2021-06-28 控制方法、车辆和服务器 WO2022012316A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP21790066.1A EP3968609B1 (en) 2020-07-16 2021-06-28 Control method, vehicle, and server

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010685584.6 2020-07-16
CN202010685584.6A CN111885138A (zh) 2020-07-16 2020-07-16 控制方法、车辆和服务器

Publications (1)

Publication Number Publication Date
WO2022012316A1 true WO2022012316A1 (zh) 2022-01-20

Family

ID=73155472

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/102846 WO2022012316A1 (zh) 2020-07-16 2021-06-28 控制方法、车辆和服务器

Country Status (3)

Country Link
EP (1) EP3968609B1 (zh)
CN (1) CN111885138A (zh)
WO (1) WO2022012316A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111885138A (zh) * 2020-07-16 2020-11-03 广州小鹏车联网科技有限公司 控制方法、车辆和服务器
CN112985404A (zh) * 2021-02-09 2021-06-18 广州小鹏自动驾驶科技有限公司 一种停车场众包地图生成方法、装置、设备和介质
DE102022200142A1 (de) * 2022-01-10 2023-07-13 Robert Bosch Gesellschaft mit beschränkter Haftung Schätzen eines Bewegungszustands eines Fahrzeugs aus Kameradaten mithilfe von maschinellem Lernen

Citations (9)

Publication number Priority date Publication date Assignee Title
CN110175654A (zh) * 2019-05-29 2019-08-27 广州小鹏汽车科技有限公司 一种轨迹路标的更新方法及系统
CN110287803A (zh) * 2019-05-29 2019-09-27 广州小鹏汽车科技有限公司 一种轨迹路标的识别方法及系统
CN110519701A (zh) * 2019-08-15 2019-11-29 广州小鹏汽车科技有限公司 定位信息的创建方法、车载终端、服务器设备和定位系统
EP3598260A1 (en) * 2018-07-17 2020-01-22 Baidu USA LLC Multimodal motion planning framework for autonomous driving vehicles
US20200064846A1 (en) * 2018-08-21 2020-02-27 GM Global Technology Operations LLC Intelligent vehicle navigation systems, methods, and control logic for multi-lane separation and trajectory extraction of roadway segments
US20200088525A1 (en) * 2018-09-15 2020-03-19 Toyota Research Institute, Inc. Systems and methods for vehicular navigation and localization
CN110962843A (zh) * 2018-09-30 2020-04-07 上海汽车集团股份有限公司 一种自动泊车控制决策方法及系统
CN111076732A (zh) * 2018-10-19 2020-04-28 百度(美国)有限责任公司 基于车辆行驶的轨迹标记和生成高清地图的标记方案
CN111885138A (zh) * 2020-07-16 2020-11-03 广州小鹏车联网科技有限公司 控制方法、车辆和服务器

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
IL276754B1 (en) * 2018-03-05 2024-07-01 Mobileye Vision Technologies Ltd Systems and methods for obtaining anonymous navigation information
CN110702132B (zh) * 2019-09-27 2020-07-31 速度时空信息科技股份有限公司 基于道路标记点和道路属性的微路网地图数据的采集方法

Patent Citations (9)

Publication number Priority date Publication date Assignee Title
EP3598260A1 (en) * 2018-07-17 2020-01-22 Baidu USA LLC Multimodal motion planning framework for autonomous driving vehicles
US20200064846A1 (en) * 2018-08-21 2020-02-27 GM Global Technology Operations LLC Intelligent vehicle navigation systems, methods, and control logic for multi-lane separation and trajectory extraction of roadway segments
US20200088525A1 (en) * 2018-09-15 2020-03-19 Toyota Research Institute, Inc. Systems and methods for vehicular navigation and localization
CN110962843A (zh) * 2018-09-30 2020-04-07 上海汽车集团股份有限公司 一种自动泊车控制决策方法及系统
CN111076732A (zh) * 2018-10-19 2020-04-28 百度(美国)有限责任公司 基于车辆行驶的轨迹标记和生成高清地图的标记方案
CN110175654A (zh) * 2019-05-29 2019-08-27 广州小鹏汽车科技有限公司 一种轨迹路标的更新方法及系统
CN110287803A (zh) * 2019-05-29 2019-09-27 广州小鹏汽车科技有限公司 一种轨迹路标的识别方法及系统
CN110519701A (zh) * 2019-08-15 2019-11-29 广州小鹏汽车科技有限公司 定位信息的创建方法、车载终端、服务器设备和定位系统
CN111885138A (zh) * 2020-07-16 2020-11-03 广州小鹏车联网科技有限公司 控制方法、车辆和服务器

Non-Patent Citations (1)

Title
See also references of EP3968609A4 *

Also Published As

Publication number Publication date
EP3968609B1 (en) 2023-12-06
CN111885138A (zh) 2020-11-03
EP3968609A1 (en) 2022-03-16
EP3968609C0 (en) 2023-12-06
EP3968609A4 (en) 2022-07-06

Similar Documents

Publication Publication Date Title
WO2022012316A1 (zh) 控制方法、车辆和服务器
RU2737874C1 (ru) Способ хранения информации транспортного средства, способ управления движением транспортного средства и устройство хранения информации транспортного средства
JP5057184B2 (ja) 画像処理システム及び車両制御システム
CN105270410B (zh) 用于自主驾驶车辆的路径规划的精确曲率估计算法
CN111516673B (zh) 基于智能摄像头和高精地图定位的车道线融合系统及方法
CN107085938B (zh) 基于车道线与gps跟随的智能驾驶局部轨迹容错规划方法
US11520340B2 (en) Traffic lane information management method, running control method, and traffic lane information management device
JP5966747B2 (ja) 車両走行制御装置及びその方法
CN107169468A (zh) 用于控制车辆的方法和装置
JP2020516880A (ja) ポリゴンにおける中間点を低減する方法および装置
CN105955257A (zh) 基于固定路线的公交车自动驾驶系统及其驾驶方法
CN113359171B (zh) 基于多传感器融合的定位方法、装置和电子设备
EP4242998A1 (en) Traffic stream information determination method and apparatus, electronic device and storage medium
CN110411440B (zh) 一种道路采集方法、装置、服务器及存储介质
US20200033141A1 (en) Data generation method for generating and updating a topological map for at least one room of at least one building
CN113405555B (zh) 一种自动驾驶的定位传感方法、系统及装置
CN112433531A (zh) 一种自动驾驶车辆的轨迹跟踪方法、装置及计算机设备
CN114889606B (zh) 一种基于多传感融合的低成本高精定位方法
CN116670610A (zh) 用于公共速度映射和导航的系统和方法
CN116027375B (zh) 自动驾驶车辆的定位方法、装置及电子设备、存储介质
CN115388880B (zh) 一种低成本记忆泊车建图与定位方法、装置及电子设备
CN114625164A (zh) 基于无人机母车的无人机智能返航方法
CN115046546A (zh) 一种基于车道线识别的自动驾驶汽车定位系统及方法
JP5557036B2 (ja) 退出判定装置、退出判定プログラム及び退出判定方法
CN115143977A (zh) 一种快速的高精度地图构建方法及其装置、车辆

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2021790066

Country of ref document: EP

Effective date: 20211026

NENP Non-entry into the national phase

Ref country code: DE