WO2024036607A1 - Positioning method, device, and intelligent driving device - Google Patents

Positioning method, device, and intelligent driving device

Info

Publication number
WO2024036607A1
Authority
WO
WIPO (PCT)
Prior art keywords
ambient light
light sensor
position information
plane
light source
Prior art date
Application number
PCT/CN2022/113626
Other languages
English (en)
French (fr)
Inventor
王炳蘅
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to PCT/CN2022/113626
Publication of WO2024036607A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/45Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement

Definitions

  • the present application relates to the field of intelligent driving, and more specifically, to a positioning method, device and intelligent driving equipment.
  • GNSS: global navigation satellite system
  • GNSS positioning accuracy is unstable, and positioning may be unavailable in some scenarios (for example, scenarios with poor network signals).
  • Although vision fusion positioning technology can assist or replace GNSS for positioning, the object perceived by vision fusion positioning technology is visible light, so the accuracy and precision of vehicle positioning drop in scenarios where the natural light around the vehicle is insufficient.
  • Although laser fusion positioning technology can also perform positioning, its sensing object is laser light, and the application cost of this technology is relatively high. Therefore, in scenarios where the natural light around the vehicle is insufficient, how to position the vehicle with low cost and high accuracy is an urgent problem that needs to be solved.
  • This application provides a positioning method, device and intelligent driving equipment, which can enable the intelligent driving equipment to perform positioning in a low-cost and high-precision manner in scenes with insufficient natural light.
  • the intelligent driving equipment includes a vehicle.
  • In a first aspect, a positioning method is provided, including: obtaining first position information of a first light source; determining a first ambient light sensor corresponding to the first light source, where the first ambient light sensor is located in an ambient light sensor group, and the ambient light sensor group is installed on the intelligent driving device; and determining second position information of the intelligent driving device according to orientation information of the first ambient light sensor and the first position information.
  • the first light source may be a point light source on the road (for example, a street light).
  • the first location information may indicate the geographical location of the first light source.
  • the first location information may be obtained from a high-precision map or point cloud image, or may be obtained from other devices through vehicle-to-everything (V2X).
  • the ambient light sensor group may include multiple ambient light sensors, and the overall shape (appearance) thereof may be a smooth and continuous curved surface without any concave portions.
  • The direction in which each of the multiple ambient light sensors perceives light can be perpendicular to the curved surface of that ambient light sensor.
  • the orientation information of the first ambient light sensor can be understood as the direction in which the first ambient light sensor faces the first light source. Furthermore, the orientation information can be the pose of the first ambient light sensor.
  • The second position information of the intelligent driving device can be determined based on the orientation information of the first ambient light sensor and the first position information of the first light source. In this way, the intelligent driving device can be positioned in a low-cost, high-precision manner in a scene with insufficient natural light.
  • the intelligent driving equipment includes a vehicle.
  • Determining the first ambient light sensor according to the light emitted by the first light source includes: obtaining, for each ambient light sensor in the ambient light sensor group, the light intensity value corresponding to the first light source; and determining the ambient light sensor with the largest light intensity value in the ambient light sensor group as the first ambient light sensor.
  • The light intensity value corresponding to each ambient light sensor and the first light source can be obtained from the reading of each ambient light sensor, and the sensor corresponding to the maximum reading is the first ambient light sensor.
  • the value range of the reading may be between 0 and 90.
  • The larger the reading, the larger the light intensity value corresponding to the ambient light sensor and the first light source.
  • the first light source directly or approximately directly illuminates the first ambient light sensor.
  • The light intensity value corresponding to each ambient light sensor in the ambient light sensor group and the first light source can be counted, and the sensor with the largest light intensity value is determined as the first ambient light sensor. In this way, the first ambient light sensor corresponding to the first light source can be determined quickly and efficiently, and precise positioning information of the intelligent driving device can then be calculated.
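The selection rule described above amounts to taking the sub-sensor with the maximum reading. A minimal illustrative sketch (not part of the patent; the sensor IDs, data structure, and reading values are assumptions):

```python
# Illustrative only: pick the sub-sensor with the largest light intensity reading
# as the "first ambient light sensor". Readings are assumed to lie in the 0-90
# range mentioned above; IDs and values are invented for the example.
def select_first_ambient_light_sensor(readings: dict) -> str:
    """readings maps sub-sensor id -> light intensity reading."""
    return max(readings, key=readings.get)

readings = {"s03": 12.0, "s08": 41.5, "s17": 88.0, "s21": 63.2}
assert select_first_ambient_light_sensor(readings) == "s17"  # s17 is (nearly) directly lit
```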
  • Optionally, the orientation information is the pose of the first ambient light sensor in the world coordinate system. Before the second position information of the intelligent driving device is determined according to the orientation information of the first ambient light sensor and the first position information, the method further includes: obtaining third position information of the intelligent driving device, where the third position information is determined based on the global navigation satellite system. Determining the second position information of the intelligent driving device based on the orientation information of the first ambient light sensor and the first position information includes: determining fourth position information of the first light source based on the pose of the first ambient light sensor in the world coordinate system, where the fourth position information is determined based on the theoretical incident light ray of the first ambient light sensor and a first plane, the first plane is the plane where the first light source is located, the first plane is determined based on a second plane, the second plane is the plane where the intelligent driving device is located, and the first plane is parallel to the second plane; determining a first offset vector according to the first position information and the fourth position information; and determining the second position information of the intelligent driving device according to the first offset vector and the third position information.
  • the world coordinate system can refer to the absolute coordinate system of the system, and the world coordinate system can be a system that describes the positional relationship of objects on the earth, which can be used to represent the absolute positions of objects on the earth.
  • the third position information may be rough initial positioning information of the intelligent driving device. Based on the third position information, it can be determined which light source or light sources the intelligent driving device is located near.
  • the theoretical incident light of the ambient light sensor can be obtained.
  • The intersection of the theoretical incident light ray and the first plane gives the fourth position information of the first light source. If the third position information of the intelligent driving device is accurate positioning information, then the first position information and the fourth position information of the first light source are the same, and there is no need to calibrate the third position information of the intelligent driving device. In most cases, however, the first position information and the fourth position information are not the same, and in this case the offset vector needs to be calculated.
  • The offset vector can be understood as the offset between the position indicated by the first position information and the position indicated by the fourth position information.
  • The second position information is the accurate positioning information of the intelligent driving device.
  • To determine the fourth position information, the first plane must first be obtained. The first plane is the plane where the light source is located and is determined based on the second plane; the second plane is the plane where the intelligent driving device is located, and the first plane is parallel to the second plane.
  • the second plane is a plane determined based on the driving direction and attitude of the intelligent driving device.
  • If the intelligent driving device is regarded as a point in the world coordinate system, the second plane may be the tangent plane of the road surface at that point.
  • the positioning information of the intelligent driving device and the location information of the intelligent driving device have the same meaning.
  • The theoretical incident light ray can be determined based on the pose of the first ambient light sensor in the world coordinate system, and the theoretical positioning point of the first light source can be obtained, thereby obtaining the offset vector; this offset vector is then used to calibrate the initial position of the intelligent driving device. In this way, accurate positioning information of the intelligent driving device can be obtained.
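As a rough illustration of this calibration idea (an editorial sketch, not the patent's implementation; all coordinates are assumed to be expressed in one world frame and the values are invented):

```python
import numpy as np

# first position information: mapped position of the first light source
light_pos_map = np.array([10.0, 5.0, 6.0])
# fourth position information: light-source position inferred from the theoretical incident ray
light_pos_inferred = np.array([10.8, 4.6, 6.0])
# third position information: coarse GNSS fix of the intelligent driving device
gnss_pos = np.array([2.0, 1.0, 0.0])

offset = light_pos_map - light_pos_inferred   # first offset vector
calibrated_pos = gnss_pos + offset            # second position information
```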
  • The method further includes: obtaining fifth position information of the second light source; determining a second ambient light sensor corresponding to the second light source, where the second ambient light sensor is located in the ambient light sensor group; determining sixth position information of the second light source based on the pose of the second ambient light sensor in the world coordinate system, where the sixth position information is determined by the theoretical incident light ray of the second ambient light sensor and a third plane, the third plane is the plane where the second light source is located, the third plane is determined based on the second plane, and the second plane is parallel to the third plane; and determining a second offset vector based on the fifth position information and the sixth position information. Determining the second position information of the intelligent driving device according to the first offset vector and the third position information includes: determining the second position information according to the first offset vector, the second offset vector and the third position information.
  • the correct first ambient light sensor and the second ambient light sensor can be determined through mathematical analysis of the ambient light sensor group readings.
  • The local maximum light intensity readings corresponding to the first light source and the second light source can be obtained by analyzing the local maxima of the ambient light sensor group readings, and then the information of the first ambient light sensor and the second ambient light sensor can be obtained.
  • The second offset vector can be determined based on the fifth position information and the sixth position information of the second light source, so that the initial position of the intelligent driving device can be calibrated according to the first offset vector and the second offset vector, which is beneficial to obtaining more accurate position information of the intelligent driving device.
  • the ambient light sensor group includes a plurality of ambient light sensors, and the surface of each ambient light sensor in the plurality of ambient light sensors is a smooth, non-dented curved surface.
  • the above-mentioned ambient light sensor group can also be a smooth and continuous assembly without concave curved surfaces.
  • The surface of each ambient light sensor in the ambient light sensor group is a smooth curved surface without depressions. In this way, the theoretical incident light ray can be obtained more accurately, thereby obtaining more accurate position information of the intelligent driving device.
  • the ambient light sensor group includes solar panels, and the solar panels are distributed on the top of the intelligent driving device.
  • the ambient light sensor group may include solar panels.
  • the intelligent driving device can be charged while driving and can also achieve low-cost, high-precision positioning.
  • In a second aspect, a positioning device is provided, which includes an acquisition unit and a processing unit. The acquisition unit is used to acquire first position information of a first light source. The processing unit is used to: determine a first ambient light sensor corresponding to the first light source, where the first ambient light sensor is located in an ambient light sensor group, and the ambient light sensor group is installed on the intelligent driving device; and determine second position information of the intelligent driving device according to the orientation information of the first ambient light sensor and the first position information.
  • the acquisition unit is further configured to acquire the light intensity value corresponding to the first light source for each ambient light sensor in the ambient light sensor group;
  • the processing unit is configured to determine that the ambient light sensor with the largest light intensity value in the ambient light sensor group is the first ambient light sensor.
  • Optionally, the orientation information is the pose of the first ambient light sensor in the world coordinate system. The acquisition unit is also used to acquire third position information of the intelligent driving device, where the third position information is determined based on the global navigation satellite system. The processing unit is configured to: determine fourth position information of the first light source based on the pose of the first ambient light sensor in the world coordinate system, where the fourth position information is determined based on the theoretical incident light ray of the first ambient light sensor and a first plane, the first plane is the plane where the first light source is located, the first plane is determined based on a second plane, the second plane is the plane where the intelligent driving device is located, and the first plane is parallel to the second plane; determine a first offset vector according to the first position information and the fourth position information; and determine the second position information of the intelligent driving device according to the first offset vector and the third position information.
  • Optionally, the acquisition unit is also used to obtain fifth position information of the second light source. The processing unit is also used to: determine a second ambient light sensor corresponding to the second light source, where the second ambient light sensor is located in the ambient light sensor group; determine sixth position information of the second light source according to the pose of the second ambient light sensor in the world coordinate system, where the sixth position information is determined by the theoretical incident light ray of the second ambient light sensor and a third plane, the third plane is the plane where the second light source is located, the third plane is determined based on the second plane, and the second plane is parallel to the third plane; determine a second offset vector according to the fifth position information and the sixth position information; and determine the second position information according to the first offset vector, the second offset vector and the third position information.
  • the ambient light sensor group includes a plurality of ambient light sensors, and the surface of each ambient light sensor in the plurality of ambient light sensors is a smooth, non-dented curved surface.
  • the ambient light sensor group includes solar panels, and the solar panels are distributed on the top of the intelligent driving device.
  • In a third aspect, a positioning device is provided, which includes at least one processor and a memory. The at least one processor is coupled to the memory and is used to read and execute instructions in the memory, so that the device implements the methods in the above first aspect and each of its implementation manners.
  • In a fourth aspect, a computer-readable medium is provided, which stores program code. When the program code is run on a computer, it causes the computer to execute the method in the above-mentioned first aspect and its respective implementation manners.
  • In a fifth aspect, a chip is provided, which includes a circuit for performing the methods in the above aspects.
  • In a sixth aspect, an intelligent driving device is provided, which includes: the positioning device in the second aspect or the third aspect and an ambient light sensor group, where the ambient light sensor group includes a plurality of ambient light sensors, and the surface of each of the plurality of ambient light sensors is a smooth curved surface without concave portions.
  • Figure 1 is a functional schematic diagram of an intelligent driving device provided by an embodiment of the present application.
  • Figure 2 is a system architecture applicable to the positioning method provided by the embodiment of the present application.
  • Figure 3 is a schematic flow chart of a positioning method provided by an embodiment of the present application.
  • Figure 4 is a schematic flow chart of another positioning method provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of the position of the ambient light sensor group on the vehicle provided by the embodiment of the present application.
  • Figure 6 is a schematic diagram of the shape and orientation of the ambient light sensor group provided by the embodiment of the present application.
  • Figure 7 is a line graph of the incident angle of light and the luminous flux per unit area of the ambient light sensor provided by the embodiment of the present application;
  • Figure 8 is a schematic diagram of ambient light sensor readings that can be obtained by the sub-sensor provided in the embodiment of the present application.
  • Figure 9 is a schematic diagram of a method for determining the positioning point M provided by the embodiment of the present application.
  • Figure 10 is an application scenario applicable to the positioning method provided by the embodiment of the present application.
  • Figure 11 is another applicable application scenario for the positioning method provided by the embodiment of the present application.
  • Figure 12 is a positioning device provided by an embodiment of the present application.
  • Figure 13 is another positioning device provided by an embodiment of the present application.
  • "At least one of a, b, or c" can mean: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c can each be singular or plural.
  • Prefixes such as “first” and “second” are used in the embodiments of this application only to distinguish different description objects, and have no limiting effect on the position, order, priority, quantity or content of the described objects.
  • The use of ordinal words and other prefixes used to distinguish the described objects does not impose a redundant limitation on the described objects.
  • GNSS positioning accuracy is unstable, and positioning may be unavailable in some scenarios (for example, scenarios with poor network signals).
  • Although vision fusion positioning technology can assist or replace GNSS for positioning, the object perceived by vision fusion positioning technology is visible light, so the accuracy and precision of vehicle positioning drop in scenarios where the natural light around the vehicle is insufficient.
  • the expected error value of the vehicle's positioning accuracy and stability at night may increase by 2 to 4 times.
  • laser fusion positioning technology can also perform positioning
  • the sensing object of laser fusion positioning technology is laser.
  • Laser fusion positioning technology is relatively costly in terms of energy consumption, procurement, and computation. Therefore, in some specific scenes where natural light is insufficient, how to position vehicles with low cost and high accuracy is an urgent problem that needs to be solved.
  • the above-mentioned specific scene may be, for example, night, dusk, dawn, evening, etc.
  • This application provides a positioning method, device and intelligent driving equipment, which can enable intelligent driving equipment, including vehicles, to perform positioning in a low-cost, high-precision manner in scenarios where natural light is insufficient.
  • FIG. 1 is a functional schematic diagram of an intelligent driving device 100 provided by an embodiment of the present application. It should be understood that FIG. 1 and related descriptions are only examples and do not limit the intelligent driving equipment in the embodiments of the present application.
  • the intelligent driving device 100 may be configured in a fully or partially autonomous driving mode, or may be manually driven by the user.
  • the intelligent driving device 100 can obtain its surrounding environment information through the sensing system 120, and obtain an autonomous driving strategy based on the analysis of the surrounding environment information to achieve fully autonomous driving, or present the analysis results to the user to achieve partially autonomous driving.
  • the intelligent driving device 100 may include various subsystems, such as the perception system 120, the computing platform 130, and the display device 140.
  • the intelligent driving device 100 may include more or fewer subsystems, and each subsystem may include one or more components.
  • each subsystem and component of the intelligent driving device 100 can be interconnected in a wired or wireless manner.
  • the sensing system 120 may include several types of sensors for sensing information about the environment around the intelligent driving device 100 .
  • the sensing system 120 may include a positioning system, which may be a global positioning system (GPS), Beidou system, or other positioning systems.
  • the sensing system 120 may include one or more of an inertial measurement unit (IMU), lidar, millimeter wave radar, ultrasonic radar, and camera device.
  • the computing platform 130 may include processors 131 to 13n (n is a positive integer).
  • a processor is a circuit with signal processing capabilities.
  • the processor may be a circuit with instruction reading and execution capabilities.
  • For example, the processor may be a central processing unit (CPU), a microprocessor, a graphics processing unit (GPU), or a digital signal processor (DSP).
  • The processor can realize certain functions through the logical relationship of a hardware circuit; the logical relationship of the hardware circuit may be fixed or reconfigurable. For example, the processor may be a hardware circuit implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), such as a field-programmable gate array (FPGA).
  • the process of the processor loading the configuration file and realizing the hardware circuit configuration can be understood as the process of the processor loading instructions to realize the functions of some or all of the above units.
  • The processor can also be a hardware circuit designed for artificial intelligence, which can be understood as an ASIC, such as a neural network processing unit (NPU), a tensor processing unit (TPU), or a deep learning processing unit (DPU).
  • the computing platform 130 may also include a memory, which is used to store instructions. Some or all of the processors 131 to 13n may call instructions in the memory to implement corresponding functions.
  • Computing platform 130 may control functions of intelligent driving device 100 based on input received from various subsystems (eg, perception system 120). In some embodiments, the computing platform 130 may be used to provide control of many aspects of the intelligent driving device 100 and its subsystems.
  • the intelligent driving device 100 traveling on the road can identify objects in its surrounding environment to determine adjustments to the current speed.
  • the objects may be other vehicles, traffic control equipment, or other types of objects.
  • each identified object can be considered independently, and the speed to be adjusted by the smart driving device 100 can be determined based on the object's respective characteristics, such as its current speed, acceleration, distance from the vehicle, etc.
  • The intelligent driving device 100 or the sensing and computing device (e.g., computing platform 130) associated with the intelligent driving device 100 may predict the behavior of the identified objects based on the characteristics of the recognized objects and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.).
  • each recognized object depends on the behavior of each other, so it is also possible to predict the behavior of a single recognized object by considering all recognized objects together.
  • the intelligent driving device 100 in this application may include: road vehicles, water vehicles, air vehicles, industrial equipment, agricultural equipment, or entertainment equipment, etc.
  • The intelligent driving device can be a vehicle in a broad sense, such as a means of transportation (such as commercial vehicles, passenger cars, motorcycles, flying cars, trains, etc.), an industrial vehicle (such as forklifts, trailers, tractors, etc.), an engineering vehicle (such as excavators, bulldozers, cranes, etc.), agricultural equipment (such as lawn mowers, harvesters, etc.), amusement equipment, toy vehicles, etc.
  • the embodiments of this application do not specifically limit the types of vehicles.
  • the intelligent driving device can be a means of transportation such as an airplane or a ship.
  • the following takes the intelligent driving equipment as a vehicle as an example to illustrate the technical problems that need to be solved in this application and the technical solutions adopted.
  • FIG. 2 is a system architecture 200 applicable to the positioning method provided by the embodiment of the present application.
  • the system architecture shown in FIG. 2 can be applied to the intelligent driving device 100 of FIG. 1 .
  • the architecture 200 may include: a sensor abstraction module 210 , a perception module 220 , a positioning module 230 and a planning control module 240 .
  • the sensor abstraction module 210 can preprocess input data according to the sensor type, and input the preprocessed data to the perception module 220 and the positioning module 230 .
  • the sensing module 220 can obtain the driving environment data of the vehicle based on the input data, and send the driving environment data to the positioning module 230 .
  • the positioning module 230 can position the vehicle according to the vehicle's driving environment data and the data preprocessed by the sensor abstraction module 210, and input the positioning result data into the planning control module 240.
  • The planning control module 240 can implement automatic driving functions based on the positioning result data.
  • the embodiment of the present application optimizes the data output by the sensor abstraction module 210 and the perception module 220 to the positioning module 230, thereby improving the accuracy of vehicle positioning.
  • the above-mentioned sensor abstraction module 210 and perception module 220 may be located in the perception system 120 of FIG. 1
  • the positioning module 230 and the planning control module 240 may be located in the computing platform 130 of FIG. 1 .
  • Figure 3 is a schematic flow chart of a positioning method 300 provided by an embodiment of the present application.
  • the method 300 can be executed by the intelligent driving device 100 in Figure 1 , or can also be executed by the computing platform 130 in the intelligent driving device 100 , or It may also be executed by a system on chip (SOC) in the computing platform 130 , or it may also be executed by a processor in the computing platform 130 .
  • Method 300 may include steps S301-S303.
  • the first light source may be a point light source on the road (for example, a street light).
  • the first location information can be used to indicate the geographical location of the first light source.
  • the first location information can be obtained from a high-precision map or point cloud map, or can be obtained from other communication devices through the Internet of Vehicles.
  • The ambient light sensor group includes multiple ambient light sensors, the overall shape (appearance) of which can be a smooth and continuous curved surface without any concave parts, and the direction in which each of the multiple ambient light sensors perceives light can be perpendicular to the surface of that ambient light sensor.
  • the orientation information of the first ambient light sensor can be understood as the direction in which the first ambient light sensor faces the first light source. Furthermore, the orientation information can be the posture of the first ambient light sensor.
  • The second position information of the vehicle can be determined based on the orientation information of the first ambient light sensor and the first position information of the first light source. In this way, the vehicle can be positioned in a low-cost, high-precision manner in a scene with insufficient natural light.
  • Determining the first ambient light sensor based on the light emitted by the first light source includes: obtaining, for each ambient light sensor in the ambient light sensor group, the light intensity value corresponding to the first light source; and determining that the ambient light sensor with the largest light intensity value in the ambient light sensor group is the first ambient light sensor.
  • The light intensity value corresponding to each ambient light sensor and the first light source can be obtained from the reading of each ambient light sensor, and the sensor corresponding to the maximum reading is the first ambient light sensor.
  • the value range of the reading may be 0-90.
  • The larger the reading, the larger the light intensity value corresponding to the ambient light sensor and the first light source.
  • the first light source directly or approximately directly illuminates the first ambient light sensor.
  • The light intensity values corresponding to the ambient light sensors and the first light source can be counted to determine the sensor with the largest light intensity value as the first ambient light sensor. In this way, the first ambient light sensor corresponding to the first light source can be determined quickly and efficiently and then used to calculate precise positioning information of the vehicle.
  • Optionally, the orientation information is the pose of the first ambient light sensor in the world coordinate system. Before the second position information of the vehicle is determined based on the orientation information of the first ambient light sensor and the first position information, the method further includes: obtaining third position information of the vehicle, where the third position information is determined according to the global navigation satellite system. Determining the second position information of the vehicle based on the orientation information of the first ambient light sensor and the first position information includes: determining fourth position information of the first light source based on the pose of the first ambient light sensor in the world coordinate system, where the fourth position information is determined based on the theoretical incident light ray of the first ambient light sensor and a first plane, the first plane is the plane where the first light source is located, the first plane is determined based on a second plane, the second plane is the plane where the vehicle is located, and the first plane is parallel to the second plane; determining a first offset vector based on the first position information and the fourth position information; and determining the second position information of the vehicle based on the first offset vector and the third position information.
  • the world coordinate system can refer to the absolute coordinate system of the system, and the world coordinate system can be a system that describes the positional relationship of objects on the earth, which can be used to represent the absolute positions of objects on the earth.
  • the third position information may be rough initial positioning information of the vehicle. Based on the third position information, it can be determined which light source or light sources the vehicle is near.
  • the theoretical incident light of the ambient light sensor can be obtained.
  • The intersection of the theoretical incident light ray and the first plane is the fourth position information of the first light source. If the third position information of the vehicle is accurate positioning information, the first position information and the fourth position information of the first light source are the same, and there is no need to calibrate the third position information of the vehicle. In most cases, the first position information and the fourth position information are not the same, and in this case, the offset vector needs to be calculated.
  • The offset vector can be understood as the offset between the position indicated by the first position information and the position indicated by the fourth position information.
  • The second position information is the precise positioning information of the vehicle.
  • To determine the fourth position information, the first plane must first be obtained. The first plane is the plane where the light source is located and is determined based on the second plane; the second plane is the plane where the vehicle is located, and the first plane is parallel to the second plane.
  • the second plane may be a plane determined based on the driving direction and attitude of the vehicle.
  • If the vehicle is regarded as a point in the world coordinate system, the second plane may be the tangent plane of the road surface at that point.
  • The positioning information of the vehicle and the location information of the vehicle have the same meaning.
  • The theoretical incident light ray can be determined based on the pose of the first ambient light sensor in the world coordinate system, and the theoretical positioning point of the first light source can be obtained, thereby obtaining the offset vector; this offset vector is then used to calibrate the initial position of the vehicle. In this way, accurate positioning information of the vehicle can be obtained.
  • Optionally, the method further includes: obtaining fifth position information of the second light source; determining a second ambient light sensor corresponding to the second light source, where the second ambient light sensor is located in the ambient light sensor group; determining sixth position information of the second light source according to the pose of the second ambient light sensor in the world coordinate system, where the sixth position information is determined by the theoretical incident light ray of the second ambient light sensor and a third plane, the third plane is the plane where the second light source is located, the third plane is determined based on the second plane, and the second plane is parallel to the third plane; and determining a second offset vector according to the fifth position information and the sixth position information. Determining the second position information of the vehicle according to the first offset vector and the third position information includes: determining the second position information according to the first offset vector, the second offset vector and the third position information.
  • the correct first and second ambient light sensors may be determined by mathematical analysis of the ambient light sensor group readings.
  • The local maximum light intensity readings corresponding to the first light source and the second light source can be obtained by analyzing the local maxima of the ambient light sensor group readings, thereby obtaining the information of the correct first ambient light sensor and second ambient light sensor.
  • The second offset vector can be determined based on the fifth position information and the sixth position information of the second light source, so that the initial position of the vehicle can be calibrated according to the first offset vector and the second offset vector, which is beneficial to obtaining more accurate vehicle position information.
  • Optionally, the ambient light sensor group includes a plurality of ambient light sensors, and the surface of each ambient light sensor in the plurality of ambient light sensors is a smooth curved surface without depressions.
  • The above-mentioned ambient light sensor group can also be a smooth and continuous assembly whose curved surface has no concave portions.
  • The surface of each ambient light sensor in the ambient light sensor group is a smooth curved surface without depressions. In this way, the theoretical incident light ray can be obtained more accurately, thereby obtaining more accurate vehicle position information.
  • the ambient light sensor group includes solar panels, and the solar panels are distributed on the top of the vehicle.
  • the ambient light sensor group may include solar panels. In this way, the vehicle can not only realize the charging function while driving, but also achieve low-cost and high-precision positioning.
  • Figure 4 is a schematic flow chart of another positioning method 400 provided by an embodiment of the present application.
  • the method 400 can be executed by the intelligent driving device 100 in Figure 1, or can also be executed by the computing platform 130 in the intelligent driving device 100. Or it can also be executed by the SOC in the computing platform 130 , or it can also be executed by the processor in the computing platform 130 .
  • Method 400 is a specific description of the implementation of method 300. As shown in Figure 4, method 400 may include the following steps.
  • the sensor parameters may include: light source geographical location information and vehicle location information.
  • the light source geographical location information may be used to indicate the geographical location of a point light source (for example, a street lamp) illuminated on the road.
  • the light source geographical location information may be 3D information. This information can be obtained from high-precision maps or point clouds, or from other devices through the Internet of Vehicles.
  • The positioning information of the vehicle may be positioning information with low positioning accuracy, but it is sufficient to determine which light source the vehicle is under or which two light sources it is between.
  • the positioning information can be obtained through GNSS, inertial navigation system (INS) or odometer. Alternatively, the positioning information can also be obtained through data fusion of GNSS, INS and odometry.
  • the sensor can be an ambient light sensor group (ALSG).
  • the ALSG can be composed of several ambient light sensors (sub-sensors).
  • the ambient light sensor can sense electromagnetic waves in a specific wavelength range in the environment.
  • The ambient light sensor includes, but is not limited to, light pressure sensors, infrared sensors, solar panels, etc.
  • the geographical location information of the light source in method 400 may be the first location information in method 300, and the positioning information of the vehicle may be the third location information in method 300.
  • the location of the ALSG sensor in the vehicle can be shown in Figure 5, where the design of the ALSG sensor can meet the following conditions:
  • The overall shape of the ALSG is topologically homeomorphic to a hemisphere, and its shape is a convex hull.
  • The shapes and orientations of the sub-sensors shown in (a) to (c) in Figure 6 are only exemplary illustrations, and the actual shapes and orientations of the sub-sensors can be changed based on the actual application conditions of the sub-sensors. For example, the shape of the photosensitive surface of the sub-sensors in (a) to (c) of Figure 6 may be a quadrilateral with equal areas.
  • the angles between different sub-sensors may be unequal.
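To make the geometry concrete, the following sketch generates one possible set of sub-sensor normals on a hemispherical convex hull; the ring counts, spacing, and layout are illustrative choices, not requirements stated in the patent:

```python
import numpy as np

def hemisphere_normals(n_rings: int = 4, n_per_ring: int = 12) -> np.ndarray:
    """Unit normals of hypothetical sub-sensors laid out on a hemisphere."""
    normals = [np.array([0.0, 0.0, 1.0])]                # zenith-facing sub-sensor
    for i in range(1, n_rings + 1):
        elevation = np.pi / 2 * (1 - i / (n_rings + 1))  # from near-zenith toward the horizon
        for j in range(n_per_ring):
            azimuth = 2 * np.pi * j / n_per_ring
            normals.append(np.array([
                np.cos(elevation) * np.cos(azimuth),
                np.cos(elevation) * np.sin(azimuth),
                np.sin(elevation),
            ]))
    return np.stack(normals)                             # one unit normal per sub-sensor
```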
  • There is a main light source L near the vehicle. Since the main light source L is a light source closer to the vehicle, its brightness exceeds the brightness of other light sources.
  • this figure is a line graph of the incident angle of light and the luminous flux per unit area of ALSG.
  • the horizontal axis of the line graph represents the angle between the incident light and the normal line of the sub-sensor photosensitive surface, that is, the incident angle of the light;
  • The vertical axis of the line graph represents the luminous flux per unit area of the ALSG. It can be seen from the figure that as the incident angle of the light becomes larger, the luminous flux per unit area becomes smaller. When the light is direct, that is, when the incident angle of the light is 0, the luminous flux per unit area is maximum. Therefore, only the sub-sensor that receives direct or nearly direct light can obtain the maximum ambient light sensor reading.
  • When the incident angle on the photosensitive surface of the ambient light sensor is 0, it can be called vertical incidence of light, or direct incidence for short. When the incident angle is smaller than a preset threshold, it can be called approximately direct incidence.
  • the setting of the preset threshold can depend on the density of the ALSG. When the adjacent angle difference is smaller, the preset threshold can be set smaller.
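The trend plotted in Figure 7 can be approximated with a Lambertian cosine model; this model is an assumption made for illustration (the patent only states that the flux is largest at direct incidence and decreases as the incident angle grows):

```python
import numpy as np

def flux_per_unit_area(normal: np.ndarray, light_dir: np.ndarray, e0: float = 1.0) -> float:
    """normal: unit normal of the photosensitive surface;
    light_dir: unit vector from the sub-sensor toward the light source;
    e0: flux per unit area at direct (0-degree) incidence."""
    cos_incidence = float(np.clip(np.dot(normal, light_dir), 0.0, 1.0))
    return e0 * cos_incidence  # maximum at 0 degrees, falls to zero at 90 degrees and beyond
```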
  • this figure is a schematic diagram of ambient light sensor readings that can be obtained by different sub-sensors.
  • the small circles in the figure represent sub-sensors. It can be seen from the figure that the point marked 90 is subject to direct or approximately direct light.
  • This sub-sensor is recorded as sensor V; it is the sub-sensor with the largest reading and can easily be identified by the system.
  • the sensor V in the method 400 may be the first ambient light sensor or the second ambient light sensor in the method 300 .
  • The sensor V determined above is a sub-sensor, and its position on the ALSG is fixed. That is to say, the pose P v-car of sensor V in the vehicle coordinate system can be a constant or a fixed value. In the same coordinate system, the pose P v-car of sensor V can be converted to the pose of the center of the rear axle of the vehicle through a unique transformation matrix T v. The pose of the center of the rear axle of the vehicle is also the pose of the vehicle.
  • the position in the pose of the sensor V can refer to the three-dimensional geometric coordinates of the sensor V in the vehicle coordinate system, and the pose can refer to the direction vector of the direct light hitting the sensor V.
  • The pose of sensor V in the world coordinate system can be obtained from the pose of the vehicle in the world coordinate system and the transformation matrix T v, where:
  • P v-world is the pose of sensor V in the world coordinate system
  • P car-world is the pose of the vehicle in the world coordinate system
  • T v is the transformation matrix
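A sketch of this pose composition with 4x4 homogeneous transforms is shown below. The multiplication order depends on how T v is defined, which the text leaves implicit; here T v is assumed to map the sensor-V frame into the vehicle (rear-axle-center) frame:

```python
import numpy as np

def sensor_pose_in_world(P_car_world: np.ndarray, T_v: np.ndarray) -> np.ndarray:
    """P_car_world: 4x4 pose of the vehicle (rear axle center) in the world frame.
    T_v: fixed 4x4 transform from the sensor-V frame to the vehicle frame (assumed convention).
    Returns P_v_world, the 4x4 pose of sensor V in the world frame."""
    return P_car_world @ T_v
```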
  • The theoretical incident light ray F can be expressed in the form of a linear equation system in the world coordinate system (mathematically, knowing the coordinates of a point on a straight line and the direction of the straight line, the straight-line equation can be determined). That is, if the positioning result is accurate, the real incident light of the light source L should illuminate sensor V along the theoretical incident ray F.
  • a small range of space can be regarded as a Euclidean space.
  • the distance from the light source L to the theoretical incident light F can be obtained through the point-to-line distance formula.
  • If D LF is equal to or close to 0, it means that F passes through the light source L, F can be regarded as the real incident ray, and the vehicle's positioning result is accurate. At this time, the positioning result can be directly input into the vehicle's positioning planning module, and steps S405 to S407 are no longer performed.
  • the light source L may be a point light source. Furthermore, the light source L may be a street light.
  • In practice, D LF is often not equal to or close to 0. Therefore, F cannot be regarded as the real incident ray at this time, but only as a ray parallel to the real incident ray.
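The distance D LF can be computed with the standard point-to-line formula; the sketch below assumes the ray F is given by the world position of sensor V and its direct-incidence direction:

```python
import numpy as np

def point_to_line_distance(light_pos: np.ndarray, p: np.ndarray, d: np.ndarray) -> float:
    """light_pos: position of light source L; p: a point on F (position of sensor V);
    d: direction of F (direct-incidence direction of sensor V)."""
    d = d / np.linalg.norm(d)
    v = light_pos - p
    return float(np.linalg.norm(v - np.dot(v, d) * d))  # length of the component perpendicular to F

# If the returned distance is (close to) 0, F passes through L and the current
# positioning result can be used directly.
```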
  • The tangent plane equation C 1 of the vehicle relative to the road in the world coordinate system can be determined based on the pose of the vehicle in the world coordinate system (mathematically, a given vehicle pose determines a unique tangent plane), and then the parallel plane C 2 passing through the light source L is obtained.
  • the large hemisphere in the figure is the ambient light sensor group.
  • the plane where the large hemisphere is located is C 1
  • the plane where the light source is located is C 2
  • the intersection point M is the intersection of F and plane C 2 .
  • The offset vector can be determined from the intersection point M, the real incident ray, and C 2. Since the real incident ray is parallel to F, and C 1 is parallel to C 2, this offset vector is also the offset vector between the initial positioning and the actual positioning.
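A sketch of this step (ray-plane intersection followed by the offset computation); representing C 2 by the light-source position and the shared normal of C 1 and C 2 is an assumed parameterization:

```python
import numpy as np

def offset_vector(p: np.ndarray, d: np.ndarray, light_pos: np.ndarray, n: np.ndarray) -> np.ndarray:
    """p, d: a point on the theoretical incident ray F and its direction;
    light_pos: real position of light source L (lies on plane C2);
    n: common normal of planes C1 and C2."""
    denom = np.dot(n, d)
    if abs(denom) < 1e-9:
        raise ValueError("ray F is parallel to plane C2")
    t = np.dot(n, light_pos - p) / denom
    M = p + t * d                 # intersection of F with C2 (anchor point M)
    return light_pos - M          # offset between initial positioning and actual positioning
```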
  • the plane C 2 in the method 400 may be the first plane or the third plane in the method 300
  • the plane C 1 may be the second plane in the method 300
  • the intersection point M may be the fourth position information in the method 300.
  • The offset vector can be the first offset vector in method 300.
  • the ground is relatively flat, and the error between the vehicle's initial position and its true position does not exceed 15 meters. Therefore, it can be assumed that the ground is equivalent to C 1 .
  • the optimized vehicle positioning position P opt can be obtained. After obtaining the optimized vehicle positioning position P opt , the data can be input into the vehicle positioning planning module. Step S407 is no longer performed.
  • vehicle positioning position P opt may be the second position information in the method 300 .
  • The processing method can be to use Newton's method: obtain the curvature of the ground from the map module, use the curvature and the offset vector to obtain the tangent plane equation at P opt, then regard P opt as the new initial positioning position, and iterate through steps S404 to S406.
  • When the modulus of the offset vector is less than the preset threshold, or the number of iterations is greater than or equal to the preset number, the iteration is stopped, and P opt is input into the vehicle's positioning planning module.
  • curvature of the ground can be used to represent the degree of undulation of the ground. The larger the curvature value is, the more uneven the ground is and the greater the degree of undulation of the ground.
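A rough sketch of this iteration is shown below; `estimate_offset` stands in for steps S404 to S406 (tangent plane from the map curvature, ray-plane intersection, offset vector) and is an assumed callback, not an interface defined by the patent:

```python
import numpy as np

def refine_position(p_init, estimate_offset, tol=0.1, max_iter=10):
    """Iteratively apply the offset vector until its modulus falls below tol
    or the iteration budget is exhausted (Newton-style refinement)."""
    p_opt = np.asarray(p_init, dtype=float)
    for _ in range(max_iter):
        offset = estimate_offset(p_opt)   # re-run steps S404-S406 at the current estimate
        p_opt = p_opt + offset
        if np.linalg.norm(offset) < tol:  # modulus of the offset vector below the threshold
            break
    return p_opt
```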
  • the above method 400 is introduced by taking the scene of a vehicle driving under one light source as an example.
  • When the vehicle drives between two light sources, the processing method is similar to the above method 400. That is, there are two sub-sensors V at this time, which are the sub-sensors with the maximum readings in the ambient light sensor group corresponding to each of the two light sources.
  • the positioning optimization calculation of method 400 can be performed on the two sensors respectively, and two optimized positioning points P opt1 and P opt2 can be obtained. The midpoint of the line connecting P opt1 and P opt2 can be taken as the positioning point.
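For the two-light-source case described above, the combination step is simply the midpoint of the two optimized anchor points (the values below are invented for illustration):

```python
import numpy as np

p_opt1 = np.array([12.3, 4.1, 0.0])   # optimized positioning point from light source 1
p_opt2 = np.array([12.7, 4.5, 0.0])   # optimized positioning point from light source 2
p_final = (p_opt1 + p_opt2) / 2.0     # midpoint of the line connecting P opt1 and P opt2
```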
  • the above method can also be applied by analogy to scenes with multiple light sources.
  • the position of the anchor point can be determined based on the pose of the ambient light sensor in the world coordinate system, and the offset vector between the anchor point position and the real light source position can be calculated, and the offset vector can be used to position the vehicle.
  • the vehicle can be positioned in a low-cost, high-precision manner in scenes with insufficient natural light.
  • Figure 10 is an application scenario applicable to the positioning method provided by the embodiment of the present application. Method 300 and method 400 can be applied to this application scenario.
  • One or more solar panels can be deployed on the top of the vehicle, and a sensor that can measure the power generation efficiency or voltage value of the panel can be placed under each panel, because there is a corresponding relationship between the current incident light intensity and the power generation voltage of the battery panel: the stronger the incident light, the higher the power generation voltage of the battery panel. Since the battery panels are distributed in a streamlined manner on the vehicle, light can reach them from all directions above. As long as sufficient voltage sensor density exists, the design conditions for the ambient light sensor group in methods 300 and 400 can be met. Therefore, the battery panels of the entire vehicle can be regarded as an ALSG.
  • When the above-mentioned vehicle drives under a point light source at night, it can generate electricity through the photoelectric reaction of the panels, so that the readings of the voltage sensors indirectly reflect the incident light intensity of each panel. Therefore, the vehicle can be positioned at night through method 300 and method 400.
  • the ambient light sensor is set as a solar panel, which can not only meet the charging needs of the vehicle while driving, but also position the vehicle at night in a low-cost, high-precision manner.
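A minimal sketch of treating per-panel voltage readings as ambient light sensor readings; the panel names, voltages, and linear normalization are assumptions for illustration (the patent only requires that stronger incident light yields a higher generation voltage):

```python
panel_voltages = {"roof_front": 0.42, "roof_left": 0.85, "roof_right": 0.31}
v_max = max(panel_voltages.values())
relative_intensity = {pid: v / v_max for pid, v in panel_voltages.items()}
brightest_panel = max(relative_intensity, key=relative_intensity.get)  # plays the role of sensor V
```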
  • Figure 11 is another applicable application scenario for the positioning method provided by the embodiment of the present application.
  • Method 300 and method 400 can be applicable to this application scenario.
  • The interface 1100 includes user account login information 1101, a Bluetooth function icon 1102, a Wi-Fi function icon 1103, a cellular network signal icon 1104, a vehicle map application search box 1105, a card 1106 for switching to display all applications installed in the vehicle, a card 1107 for switching to display the car music application, a display card 1108 showing the vehicle's remaining power and remaining mileage, and a display card 1109 for the vehicle's 360-degree (°) surround view function.
  • vehicle map application search box 1105 may include a home control 11051 and a positioning control 11052 set by the user.
  • The function bar 1110 includes an icon 1111 for switching to display the central control large screen desktop, a vehicle internal circulation icon 1112, a main driver seat heating function icon 1113, a main driver area air conditioning temperature display icon 1114, a passenger area air conditioning temperature display icon 1115, a passenger seat heating function icon 1116, and a volume setting icon 1117.
  • the user can use the positioning control 11052 to view the location of the vehicle on the map in real time.
  • the interface 1100 can display a graphical user interface (GUI) as shown in (b) of Figure 11.
  • A prompt box 1118 can be displayed on the interface 1100 to inform the user that the current vehicle positioning accuracy is poor and to ask whether to optimize positioning based on nearby light sources.
  • the vehicle can optimize the positioning of the vehicle based on the above method 300 or method 400.
  • the vehicle can determine the light source closest to the vehicle based on the initial positioning information (imprecise positioning information), and can determine the first ambient light sensor based on the nearest light source.
  • The pose of the first ambient light sensor in the world coordinate system can be used to determine the theoretical incident light ray of the first ambient light sensor and the first offset vector; the first offset vector is then used to calibrate the vehicle's initial positioning information to obtain accurate vehicle positioning information.
  • The interface 1100 can display the precise location of the vehicle on the map, and can display a prompt box 1119 used to inform the user that the vehicle's positioning optimization was successful.
  • the above-mentioned vehicle may be the intelligent driving device 100 in FIG. 1
  • the interface 1100 may be the interface displayed on the display device 140 in FIG. 1 .
  • Embodiments of the present application also provide a device for implementing any of the above methods.
  • the device includes units for implementing each step performed by the intelligent driving device 100 in any of the above methods.
  • FIG. 12 is a schematic diagram of a positioning device 1200 provided by an embodiment of the present application.
  • the device 1200 may include an acquisition unit 1210, a storage unit 1220, and a processing unit 1230.
  • the acquisition unit 1210 is used to acquire instructions and/or data.
  • the acquisition unit 1210 may also be called a communication interface or a communication unit.
  • the storage unit 1220 is used to implement corresponding storage functions and store corresponding instructions and/or data.
  • the processing unit 1230 is used for data processing.
  • the processing unit 1230 can read the instructions and/or data in the storage unit, so that the device 1200 implements the aforementioned positioning method.
  • the device 1200 includes: an acquisition unit 1210 and a processing unit 1230.
  • the acquisition unit 1210 is used to acquire the first position information of the first light source;
  • The processing unit 1230 is used to: determine a first ambient light sensor corresponding to the first light source, where the first ambient light sensor is located in an ambient light sensor group, and the ambient light sensor group is installed on the intelligent driving device; and determine second position information of the intelligent driving device based on the orientation information of the first ambient light sensor and the first position information.
  • the acquisition unit 1210 is also used to obtain, for each ambient light sensor in the ambient light sensor group, the light intensity value corresponding to the first light source; the processing unit 1230 is used to determine that the ambient light sensor with the largest light intensity value in the ambient light sensor group is the first ambient light sensor.
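As an aside for readers following along in code, the selection step described above can be sketched as follows. This is an illustrative sketch only, not part of the application; the `SensorReading` structure and the 0-90 reading range are assumptions borrowed from the later description of method 400.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SensorReading:
    sensor_id: int      # index of the sub-sensor in the ambient light sensor group
    intensity: float    # light intensity reading, e.g. in the 0-90 range described for method 400

def select_first_ambient_light_sensor(readings: List[SensorReading]) -> SensorReading:
    """Return the sub-sensor with the largest light intensity value.

    The description states that only the sub-sensor that is illuminated
    directly (or nearly directly) by the light source produces the maximum
    reading, so an argmax over the group identifies the first ambient
    light sensor.
    """
    if not readings:
        raise ValueError("ambient light sensor group produced no readings")
    return max(readings, key=lambda r: r.intensity)

# Example: sensor 7 is (nearly) directly illuminated and is therefore selected.
readings = [SensorReading(5, 31.0), SensorReading(6, 62.5), SensorReading(7, 90.0)]
assert select_first_ambient_light_sensor(readings).sensor_id == 7
```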
  • the orientation information is the pose of the first ambient light sensor in the world coordinate system; the acquisition unit 1210 is also used to acquire third position information of the intelligent driving device, the third position information being determined according to a global navigation satellite system.
  • the processing unit 1230 is used to: determine fourth position information of the first light source according to the pose of the first ambient light sensor in the world coordinate system, where the fourth position information is determined from the theoretical incident ray of the first ambient light sensor and a first plane, the first plane is the plane where the first light source is located, the first plane is determined based on a second plane, the second plane is the plane where the intelligent driving device is located, and the first plane is parallel to the second plane; determine a first offset vector according to the first position information and the fourth position information; and determine the second position information of the intelligent driving device according to the first offset vector and the third position information.
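The chain "theoretical incident ray, intersection with the first plane, first offset vector, second position information" can be illustrated with a minimal geometric sketch. All function and variable names below are assumptions introduced for illustration; the application itself does not prescribe an implementation.

```python
import numpy as np

def intersect_ray_with_plane(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersection of the theoretical incident ray with the first plane
    (the plane containing the light source, parallel to the vehicle plane)."""
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-9:
        raise ValueError("ray is parallel to the light-source plane")
    t = np.dot(plane_normal, plane_point - ray_origin) / denom
    return ray_origin + t * ray_dir  # "fourth position information"

def refine_position(first_pos, sensor_pos_world, ray_dir_world, plane_normal, third_pos):
    """Apply the offset between the map position of the light source (first
    position information) and the ray/plane intersection (fourth position
    information) to the GNSS fix (third position information), yielding the
    second position information."""
    fourth_pos = intersect_ray_with_plane(sensor_pos_world, ray_dir_world,
                                          first_pos, plane_normal)
    first_offset = first_pos - fourth_pos      # first offset vector
    return third_pos + first_offset            # second position information

# Toy example: the GNSS fix is 2 m off along x, and the correction recovers it.
lamp      = np.array([10.0, 0.0, 6.0])   # first position information (from the map)
sensor    = np.array([ 2.0, 0.0, 1.5])   # sensor position implied by the imprecise GNSS pose
direction = np.array([ 1.0, 0.0, 0.75])  # direction of direct incidence, sensor -> light source
normal    = np.array([ 0.0, 0.0, 1.0])   # both planes are horizontal in this toy case
gnss_fix  = np.array([ 2.0, 0.0, 0.0])
print(refine_position(lamp, sensor, direction, normal, gnss_fix))  # -> [4. 0. 0.]
```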
  • the acquisition unit 1210 is also used to acquire fifth position information of a second light source; the processing unit 1230 is also used to: determine a second ambient light sensor corresponding to the second light source, where the second ambient light sensor is located in the ambient light sensor group; determine sixth position information of the second light source according to the pose of the second ambient light sensor in the world coordinate system, where the sixth position information is determined from the theoretical incident ray of the second ambient light sensor and a third plane, the third plane is the plane where the second light source is located, the third plane is determined based on the second plane, and the second plane is parallel to the third plane; determine a second offset vector according to the fifth position information and the sixth position information; and determine the second position information according to the first offset vector, the second offset vector and the third position information.
  • the ambient light sensor group includes multiple ambient light sensors, and the surface of each ambient light sensor in the multiple ambient light sensors is a smooth, non-dented curved surface.
  • the ambient light sensor group includes solar panels distributed on the top of the intelligent driving device.
  • the division of the units in the above device is only a division of logical functions; in actual implementation, all or part of the units may be integrated into one physical entity or may be physically separate.
  • optionally, if the device 1200 is located in the intelligent driving device 100, the above-mentioned processing unit 1230 may be the processor 131 shown in Figure 1.
  • optionally, if the device 1200 is located in the system architecture 200, the acquisition unit 1210 may be the sensing module 220, and the processing unit 1230 may be the positioning module 230.
  • Figure 13 is a schematic diagram of another positioning device 1300 provided by an embodiment of the present application.
  • the device 1300 can be applied in the intelligent driving device 100 of FIG. 1 .
  • the positioning device 1300 includes: a memory 1310, a processor 1320, and a communication interface 1330.
  • the memory 1310, the processor 1320, and the communication interface 1330 are connected through an internal connection path.
  • the memory 1310 is used to store instructions
  • the processor 1320 is used to execute the instructions stored in the memory 1310, so as to control the communication interface 1330 to obtain information or to cause the positioning device to perform the positioning methods in the above embodiments.
  • the memory 1310 can be coupled with the processor 1320 through an interface or integrated with the processor 1320 .
  • the above-mentioned communication interface 1330 uses a transceiver device such as but not limited to a transceiver.
  • the above-mentioned communication interface 1330 may also include an input/output interface.
  • the processor 1320 stores one or more computer programs, and the one or more computer programs include instructions; when the instructions are run by the processor 1320, the positioning device 1300 is caused to perform the positioning methods in the above embodiments.
  • during implementation, each step of the above method can be completed by an integrated logic circuit of hardware in the processor 1320 or by instructions in the form of software.
  • the method disclosed in conjunction with the embodiments of the present application can be directly implemented by a hardware processor for execution, or can be executed by a combination of hardware and software modules in the processor.
  • the software module can be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other mature storage media in this field.
  • the storage medium is located in the memory 1310.
  • the processor 1320 reads the information in the memory 1310 and completes the steps of the above method in combination with its hardware. To avoid repetition, it will not be described in detail here.
  • optionally, the communication interface 1330 in Figure 13 can implement the acquisition unit 1210 in Figure 12, the memory 1310 in Figure 13 can implement the storage unit 1220 in Figure 12, and the processor 1320 in Figure 13 can implement the processing unit 1230 in Figure 12.
  • the device 1200 or the device 1300 may be a computing platform, and the computing platform may be a vehicle-mounted computing platform or a cloud computing platform.
  • the device 1200 or the device 1300 may be located in the intelligent driving device 100 in FIG. 1 .
  • the device 1200 or the device 1300 may be the computing platform 130 in the intelligent driving device in Figure 1 .
  • Embodiments of the present application also provide a computer-readable medium.
  • the computer-readable medium stores program code; when the program code is run on a computer, the computer is caused to perform any one of the methods in Figure 3 or Figure 4 above.
  • An embodiment of the present application also provides a chip, including: a circuit configured to perform any of the methods in Figure 3 or Figure 4 above.
  • An embodiment of the present application also provides an intelligent driving device, including any positioning device in Figure 12 or Figure 13 and an ambient light sensor group.
  • the ambient light sensor group includes a plurality of ambient light sensors, and the surface of each ambient light sensor in the plurality of ambient light sensors is a smooth, concave-free curved surface.
  • the intelligent driving device may be a vehicle.
  • the disclosed systems, devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or can be integrated into another system, or some features can be ignored, or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • if the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application essentially, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of this application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Lighting Device Outwards From Vehicle And Optical Signal (AREA)

Abstract

A positioning method, an apparatus, and a vehicle. The method comprises: obtaining first position information of a first light source (S301); determining a first ambient light sensor corresponding to the first light source, the first ambient light sensor being located in an ambient light sensor group (S302); and determining second position information of the vehicle according to orientation information of the first ambient light sensor and the first position information (S303). By means of the method, the vehicle can be positioned in a low-cost and high-precision manner in scenes with insufficient natural light.

Description

定位方法、装置以及智能驾驶设备 技术领域
本申请涉及智能驾驶领域,并且更具体地,涉及一种定位方法、装置以及智能驾驶设备。
背景技术
随着车辆在日常生活中被广泛使用,车辆定位的精准度也被愈发重视起来。目前,大部分的车辆可以通过全球导航卫星系统(global navigation satellite system,GNSS)进行定位。
然而,GNSS定位准确度不稳定,在部分场景下无法定位(例如,网络信号较差的场景)。在这些场景下,虽然视觉融合定位技术能够辅助或代替GNSS进行定位,但是,由于视觉融合定位技术的感知物为可见光,因此,在车辆自然光不足的场景下,车辆定位的精度和准确性均有所下降。此外,激光融合定位技术虽然也能够进行定位,但是激光融合定位技术的感知物为激光,该技术的应用成本相对较高。因此,在车辆自然光不足的场景下,如何能够低成本、高精度地对车辆进行定位是亟需解决的问题。
发明内容
本申请提供了一种定位方法、装置以及智能驾驶设备,能够使得智能驾驶设备在自然光不足的场景下,以低成本、高精度的方式进行定位。其中,所述智能驾驶设备包括车辆。
第一方面,提供了一种定位方法,该方法包括:获取第一光源的第一位置信息;确定所述第一光源对应的第一环境光传感器,所述第一环境光传感器位于环境光传感器组中,所述环境光传感器组安装在智能驾驶设备上;根据所述第一环境光传感器的朝向信息和所述第一位置信息确定所述智能驾驶设备的第二位置信息。
其中,第一光源可以是道路上的点光源(例如,路灯)。第一位置信息可以指示该第一光源的地理位置,第一位置信息可以从高精度地图,或点云图中获取,也可以通过车联网(vehicle-to-everything,V2X)从其他设备获取。
可选地,环境光传感器组可以包括多个环境光传感器,其整体形状(外观)可以是平滑连续的曲面,且该曲面无凹陷部分。多个环境光传感器对光的感知方向,可以是与环境光传感器曲面垂直的方向。
可选地,第一环境光传感器的朝向信息可以理解为第一环境光传感器面向第一光源的方向,更进一步,该朝向信息可以是第一环境光传感器的位姿。
应理解,本申请提供的定位方法可以应用在夜间、黄昏、黎明、傍晚等场景。
本申请中,能够根据第一环境光传感器的朝向信息和第一光源的第一位置信息确定智能驾驶设备的第二位置信息,通过这样的方式,能够在自然光不足的场景下,使得智能驾驶设备以低成本、高精度的方式进行定位。其中,所述智能驾驶设备包括车辆。
结合第一方面,在第一方面的某些实现方式中,所述根据所述第一光源发射的光线确定第一环境光传感器,包括:获取所述环境光传感器组中每个环境光传感器与所述第一光源对应的光强值;确定所述环境光传感器组中光强值最大的环境光传感器为所述第一环境光传感器。
其中，可以通过获取每个环境光传感器的读数来获取每个环境光传感器与所述第一光源对应的光强值，读数的最大值对应的传感器即为第一环境光传感器。
示例性地,读数的取值范围可以在0-90之间,读数越大表示环境光传感器与第一光源对应的光强值越大,第一光源直射或近似直射该第一环境光传感器。
本申请中,可以通过统计环境光传感器组中每个环境光传感器与第一光源对应的光强值,确定光强值最大的传感器为第一环境传感器,通过这样的方式,能够快捷、高效的确定第一光源对应的第一环境光传感器,从而计算出智能驾驶设备的精确定位信息。
结合第一方面,在第一方面的某些实现方式中,所述朝向信息为所述第一环境光传感器在世界坐标系下的位姿;所述根据所述第一环境光传感器的朝向信息和所述第一位置信息确定所述智能驾驶设备的第二位置信息之前,所述方法还包括:获取所述智能驾驶设备的第三位置信息,所述第三位置信息是根据全球导航卫星系统确定的;所述根据所述第一环境光传感器的朝向信息和所述第一位置信息确定所述智能驾驶设备的第二位置信息,包括:根据所述第一环境光传感器在所述世界坐标系下的位姿,确定所述第一光源的第四位置信息,所述第四位置信息是根据所述第一环境光传感器的理论入射光线和第一平面确定的,所述第一平面为所述第一光源所在的平面,所述第一平面是基于第二平面确定的,所述第二平面为所述智能驾驶设备所在的平面,所述第一平面与所述第二平面平行;根据所述第一位置信息和所述第四位置信息,确定第一偏移向量;根据所述第一偏移向量和所述第三位置信息,确定所述智能驾驶设备的所述第二位置信息。
其中,世界坐标系可以指系统的绝对坐标系,世界坐标系可以是描述地球上物体位置关系的系统,其可以用于表示地球上物体的绝对位置。
可选地,第三位置信息可以是粗略的智能驾驶设备的初始定位信息,根据该第三位置信息可以判断出智能驾驶设备位于哪一个或哪几个光源附近。
在确定了第一环境光传感器在世界坐标系下的位姿后，可以得到该环境光传感器的理论入射光线，该理论入射光线与第一平面的交点便是第一光源的第四位置信息。如果智能驾驶设备的第三位置信息是精确的定位信息，则第一光源的第一位置信息和第四位置信息相同，不需要对智能驾驶设备的第三位置信息进行校准。在大多数情况下，第一位置信息和第四位置信息并不相同，此时便需要计算出偏移向量。该偏移向量可以理解为第一位置信息与第四位置信息之间的偏差，使用该偏移向量对智能驾驶设备的第三位置信息进行校准便得到第二位置信息（智能驾驶设备精确的定位信息）。
在上述计算过程中,首先要得到第一平面,该第一平面是基于第二平面确定的,第一平面是光源所在平面,第一平面与第二平面平行,第二平面是智能驾驶设备所在平面。可选地,第二平面是基于智能驾驶设备的行驶方向与姿态确定的平面。可选地,如果将智能驾驶设备在世界坐标系下视为一个点,第二平面可以是该点在路面上的切平面。
应理解,上述理论入射光线、第一平面和第二平面在世界坐标系下可以分别表示为:理论入射光线方程、第一平面方程和第二平面方程。
还应理解,在本申请中,智能驾驶设备的定位信息和智能驾驶设备的位置信息表示的含义相同。
本申请中,在获取的智能驾驶设备初始位置信息不精确的情况下,能够根据第一环境光传感器在世界坐标系下的位姿确定理论入射光线,并得到第一光源的理论定位点,从而得到偏移向量,并使用该偏移向量对智能驾驶设备的初始位置进行校准。通过这样的方式,能够得到智能驾驶设备精确的定位信息。
结合第一方面,在第一方面的某些实现方式中,所述方法还包括:获取第二光源的第五位置信息;确定所述第二光源对应的第二环境光传感器,所述第二环境光传感器位于所述环境光传感器组中;根据所述第二环境光传感器在所述世界坐标系下的位姿,确定所述第二光源的第六位置信息,所述第六位置信息是由所述第二环境光传感器的理论入射光线和第三平面确定的;所述第三平面为所述第二光源所在的平面,所述第三平面是基于所述第二平面确定的,所述第二平面与所述第三平面平行;根据所述第五位置信息和所述第六位置信息,确定第二偏移向量;所述根据所述第一偏移向量和所述第三位置信息,确定所述智能驾驶设备的所述第二位置信息,包括:根据所述第一偏移向量、所述第二偏移向量和所述第三位置信息,确定所述第二位置信息。
可选地,本申请中,在智能驾驶设备位于两个光源中间时,可以通过对环境光传感器组读数进行数学分析,确定正确的第一环境光传感器和第二环境光传感器。
可选地,可以通过对环境光传感器组的局部最大值进行分析,得到分别对应第一光源和第二光源的局部最大光强读数,进而获得第一环境光传感器和第二环境光传感器的信息。
本申请中,在获取的智能驾驶设备初始位置信息不精确的情况下,如果智能驾驶设备位于两个光源中间,可以基于第二光源的第五位置信息和第六位置信息确定第二偏移向量,从而能够根据第一偏移向量和第二偏移向量对智能驾驶设备的初始位置进行校准,有利于获得更精确的智能驾驶设备位置信息。
结合第一方面,在第一方面的某些实现方式中,所述环境光传感器组包括多个环境光传感器,且所述多个环境光传感器中的每个环境光传感器表面为平滑无凹陷曲面。
可选地,上述环境光传感器组也可以是组件整体平滑连续,无凹陷曲面。
本申请中,环境光传感器组中的每个环境光传感器表面为平滑无凹陷曲面,通过这样的方式,有利于更精确的得到理论入射光线,从而获得更精确的智能驾驶设备位置信息。
结合第一方面,在第一方面的某些实现方式中,所述环境光传感器组包括太阳能电池板,所述太阳能电池板分布于所述智能驾驶设备的顶部。
本申请中,环境光传感器组可以包括太阳能电池板,通过这样的方式,智能驾驶设备在行驶的过程中既能够进行充电,又能够实现低成本、高精度的定位。
第二方面,提供了一种定位装置,该装置包括:获取单元和处理单元:所述获取单元,用于获取第一光源的第一位置信息;所述处理单元,用于:确定所述第一光源对应的第一环境光传感器,所述第一环境光传感器位于环境光传感器组中,所述环境光传感器组安装在智能驾驶设备上;以及根据所述第一环境光传感器的朝向信息和所述第一位置信息确定所述智能驾驶设备的第二位置信息。
结合第二方面，在第二方面的某些实现方式中，所述获取单元，还用于获取所述环境光传感器组中每个环境光传感器与所述第一光源对应的光强值；所述处理单元，用于确定所述环境光传感器组中光强值最大的环境光传感器为所述第一环境光传感器。
结合第二方面,在第二方面的某些实现方式中,所述朝向信息为所述第一环境光传感器在世界坐标系下的位姿;所述获取单元,还用于获取所述智能驾驶设备的第三位置信息,所述第三位置信息是根据全球导航卫星系统确定的;所述处理单元,用于:根据所述第一环境光传感器在所述世界坐标系下的位姿,确定所述第一光源的第四位置信息,所述第四位置信息是根据所述第一环境光传感器的理论入射光线和第一平面确定的,所述第一平面为所述第一光源所在的平面,所述第一平面是基于第二平面确定的,所述第二平面为所述智能驾驶设备所在的平面,所述第一平面与所述第二平面平行;根据所述第一位置信息和所述第四位置信息,确定第一偏移向量;根据所述第一偏移向量和所述第三位置信息,确定所述智能驾驶设备的所述第二位置信息。
结合第二方面,在第二方面的某些实现方式中,所述获取单元,还用于获取第二光源的第五位置信息;所述处理单元,还用于:确定所述第二光源对应的第二环境光传感器,所述第二环境光传感器位于所述环境光传感器组中;以及根据所述第二环境光传感器在所述世界坐标系下的位姿,确定所述第二光源的第六位置信息,所述第六位置信息是由所述第二环境光传感器的理论入射光线和第三平面确定的;所述第三平面为所述第二光源所在的平面,所述第三平面是基于所述第二平面确定的,所述第二平面与所述第三平面平行;根据所述第五位置信息和所述第六位置信息,确定第二偏移向量;根据所述第一偏移向量、所述第二偏移向量和所述第三位置信息,确定所述第二位置信息。
结合第二方面,在第二方面的某些实现方式中,所述环境光传感器组包括多个环境光传感器,且所述多个环境光传感器中的每个环境光传感器表面为平滑无凹陷曲面。
结合第二方面,在第二方面的某些实现方式中,所述环境光传感器组包括太阳能电池板,所述太阳能电池板分布于所述智能驾驶设备的顶部。
第三方面,提供一种定位装置,该装置包括:至少一个处理器和存储器,所述至少一个处理器与所述存储器耦合,用于读取并执行所述存储器中的指令,使得该装置实现上述第一方面及各实现方式中的方法。
第四方面,提供一种计算机可读介质,所述计算机可读介质存储有程序代码,当所述计算机程序代码在计算机上运行时,使得计算机执行上述第一方面及其各实现方式中的方法。
第五方面,提供一种芯片,该芯片包括电路,该电路用于执行上述各个方面中的方法。
第六方面,提供一种智能驾驶设备,该智能驾驶设备包括:第二方面至第三方面中的定位装置以及环境光传感器组,所述环境光传感器组包括多个环境光传感器,且所述多个环境光传感器中的每个环境光传感器表面为平滑无凹陷曲面。
附图说明
图1是本申请实施例提供的智能驾驶设备的功能性示意图;
图2是本申请实施例提供的定位方法所适用的系统架构;
图3是本申请实施例提供的一种定位方法的示意性流程图;
图4是本申请实施例提供的另一种定位方法的示意性流程图;
图5是本申请实施例提供的环境光传感器组在车辆上的位置示意图;
图6是本申请实施例提供的环境光传感器组形状和朝向示意图;
图7是本申请实施例提供的光的入射角与环境光传感器单位面积上的光通量折线图;
图8是本申请实施例提供的子传感器能够获取到的环境光传感器读数示意图;
图9是本申请实施例提供的一种确定定位点M的方法示意图;
图10是本申请实施例提供的定位方法的所适用的一种应用场景;
图11是本申请实施例提供的定位方法的所适用的另一种应用场景;
图12是本申请实施例提供的一种定位装置;
图13是本申请实施例提供的另一种定位装置。
具体实施方式
在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。本申请中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b,或c中的至少一项(个),可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中a,b,c可以是单个,也可以是多个。
本申请实施例中采用诸如“第一”、“第二”的前缀词,仅仅为了区分不同的描述对象,对被描述对象的位置、顺序、优先级、数量或内容等没有限定作用。本申请实施例中对序数词等用于区分描述对象的前缀词的使用不对所描述对象构成限制,对所描述对象的陈述参见权利要求或实施例中上下文的描述,不应因为使用这种前缀词而构成多余的限制。
随着车辆在日常生活中被广泛使用,车辆定位的精准度也被愈发的重视起来。目前,大部分的车辆可以通过GNSS系统进行定位。
然而,GNSS定位准确度不稳定,在部分场景下无法定位(例如,网络信号较差的场景)。在这些场景下,虽然视觉融合定位技术能够辅助或代替GNSS进行定位,但是,由于视觉融合定位技术的感知物为可见光,因此,在车辆自然光不足的场景下,车辆定位的精度和准确性均有所下降。其中,由于算法的使用差异,车辆在夜间的定位精度和稳定性的误差期望值可能增大2至4倍。
此外,激光融合定位技术虽然也能够进行定位,但是激光融合定位技术的感知物为激光,激光融合及其定位技术在能耗成本、采购成本和计算成本上均相对较高。因此,在一些自然光不足的特定场景下,如何能够低成本、高精度的对车辆定位是亟需解决的问题。上述特定场景例如可以是夜间、黄昏、黎明、傍晚等。
本申请提供了一种定位方法、装置以及智能驾驶设备,能够使得包括车辆在内的智能驾驶设备在自然光不足的场景下,以低成本、高精度的方式进行定位。下面将结合附图,对本申请实施例中的技术方案进行描述。
图1是本申请实施例提供的智能驾驶设备100的一个功能性示意图。应理解,图1及相关描述仅为一种举例,并不对本申请实施例中的智能驾驶设备进行限定。
在实施过程中，智能驾驶设备100可以被配置为完全或部分自动驾驶模式，也可以由用户进行人工驾驶。例如：智能驾驶设备100可以通过感知系统120获取其周围的环境信息，并基于对周边环境信息的分析得到自动驾驶策略以实现完全自动驾驶，或者将分析结果呈现给用户以实现部分自动驾驶。
智能驾驶设备100可包括多种子系统,例如感知系统120、计算平台130和显示装置140。可选地,该智能驾驶设备100可包括更多或更少的子系统,并且每个子系统都可包括一个或多个部件。另外,该智能驾驶设备100的每个子系统和部件可以通过有线或者无线的方式实现互连。
感知系统120可包括用于感测关于智能驾驶设备100周边的环境的信息的若干种传感器。例如,感知系统120可以包括定位系统,该定位系统可以是全球定位系统(global positioning system,GPS),也可以是北斗系统或者其他定位系统。感知系统120可以包括惯性测量单元(inertial measurement unit,IMU)等、激光雷达、毫米波雷达、超声波雷达以及摄像装置中的一种或者多种。
智能驾驶设备100的部分或所有功能可以由计算平台130控制。计算平台130可包括处理器131至13n(n为正整数),处理器是一种具有信号的处理能力的电路,在一种实现中,处理器可以是具有指令读取与运行能力的电路,例如中央处理单元(central processing unit,CPU)、微处理器、图形处理器(graphics processing unit,GPU)(可以理解为一种微处理器)、或数字信号处理器(digital signal processor,DSP)等;在另一种实现中,处理器可以通过硬件电路的逻辑关系实现一定功能,该硬件电路的逻辑关系是固定的或可以重构的,例如处理器为专用集成电路(application-specific integrated circuit,ASIC)或可编程逻辑器件(programmable logic device,PLD)实现的硬件电路,例如FPGA。在可重构的硬件电路中,处理器加载配置文档,实现硬件电路配置的过程,可以理解为处理器加载指令,以实现以上部分或全部单元的功能的过程。此外,处理器还可以是针对人工智能设计的硬件电路,其可以理解为一种ASIC,例如神经网络处理单元(neural network processing unit,NPU)、张量处理单元(tensor processing unit,TPU)、深度学习处理单元(deep learning processing unit,DPU)等。此外,计算平台130还可以包括存储器,存储器用于存储指令,处理器131至13n中的部分或全部处理器可以调用存储器中的指令,以实现相应的功能。
计算平台130可基于从各种子系统(例如,感知系统120)接收的输入来控制智能驾驶设备100的功能。在一些实施例中,计算平台130可用于对智能驾驶设备100及其子系统的许多方面提供控制。
可选地,上述组件只是一个示例,实际应用中,上述各个模块中的组件有可能根据实际需要增添或者删除,图1不应理解为对本申请实施例的限制。
在道路行进的智能驾驶设备100,可以识别其周围环境内的物体以确定对当前速度的调整。所述物体可以是其它车辆、交通控制设备、或者其它类型的物体。在一些示例中,可以独立地考虑每个识别的物体,并且基于物体的各自的特性,诸如它的当前速度、加速度、与车辆的间距等,可以用来确定智能驾驶设备100所要调整的速度。
可选地,智能驾驶设备100或者与智能驾驶设备100相关联的感知和计算设备(例如,计算平台130)可以基于所识别的物体的特性和周围环境的状态(例如,交通、雨、道路上的冰、等等)来预测所述识别的物体的行为。可选地,每一个所识别的物体都依赖于彼此的行为,因此还可以将所识别的所有物体全部一起考虑来预测单个识别的物体的行为。
本申请中的智能驾驶设备100可以包括:路上交通工具、水上交通工具、空中交通工具、工业设备、农业设备、或娱乐设备等。例如智能驾驶设备可以为车辆,该车辆为广义概念上的车辆,可以是交通工具(如商用车、乘用车、摩托车、飞行车、火车等),工业车辆(如:叉车、挂车、牵引车等),工程车辆(如挖掘机、推土车、吊车等),农用设备(如割草机、收割机等),游乐设备,玩具车辆等,本申请实施例对车辆的类型不作具体限定。再如,智能驾驶设备可以为飞机、或轮船等交通工具。
以下以智能驾驶设备为车辆为例,说明本申请需要解决的技术问题以及所采用的技术方案。
在介绍定位方法之前,首先介绍本申请实施例提供的定位方法所适用的系统架构。
图2是本申请实施例提供的定位方法所适用的系统架构200,图2的所示的系统架构可应用于图1的智能驾驶设备100中。
如图2所示,架构200可以包括:传感器抽象模块210、感知模块220、定位模块230和规划控制模块240。以车辆的自动驾驶场景为例,传感器抽象模块210能够根据传感器类型对输入数据进行预处理,并将经过预处理的数据输入到感知模块220和定位模块230。感知模块220能够根据输入的数据获取到车辆的行驶环境数据,并将该行驶环境数据发送给定位模块230。定位模块230可以根据车辆的行驶环境数据和传感器抽象模块210预处理的数据对车辆进行定位,并将定位的结果数据输入到规划控制模块240中,规划控制模块240可以基于定位的结果数据实现自动驾驶功能。
本申请实施例优化了传感器抽象模块210和感知模块220向定位模块230输出的数据,从而提高了车辆定位的精准度。
应理解,上述传感器抽象模块210和感知模块220可以位于图1的感知系统120中,定位模块230和规划控制模块240可以位于图1中的计算平台130中。
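As a rough illustration of this data flow (the module interfaces below are assumptions for illustration only and are not defined by the application), the four modules can be viewed as a simple pipeline:

```python
class SensorAbstraction:
    def preprocess(self, raw):              # per-sensor-type preprocessing
        return {"type": raw["type"], "data": raw["data"]}

class Perception:
    def environment(self, preprocessed):    # driving-environment data
        return {"light_sources": preprocessed["data"].get("light_sources", [])}

class Localization:
    def locate(self, preprocessed, environment):
        # fuse preprocessed sensor data with the perceived environment
        return {"pose": preprocessed["data"].get("gnss"), "env": environment}

class PlanningControl:
    def plan(self, localization_result):
        return f"plan trajectory from pose {localization_result['pose']}"

raw = {"type": "ALSG", "data": {"gnss": (2.0, 0.0), "light_sources": [(10.0, 0.0)]}}
pre = SensorAbstraction().preprocess(raw)
env = Perception().environment(pre)
loc = Localization().locate(pre, env)
print(PlanningControl().plan(loc))
```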
图3是本申请实施例提供的一种定位方法300的示意性流程图,方法300可以由图1中的智能驾驶设备100执行,或者也可以由智能驾驶设备100中的计算平台130执行,或者还可以由计算平台130中的片上系统(system on chip,SOC)执行,或者,还可以由计算平台130中的处理器执行。方法300可以包括步骤S301-S303。
S301,获取第一光源的第一位置信息。
其中,第一光源可以是道路上的点光源(例如,路灯)。第一位置信息可以用于指示该第一光源的地理位置,该第一位置信息可以从高精度地图,或点云图中获取,也可以通过车联网从其他通信设备获取。
S302,确定第一光源对应的第一环境光传感器。
可选地,环境光传感器组包括多个环境光传感器,其整体形状(外观)可以是平滑连续的曲面,且该曲面无凹陷部分,多个环境光传感器中每个环境光传感器对光的感知方向,可以是与环境光传感器曲面垂直的方向。
S303,根据第一环境光传感器的朝向信息和第一位置信息确定车辆的第二位置信息。
可选地，第一环境光传感器的朝向信息可以理解为第一环境光传感器面向第一光源的方向，更进一步，该朝向信息可以是第一环境光传感器的位姿。
本申请实施例中，能够根据第一环境光传感器的朝向信息和第一光源的第一位置信息确定车辆的第二位置信息，通过这样的方式，能够使得车辆在自然光不足的场景下，以低成本、高精度的方式进行定位。
一种可能的实现方式中,所述根据所述第一光源发射的光线确定第一环境光传感器,包括:获取所述环境光传感器组中每个环境光传感器与所述第一光源对应的光强值;确定所述环境光传感器组中光强值最大的环境光传感器为所述第一环境光传感器。
其中，可以通过获取每个环境光传感器的读数来获取每个环境光传感器与所述第一光源对应的光强值，读数的最大值对应的传感器即为第一环境光传感器。
示例性地,读数的取值范围可以在0-90,读数越大表示环境光传感器与第一光源对应的光强值越大,第一光源直射或近似直射该第一环境光传感器。
本申请实施例中,可以通过统计环境光传感器与第一光源对应的光强值,确定光强值最大的传感器为第一环境传感器,通过这样的方式,能够快捷、高效的确定第一光源对应的第一环境光传感器,从而计算出车辆的精确定位信息。
一种可能的实现方式中,所述朝向信息为所述第一环境光传感器在世界坐标系下的位姿;所述根据所述第一环境光传感器的朝向信息和所述第一位置信息确定所述车辆的第二位置信息之前,所述方法还包括:获取所述车辆的第三位置信息,所述第三位置信息是根据全球导航卫星系统确定的;所述根据所述第一环境光传感器的朝向信息和所述第一位置信息确定所述车辆的第二位置信息,包括:根据所述第一环境光传感器在所述世界坐标系下的位姿,确定所述第一光源的第四位置信息,所述第四位置信息是根据所述第一环境光传感器的理论入射光线和第一平面确定的,所述第一平面为所述第一光源所在的平面,所述第一平面是基于第二平面确定的,所述第二平面为所述车辆所在的平面,所述第一平面与所述第二平面平行;根据所述第一位置信息和所述第四位置信息,确定第一偏移向量;根据所述第一偏移向量和所述第三位置信息,确定所述车辆的所述第二位置信息。
其中,世界坐标系可以指系统的绝对坐标系,世界坐标系可以是描述地球上物体位置关系的系统,其可以用于表示地球上物体的绝对位置。
可选地,第三位置信息可以是粗略的车辆的初始定位信息,根据该第三位置信息可以判断出车辆位于哪一个或哪几个光源附近。
在确定了第一环境光传感器在世界坐标系下的位姿后，可以得到该环境光传感器的理论入射光线，该理论入射光线与第一平面的交点便是第一光源的第四位置信息。如果车辆的第三位置信息是精确的定位信息，则第一光源的第一位置信息和第四位置信息相同，不需要对车辆的第三位置信息进行校准。在大多数情况下，第一位置信息和第四位置信息并不相同，此时便需要计算出偏移向量。该偏移向量可以理解为第一位置信息与第四位置信息之间的偏差，使用该偏移向量对车辆的第三位置信息进行校准便得到第二位置信息（车辆精确的定位信息）。
在上述计算过程中,首先要得到第一平面,该第一平面是基于第二平面确定的,第一平面是光源所在平面,第一平面与第二平面平行,第二平面是车辆所在平面。可选地,第二平面可以是基于车辆的行驶方向与姿态确定的平面。可选地,如果将车辆在世界坐标系下视为一个点,第二平面可以是该点在路面上的切平面。
应理解,上述理论入射光线、第一平面和第二平面在世界坐标系下可以分别表示为:理论入射光线方程、第一平面方程和第二平面方程。
还应理解,在本申请实施例中,车辆的定位信息和车辆的位置信息表示的含义相同。
本申请实施例中,在获取的车辆初始位置信息不精确的情况下,能够根据第一环境光传感器在世界坐标系下的位姿确定理论入射光线,并得到第一光源的理论定位点,从而得到偏移向量,并使用该偏移向量对车辆的初始位置进行校准。通过这样的方式,能够得到车辆精确的定位信息。
一种可能的实现方式中,所述方法还包括:获取第二光源的第五位置信息;确定所述第二光源对应的第二环境光传感器,所述第二环境光传感器位于所述环境光传感器组中;根据所述第二环境光传感器在所述世界坐标系下的位姿,确定所述第二光源的第六位置信息,所述第六位置信息是由所述第二环境光传感器的理论入射光线和第三平面确定的;所述第三平面为所述第二光源所在的平面,所述第三平面是基于所述第二平面确定的,所述第二平面与所述第三平面平行;根据所述第五位置信息和所述第六位置信息,确定第二偏移向量;所述根据所述第一偏移向量和所述第三位置信息,确定所述车辆的所述第二位置信息,包括:根据所述第一偏移向量、所述第二偏移向量和所述第三位置信息,确定所述第二位置信息。
可选地,如果车辆位于两个光源中间,可以通过对环境光传感器组读数进行数学分析,确定正确的第一环境光传感器和第二环境光传感器。
可选地，可以通过对环境光传感器组的局部最大值进行分析，得到分别对应第一光源和第二光源的局部最大光强读数，进而获得正确的第一环境光传感器和第二环境光传感器的信息。
本申请实施例中,在获取的车辆初始位置信息不精确的情况下,如果车辆位于两个光源中间,可以基于第二光源的第五位置信息和第六位置信息确定第二偏移向量,从而能够根据第一偏移向量和第二偏移向量对车辆的初始位置进行校准,有利于获得更精确的车辆位置信息。
一种可能的实现方式中，所述环境光传感器组包括多个环境光传感器，且所述多个环境光传感器中的每个环境光传感器表面为平滑无凹陷曲面。
可选地，上述环境光传感器组可以是组件整体平滑连续、无凹陷的曲面。
本申请实施例中,环境光传感器组中的每个环境光传感器表面为平滑无凹陷曲面,通过这样的方式,有利于更精确的得到理论入射光线,从而获得更精确的车辆位置信息。
一种可能的实现方式中,所述环境光传感器组包括太阳能电池板,所述太阳能电池板分布于所述车辆的顶部。
本申请实施例中,环境光传感器组可以包括太阳能电池板,通过这样的方式,车辆在行驶的过程中既可以实现充电功能,又能够实现低成本、高精度的定位。
图4是本申请实施例提供的另一种定位方法400的示意性流程图,方法400可以由图1中的智能驾驶设备100执行,或者也可以由智能驾驶设备100中的计算平台130执行,或者还可以由计算平台130中的SOC执行,或者,还可以由计算平台130中的处理器执行。方法400是对方法300中实现方式的具体说明,如图4所示,方法400可以包括如下步骤。
S401,获取当前时刻传感器参数。
其中，传感器参数可以包括：光源地理位置信息和车辆的定位信息，光源地理位置信息可以用于指示道路上照明的点光源（例如，路灯）的地理位置，该光源地理位置信息可以是3D信息，该信息可以从高精度地图或点云图中获取，也可以通过车联网从其他设备获取。车辆的定位信息可以是定位精度较低的定位信息，但是足以用于判断出车辆位于哪一个光源下或者哪两个光源之间。该定位信息可以通过GNSS、惯性导航系统（inertial navigation system，INS）或者里程计获取。或者，该定位信息也可以是GNSS、INS和里程计通过数据融合得到的。
传感器可以是环境光传感器组(ambient light sensor group,ALSG),该ALSG可以由若干个环境光传感器(子传感器)组成,环境光传感器可以感知环境中特定波长区间内的电磁波,环境光传感器包括但不限于,光压传感器、红外传感器、太阳能电池板等等。
应理解,方法400中的光源地理位置信息可以是方法300中的第一位置信息,车辆的定位信息可以是方法300中的第三位置信息。
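For illustration, the inputs gathered in step S401 can be grouped into a small structure such as the following sketch. This is an assumption for illustration; the application does not prescribe any particular data structure.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class PositioningInputs:
    """Inputs of step S401 (illustrative structure, not defined by the application)."""
    light_source_xyz: Tuple[float, float, float]    # 3D position of the street lamp, e.g. from an HD map or point cloud
    coarse_vehicle_fix: Tuple[float, float, float]  # low-accuracy fix from GNSS / INS / odometry (or their fusion)
    alsg_readings: Tuple[float, ...]                # one light intensity reading per sub-sensor of the ALSG
```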
ALSG传感器在车辆中的位置可以如图5所示,其中,ALSG传感器的设计可以满足如下条件:
(1)ALSG的整体形状与半球拓扑同胚,且形状为凸包。
(2)ALSG上设置有多个子传感器,这样能够保证车辆的正上方和偏上方任意一个方向的入射光,均直射且仅直射到一个子传感器。其相邻的子传感器之间需要有一定的角度差,这样能够保证所有的子传感器朝向不同的方向。即子传感器的形状和朝向可以如图6中的(a)至(c)所示,其中,半球中的每个小区域可以表示子传感器的感光面。
(3)子传感器对光的感知方向,与该子传感器的感光面曲面垂直。
(4)子传感器的位置固定不变,这样保证每个子传感器的物理位置,以及接收的直射光的方向在车辆坐标系下是定值
应理解,图6中的(a)至(c)所示的子传感器的形状和朝向仅仅是示例性的说明,子传感器的实际形状和朝向可以基于子传感器实际的应用情况进行变化,例如,图6的(a)至(c)中的子传感器的感光面形状可以是面积相等的四边形,再例如,不同子传感器之间的角度可以不相等。
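One way to picture conditions (1) to (4) is that, for any incident direction from above or obliquely above, exactly one sub-sensor is illuminated (nearly) head-on. The sketch below finds that sub-sensor from the outward normals; it is an illustration under that geometric reading, and none of the names come from the application.

```python
import numpy as np

def most_direct_sub_sensor(normals: np.ndarray, incident_dir: np.ndarray) -> int:
    """Given unit outward normals of the sub-sensors (rows of `normals`, on the
    hemispherical ALSG) and the unit direction of the incoming light (pointing
    from the light source towards the vehicle), return the index of the
    sub-sensor that is illuminated most nearly head-on."""
    cosines = normals @ (-incident_dir)   # cosine of the incidence angle for each sub-sensor
    return int(np.argmax(cosines))

# Example: three sub-sensors facing up, 45 degrees forward, 45 degrees backward.
normals = np.array([[0.0, 0.0, 1.0],
                    [np.sqrt(0.5), 0.0, np.sqrt(0.5)],
                    [-np.sqrt(0.5), 0.0, np.sqrt(0.5)]])
light_from_front_above = np.array([-np.sqrt(0.5), 0.0, -np.sqrt(0.5)])  # travelling towards the car
print(most_direct_sub_sensor(normals, light_from_front_above))  # -> 1
```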
S402,确定光强度值最大的子传感器V。
以车辆行驶在一个光源下的场景为例,设定车辆的附近具有一个主要光源L,由于该主要光源L是离车辆较近的光源,其亮度超过其他光源的亮度。
如图7所示,该图是光的入射角与ALSG单位面积上的光通量折线图。其中,折线图的横轴代表入射光线与子传感器感光面法线的夹角,即光线的入射角;折线图的纵轴为ALSG单位面积上的光通量。从图中可以看出,随着光线的入射角越大,单位面积上的光通量越小。当光线直射,即光线的入射角为0时,单位面积上的光通量最大。因此,只有受到直射或者近似直射的子传感器,才能够得到最大的环境光传感器读数。
应理解,当环境光传感器的感光面入射角为0时可以称为光垂直入射,简称直射。当入射角小于预设阈值时,可以称为近似直射,该预设阈值的设定可以取决于ALSG的密度,当相邻的角度差越小时,该预设阈值可以设置的越小。
如图8所示，该图是不同的子传感器能够获取到的环境光传感器读数示意图，图中的小圆圈代表子传感器，从图中可以看出标注为90的点是受到光直射或近似直射的子传感器，将该子传感器记为传感器V，该传感器V便是读数最大的子传感器，该传感器V容易被系统获取到。
还应理解,方法400中的传感器V可以是方法300中的第一环境光传感器或者第二环境光传感器。
S403,获取传感器V在车辆坐标系下的位姿。
具体地，由步骤S401可知，传感器V作为一个子传感器，其在ALSG上的位置是固定的，也就是说传感器V在车辆坐标系下的位姿P_v-car可以是一个常数或者定值。在同一坐标系下，传感器V的位姿P_v-car可以通过唯一的转换矩阵T_v转换到车辆后轴中心的位姿，车辆后轴中心位姿也就是车辆的位姿。在这个过程中，传感器V的位姿中的位置可以指传感器V在车辆坐标系下的三维几何坐标，姿态可以指直射到传感器V的直射光的方向向量。
S404,获取传感器V在世界坐标系下的位姿。
具体地,由于车辆的位姿中的位置精度较低,姿态相对较高,可以通过以下公式得到传感器V在世界坐标系下的位姿。
P_v-world = P_car-world · T_v^(-1)
其中，P_v-world是传感器V在世界坐标系下的位姿，P_car-world是车辆在世界坐标系下的位姿，T_v是转换矩阵。
在得到了传感器V在世界坐标系下的位姿后，可以进一步得到传感器V的理论入射光线F，该理论入射光线F在世界坐标系下可以用直线方程组的形式来表示（在数学上，已知直线上一点的坐标和直线的方向，即可确定直线方程），即如果定位结果准确，光源L的真实入射光线应该沿着理论入射光线F照向传感器V。
在通用横墨卡托格网系统（universal transverse mercator grid system，UTM）等地理坐标系下，小范围的空间可以视为欧式空间，通过点到直线距离公式可以得到光源L到理论入射光线F的距离D_L-F。如果D_L-F等于或接近0，说明F经过光源L，F可以被视为真实的入射光线，车辆的定位结果准确，此时，可以直接将定位结果输入到车辆的定位规划模块，不再进行步骤S405至S407。
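The two quantities used in this step, the sensor pose in the world frame and the distance D_L-F from the light source L to the theoretical incident ray F, can be sketched as follows. The use of 4x4 homogeneous transforms and the helper names are assumptions for illustration.

```python
import numpy as np

def sensor_pose_world(P_car_world: np.ndarray, T_v: np.ndarray) -> np.ndarray:
    """P_v-world = P_car-world · T_v^(-1); all poses as 4x4 homogeneous transforms."""
    return P_car_world @ np.linalg.inv(T_v)

def point_to_ray_distance(point: np.ndarray, ray_origin: np.ndarray, ray_dir: np.ndarray) -> float:
    """Distance D_L-F from the light source L to the theoretical incident ray F."""
    d = ray_dir / np.linalg.norm(ray_dir)
    return float(np.linalg.norm(np.cross(point - ray_origin, d)))

# Toy check: a lamp 0.5 m off a vertical ray through the origin.
lamp = np.array([0.5, 0.0, 6.0])
print(point_to_ray_distance(lamp, np.zeros(3), np.array([0.0, 0.0, 1.0])))  # -> 0.5
```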
可选地,本申请实施例中光源L可以为点光源,更进一步,该光源L可以是路灯。
S405,获取光源到直射光线的偏移量。
在大多数的情况下，由于步骤S401中获取的车辆的定位信息精度较低，D_L-F往往不等于或接近0。因此，此时F不能视为真实的入射光线，而仅仅是与真实入射光线平行的光线。此时可以根据车辆在世界坐标系下的位姿，确定车辆在世界坐标系下相对于道路的切平面方程C_1（在数学上，一个确定的车辆位姿可以得到一个确定的切平面），并通过C_1得到过光源L的平行平面C_2。
如图9所示，图中大的半球为环境光传感器组，大的半球所在平面为C_1，光源所在平面为C_2，交点M为F与平面C_2的交点。根据交点M以及真实入射光线与C_2的交点，可以确定偏移向量（下文记为 \vec{ML}，即由交点M指向真实入射光线与C_2交点的向量）。由于真实入射光线与F是平行关系，且C_1平行于C_2，因此偏移向量 \vec{ML} 也是初始定位到真实定位之间的偏移向量。
应理解，方法400中平面C_2可以是方法300中的第一平面或第三平面，平面C_1可以是方法300中的第二平面，交点M可以是方法300中的第四位置信息，偏移向量 \vec{ML} 可以是方法300中的第一偏移向量。
S406,将初始定位位置移动,得到优化定位点。
在大多数情况下，地面较为平坦，车辆的初始定位位置与真实位置的误差不超过15米，因此，可以假设地面等价于C_1。此时，根据初始定位位置P_car-world + 偏移向量 \vec{ML}，可以得到经过优化后的车辆定位位置P_opt。在得到了经过优化后的车辆定位位置P_opt后，可以将该数据输入到车辆的定位规划模块，不再进行步骤S407。
应理解,上述车辆定位位置P opt可以是方法300中的第二位置信息。
S407,根据定位点的优化结果,确定是否需要迭代。
在另外一些情况下，当地面曲度较大时，需要通过车辆的地图模块获取地面的地形，进行特殊处理。处理的方式可以是使用牛顿法，即从地图模块获取地面的曲度，根据曲度和偏移向量 \vec{ML} 得到P_opt处的切面方程，然后将P_opt视为新的初始定位位置，迭代进行步骤S404至S406，当 \vec{ML} 的模小于预设阈值，或者迭代的次数大于或等于预设次数时，停止迭代，并将P_opt输入到车辆的定位规划模块中。
应理解,地面的曲度可以用于表示地面的起伏程度,曲度值越大表示地面越不平坦,地面起伏程度越大。
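The iteration of S407 can be sketched as the loop below. It is a sketch under the assumption that a map module provides a tangent-plane query and that steps S404 to S406 are wrapped in an offset computation; the names `tangent_plane_at` and `offset_on_plane` are hypothetical.

```python
import numpy as np

def iterate_position(p_init, tangent_plane_at, offset_on_plane, tol=0.05, max_iter=10):
    """Repeat S404-S406, re-deriving the local tangent plane from the terrain
    each round, until the offset vector is small or the iteration budget is spent."""
    p_opt = np.asarray(p_init, dtype=float)
    for _ in range(max_iter):
        plane = tangent_plane_at(p_opt)          # tangent plane of the road at the current estimate
        offset = offset_on_plane(p_opt, plane)   # offset vector from steps S404-S406
        p_opt = p_opt + offset
        if np.linalg.norm(offset) < tol:         # modulus below the preset threshold
            break
    return p_opt

# Trivial usage with stand-in callables: flat ground, error shrinking towards x = 4.
flat_plane = lambda p: ("z=0",)
shrinking  = lambda p, plane: np.array([max(0.0, 4.0 - p[0]), 0.0, 0.0])
print(iterate_position([0.0, 0.0, 0.0], flat_plane, shrinking))  # -> [4. 0. 0.]
```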
还应理解，上述方法400是以车辆行驶在一个光源下的场景为例进行介绍，当车辆位于两个光源（例如，两个路灯）之间时，处理的方式与上述方法400近似。即此时存在两个子传感器V，分别是每个光源对应的环境光传感器组中读数为最大值的子传感器。此时，可以对两个传感器分别进行方法400的定位优化计算，得到两个优化后的定位点P_opt1和P_opt2，可以取P_opt1和P_opt2连线的中点作为定位点，通过这样的方式，通常能够获得更高的定位精度。此外，上述方法也可类推适用到存在多个光源的场景下。
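For the two-light-source case just described, the final combination step reduces to taking the midpoint of the two independently optimized fixes. A trivial sketch follows; P_opt1 and P_opt2 are assumed to come from running method 400 once per light source.

```python
import numpy as np

def combine_two_fixes(p_opt1, p_opt2):
    """Midpoint of the two positions optimized against the two nearby light sources."""
    return (np.asarray(p_opt1, dtype=float) + np.asarray(p_opt2, dtype=float)) / 2.0

print(combine_two_fixes([3.8, 0.0, 0.0], [4.2, 0.2, 0.0]))  # -> [4.0, 0.1, 0.0]
```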
本申请实施例中,能够根据环境光传感器在世界坐标系下的位姿确定定位点位置,并计算定位点位置与真实光源位置之间的偏移向量,并使用该偏移向量对车辆的定位进行校准,通过这样的方式,能够使得车辆在自然光不足的场景下,以低成本、高精度的方式进行定位。
图10是本申请实施例提供的定位方法的所适用的一种应用场景,方法300和方法400可适用于该应用场景中。
如图10所示，车辆的顶部可部署一块或多块太阳能电池板，可以在每块电池板下设置可以测量电池板发电效率或发电电压值的传感器。由于当前入射光光强与电池板的发电电压间存在对应关系，即入射光光线越强，电池板的发电电压越高。这样，由于电池板在车辆上呈流线型分布，电池板的上方和偏上方存在万向光照射。只要存在足够的电压传感器密度，便可以满足方法300和方法400中环境光传感器组的设计条件。因此，整车的电池板便可以视为ALSG，当上述车辆行驶在夜间的点光源下，可以通过电池板的光电反应发电，从而使得电压传感器的读数间接地反映了每块电池板的入射光光强。从而能够通过方法300和方法400实现车辆的夜间定位。
应理解,一个物体在特定的场景下针对一定的角度范围内的任意角度α,如果必然存在与α平行的光,直射和近似直射到此物体上,可以认为在此场景下,在该角度范围内,存在对应该物体的万向光照射。在本申请实施例中,车辆行驶在夜间光源照射的场景下,ALSG正上方和偏上方的角度范围内存在万向光照射。
本申请实施例中，将环境光传感器设置为太阳能电池板，既能够满足车辆在行驶时的充电需求，又能够以低成本、高精度的方式对车辆进行夜间定位。
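The solar-panel variant only changes where the readings come from: each panel's generation voltage serves as a monotonic proxy for the incident light intensity. The mapping below is a sketch; the linear relation and the calibration constants are assumptions for illustration and are not specified by the application.

```python
def voltage_to_reading(voltage: float, v_dark: float = 0.1, v_direct: float = 18.0) -> float:
    """Map a panel's generation voltage to a 0-90 style ALSG reading.

    Assumes a monotonic (here linear) relation between incident light intensity
    and generation voltage, clipped to the reading range."""
    frac = (voltage - v_dark) / (v_direct - v_dark)
    return 90.0 * min(max(frac, 0.0), 1.0)

panel_voltages = {"front": 2.3, "roof_center": 17.9, "rear": 1.1}
readings = {panel: voltage_to_reading(v) for panel, v in panel_voltages.items()}
print(max(readings, key=readings.get))  # -> "roof_center": the (nearly) directly lit panel
```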
图11是本申请实施例提供的定位方法的所适用的另一种应用场景,方法300和方法400可适用于该应用场景中。
如图11中的(a)所示,车辆在正常行驶的状态下,车辆的中控大屏显示界面1100以及功能栏1110。该界面1100上包括用户账号登录信息1101、蓝牙功能图标1102、Wi-Fi功能图标1103、蜂窝网络信号图标1104、车载地图应用搜索框1105、切换至显示车辆安装的所有应用程序的卡片1106、切换至显示车载音乐应用的卡片1107、车辆剩余电量以及剩余行驶里程的显示卡片1108、车辆360度(°)环影功能的显示卡片1109。其中,车载地图应用搜索框1105中可以包括用户设置的回家控件11051和定位控件11052。功能栏1110中包括切换至显示中控大屏桌面的图标1111、车辆内循环图标1112、主驾座椅加热功能图标1113,主驾区域空调温度显示图标1114、副驾区域空调温度显示图标1115、副驾座椅加热功能图标1116以及音量设置图标1117。
用户可以通过定位控件11052来实时查看车辆在地图中所在的位置。当车辆检测到用户点击定位控件11052的操作后,界面1100可以显示如图11中(b)所示的图形用户界面(graphical user interface,GUI)。
如图11中的(b)所示的GUI,在某些情况下(例如,网络信号较差的情况),GNSS定位不准确,此时,界面1100上可以显示提示框1118,用于告知用户当前车辆定位精度较差,是否根据附近的光源进行定位优化。当车辆检测到用户点击提示框1118中的优化控件的操作时,车辆可以基于上述方法300或方法400对车辆的定位进行优化。
例如,如图11中的(c)所示,车辆根据初始的定位信息(不精确的定位信息)可以判断出距离车辆最近的光源,并且可以根据该最近的光源确定第一环境光传感器,根据第一环境光传感器在世界坐标系下的位姿可以确定第一环境光传感器的理论入射光线,并确定第一偏移向量,然后可以使用第一偏移向量对车辆的初始定位信息进行校准得到车辆精确的定位信息。
如图11中的(d)所示的GUI,在使用方法300或方法400对车辆的初始定位信息进行优化后,界面1100上能够显示车辆在地图上精确的位置,并且可以显示提示框1119,用于告知用户车辆的定位优化成功。
应理解,上述车辆可以是图1中的智能驾驶设备100,界面1100可以是图1的显示装置140上显示的界面。
本申请实施例还提供用于实现以上任一种方法的装置,该装置包括用于实现以上任一种方法中智能驾驶设备100所执行的各步骤的单元。
图12是本申请实施例提供的定位装置1200的示意图,该装置1200可以包括获取单元1210、存储单元1220和处理单元1230。获取单元1210用于获取指令和/或数据,获取单元1210还可以称为通信接口或通信单元。存储单元1220,用于实现相应的存储功能,存储相应的指令和/或数据。处理单元1230用于进行数据处理。处理单元1230可以读取存储单元中的指令和/或数据,以使得装置1200实现前述定位方法。
一种可能的实现方式中，该装置1200包括：获取单元1210和处理单元1230，该获取单元1210，用于获取第一光源的第一位置信息；该处理单元1230，用于确定第一光源对应的第一环境光传感器，该第一环境光传感器位于环境光传感器组中，该环境光传感器组安装在智能驾驶设备上；以及根据第一环境光传感器的朝向信息和第一位置信息确定所述智能驾驶设备的第二位置信息。
一种可能的实现方式中,该获取单元1210,还用于获取该环境光传感器组中每个环境光传感器与第一光源对应的光强值;该处理单元1230,用于确定该环境光传感器组中光强值最大的环境光传感器为第一环境光传感器。
一种可能的实现方式中,该朝向信息为第一环境光传感器在世界坐标系下的位姿;该获取单元1210,还用于获取智能驾驶设备的第三位置信息,该第三位置信息是根据全球导航卫星系统确定的;该处理单元1230,用于:根据第一环境光传感器在世界坐标系下的位姿,确定第一光源的第四位置信息,该第四位置信息是根据第一环境光传感器的理论入射光线和第一平面确定的,该第一平面为第一光源所在的平面,该第一平面是基于第二平面确定的,该第二平面为智能驾驶设备所在的平面,该第一平面与第二平面平行;根据第一位置信息和第四位置信息,确定第一偏移向量;根据第一偏移向量和第三位置信息,确定智能驾驶设备的所述第二位置信息。
一种可能的实现方式中,该获取单元1210,还用于获取第二光源的第五位置信息;该处理单元1230,还用于:确定第二光源对应的第二环境光传感器,该第二环境光传感器位于环境光传感器组中;以及根据第二环境光传感器在所述世界坐标系下的位姿,确定第二光源的第六位置信息,该第六位置信息是由第二环境光传感器的理论入射光线和第三平面确定的;该第三平面为该第二光源所在的平面,该第三平面是基于第二平面确定的,该第二平面与该第三平面平行;根据第五位置信息和第六位置信息,确定第二偏移向量;根据第一偏移向量、第二偏移向量和第三位置信息,确定第二位置信息。
一种可能的实现方式中,环境光传感器组包括多个环境光传感器,且多个环境光传感器中的每个环境光传感器表面为平滑无凹陷曲面。
一种可能的实现方式中,环境光传感器组包括太阳能电池板,该太阳能电池板分布于所述智能驾驶设备的顶部。
应理解,以上装置中各单元的划分仅是一种逻辑功能的划分,实际实现时可以全部或部分集成到一个物理实体上,也可以物理上分开。
可选地,若该装置1200位于智能驾驶设备100中,上述处理单元1230可以是图1所示的处理器131。
可选地,若该装置1200位于系统架构200中,上述获取单元1210可以是感知模块220,上述处理单元1230可以是定位模块230。
图13是本申请实施例提供的另一种定位装置的装置1300示意图。该装置1300可应用于图1的智能驾驶设备100中。
该定位装置的装置1300包括:存储器1310、处理器1320、以及通信接口1330。其中,存储器1310、处理器1320,通信接口1330通过内部连接通路相连,该存储器1310用于存储指令,该处理器1320用于执行该存储器1310存储的指令,以控制通信接口1330获取信息,或者使所述定位装置执行上述各实施例中的定位方法。可选地,存储器1310既可以和处理器1320通过接口耦合,也可以和处理器1320集成在一起。
需要说明的是,上述通信接口1330使用例如但不限于收发器一类的收发装置。上述通信接口1330还可以包括输入/输出接口(input/output interface)。
处理器1320存储有一个或多个计算机程序,该一个或多个计算机程序包括指令。当该指令被所述处理器1320运行时,使得该定位装置1300执行上述各实施例中定位方法。
在实现过程中,上述方法的各步骤可以通过处理器1320中的硬件的集成逻辑电路或者软件形式的指令完成。结合本申请实施例所公开的方法可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器1310,处理器1320读取存储器1310中的信息,结合其硬件完成上述方法的步骤。为避免重复,这里不再详细描述。
可选地，图13中的通信接口1330可以实现图12中的获取单元1210，图13中的存储器1310可以实现图12中的存储单元1220，图13中的处理器1320可以实现图12中的处理单元1230。
可选地,该装置1200或装置1300可以是计算平台,该计算平台可以是车载计算平台或云端计算平台。
可选地,该装置1200或装置1300可以位于图1中的智能驾驶设备100中。
可选地,该装置1200或装置1300可以为图1中智能驾驶设备中的计算平台130。
本申请实施例还提供一种计算机可读介质,所述计算机可读介质存储有程序代码,当所述计算机程序代码在计算机上运行时,使得所述计算机执行上述图3或图4中的任一种方法。
本申请实施例还提供一种芯片,包括:电路,该电路用于执行上述图3或图4中的任一种方法。
本申请实施例还提供一种智能驾驶设备,包括图12或图13任一种定位装置以及环境光传感器组,所述环境光传感器组包括多个环境光传感器,且所述多个环境光传感器中的每个环境光传感器表面为平滑无凹陷曲面。该智能驾驶设备可以是车辆。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (16)

  1. 一种定位方法,其特征在于,所述方法包括:
    获取第一光源的第一位置信息;
    确定所述第一光源对应的第一环境光传感器,所述第一环境光传感器位于环境光传感器组中,所述环境光传感器组安装在智能驾驶设备上;
    根据所述第一环境光传感器的朝向信息和所述第一位置信息确定所述智能驾驶设备的第二位置信息。
  2. 如权利要求1所述的方法,其特征在于,所述根据所述第一光源发射的光线确定第一环境光传感器,包括:
    获取所述环境光传感器组中每个环境光传感器与所述第一光源对应的光强值;
    确定所述环境光传感器组中光强值最大的环境光传感器为所述第一环境光传感器。
  3. 如权利要求1或2所述的方法,其特征在于,所述朝向信息为所述第一环境光传感器在世界坐标系下的位姿;
    所述根据所述第一环境光传感器的朝向信息和所述第一位置信息确定所述智能驾驶设备的第二位置信息之前,所述方法还包括:
    获取所述智能驾驶设备的第三位置信息,所述第三位置信息是根据全球导航卫星系统确定的;
    所述根据所述第一环境光传感器的朝向信息和所述第一位置信息确定所述智能驾驶设备的第二位置信息,包括:
    根据所述第一环境光传感器在所述世界坐标系下的位姿,确定所述第一光源的第四位置信息,所述第四位置信息是根据所述第一环境光传感器的理论入射光线和第一平面确定的,所述第一平面为所述第一光源所在的平面,所述第一平面是基于第二平面确定的,所述第二平面为所述智能驾驶设备所在的平面,所述第一平面与所述第二平面平行;
    根据所述第一位置信息和所述第四位置信息,确定第一偏移向量;
    根据所述第一偏移向量和所述第三位置信息,确定所述智能驾驶设备的所述第二位置信息。
  4. 如权利要求3所述的方法,其特征在于,所述方法还包括:
    获取第二光源的第五位置信息;
    确定所述第二光源对应的第二环境光传感器,所述第二环境光传感器位于所述环境光传感器组中;
    根据所述第二环境光传感器在所述世界坐标系下的位姿,确定所述第二光源的第六位置信息,所述第六位置信息是由所述第二环境光传感器的理论入射光线和第三平面确定的;所述第三平面为所述第二光源所在的平面,所述第三平面是基于所述第二平面确定的,所述第二平面与所述第三平面平行;
    根据所述第五位置信息和所述第六位置信息,确定第二偏移向量;
    所述根据所述第一偏移向量和所述第三位置信息,确定所述智能驾驶设备的所述第二位置信息,包括:
    根据所述第一偏移向量、所述第二偏移向量和所述第三位置信息,确定所述第二位置信息。
  5. 如权利要求1至4任一项所述的方法,其特征在于,所述环境光传感器组包括多个环境光传感器,且所述多个环境光传感器中的每个环境光传感器表面为平滑无凹陷曲面。
  6. 如权利要求1至5任一项所述的方法,其特征在于,所述环境光传感器组包括太阳能电池板,所述太阳能电池板分布于所述智能驾驶设备的顶部。
  7. 一种定位装置,其特征在于,所述装置包括获取单元和处理单元:
    所述获取单元,用于获取第一光源的第一位置信息;
    所述处理单元,用于:
    确定所述第一光源对应的第一环境光传感器,所述第一环境光传感器位于环境光传感器组中,所述环境光传感器组安装在智能驾驶设备上;以及
    根据所述第一环境光传感器的朝向信息和所述第一位置信息确定所述智能驾驶设备的第二位置信息。
  8. 如权利要求7所述的装置,其特征在于,
    所述获取单元,还用于获取所述环境光传感器组中每个环境光传感器与所述第一光源对应的光强值;
    所述处理单元,用于确定所述环境光传感器组中光强值最大的环境光传感器为所述第一环境光传感器。
  9. 如权利要求7或8所述的装置,其特征在于,所述朝向信息为所述第一环境光传感器在世界坐标系下的位姿;
    所述获取单元,还用于获取所述智能驾驶设备的第三位置信息,所述第三位置信息是根据全球导航卫星系统确定的;
    所述处理单元,用于:
    根据所述第一环境光传感器在所述世界坐标系下的位姿,确定所述第一光源的第四位置信息,所述第四位置信息是根据所述第一环境光传感器的理论入射光线和第一平面确定的,所述第一平面为所述第一光源所在的平面,所述第一平面是基于第二平面确定的,所述第二平面为所述智能驾驶设备所在的平面,所述第一平面与所述第二平面平行;
    根据所述第一位置信息和所述第四位置信息,确定第一偏移向量;
    根据所述第一偏移向量和所述第三位置信息,确定所述智能驾驶设备的所述第二位置信息。
  10. 如权利要求9所述的装置,其特征在于,
    所述获取单元,还用于获取第二光源的第五位置信息;
    所述处理单元,还用于:
    确定所述第二光源对应的第二环境光传感器,所述第二环境光传感器位于所述环境光传感器组中;以及
    根据所述第二环境光传感器在所述世界坐标系下的位姿,确定所述第二光源的第六位置信息,所述第六位置信息是由所述第二环境光传感器的理论入射光线和第三平面确定的;所述第三平面为所述第二光源所在的平面,所述第三平面是基于所述第二平面确定的,所述第二平面与所述第三平面平行;
    根据所述第五位置信息和所述第六位置信息,确定第二偏移向量;
    根据所述第一偏移向量、所述第二偏移向量和所述第三位置信息,确定所述第二位置信息。
  11. 如权利要求7至10任一项所述的装置,其特征在于,所述环境光传感器组包括多个环境光传感器,且所述多个环境光传感器中的每个环境光传感器表面为平滑无凹陷曲面。
  12. 如权利要求7至11任一项所述的装置,其特征在于,所述环境光传感器组包括太阳能电池板,所述太阳能电池板分布于所述智能驾驶设备的顶部。
  13. 一种定位装置,其特征在于,包括:处理器和存储器,所述处理器与所述存储器耦合,用于读取并执行所述存储器中的指令,以执行如权利要求1至6中任一项所述的方法。
  14. 一种计算机可读介质,其特征在于,所述计算机可读介质存储有程序代码,当所述计算机程序代码在计算机上运行时,使得所述计算机执行如权利要求1至6中任一项所述的方法。
  15. 一种芯片,其特征在于,包括:电路,所述电路用于执行如权利要求1至6中任一项所述的方法。
  16. 一种智能驾驶设备,其特征在于,包括:如权利要求7至13中任意一项所述的定位装置以及环境光传感器组,所述环境光传感器组包括多个环境光传感器,且所述多个环境光传感器中的每个环境光传感器表面为平滑无凹陷曲面。
PCT/CN2022/113626 2022-08-19 2022-08-19 定位方法、装置以及智能驾驶设备 WO2024036607A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/113626 WO2024036607A1 (zh) 2022-08-19 2022-08-19 定位方法、装置以及智能驾驶设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/113626 WO2024036607A1 (zh) 2022-08-19 2022-08-19 定位方法、装置以及智能驾驶设备

Publications (1)

Publication Number Publication Date
WO2024036607A1 true WO2024036607A1 (zh) 2024-02-22

Family

ID=89940325

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/113626 WO2024036607A1 (zh) 2022-08-19 2022-08-19 定位方法、装置以及智能驾驶设备

Country Status (1)

Country Link
WO (1) WO2024036607A1 (zh)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105246040A (zh) * 2015-10-21 2016-01-13 宁波大学 无线车联物联网定位系统
CN105242294A (zh) * 2015-10-21 2016-01-13 宁波大学 基于无线车联物联网的车辆定位系统
CN106908779A (zh) * 2017-03-20 2017-06-30 中国科学院光电研究院 一种基于光强信号匹配的隧道内光感测距与定位装置
CN107907897A (zh) * 2017-10-31 2018-04-13 平潭诚信智创科技有限公司 一种基于lifi的智能隧道导航装置及导航系统
US20200183386A1 (en) * 2018-12-11 2020-06-11 GM Global Technology Operations LLC Sun-aware routing and controls of an autonomous vehicle
CN111448591A (zh) * 2018-11-16 2020-07-24 北京嘀嘀无限科技发展有限公司 不良光照条件下用于定位车辆的系统和方法
CN111610484A (zh) * 2020-04-28 2020-09-01 吉林大学 一种基于occ的自动驾驶车辆跟踪定位方法
CN111624551A (zh) * 2020-05-21 2020-09-04 南京晓庄学院 一种基于可见光通信的定位方法、装置及系统
CN112925000A (zh) * 2021-01-25 2021-06-08 东南大学 基于可见光通信和惯性导航的隧道环境下车辆定位方法

Similar Documents

Publication Publication Date Title
US11346950B2 (en) System, device and method of generating a high resolution and high accuracy point cloud
JP6441993B2 (ja) レーザー点クラウドを用いる物体検出のための方法及びシステム
US8989944B1 (en) Methods and devices for determining movements of an object in an environment
US9081385B1 (en) Lane boundary detection using images
US11592524B2 (en) Computation of the angle of incidence of laser beam and its application on reflectivity estimation
CN111238494A (zh) 载具、载具定位系统及载具定位方法
CN113591518B (zh) 一种图像的处理方法、网络的训练方法以及相关设备
RU2767949C2 (ru) Способ (варианты) и система для калибровки нескольких лидарных датчиков
CN117295647A (zh) 具有统一的多传感器视图的传感器模拟
WO2024036607A1 (zh) 定位方法、装置以及智能驾驶设备
US11557129B2 (en) Systems and methods for producing amodal cuboids
US20230072966A1 (en) Systems and methods for providing and using confidence estimations for semantic labeling
US20220221585A1 (en) Systems and methods for monitoring lidar sensor health
WO2021212297A1 (en) Systems and methods for distance measurement
CN112384756A (zh) 定位系统和方法
US11698270B2 (en) Method, system, and computer program product for iterative warping of maps for autonomous vehicles and simulators
WO2023216651A1 (zh) 车道定位方法、计算机设备、计算机可读存储介质及车辆
US20230184890A1 (en) Intensity-based lidar-radar target
US20240125940A1 (en) Systems and methods for variable-resolution refinement of geiger mode lidar
US20230150543A1 (en) Systems and methods for estimating cuboid headings based on heading estimations generated using different cuboid defining techniques
CN112805534B (zh) 定位目标对象的系统和方法
CN117671402A (zh) 识别模型训练方法、装置以及可移动智能设备
CN116129377A (zh) 单视角图像重照明

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22955387

Country of ref document: EP

Kind code of ref document: A1