CN117387644A - Positioning method, positioning device, electronic device, storage medium and program product - Google Patents


Info

Publication number
CN117387644A
CN117387644A (application CN202311311758.2A)
Authority
CN
China
Prior art keywords
target vehicle
road
positioning
target
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311311758.2A
Other languages
Chinese (zh)
Inventor
杨占铎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202311311758.2A
Publication of CN117387644A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G01C21/3446 - Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 - Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 - Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G01C21/3407 - Route searching; Route guidance specially adapted for specific applications
    • G01C21/343 - Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G01C21/36 - Input/output arrangements for on-board computers
    • G01C21/3626 - Details of the output of route guidance instructions
    • G01C21/3632 - Guidance using simplified or iconic instructions, e.g. using arrows
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 - Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 - Determining position

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Navigation (AREA)

Abstract

The embodiments of the present application provide a positioning method, a positioning device, an electronic device, a storage medium, and a program product, applicable at least to the map field and the traffic field. The positioning method includes: matching road data of the road on which a target vehicle is currently driving from a preset map database based on current driving parameters of the target vehicle; collecting an environment image within a preset range of the target vehicle; performing target detection on the environment image to obtain a detection target within the preset range and the pixel coordinates of the detection target in the environment image; determining the coordinate values of the detection target in the actual coordinate system of the target vehicle based on the road data and the pixel coordinates; and positioning the target vehicle in real time based on the coordinate values. With this method and device, the relative position of the detected target and the camera can be calculated accurately, improving the accuracy of real-time positioning.

Description

Positioning method, positioning device, electronic device, storage medium and program product
Technical Field
Embodiments of the present application relate to the Internet field, and relate, without limitation, to a positioning method, a positioning device, an electronic device, a storage medium, and a program product.
Background
With the development of computer and communication technology, autonomous driving has also advanced rapidly. During automatic driving, the vehicle's automatic driving module needs to accurately locate the vehicle's current position in real time, so that the next driving strategy can be predicted based on the positioning result.
In the related art, during real-time positioning, information such as lane lines, arrows, and vehicles (hereinafter collectively referred to as targets) is extracted from a captured image of the road ahead of the vehicle by conventional image recognition or machine learning methods. After a target is recognized, it needs to be restored to the real world so that the relative position of the target and the camera can be calculated and the vehicle positioned.
However, in the positioning methods of the related art, the calculated coordinates of the target relative to the camera contain a large error, so positioning accuracy is low.
Disclosure of Invention
The embodiments of the present application provide a positioning method, a positioning device, an electronic device, a storage medium, and a program product, applicable at least to the map field and the traffic field, which can accurately calculate the relative position of the detected target and the camera by combining road data of the road on which the target vehicle is currently driving, thereby improving the accuracy of real-time positioning.
The technical solutions of the embodiments of the present application are implemented as follows:
the embodiment of the application provides a positioning method, which comprises the following steps: based on the current running parameters of the target vehicle, matching road data of the current running road of the target vehicle from a preset map database; collecting an environment image within a preset range of the target vehicle; performing target detection on the environment image to obtain a detection target in the preset range and a pixel coordinate corresponding to the detection target in the environment image; determining coordinate values of the detection target under an actual coordinate system where the target vehicle is located based on the road data and the pixel coordinates; and positioning the target vehicle in real time based on the coordinate values.
The embodiment of the application provides a positioning device, which comprises: the matching module is used for matching road data of the current running road of the target vehicle from a preset map database based on the current running parameters of the target vehicle; the image acquisition module is used for acquiring an environment image within the preset range of the target vehicle; the target detection module is used for carrying out target detection on the environment image to obtain a detection target in the preset range and a pixel coordinate corresponding to the detection target in the environment image; the determining module is used for determining coordinate values of the detection target under an actual coordinate system where the target vehicle is located based on the road data and the pixel coordinates; and the real-time positioning module is used for positioning the target vehicle in real time based on the coordinate values.
In some embodiments, the matching module is further configured to: acquire current driving parameters of the target vehicle, the current driving parameters including the driving position, the driving attitude, and the driving speed of the target vehicle; determine, from the map database, the current driving road corresponding to the driving position, the driving attitude, and the driving speed; and acquire the road data of the current driving road from the map database.
In some embodiments, the matching module is further configured to: acquire the driving position of the target vehicle at the current moment through a satellite sensing device; acquire the driving attitude of the target vehicle at the current moment through an inertial measurement device; and acquire the driving speed of the target vehicle at the current moment through a speed sensing device.
In some embodiments, the matching module is further configured to: perform information matching in the map database based on the driving position, the driving attitude, and the driving speed through a preset machine learning algorithm, to obtain the road identifier of the road on which the target vehicle is currently driving; the machine learning algorithm includes an algorithm corresponding to a hidden Markov model.
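The patent identifies the matcher only as a machine learning algorithm corresponding to a hidden Markov model. As a rough illustration of how HMM map matching is commonly done, the sketch below scores candidate road segments with a Gaussian emission on GPS-to-segment distance and a connectivity-based transition, combined in a Viterbi-style pass; all names, the assumed segment attributes (seg_id, start, end, connected_ids), and the parameters are illustrative, and the attitude and speed terms the patent also feeds into the match are omitted for brevity.

```python
import math

def point_to_segment_distance(p, a, b):
    # Distance from point p to segment a-b; all points are (x, y) tuples
    # in a local metric frame (e.g., meters east/north of a reference).
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    denom = dx * dx + dy * dy or 1.0   # guard against zero-length segments
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / denom))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def emission_log_prob(fix, seg, sigma=5.0):
    # How well this GPS fix agrees with being on `seg`: Gaussian on the
    # perpendicular distance, with sigma roughly the GPS noise in meters.
    d = point_to_segment_distance(fix, seg.start, seg.end)
    return -0.5 * (d / sigma) ** 2

def transition_log_prob(prev_seg, seg):
    # Favor staying on the same segment; allow moves to connected segments.
    if seg.seg_id == prev_seg.seg_id:
        return 0.0
    if seg.seg_id in prev_seg.connected_ids:
        return math.log(0.5)
    return float("-inf")   # topologically unreachable

def match_current_road(gps_fixes, candidates):
    """Viterbi pass over recent GPS fixes; returns the segment id with the
    highest score for the latest fix, i.e. the 'road identifier'."""
    scores = {s.seg_id: emission_log_prob(gps_fixes[0], s) for s in candidates}
    for fix in gps_fixes[1:]:
        scores = {
            s.seg_id: max(scores[p.seg_id] + transition_log_prob(p, s)
                          for p in candidates) + emission_log_prob(fix, s)
            for s in candidates
        }
    return max(scores, key=scores.get)
```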
In some embodiments, the road data includes gradient information of the current driving road and longitude and latitude information of the current driving road. The apparatus further comprises a conversion module configured to convert the longitude and latitude information into the actual coordinate system based on the driving position and the driving attitude of the target vehicle, to obtain converted longitude and latitude information. The real-time positioning module is further configured to: perform real-time positioning of the target vehicle at a first positioning accuracy based on the converted longitude and latitude information; and, based on the real-time positioning result at the first positioning accuracy, perform real-time positioning of the target vehicle at a second positioning accuracy based on the coordinate values, the second positioning accuracy being higher than the first positioning accuracy.
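The patent does not detail the longitude/latitude conversion. A minimal sketch under a small-area equirectangular approximation, with the driving attitude reduced to a heading angle measured clockwise from north (the function and variable names are illustrative, not the patent's):

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

def lla_to_vehicle_frame(lat_deg, lon_deg, ref_lat_deg, ref_lon_deg, heading_rad):
    """Project a road point (lat, lon) into a vehicle-centered frame:
    z forward along the driving direction, x to the right. Accurate enough
    over the few hundred meters of road around the vehicle."""
    east = math.radians(lon_deg - ref_lon_deg) * EARTH_RADIUS_M \
           * math.cos(math.radians(ref_lat_deg))
    north = math.radians(lat_deg - ref_lat_deg) * EARTH_RADIUS_M
    z = north * math.cos(heading_rad) + east * math.sin(heading_rad)  # forward
    x = east * math.cos(heading_rad) - north * math.sin(heading_rad)  # right
    return x, z
```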
In some embodiments, the image acquisition module is further to: acquiring an environment image within a preset range of the target vehicle through image acquisition equipment on the target vehicle; the actual coordinate system of the target vehicle is a coordinate system corresponding to the image acquisition equipment; in the actual coordinate system, the center of the image acquisition device is located at the origin of the actual coordinate system, the optical center direction of the image acquisition device is a first coordinate axis direction of the actual coordinate system, a direction extending along the center of the image acquisition device and perpendicular to a horizontal plane is a second coordinate axis direction of the actual coordinate system, and a direction perpendicular to the first coordinate axis direction and the second coordinate axis direction is a third coordinate axis direction of the actual coordinate system.
In some embodiments, the road data includes gradient information of the current driving road, and the determining module is further configured to: acquire internal parameters of the image acquisition device; construct a first equation for solving the coordinate values in the actual coordinate system based on the internal parameters and the pixel coordinates; construct a second equation for solving the coordinate values in the actual coordinate system based on the gradient information of the current driving road; and solve the two equations jointly to obtain the coordinate values in the actual coordinate system.
In some embodiments, the determining module is further configured to perform parameter calibration on the image acquisition device to obtain the internal parameters of the image acquisition device.
In some embodiments, the determining module is further configured to construct, based on the internal parameters and the pixel coordinates, the first equation as equation (1):
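A plausible form of equation (1), reconstructed from the symbol definitions below under the standard pinhole camera model (an assumption: (u, v) are pixel offsets from the principal point, and the camera y-axis points downward as in the usual computer-vision convention):

$$u = f_x\,\frac{x_c}{z_c}, \qquad v = f_y\,\frac{y_c}{z_c} \qquad (1)$$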
where (x_c, y_c, z_c) denotes the coordinate values of the detection target in the actual coordinate system; (u, v) denotes the pixel coordinates of the detection target in the environment image; and f_x and f_y denote the internal parameters of the image acquisition device.
In some embodiments, the gradient information of the current driving road includes the slope angle formed between the current driving road and the horizontal plane, and the determining module is further configured to construct, based on the gradient information of the current driving road, the second equation as equation (2):
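A plausible form of equation (2), reconstructed under the same conventions as equation (1) (an assumption: y-axis pointing downward and β > 0 for an uphill road), constraining the detection target to lie on the sloped road surface:

$$y_c = h - z_c\,\tan\beta \qquad (2)$$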
where h denotes the height of the image acquisition device above the ground on which the target vehicle is located, and β denotes the slope angle.
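Combining the two equations yields a closed-form solution for the coordinate values. A minimal sketch of the joint solve, assuming exactly the reconstructed forms of equations (1) and (2) above (the function name and conventions are illustrative, not the patent's):

```python
import math

def pixel_to_camera_coords(u, v, fx, fy, h, beta):
    """Solve equations (1) and (2), as reconstructed above, for a
    road-surface point: u = fx*x/z, v = fy*y/z, y = h - z*tan(beta).
    (u, v) are pixel offsets from the principal point with v positive
    downward; h is the camera height above the ground in meters;
    beta > 0 denotes an uphill road."""
    denom = v + fy * math.tan(beta)
    if denom <= 0:
        raise ValueError("pixel lies at or above the road's horizon")
    z = fy * h / denom            # forward distance along the optical axis
    y = h - z * math.tan(beta)    # vertical offset, from equation (2)
    x = u * z / fx                # lateral offset, from equation (1)
    return x, y, z
```

With β = 0 this reduces to the classic flat-road result z = f_y·h/v, which is precisely the horizontal-road assumption of the related art and the source of its error on slopes.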
In some embodiments, the apparatus further comprises: the result acquisition module is used for acquiring a real-time positioning result at the current moment; the real-time positioning result is used for representing the real-time position of the target vehicle on the current driving road; a strategy generation module for generating a driving strategy for the target vehicle and a control instruction corresponding to the driving strategy based on the real-time position; and the control module is used for sending the control instruction to the target vehicle, and responding to the control instruction through the automatic driving module of the target vehicle, and controlling the target vehicle to automatically drive according to the driving strategy.
An embodiment of the present application provides an electronic device, including: a memory for storing executable instructions; and the processor is used for realizing the positioning method when executing the executable instructions stored in the memory.
Embodiments of the present application provide a computer program product comprising executable instructions stored in a computer-readable storage medium; the processor of the electronic device reads the executable instructions from the computer readable storage medium and executes the executable instructions to implement the positioning method.
The embodiment of the application provides a computer readable storage medium, which stores executable instructions for causing a processor to execute the executable instructions to implement the positioning method.
The embodiment of the application has the following beneficial effects:
When the target vehicle is positioned in real time, road data of the road on which it is currently driving is matched from a preset map database based on its current driving parameters, and target detection is performed on an environment image within a preset range of the target vehicle, so that the coordinate values of the detection target in the actual coordinate system of the target vehicle are determined based on the road data and the pixel coordinates of the detection target; that is, the actual relative positional relationship between the detection target and the target vehicle is calculated. Because the relative position between the detection target and the target vehicle is calculated accurately by combining the road data of the current driving road, the accuracy of real-time positioning can be greatly improved.
Drawings
FIG. 1 is a schematic diagram of identification information being restored to road surfaces of different gradients;
FIG. 2 is a schematic diagram of an alternative architecture of a positioning system provided by embodiments of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of an alternative positioning method provided in an embodiment of the present application;
FIG. 5 is a schematic flow chart of another alternative positioning method provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of an actual coordinate system provided by an embodiment of the present application;
FIG. 7 is a schematic illustration of a vehicle with a camera according to an embodiment of the present application;
FIG. 8 is a block diagram of an algorithm for separating object detection from position calculation provided in an embodiment of the present application;
FIG. 9 is a diagram of an algorithm for performing object detection and location calculation on the same network model according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a camera recognition detection target provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of a relationship between a detection target and a camera provided in an embodiment of the present application;
FIG. 12 is a diagram showing perception-only results of the related art on uphill and downhill roads;
FIG. 13 is a schematic diagram of perception results after combining map gradient data, provided by an embodiment of the present application;
FIG. 14 is a schematic view of a scene in which the camera is at an angle to the horizontal.
Detailed Description
For the purpose of making the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict. Unless defined otherwise, all technical and scientific terms used in the embodiments of the present application have the same meaning as commonly understood by one of ordinary skill in the art to which the embodiments of the present application belong. The terminology used in the embodiments of the present application is for the purpose of describing the embodiments of the present application only and is not intended to be limiting of the present application.
Before describing the positioning method provided in the embodiments of the present application, the technical terms involved are explained first:
(1) In response to: used to indicate the condition or state on which an operation depends. When the condition or state is satisfied, the operation (or operations) may be performed in real time or with a set delay; unless otherwise specified, there is no limitation on the order in which multiple such operations are performed.
(2) Real-time positioning: a technique for tracking and locating objects, people, or equipment in real time, using wireless communication, sensors, and data processing to provide accurate position information in three-dimensional space. It provides position and motion information of an object within a short time, typically displayed in real time on a monitoring system or user interface. The core of real-time positioning is the combination of hardware, software, and communication technology to determine and report the real-time position of the vehicle.
(3) Navigation path: the route calculated from a set navigation start position and navigation end position, i.e., a route that starts at the navigation start position, passes through a series of roads, and finally reaches the navigation end position.
(4) Camera-perceived road surface information: information on the road surface, such as lane lines, arrows, and text, recognized by a camera.
(5) Global Navigation Satellite System (GNSS): BeiDou and GPS are common examples of GNSS, which output the current position information (longitude, latitude, altitude, and the like).
(6) Real-Time Kinematic (RTK) positioning: a technique for real-time dynamic relative positioning using GNSS carrier-phase observations; it receives satellite signals to achieve high-precision positioning, with accuracy reaching the centimeter level.
(7) Inertial Measurement Unit (IMU): a sensor mainly used for detecting and measuring acceleration and rotational angular velocity.
In the related art, when a target vehicle is positioned in real time, information such as lane lines, arrows, and vehicles (hereinafter collectively referred to as targets) is extracted from the image by conventional image recognition or machine learning methods; after a target is recognized, it needs to be restored to the real world and its position relative to the camera calculated. The common practice is to assume that the recognized target lies on a horizontal road surface and project it onto that plane, calculating its relative position from the known height of the camera above the ground and the target's imaging coordinates in the image. Because the related art assumes the road surface is horizontal, the calculated camera-relative coordinates of the target contain an error whenever the road is actually uphill or downhill. As shown in fig. 1, a schematic diagram of identification information being restored to road surfaces of different gradients, the recognized target restores to position A when projected onto an uphill road surface, to position C when projected onto a downhill road surface, and to position B when projected onto a horizontal road surface. For example, in uphill and downhill scenes, parallel lane lines recognized by the camera, once restored to the real world, splay outward or inward.
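For a sense of the error's scale, take illustrative numbers (not from the patent) in the reconstructed equations (1) and (2) above: camera height h = 1.5 m, focal length f_y = 1000 px, a target imaged v = 100 px below the principal point, and a 5% uphill grade (tan β = 0.05):

$$z_{\text{flat}} = \frac{f_y h}{v} = \frac{1000 \times 1.5}{100} = 15\ \text{m}, \qquad z_{\text{slope}} = \frac{f_y h}{v + f_y \tan\beta} = \frac{1500}{150} = 10\ \text{m}$$

so the flat-road assumption overestimates the forward distance by 50% in this case.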
Therefore, in the positioning methods of the related art, the calculated coordinates of the target relative to the camera contain a large error, so positioning accuracy is low.
To address the problems in the related art, the embodiments of the present application provide a positioning method that calculates the coordinates of road-surface targets relative to the camera (or the vehicle) more accurately by combining the road data (e.g., road gradient information) stored in a map database; that is, by combining the gradient information of the road in the map database, the accuracy of the relative positions of the road surface information recognized by the camera is improved.
Specifically, in the positioning method provided by the embodiments of the present application, road data of the road on which the target vehicle is currently driving is first matched from a preset map database based on the current driving parameters of the target vehicle; then an environment image within a preset range of the target vehicle is collected, and target detection is performed on the environment image to obtain a detection target within the preset range and the pixel coordinates of the detection target in the environment image; next, the coordinate values of the detection target in the actual coordinate system of the target vehicle are determined based on the road data and the pixel coordinates; finally, the target vehicle is positioned in real time based on the coordinate values. In this way, the relative position between the detection target and the target vehicle is calculated accurately by combining the road data of the current driving road, which can greatly improve the accuracy of real-time positioning.
First, an exemplary application of the positioning device of the embodiments of the present application is described; the positioning device is an electronic device for implementing the positioning method. The positioning device (i.e., the electronic device) provided in the embodiments of the present application may be implemented as a terminal or as a server. In one implementation, it may be implemented as any terminal having navigation and positioning functions, such as a notebook computer, tablet computer, desktop computer, mobile phone, portable music player, personal digital assistant, dedicated messaging device, portable game device, intelligent robot, smart home appliance, or smart in-vehicle device. In another implementation, it may be implemented as a server, where the server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDN), big data, and artificial intelligence platforms. The terminal and the server may be connected directly or indirectly through wired or wireless communication, which is not limited in the embodiments of the present application. In the following, an exemplary application in which the positioning device is implemented as a server is described.
Referring to fig. 2, fig. 2 is a schematic diagram of an alternative architecture of the positioning system provided in an embodiment of the present application. To implement real-time positioning of a target vehicle, a navigation application may be provided that positions the target vehicle in real time while navigating its driving process; that is, the target vehicle can be positioned in real time to determine its accurate position, and the navigation function is then realized based on that accurate position.
In the embodiments of the present application, taking a navigation application as an example, the positioning system 10 includes at least a terminal 100, a network 200, a server 300, and a target vehicle 400, where the navigation application runs on the terminal 100 and the server 300 is the server of the navigation application. The server 300 may constitute the positioning device according to the embodiments of the present application; that is, the positioning method according to the embodiments of the present application is implemented by the server 300. The terminal 100 is connected to the server 300 through the network 200, which may be a wide area network, a local area network, or a combination of the two. The terminal 100 may be located on the target vehicle 400 as a device connected to or fixed to the target vehicle 400, for example an in-vehicle terminal device on the target vehicle 400; alternatively, the terminal 100 may be neither connected nor fixed to the target vehicle 400. The terminal 100 may be provided with an image acquisition device capable of collecting an environment image within the preset range of the target vehicle; alternatively, the target vehicle 400 may carry an image acquisition device that collects the environment image and transmits it to the terminal 100, so that the terminal 100 obtains the environment image.
Referring to fig. 2, when performing real-time positioning to implement the navigation function for the target vehicle, the terminal 100 may acquire the current driving parameters of the target vehicle in real time and collect the environment image within the preset range of the target vehicle, then package the current driving parameters and the environment image into a real-time positioning request and transmit it to the server 300 through the network 200. On receiving the real-time positioning request, the server 300 responds to it by matching road data of the road on which the target vehicle is currently driving from a preset map database based on the current driving parameters; meanwhile, it performs target detection on the environment image to obtain a detection target within the preset range and the pixel coordinates of the detection target in the environment image; then, based on the road data and the pixel coordinates, it determines the coordinate values of the detection target in the actual coordinate system of the target vehicle; finally, it positions the target vehicle in real time based on the coordinate values and determines the relative positional relationship between the target vehicle and the detection target. After determining this relative positional relationship, the server 300 may generate a driving strategy for the target vehicle and a control instruction corresponding to the driving strategy, and send the control instruction to the automatic driving module of the target vehicle 400, which responds to the control instruction by controlling the target vehicle to drive automatically according to the driving strategy, so as to drive safely and effectively along the navigation path during the current navigation.
In some embodiments, the positioning method of the embodiments of the present application may also be performed by the terminal 100 or by the target vehicle 400 itself; that is, the terminal 100 or the target vehicle 400 matches road data of the current driving road from a preset map database based on the target vehicle's current driving parameters, collects an environment image within the preset range of the target vehicle, performs target detection on the environment image to obtain a detection target within the preset range and its pixel coordinates in the environment image, determines the coordinate values of the detection target in the actual coordinate system of the target vehicle based on the road data and the pixel coordinates, and positions the target vehicle in real time based on the coordinate values.
The positioning method provided in the embodiments of the present application may also be implemented on a cloud platform using cloud technology; for example, the server 300 may be a cloud server. In that case, the cloud server matches the road data of the current driving road from the preset map database, performs target detection on the environment image to obtain the detection target within the preset range and its pixel coordinates in the environment image, determines the coordinate values of the detection target in the actual coordinate system of the target vehicle based on the road data and the pixel coordinates, and positions the target vehicle in real time based on those coordinate values.
In some embodiments, a cloud storage may further be provided, in which the map database may be stored, as well as the current driving parameters of the target vehicle. In this way, when a real-time positioning request is received, the data in the map database can be read directly from the cloud storage to match the road data of the current driving road, which improves computational efficiency during real-time positioning and thus the timeliness and accuracy of positioning.
Here, cloud technology refers to a hosting technology that unifies hardware, software, network, and other resources in a wide area network or local area network to realize the computation, storage, processing, and sharing of data. Cloud technology is the general term for the network, information, integration, management-platform, and application technologies applied in the cloud computing business model; it can form a resource pool to be used on demand, flexibly and conveniently. Cloud computing technology will become an important backbone. The background services of technical network systems require large amounts of computing and storage resources, for example video websites, image websites, and other portal websites. With the continued development of the Internet industry, each item may in the future carry its own identification mark, which will need to be transmitted to a background system for logical processing; data at different levels will be processed separately, and all kinds of industry data require strong back-end system support, which can be realized through cloud computing.
Fig. 3 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, where the electronic device shown in fig. 3 may be a positioning device, and the positioning device includes: at least one processor 310, a memory 350, at least one network interface 320, and a user interface 330. The various components in the positioning device are coupled together by a bus system 340. It is understood that the bus system 340 is used to enable connected communications between these components. The bus system 340 includes a power bus, a control bus, and a status signal bus in addition to the data bus. But for clarity of illustration the various buses are labeled in fig. 3 as bus system 340.
The processor 310 may be an integrated circuit chip with signal processing capabilities such as a general purpose processor, which may be a microprocessor or any conventional processor, or the like, a digital signal processor (DSP, digital Signal Processor), or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like.
The user interface 330 includes one or more output devices 331 that enable presentation of media content, and one or more input devices 332.
Memory 350 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 350 optionally includes one or more storage devices physically located remote from processor 310. Memory 350 includes volatile memory or nonvolatile memory, and may also include both. The nonvolatile memory may be a read-only memory (ROM), and the volatile memory may be a random access memory (RAM). The memory 350 described in the embodiments of the present application is intended to comprise any suitable type of memory. In some embodiments, memory 350 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as illustrated below.
The operating system 351 includes system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, and a driver layer, for implementing various basic services and handling hardware-based tasks. The network communication module 352 is used for reaching other computing devices via one or more (wired or wireless) network interfaces 320; exemplary network interfaces 320 include Bluetooth, Wireless Fidelity (Wi-Fi), Universal Serial Bus (USB), and the like. The input processing module 353 is used for detecting one or more user inputs or interactions from the one or more input devices 332 and translating the detected inputs or interactions.
In some embodiments, the apparatus provided in the embodiments of the present application may be implemented in software. Fig. 3 shows a positioning apparatus 354 stored in the memory 350; the positioning apparatus 354 may be the positioning apparatus in the electronic device and may be software in the form of programs and plug-ins, including the following software modules: the matching module 3541, the image acquisition module 3542, the target detection module 3543, the determining module 3544, and the real-time positioning module 3545. These modules are logical, and can therefore be combined arbitrarily or further split depending on the functions implemented. The functions of the respective modules are described hereinafter.
In some embodiments, the apparatus provided by the embodiments of the present application may be implemented in hardware. By way of example, the apparatus may be a processor in the form of a hardware decoding processor programmed to perform the positioning method provided by the embodiments of the present application; for example, the processor in the form of a hardware decoding processor may employ one or more application-specific integrated circuits (ASIC), DSPs, programmable logic devices (PLD), complex programmable logic devices (CPLD), field-programmable gate arrays (FPGA), or other electronic components.
The positioning method provided by the embodiments of the present application may be performed by an electronic device, where the electronic device may be a server or a terminal, that is, the positioning method of the embodiments of the present application may be performed by the server or the terminal, or may be performed by interaction between the server and the terminal.
Fig. 4 is a schematic flowchart of an alternative positioning method provided in an embodiment of the present application. The steps shown in fig. 4 are described below, taking a server as the execution body of the positioning method by way of example. As shown in fig. 4, the method includes the following steps S101 to S105:
step S101, based on the current running parameters of the target vehicle, road data of the current running road of the target vehicle is matched from a preset map database.
Here, the target vehicle may be a vehicle that is currently driving and that needs to be positioned by the positioning method of the embodiments of the present application. When the positioning method is implemented, the position of the target vehicle is located accurately in real time, so that the target vehicle is controlled based on the accurate positioning result and is guaranteed to drive accurately along the navigation path.
In the embodiment of the application, the target vehicle may be a vehicle with an automatic driving function, and when the positioning method of the embodiment of the application is implemented, the automatic driving of the target vehicle may be implemented through an accurate positioning result.
The current driving parameters include, but are not limited to, the current driving position, driving attitude, and driving speed of the target vehicle, and different types of current driving parameters can be obtained through different sensors. For example, the driving position of the target vehicle at the current moment is acquired through a satellite sensing device; the driving attitude of the target vehicle at the current moment is acquired through an inertial measurement device; and the driving speed of the target vehicle at the current moment is acquired through a speed sensing device.
The preset map database may be a map application database or a navigation application database, in which map information is stored, and the map information includes geographical location information, road data, road identification information, driving specification information, road reminding information, and the like of each region.
In the embodiments of the present application, the current driving position of the target vehicle can be determined from its current driving parameters, so that the road on which it is currently driving can be matched from the preset map database based on that position; that is, it is determined which road, in which region, the target vehicle is currently driving on. Once the current driving road is determined, its road data can be acquired from the map database.
In the embodiments of the present application, the map database may store the collected road data in association with the corresponding road, where each road has a unique road identifier; for example, the road identifier may be the name of the road. After the road data of each road is collected, it may be associated with the road identifier and then stored in the map database. Road data includes, but is not limited to: the type of road (main road or service road, national highway or provincial highway, bidirectional or unidirectional lanes), the number of lanes, and attribute parameters of the road (speed limits, intersection information, gradient information, longitude and latitude information, construction and barrier information, and the like).
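As a concrete picture of such a record, the sketch below shows one plausible shape for a road entry keyed by its road identifier; the field names are assumptions for illustration, not the patent's schema:

```python
from dataclasses import dataclass, field

@dataclass
class RoadRecord:
    """Illustrative shape of one road entry in the map database."""
    road_id: str                  # unique road identifier, e.g. the road name
    road_type: str                # main/service road, national/provincial, one/two-way
    lane_count: int
    speed_limit_kph: float
    slope_deg: float              # gradient information used by equation (2)
    polyline_lat_lon: list = field(default_factory=list)  # (lat, lon) shape points
    intersections: list = field(default_factory=list)
    construction_notes: list = field(default_factory=list)
```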
In some embodiments, the road data stored in the map database may be updated periodically or aperiodically, and the road data stored in the map database may be updated after the road data for any one road is collected.
Step S102, collecting environment images within a preset range of a target vehicle.
In this embodiment of the present application, an environmental image within a preset range of a target vehicle may be acquired by an image acquisition device, where the image acquisition device may be a device such as a camera located on a terminal, or may be a device such as a camera located on the target vehicle. In the implementation process, an environment image within a preset range of the target vehicle can be shot in real time through a camera. For example, an image in front of the target vehicle may be photographed, resulting in an environmental image.
In some embodiments, the environmental image may be acquired once every a preset time period, which may be determined according to the positioning accuracy requirements of the positioning task. The higher the positioning accuracy requirement is, the shorter the preset time length is; the lower the positioning accuracy requirement is, the longer the preset time length is.
Step S103, detecting the target of the environment image to obtain a detection target in a preset range and corresponding pixel coordinates of the detection target in the environment image.
Here, target detection refers to detecting targets of specific types in the environment image to determine the types and number of detection targets it contains. In implementation, there may be multiple specific types, and targets of multiple specific types may be detected in the environment image at the same time. For example, the specific types may be: lane lines, arrows, other vehicles, obstacles, and the like.
In the embodiments of the present application, any target detection method may be used to detect targets in the environment image; for example, the environment image may be recognized by image recognition technology to determine the type of each target in it, and targets whose type matches a specific type are taken as detection targets.
After the detection targets are detected, pixel coordinates corresponding to each detection target in the environment image can be marked in the environment image, the pixel coordinates are used for reflecting positions of pixels corresponding to the detection targets in the environment image, and the pixel coordinates can be coordinate values of the pixels corresponding to the detection targets in the environment image under an image coordinate system where the environment image is located. The position of the detection target in the environment image can be determined through the pixel coordinates.
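The patent does not say which pixel of a detection is taken as its pixel coordinate. A common heuristic (an assumption here, not the patent's choice) is the bottom-center of the detection's bounding box, since that is roughly where the target meets the road surface, which is what the ground-plane equation (2) requires:

```python
def ground_contact_pixel(box, cx, cy):
    """Given a detection box (u_min, v_min, u_max, v_max) in image pixels,
    return the bottom-center point as offsets from the principal point
    (cx, cy), matching the (u, v) convention used by equation (1)."""
    u_min, v_min, u_max, v_max = box
    u = (u_min + u_max) / 2.0 - cx
    v = v_max - cy
    return u, v
```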
Step S104, based on the road data and the pixel coordinates, determines the coordinate values of the detection target in the actual coordinate system in which the target vehicle is located.
In the embodiments of the present application, the actual coordinate system of the target vehicle is the coordinate system corresponding to the image acquisition device on the target vehicle or on the terminal. In this coordinate system, the center of the image acquisition device is at the origin; the optical-center direction of the device is the first coordinate axis; the direction extending through the center of the device perpendicular to the horizontal plane is the second coordinate axis; and the direction perpendicular to the first and second coordinate axes is the third coordinate axis. Determining the coordinate values of the detection target in this coordinate system determines the relative positional relationship, such as relative distance and direction, between the detection target and the origin, i.e., the image acquisition device. Since the image acquisition device is usually located on or fixed to the target vehicle, this also determines the relative positional relationship between the target vehicle and the detection target, from which a driving strategy matching the current relative positional relationship can be further determined.
For example, if the detection target is the double solid line in the middle of the road and it is determined that the distance between the target vehicle and the double solid line is smaller than a distance threshold, the next driving strategy may be to drive in a direction away from the double solid line; a corresponding driving instruction, such as fine-tuning the steering wheel to the right, can then be given based on this strategy.
A specific implementation procedure of determining the coordinate values of the detection target in the actual coordinate system in which the target vehicle is located based on the road data and the pixel coordinates will be described below.
Step S105, positioning the target vehicle in real time based on the coordinate values.
In the embodiments of the present application, after the coordinate values of the detection target in the actual coordinate system are determined, the relative positional relationship, such as distance and direction, between the target vehicle and the detection target can be determined. The detection targets generally include fixed targets such as lane lines, arrows, and indication signs, whose positions are fixed and known. Therefore, the target vehicle can be positioned accurately based on the determined coordinate values, and its current position determined.
By using the positioning method of the embodiments of the present application to acquire the current driving parameters and the environment image of the target vehicle in real time, the coordinate values are determined in real time, and real-time positioning of the target vehicle can then be realized from the coordinate values calculated in real time.
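Putting steps S101 to S105 together, one positioning cycle might look like the following sketch; every object and helper name is illustrative (the patent defines no such interfaces), and match_current_road, ground_contact_pixel, and pixel_to_camera_coords refer to the earlier sketches:

```python
import math

def locate_vehicle_once(sensors, camera, map_db, detector, fx, fy, cx, cy, h):
    """One positioning cycle over steps S101-S105 (all names illustrative)."""
    # S101: match the current driving road and fetch its road data.
    params = sensors.read()                                   # position, attitude, speed
    road_id = match_current_road(params.gps_fixes,
                                 map_db.candidates_near(params.position))
    road = map_db.lookup(road_id)                             # includes slope_deg
    # S102 + S103: collect an environment image and detect targets in it.
    image = camera.capture()
    detections = detector.detect(image)                       # lane lines, arrows, vehicles...
    # S104: restore each target into the camera frame using the road gradient.
    beta = math.radians(road.slope_deg)
    targets = [(det.label,
                pixel_to_camera_coords(*ground_contact_pixel(det.box, cx, cy),
                                       fx, fy, h, beta))
               for det in detections]
    # S105: fuse the relative positions of fixed, known-position targets
    # (lane lines, arrows, signs) with the map to refine the vehicle pose.
    return refine_position(params, road, targets)
```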
According to the positioning method provided by the embodiments of the present application, when the target vehicle is positioned in real time, road data of the road on which it is currently driving is matched from a preset map database based on its current driving parameters, and target detection is performed on the environment image within the preset range of the target vehicle, so that the coordinate values of the detection target in the actual coordinate system of the target vehicle are determined based on the road data and the pixel coordinates of the detection target; that is, the actual relative positional relationship between the detection target and the target vehicle is calculated. Because the relative position between the detection target and the target vehicle is calculated accurately by combining the road data of the current driving road, the accuracy of real-time positioning can be greatly improved.
Several possible application scenarios of the embodiments of the present application are described schematically below. It should be noted that the following scenarios are only examples and do not cover all scenarios to which the present application applies.
Scene one: the positioning method of the embodiments of the present application can be applied to a navigation application. A navigation application for real-time positioning of the target vehicle may be provided, which positions the target vehicle in real time while navigating its driving process; that is, the target vehicle can be positioned in real time to determine its accurate position, so that the navigation function is realized based on that accurate position.
The positioning system at least comprises a terminal, a server and a target vehicle, wherein the navigation application is operated on the terminal, and the server is a background server of the navigation application. The terminal may not be connected or fixed with the target vehicle, for example the terminal may be the driver's mobile phone. The target vehicle may have an image capturing device thereon, by which an environmental image within a preset range of the target vehicle can be captured. The target vehicle can be a vehicle with an automatic driving function, and the terminal runs the navigation application to navigate the driving path so as to realize the automatic driving function of the target vehicle. In the process of running the navigation application by the terminal, in order to ensure that the target vehicle accurately drives based on the current navigation path, the current driving parameters of the target vehicle can be acquired in real time, and the environment image within the preset range of the target vehicle can be acquired, wherein when the environment image is acquired, the image acquisition equipment on the target vehicle can send the environment image to the terminal after acquiring the environment image in real time, the terminal packages the current driving parameters and the environment image into a real-time positioning request, the real-time positioning request is sent to the server in real time by the terminal, and the server is used for positioning the target vehicle in real time by adopting the positioning method provided by the embodiment of the application, so that the driving strategy of the current driving process is generated based on the real-time positioning result.
Scene II: the positioning method can be applied to navigation application. The positioning system at least comprises a terminal, a server and a target vehicle, wherein the navigation application is operated on the terminal, and the server is a background server of the navigation application. The terminal may be located on the target vehicle, the terminal being a device connected to the target vehicle, or a device fixed to the target vehicle, for example the terminal may be an in-vehicle terminal device on the target vehicle; the terminal can be provided with an image acquisition device, and the environment image within the preset range of the target vehicle can be acquired through the image acquisition device on the terminal. The target vehicle may be a vehicle having an autopilot function that is implemented by running a navigation application to navigate a driving path. In the process of running the navigation application by the terminal, in order to ensure that the target vehicle drives accurately based on the current navigation path, the current running parameters of the target vehicle can be acquired in real time, the environment images within the preset range of the target vehicle are collected, the current running parameters and the environment images are packaged into a real-time positioning request, the real-time positioning request is sent to the server in real time through the terminal, and the target vehicle is positioned in real time through the server by adopting the positioning method provided by the embodiment of the application, so that the driving strategy of the current driving process is generated based on the real-time positioning result.
Scene three: the positioning method can be applied to a navigation application. The positioning system at least comprises a server and a target vehicle, wherein the target vehicle is provided with an automatic driving module through which the navigation application can be run, and the server is a background server of the navigation application. The target vehicle may carry an image acquisition device through which the environment image within the preset range of the target vehicle can be captured. The target vehicle can be a vehicle with an automatic driving function, and the automatic driving module runs the navigation application to navigate the driving path so as to realize the automatic driving function. While the automatic driving module runs the navigation application, in order to ensure that the target vehicle drives accurately based on the current navigation path, the current driving parameters of the target vehicle can be acquired in real time and the environment image within the preset range of the target vehicle can be collected; here, the image acquisition device on the target vehicle collects the environment image in real time and sends it directly to the server. The automatic driving module encapsulates the current driving parameters into a real-time positioning request and sends the request to the server in real time, and the server positions the target vehicle in real time using the positioning method provided by the embodiment of the application, so that a driving strategy for the current driving process is generated based on the real-time positioning result.
Scene four: the positioning method of the embodiment of the application can be applied to a positioning application. To realize accurate positioning of the target vehicle, a positioning application can be provided that positions the target vehicle in real time so as to determine its accurate position.
The positioning system at least comprises a terminal, a server and a target vehicle, wherein the positioning application runs on the terminal and the server is a background server of the positioning application. The terminal may be located on the target vehicle, being a device connected to or fixed on the target vehicle; for example, the terminal may be an in-vehicle terminal device on the target vehicle. The terminal can be provided with an image acquisition device, through which the environment image within the preset range of the target vehicle can be collected. While the terminal runs the positioning application, the current driving parameters of the target vehicle can be acquired in real time, the environment image within the preset range of the target vehicle is collected, and the current driving parameters and the environment image are encapsulated into a real-time positioning request. The request is sent to the server in real time by the terminal, and the server positions the target vehicle in real time using the positioning method provided by the embodiment of the application.
Next, the positioning method of the embodiment of the present application will be described in connection with the above scenarios.
In this embodiment of the present application, the positioning system at least includes a terminal, a server and a target vehicle, referring to fig. 5, fig. 5 is another optional flowchart of the positioning method provided in the embodiment of the present application, where the positioning method includes the following steps S201 to S219:
in step S201, the terminal obtains the current running parameters of the target vehicle through the client of the navigation application.
In some embodiments, the current driving parameters include the running position, the running posture, and the running speed of the target vehicle. The running position of the target vehicle at the current moment can be obtained through a satellite sensing device; the running posture of the target vehicle at the current moment can be obtained through an inertial measurement device; and the running speed of the target vehicle at the current moment can be obtained through a speed sensing device.
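As an illustrative sketch only (the patent prescribes the three sensor sources but no data layout), the readings might be bundled as follows; `gnss`, `imu`, and `wheel_speed` are assumed sensor handles with the indicated read methods:

```python
# A minimal sketch (not from the patent) of assembling the current driving
# parameters from the three sensor sources named above.
from dataclasses import dataclass

@dataclass
class DrivingParameters:
    position: tuple[float, float]         # (longitude, latitude) from the satellite sensing device
    posture: tuple[float, float, float]   # (roll, pitch, yaw) in radians from the IMU
    speed: float                          # metres per second from the speed sensing device

def collect_current_parameters(gnss, imu, wheel_speed) -> DrivingParameters:
    """Read each sensor once and bundle the result; a real system would
    timestamp and fuse the readings rather than sample them naively."""
    return DrivingParameters(
        position=gnss.read_position(),
        posture=imu.read_attitude(),
        speed=wheel_speed.read_speed(),
    )
```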
In step S202, the target vehicle collects an environment image within the preset range of the target vehicle through the image acquisition device it carries.
Here, an environment image located in front of the target vehicle or around the target vehicle may be collected. The image acquisition direction of the image acquisition device relative to the target vehicle may be fixed; for example, the device may be fixed so as to capture the environment image in front of the target vehicle. The direction may also be non-fixed; for example, environment images in a plurality of different directions, such as the front, rear, left, and right of the target vehicle, may be captured.
During collection, the environment image within the preset range corresponding to the image acquisition direction is captured according to that direction.
In step S203, the target vehicle transmits the environment image to the terminal.
In step S204, the terminal encapsulates the current driving parameters of the target vehicle and the environmental image into a real-time positioning request.
In step S205, the terminal sends a real-time positioning request to the server.
In step S206, the server determines, in response to the real-time positioning request, the current running road corresponding to the running position, the running posture, and the running speed from the map database.
In the embodiment of the application, the server can parse the real-time positioning request to obtain the current driving parameters and the environment image. Since the current driving parameters include the running position, the running posture, and the running speed of the target vehicle, the current running road corresponding to these parameters can be further determined from the map database.
In some embodiments, the road identifier of the current running road of the target vehicle can be obtained by performing information matching against the map database based on the running position, the running posture, and the running speed through a preset machine learning algorithm. The machine learning algorithm includes one corresponding to a Hidden Markov Model (HMM); that is, information matching may be performed using an HMM. In the implementation process, the running position, running posture, running speed, and similar information are input into the HMM as model input parameters, the HMM performs information matching against the map database, and the road identifier of the current running road matched from the map database is output.
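As an illustrative sketch only (the patent names the HMM but not a concrete implementation), map matching can be decoded with a Viterbi-style recursion over candidate roads; the `Road` interface (`distance_to`, `heading_difference`, `connected_ids`) and the noise scales are assumptions:

```python
# Hedged sketch of HMM map matching: roads are hidden states, the emission
# score favours roads near the measured position/heading, and the transition
# score favours staying on the same or a connected road.
import math

def emission_log_prob(road, position, heading, sigma_pos=10.0, sigma_hdg=0.5):
    d = road.distance_to(position)               # metres to the road geometry
    dh = abs(road.heading_difference(heading))   # radians to the road heading
    return -(d / sigma_pos) ** 2 - (dh / sigma_hdg) ** 2

def transition_log_prob(prev_road, road):
    if prev_road is road:
        return 0.0                               # staying on the same road is free
    return -1.0 if road.id in prev_road.connected_ids else -math.inf

def match_road(observations, candidate_roads):
    """observations: list of (position, heading) samples; returns the road
    identifier of the most likely current road via Viterbi decoding."""
    roads = {r.id: r for r in candidate_roads}
    pos0, hdg0 = observations[0]
    scores = {r.id: emission_log_prob(r, pos0, hdg0) for r in candidate_roads}
    for position, heading in observations[1:]:
        scores = {
            r.id: max(s + transition_log_prob(roads[pid], r)
                      for pid, s in scores.items())
                  + emission_log_prob(r, position, heading)
            for r in candidate_roads
        }
    return max(scores, key=scores.get)
```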
In step S207, the server acquires the road data of the current running road from the map database.
In the embodiment of the application, the current running position of the target vehicle can be determined based on its current driving parameters, so the current running road can be matched from the preset map database based on that position; that is, it is determined on which road of which region the target vehicle is currently running. Once the current running road is determined, its road data can be acquired from the map database.
In step S208, the server performs target detection on the environment image to obtain the detection targets within the preset range and the pixel coordinates corresponding to each detection target in the environment image.
Here, target detection refers to detecting specific types of targets in the environment image to determine the type and number of detection targets it contains. In the implementation process, there can be multiple specific types, and targets of multiple specific types can be detected in the environment image at the same time. For example, a specific type of target may be: a lane line, an arrow, another vehicle, an obstacle, and the like. Target detection may be performed on the environment image by any target detection method; for example, the environment image may be recognized by an image recognition technique to determine the type of each target, and targets whose type matches a specific type are taken as the detected detection targets.
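As a hedged sketch of this step (the patent does not fix a detector), the output of any off-the-shelf detection model can be filtered down to the specific target types and reduced to one reference pixel per target; the detector output format and the 0.5 score threshold are assumptions:

```python
# Hedged sketch: keep only the specific target types named above and derive
# one pixel coordinate (u, v) per detection. `run_detector` stands in for any
# off-the-shelf detection model; its output format here is an assumption.
TARGET_TYPES = {"lane_line", "arrow", "vehicle", "obstacle"}

def detect_targets(environment_image, run_detector):
    detections = run_detector(environment_image)  # [{'type', 'box': (x1,y1,x2,y2), 'score'}, ...]
    targets = []
    for det in detections:
        if det["type"] not in TARGET_TYPES or det["score"] < 0.5:
            continue
        x1, y1, x2, y2 = det["box"]
        # Use the bottom-centre of the box as (u, v): for road-surface targets
        # this is the point assumed to touch the ground, which is what the
        # ground-plane solution in the later steps requires.
        targets.append({"type": det["type"], "pixel": ((x1 + x2) / 2.0, y2)})
    return targets
```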
In some embodiments, the road data includes gradient information of the current running road; that is, gradient information of each road may also be stored in the map database, the gradient information including the gradient of the road. If the gradient is equal to 0, the road is a horizontal road surface; if the gradient is greater than 0, the road is an ascending road surface; and if the gradient is less than 0, the road is a descending road surface.
In step S209, the server acquires the internal parameters of the image capturing apparatus.
In some embodiments, the internal parameters of the image acquisition device may be obtained by calibrating the parameters of the image acquisition device.
In the embodiment of the application, calibration of the parameters of the image acquisition device can be realized by any one of the following calibration methods: the traditional camera calibration method, the active vision camera calibration method, the camera self-calibration method, and the zero-distortion camera calibration method.
The traditional camera calibration method requires a calibration object of known size. By establishing correspondences between points of known coordinates on the calibration object and their image points, the internal and external parameters of the camera model are obtained by a certain algorithm. This method can be used with any camera model and has high precision, but a calibration object is always needed, two or more images are required, and the manufacturing precision of the calibration object affects the calibration result. Common examples include the two-step Tsai method and Zhang's calibration method.
The active vision camera calibration method calibrates the camera using known information about its motion. The camera is controlled to perform certain specific motions while capturing several groups of images, and the internal and external parameters are solved from the image information and the known displacement changes. This method does not need a calibration object, and the algorithm is simple and highly robust.
The camera self-calibration method mainly exploits constraints from the camera's own motion; it is highly flexible and allows online calibration. Common examples include layered stepwise calibration and methods based on the Kruppa equations.
The zero-distortion camera calibration method uses an LCD display screen as the reference standard and a phase-shift grating as the medium to establish a mapping between LCD pixels and camera sensor pixels, determining the viewpoint position of each camera pixel on the LCD.
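As an illustrative aside (the patent leaves the calibration procedure open), Zhang's calibration method is available in OpenCV, and the internal parameters f_x and f_y can be read off the resulting camera matrix; the board geometry below is an assumed example:

```python
# Hedged sketch of obtaining f_x and f_y with Zhang's method via OpenCV.
# `images` is a list of greyscale chessboard views; board and square size
# are illustrative assumptions.
import cv2
import numpy as np

def calibrate(images, board=(9, 6), square=0.025):
    # 3-D corner positions of the flat chessboard (z = 0), in metres
    obj = np.zeros((board[0] * board[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_points, img_points = [], []
    for gray in images:
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_points.append(obj)
            img_points.append(corners)
    _, camera_matrix, dist, _, _ = cv2.calibrateCamera(
        obj_points, img_points, images[0].shape[::-1], None, None)
    f_x, f_y = camera_matrix[0, 0], camera_matrix[1, 1]
    return f_x, f_y, dist
```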
In step S210, the server constructs a first equation for solving the coordinate values in the actual coordinate system based on the internal parameters and the pixel coordinates.
In some embodiments, the first equation may be constructed based on the internal parameters and the pixel coordinates as the following equation (1-1), a standard pinhole-projection form consistent with the symbol definitions below (the pixel coordinates are taken relative to the principal point):
    u = f_x · x_c / z_c
    v = f_y · y_c / z_c        (1-1)
where (x_c, y_c, z_c) represents the coordinate values of the detection target in the actual coordinate system; (u, v) represents the pixel coordinates of the detection target in the environment image; and f_x and f_y represent internal parameters of the image acquisition device.
In step S211, the server constructs a second equation for solving the coordinate values in the actual coordinate system based on the gradient information of the current traveling road.
In some embodiments, the gradient information of the current running road includes a slope angle formed between the current running road and the horizontal plane; the second equation may be constructed based on the gradient information of the current running road as the following equation (1-2):
    cos β · (y_c − h) + sin β · z_c = 0        (1-2)
where h represents the height of the image acquisition device above the ground on which the target vehicle is located, and β represents the slope angle.
In step S212, the server performs parameter solving based on the first equation and the second equation to obtain the coordinate value under the actual coordinate system.
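As an illustrative aid (not part of the patent text), the joint solution of equations (1-1) and (1-2) can be written in closed form; the sketch below assumes pixel coordinates measured relative to the principal point and a slope angle in radians:

```python
# Hedged sketch of jointly solving equations (1-1) and (1-2): substituting
# x_c = u*z_c/f_x and y_c = v*z_c/f_y into the ground-plane constraint
# cos(beta)*(y_c - h) + sin(beta)*z_c = 0 yields z_c in closed form.
import math

def pixel_to_camera(u, v, f_x, f_y, h, beta):
    """Return (x_c, y_c, z_c) of a road-surface point seen at pixel (u, v),
    for camera height h (metres) and road slope angle beta (radians)."""
    denom = v * math.cos(beta) + f_y * math.sin(beta)
    if denom <= 0:
        raise ValueError("pixel ray does not intersect the sloped road ahead")
    z_c = h * f_y * math.cos(beta) / denom
    return u * z_c / f_x, v * z_c / f_y, z_c
```

For β = 0 the formula reduces to the familiar flat-ground range z_c = h · f_y / v.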
In the embodiment of the application, the actual coordinate system in which the target vehicle is located is the coordinate system corresponding to the image acquisition device. Fig. 6 is a schematic diagram of the actual coordinate system provided in the embodiment of the present application. As shown in Fig. 6, the center of the image acquisition device is located at the origin of the actual coordinate system; the optical-center direction of the image acquisition device 601 is the first coordinate axis direction z; the direction extending through the center of the image acquisition device and perpendicular to the horizontal plane is the second coordinate axis direction y; and the direction perpendicular to both z and y is the third coordinate axis direction x (in Fig. 6, x may point perpendicularly out of, or into, the plane of the page on which the image is displayed).
In step S213, the server converts the longitude and latitude information into the actual coordinate system based on the running position and running posture of the target vehicle, obtaining the converted longitude and latitude information.
In the embodiment of the application, the longitude and latitude in the road data are map data describing the road, while the camera can only observe the positions of targets relative to itself. Therefore, to associate the map data with the targets detected by the camera, the longitude and latitude must be converted into coordinates relative to the camera; that is, the longitude and latitude information in the map data is converted into the actual coordinate system in which the camera is located, after which the relationship between the map data and the targets perceived by the camera can be established.
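As a hedged sketch of this conversion (the patent does not spell out the transform), a small-area equirectangular approximation can map latitude/longitude offsets to metres, followed by a yaw rotation into the camera frame; the yaw-only attitude and the axis conventions below are simplifying assumptions:

```python
# Hedged sketch: map longitude/latitude into the camera frame. A full
# implementation would use the complete attitude (roll/pitch/yaw) and the
# camera mounting offset; this simplification assumes a yaw-only attitude
# (compass heading, clockwise from north) and a road-surface target.
import math

EARTH_RADIUS = 6378137.0  # metres, WGS-84 equatorial radius

def lonlat_to_camera(target_lon, target_lat, veh_lon, veh_lat, yaw, cam_height):
    # East/north offsets of the map point relative to the vehicle, in metres
    east = math.radians(target_lon - veh_lon) * EARTH_RADIUS * math.cos(math.radians(veh_lat))
    north = math.radians(target_lat - veh_lat) * EARTH_RADIUS
    # Rotate into the heading frame: z forward, x right, y down
    z_c = north * math.cos(yaw) + east * math.sin(yaw)
    x_c = -north * math.sin(yaw) + east * math.cos(yaw)
    y_c = cam_height  # road-surface points lie one camera height below the origin
    return x_c, y_c, z_c
```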
In step S214, the server performs real-time positioning of the first positioning accuracy on the target vehicle based on the converted longitude and latitude information.
In the embodiment of the present application, after the converted longitude and latitude information is obtained, the target vehicle can be positioned in real time in the longitude-latitude dimension based on the information converted into the camera's actual coordinate system; that is, the longitude and latitude values of the target vehicle's position are determined. This real-time positioning of the first positioning accuracy based on the converted longitude and latitude information can be understood as a coarse positioning process.
In step S215, the server performs, on the basis of the real-time positioning result of the first positioning accuracy, real-time positioning of the target vehicle at a second positioning accuracy based on the coordinate values; the second positioning accuracy is greater than the first positioning accuracy.
Because positioning based on longitude and latitude values alone is not sufficiently accurate, the coordinate values of the detection targets calculated in the embodiment of the application are further used to position the target vehicle again, thereby achieving accurate positioning. On the basis of the real-time positioning result of the first positioning accuracy, this real-time positioning of the second positioning accuracy based on the coordinate values can be understood as a fine positioning process.
Step S216, the server acquires a real-time positioning result at the current time; the real-time positioning result is used for representing the real-time position of the target vehicle on the current driving road.
Here, the real-time positioning result includes accurate real-time position information of the target vehicle.
In step S217, the server generates a driving strategy for the target vehicle and a control instruction corresponding to the driving strategy based on the real-time position.
In this embodiment of the present application, a navigation path for the target vehicle may also be obtained, and the current driving strategy of the target vehicle, as well as the driving strategy for the next step, may be determined based on the real-time position in combination with the navigation path. Here, the driving strategy includes all strategies related to driving of the vehicle, including but not limited to: driving direction, driving speed, steering angle, driving road and lane, and acceleration/deceleration control.
The relative positional relationship between the target vehicle and each detection target can be determined from the real-time position. For detection targets such as lane lines and arrows, whose positions are fixed and known, the specific next-step driving strategy can be determined based on this relative positional relationship: for example, whether to change lanes to the left or right, or whether to fine-tune the steering wheel to the left or right so that the target vehicle keeps to the middle of the target lane (a minimal sketch of such a decision follows below). For detection targets such as other vehicles, since the relative distance and azimuth angle between the target vehicle and the other vehicle can be determined from the real-time position, the next-step driving strategy can also be determined from these quantities: for example, for another vehicle traveling directly in front of the target vehicle, whether to accelerate or decelerate, and whether to change lanes to overtake on the left or to follow.
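As a minimal sketch of the lane-keeping decision mentioned above (the thresholds and interface are assumptions, not the patent's control logic):

```python
# Hedged sketch: use the lateral offset of the vehicle from the lane centre,
# computed from the two lane-line x_c coordinates at a short look-ahead,
# to trim the steering.
def lane_keep_decision(left_line_x, right_line_x, dead_band=0.10):
    """left_line_x / right_line_x: x_c of each lane line (metres) at the
    look-ahead distance; the camera x-axis points right, so the lane centre
    sits near x = 0 when the vehicle is centred in its lane."""
    lane_centre = (left_line_x + right_line_x) / 2.0
    if lane_centre > dead_band:
        return "trim steering right"   # centre is to our right: steer towards it
    if lane_centre < -dead_band:
        return "trim steering left"
    return "hold course"
```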
In step S218, the server transmits a control instruction to the target vehicle.
In step S219, the autopilot module of the target vehicle controls the target vehicle to perform autopilot in accordance with the driving strategy in response to the control instruction.
According to the positioning method provided by the embodiment of the application, when the target vehicle is positioned in real time, road data of its current running road is matched from a preset map database based on its current driving parameters, and target detection is performed on the environment image within the preset range of the target vehicle, so that the coordinate values of each detection target in the actual coordinate system of the target vehicle are determined based on the road data and the pixel coordinates of the detection target; that is, the actual relative positional relationship between the detection target and the target vehicle is calculated. Because the relative position between the detection target and the target vehicle is calculated accurately by incorporating the road data of the current running road, the accuracy of real-time positioning can be greatly improved through this relative position.
In the following, an exemplary application of the embodiments of the present application in a practical application scenario will be described.
The embodiment of the application provides a positioning method that combines the gradient information of the road in the map data to improve the relative-position accuracy of road-surface information identified by a camera.
The method can be applied to scenarios involving vehicle-mounted camera perception, for example in products such as driving assistance and high-speed cruising. It improves the relative-position accuracy of camera-perceived targets, and after the perception result is fused with sensor data and a high-precision map, the position accuracy of the vehicle can be further improved. Meanwhile, the improved precision of the perceived lane lines also benefits lane keeping to a certain extent, for example by improving the stability of vehicle control and the riding comfort of passengers.
The embodiment of the application can be used in a vehicle equipped with a camera, as shown in Fig. 7.
Fig. 8 is an algorithm structure diagram of separation of object detection and position calculation according to an embodiment of the present application, and fig. 9 is an algorithm structure diagram of the same network model as that of object detection and position calculation according to an embodiment of the present application. Next, a positioning method according to an embodiment of the present application will be described with reference to an algorithm configuration diagram in which target detection and position calculation are separated as shown in fig. 8.
In the positioning method provided by the embodiment of the application, sensor data such as GNSS/RTK, IMU, and vehicle wheel speed are first fused to obtain more accurate vehicle information (i.e., the current driving parameters of the target vehicle: the running position, running posture, and running speed). Two common approaches are Kalman filtering and optimization-based fusion.
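As a hedged illustration of the Kalman-filtering option (the patent does not give a concrete filter), the sketch below fuses a GNSS position fix and a wheel-speed reading into a one-dimensional constant-velocity state; all noise parameters are assumed values:

```python
# Hedged sketch of Kalman-filter fusion; a production filter would track the
# full pose, here a 1-D state [position, velocity] keeps the idea visible.
import numpy as np

def kf_step(x, P, z_pos, z_speed, dt, q=0.5, r_pos=2.0, r_speed=0.2):
    F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity model
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])            # process noise
    x, P = F @ x, F @ P @ F.T + Q                  # predict over the IMU-timed interval
    H = np.eye(2)                                  # observe position (GNSS) and speed (wheel)
    R = np.diag([r_pos, r_speed])
    z = np.array([z_pos, z_speed])
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (z - H @ x)                        # measurement update
    P = (np.eye(2) - K @ H) @ P
    return x, P
```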
Then, the position, posture, speed, and similar information output by the previous step are matched against the road data in the map database; common matching methods include the HMM (hidden Markov model) and machine learning methods. This step outputs the road on which the vehicle is currently traveling and the gradient information of that road (i.e., the road data of the current running road), and converts the longitude and latitude of the road data into the camera coordinate system (i.e., the actual coordinate system in which the target vehicle is located) according to the position and posture of the vehicle.
Next, an environment image within the preset range of the vehicle is collected through the camera; detection targets in the environment image, such as lane lines, arrows, and other vehicles, are detected; and the pixel coordinates (u, v) of each detection target are output. Fig. 10 is a schematic diagram of identifying detection targets with a camera according to an embodiment of the present application; an image recognition method or a machine learning method may be used to extract lane lines, arrows, other vehicles, and similar information (collectively referred to as detection targets, or targets, in the embodiments of the present application) from the environment image.
Finally, according to the gradient information of the road and the pixel coordinates of all the detection targets obtained in the steps, the positions of all the detection targets relative to the camera are calculated, namely the coordinate values of the detection targets in a camera coordinate system (namely the actual coordinate system in which the target vehicle is located).
When calculating the coordinate value of the detection target in the camera coordinate system, it may be assumed that the pixel coordinates of a detection target A in the image are (u, v), the slope angle of the current road is β, and the height of the camera above the ground is a known quantity h, as shown in Fig. 11, a schematic diagram of the relationship between the detection target and the camera provided in the embodiment of the present application. From the projection of the detection target into the camera, the following first equation (2-1) can be constructed:
    u = f_x · x_c / z_c
    v = f_y · y_c / z_c        (2-1)
where (x_c, y_c, z_c) represents the coordinate values of the detection target A in the actual coordinate system, i.e., x_c, y_c, and z_c are the coordinates of A on the respective axes of the camera coordinate system; z is the coordinate axis along the optical-center direction of the camera, i.e., depth relative to the camera; (u, v) represents the pixel coordinates of the detection target in the environment image; and f_x and f_y are internal parameters of the camera (i.e., the image acquisition device), which can be obtained by calibration; the camera calibration method is not repeated here.
By obtaining the slope angle β of the ground from the map database, the normal vector n of the sloped road surface in the camera coordinate system can be represented by the following formula (2-2):
    n = (0, cos β, sin β)^T        (2-2)
Since the height of the camera above the ground is h, the coordinates of the ground point G directly below the camera are (0, h, 0) in the camera coordinate system. The vector GA lies in the road plane and is therefore perpendicular to the normal vector n, which yields the second equation (2-3):
    cos β · (y_c − h) + sin β · z_c = 0        (2-3)
Then, the coordinate values of the detection target A can be obtained by jointly solving equation (2-1) and equation (2-3), i.e., (x_c, y_c, z_c) is obtained by calculation.
In the related art, the perceived lane line (white) narrows at the far end when the vehicle ascends or descends a slope, as shown in Fig. 12, a diagram of the purely perception-based result on a slope. With the scheme provided by the embodiment of the application, the perceived lines are parallel to the real ones, as shown in Fig. 13, a schematic diagram of the perception result after combining the map gradient data provided by the embodiment of the application.
It should be noted that the embodiment of the application assumes only that the optical axis of the camera is installed parallel to the horizontal plane. When the camera forms an angle with the horizontal plane, the same effect can be achieved by adapting the calculation; Fig. 14 is a schematic view of the scene in which the camera forms an angle with the horizontal plane, and the positioning method provided in the embodiment of the present application can also be used for the calculation in that case.
In addition, the above embodiment is described using the algorithm structure in which target detection and position calculation are separated, shown in Fig. 8. The positioning method of the embodiment of the present application may of course also be applied in a machine-learning-based manner, that is, with the algorithm structure of Fig. 9 in which target detection and position calculation share the same network model: the slope angle from the map can be input into the target detection network model for training, achieving the same effect.
It can be understood that, in the embodiments of the present application, where user information such as real-time positioning results and navigation paths, or data related to user or enterprise information, is involved, user permission or consent must be obtained when the embodiments are applied to specific products or technologies, or the information must be anonymized so as to remove its correspondence with the user. Related data collection and processing must strictly comply with the requirements of relevant national laws and regulations, obtain the informed consent or separate consent of the personal information subject, and carry out subsequent data use and processing within the scope authorized by laws, regulations, and the personal information subject.
Continuing with the description of the positioning device 354 provided in the embodiments of the present application, implemented as software modules. In some embodiments, as shown in Fig. 3, the positioning device 354 includes: a matching module 3541, configured to match road data of the current running road of a target vehicle from a preset map database based on the current driving parameters of the target vehicle; an image acquisition module 3542, configured to collect an environment image within a preset range of the target vehicle; a target detection module 3543, configured to perform target detection on the environment image to obtain the detection targets within the preset range and the pixel coordinates corresponding to each detection target in the environment image; a determining module 3544, configured to determine, based on the road data and the pixel coordinates, the coordinate values of each detection target in the actual coordinate system in which the target vehicle is located; and a real-time positioning module 3545, configured to position the target vehicle in real time based on the coordinate values.
In some embodiments, the matching module 3541 is further configured to: acquire the current running parameters of the target vehicle, the current running parameters comprising the running position, the running posture, and the running speed of the target vehicle; determine the current running road corresponding to the running position, the running posture, and the running speed from the map database; and acquire the road data of the current running road from the map database.
In some embodiments, the matching module 3541 is further configured to: acquire the running position of the target vehicle at the current moment through a satellite sensing device; acquire the running posture of the target vehicle at the current moment through an inertial measurement device; and acquire the running speed of the target vehicle at the current moment through a speed sensing device.
In some embodiments, the matching module 3541 is further configured to: perform information matching against the map database based on the running position, the running posture, and the running speed through a preset machine learning algorithm to obtain the road identifier of the current running road of the target vehicle; the machine learning algorithm comprises a machine learning algorithm corresponding to a hidden Markov model.
In some embodiments, the road data includes the gradient information of the current running road and the longitude and latitude information of the current running road, and the apparatus further comprises a conversion module configured to convert the longitude and latitude information into the actual coordinate system based on the running position and the running posture of the target vehicle, obtaining converted longitude and latitude information. The real-time positioning module 3545 is further configured to: perform real-time positioning of the target vehicle at a first positioning accuracy based on the converted longitude and latitude information; and, on the basis of the real-time positioning result of the first positioning accuracy, perform real-time positioning of the target vehicle at a second positioning accuracy based on the coordinate values; the second positioning accuracy is greater than the first positioning accuracy.
In some embodiments, the image acquisition module 3542 is further configured to: collect an environment image within the preset range of the target vehicle through the image acquisition device on the target vehicle; the actual coordinate system of the target vehicle is the coordinate system corresponding to the image acquisition device. In the actual coordinate system, the center of the image acquisition device is located at the origin, the optical-center direction of the image acquisition device is the first coordinate axis direction, the direction extending through the center of the image acquisition device and perpendicular to the horizontal plane is the second coordinate axis direction, and the direction perpendicular to the first and second coordinate axis directions is the third coordinate axis direction.
In some embodiments, the road data includes the gradient information of the current running road, and the determining module 3544 is further configured to: acquire the internal parameters of the image acquisition device; construct a first equation for solving the coordinate values in the actual coordinate system based on the internal parameters and the pixel coordinates; construct a second equation for solving the coordinate values in the actual coordinate system based on the gradient information of the current running road; and perform parameter solving based on the first equation and the second equation to obtain the coordinate values in the actual coordinate system.
In some embodiments, the determining module 3544 is further configured to: calibrate the parameters of the image acquisition device to obtain its internal parameters.
In some embodiments, the determining module 3544 is further configured to construct, based on the internal parameters and the pixel coordinates, the first equation as the following equation (1):
    u = f_x · x_c / z_c
    v = f_y · y_c / z_c        (1)
where (x_c, y_c, z_c) represents the coordinate values of the detection target in the actual coordinate system; (u, v) represents the pixel coordinates of the detection target in the environment image; and f_x and f_y represent the internal parameters of the image acquisition device.
In some embodiments, the gradient information of the current running road includes a slope angle formed between the current running road and the horizontal plane, and the determining module 3544 is further configured to construct, based on the gradient information of the current running road, the second equation as the following equation (2):
    cos β · (y_c − h) + sin β · z_c = 0        (2)
where h represents the height of the image acquisition device above the ground on which the target vehicle is located, and β represents the slope angle.
In some embodiments, the apparatus further comprises: a result acquisition module, configured to acquire the real-time positioning result at the current moment, the real-time positioning result being used to represent the real-time position of the target vehicle on the current running road; a strategy generation module, configured to generate a driving strategy for the target vehicle and a control instruction corresponding to the driving strategy based on the real-time position; and a control module, configured to send the control instruction to the target vehicle, the automatic driving module of the target vehicle controlling, in response to the control instruction, the target vehicle to drive automatically according to the driving strategy.
It should be noted that, the description of the apparatus in the embodiment of the present application is similar to the description of the embodiment of the method described above, and has similar beneficial effects as the embodiment of the method, so that a detailed description is omitted. For technical details not disclosed in the embodiments of the present apparatus, please refer to the description of the embodiments of the method of the present application for understanding.
Embodiments of the present application provide a computer program product including executable instructions, which are computer instructions stored in a computer-readable storage medium. When a processor of an electronic device reads the executable instructions from the computer-readable storage medium and executes them, the electronic device performs the methods described in the embodiments of the present application.
The present embodiments provide a storage medium having stored therein executable instructions that, when executed by a processor, cause the processor to perform a method provided by the embodiments of the present application, for example, the method as shown in fig. 4.
In some embodiments, the storage medium may be a computer-readable storage medium, such as a ferroelectric random access memory (FRAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory, a magnetic surface memory, an optical disk, or a compact disc read-only memory (CD-ROM); it may also be any device including one of, or any combination of, the above memories.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system, and may be stored as part of a file that holds other programs or data, for example in one or more scripts in a hypertext markup language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code). As an example, executable instructions may be deployed to be executed on one electronic device, on multiple electronic devices located at one site, or on multiple electronic devices distributed across multiple sites and interconnected by a communication network.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modifications, equivalent substitutions, improvements, etc. that are within the spirit and scope of the present application are intended to be included within the scope of the present application.

Claims (15)

1. A method of positioning, the method comprising:
based on the current running parameters of the target vehicle, matching road data of the current running road of the target vehicle from a preset map database;
collecting an environment image within a preset range of the target vehicle;
performing target detection on the environment image to obtain a detection target in the preset range and a pixel coordinate corresponding to the detection target in the environment image;
determining coordinate values of the detection target under an actual coordinate system where the target vehicle is located based on the road data and the pixel coordinates;
and positioning the target vehicle in real time based on the coordinate values.
2. The method according to claim 1, wherein the matching road data of the current running road of the target vehicle from a preset map database based on the current running parameters of the target vehicle comprises:
acquiring the current running parameters of the target vehicle, wherein the current running parameters comprise the running position, the running posture and the running speed of the target vehicle;
determining the current running road corresponding to the running position, the running posture and the running speed from the map database;
and acquiring the road data of the current running road from the map database.
3. The method of claim 2, wherein the acquiring the current running parameters of the target vehicle comprises:
acquiring the running position of the target vehicle at the current moment through a satellite sensing device;
acquiring the running posture of the target vehicle at the current moment through an inertial measurement device;
and acquiring the running speed of the target vehicle at the current moment through a speed sensing device.
4. The method of claim 2, wherein the determining, from the map database, the current running road corresponding to the running position, the running posture and the running speed comprises:
performing information matching from the map database based on the running position, the running posture and the running speed through a preset machine learning algorithm to obtain a road identifier of the current running road of the target vehicle; the machine learning algorithm comprises a machine learning algorithm corresponding to a hidden Markov model.
5. The method according to claim 1, wherein the road data includes gradient information of the current running road and longitude and latitude information of the current running road; the method further comprises:
converting the longitude and latitude information into the actual coordinate system based on the running position and the running posture of the target vehicle to obtain converted longitude and latitude information;
the real-time positioning of the target vehicle based on the coordinate values includes:
real-time positioning of the target vehicle with first positioning precision is performed based on the converted longitude and latitude information;
based on the real-time positioning result of the first positioning precision, performing real-time positioning of the second positioning precision on the target vehicle based on the coordinate value; the second positioning accuracy is greater than the first positioning accuracy.
6. The method of any one of claims 1 to 5, wherein the collecting an environment image within a preset range of the target vehicle comprises:
acquiring an environment image within a preset range of the target vehicle through image acquisition equipment on the target vehicle;
the actual coordinate system of the target vehicle is a coordinate system corresponding to the image acquisition equipment; in the actual coordinate system, the center of the image acquisition device is located at the origin of the actual coordinate system, the optical center direction of the image acquisition device is a first coordinate axis direction of the actual coordinate system, a direction extending along the center of the image acquisition device and perpendicular to a horizontal plane is a second coordinate axis direction of the actual coordinate system, and a direction perpendicular to the first coordinate axis direction and the second coordinate axis direction is a third coordinate axis direction of the actual coordinate system.
7. The method of claim 6, wherein the road data includes gradient information of the current running road; and the determining, based on the road data and the pixel coordinates, coordinate values of the detection target in the actual coordinate system in which the target vehicle is located comprises:
acquiring internal parameters of the image acquisition equipment;
constructing a first equation for solving the coordinate values in the actual coordinate system based on the internal parameters and the pixel coordinates;
constructing a second equation for solving the coordinate values in the actual coordinate system based on the gradient information of the current running road;
and carrying out parameter solving based on the first equation and the second equation to obtain coordinate values under the actual coordinate system.
8. The method of claim 7, wherein the acquiring the internal parameters of the image acquisition device comprises:
and calibrating parameters of the image acquisition equipment to obtain internal parameters of the image acquisition equipment.
9. The method of claim 7, wherein constructing a first equation for solving the coordinate values in the actual coordinate system based on the internal parameters and the pixel coordinates comprises:
constructing, based on the internal parameters and the pixel coordinates, the first equation as the following equation (1):
    u = f_x · x_c / z_c
    v = f_y · y_c / z_c        (1)
where (x_c, y_c, z_c) represents the coordinate values of the detection target in the actual coordinate system; (u, v) represents the pixel coordinates of the detection target in the environment image; and f_x and f_y represent the internal parameters of the image acquisition device.
10. The method according to claim 9, wherein the gradient information of the current running road includes a slope angle formed between the current running road and a horizontal plane; and the constructing a second equation for solving the coordinate values in the actual coordinate system based on the gradient information of the current running road comprises:
constructing, based on the gradient information of the current running road, the second equation as the following equation (2):
    cos β · (y_c − h) + sin β · z_c = 0        (2)
where h represents the height of the image acquisition device above the ground on which the target vehicle is located, and β represents the slope angle.
11. The method according to any one of claims 1 to 5, further comprising:
acquiring a real-time positioning result at the current time; the real-time positioning result is used for representing the real-time position of the target vehicle on the current driving road;
Generating a driving strategy for the target vehicle and a control instruction corresponding to the driving strategy based on the real-time position;
and sending the control instruction to the target vehicle, and controlling the target vehicle to automatically drive according to the driving strategy by an automatic driving module of the target vehicle in response to the control instruction.
12. A positioning device, the device comprising:
the matching module is used for matching road data of the current running road of the target vehicle from a preset map database based on the current running parameters of the target vehicle;
the image acquisition module is used for acquiring an environment image within the preset range of the target vehicle;
the target detection module is used for carrying out target detection on the environment image to obtain a detection target in the preset range and a pixel coordinate corresponding to the detection target in the environment image;
the determining module is used for determining coordinate values of the detection target under an actual coordinate system where the target vehicle is located based on the road data and the pixel coordinates;
and the real-time positioning module is used for positioning the target vehicle in real time based on the coordinate values.
13. An electronic device, comprising:
a memory for storing executable instructions; a processor for implementing the positioning method according to any one of claims 1 to 11 when executing executable instructions stored in said memory.
14. A computer readable storage medium, characterized in that executable instructions are stored for causing a processor to execute the executable instructions for implementing the positioning method according to any one of claims 1 to 11.
15. A computer program product or computer program comprising executable instructions stored in a computer readable storage medium;
the positioning method of any of claims 1 to 11 is implemented when a processor of an electronic device reads the executable instructions from the computer readable storage medium and executes the executable instructions.