WO2023050646A1 - System and method for device positioning (设备定位的系统和方法) - Google Patents


Info

Publication number
WO2023050646A1
WO2023050646A1 (PCT/CN2022/071041)
Authority
WO
WIPO (PCT)
Prior art keywords
target device
determined
scene
positioning information
key
Prior art date
Application number
PCT/CN2022/071041
Other languages
English (en)
French (fr)
Inventor
黄超
孟泽楠
Original Assignee
上海仙途智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海仙途智能科技有限公司
Publication of WO2023050646A1

Classifications

    • G01S 19/47: Determining position by combining measurements of signals from a satellite radio beacon positioning system (e.g. GPS, GLONASS or GALILEO) with a supplementary inertial measurement, e.g. tightly coupled inertial
    • G01C 21/165: Navigation by dead reckoning, i.e. by integrating acceleration or speed aboard the object being navigated (inertial navigation), combined with non-inertial navigation instruments
    • G01C 3/00: Measuring distances in line of sight; optical rangefinders
    • G01P 3/00: Measuring linear or angular speed; measuring differences of linear or angular speeds
    • G01S 17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S 17/931: Lidar systems specially adapted for anti-collision purposes of land vehicles

Definitions

  • The embodiments of this specification relate to the field of positioning, and in particular to a system and method for device positioning.
  • An automatic driving device usually needs to obtain its position on a map in real time in order to control its moving direction. Usually a high-precision map is made in advance, and the position of the device in the high-precision map is obtained through a variety of sensor fusion methods to complete the positioning.
  • For example, sensor fusion methods may combine a Global Navigation Satellite System (GNSS), wheel speed sensors, inertial sensors, lidar, and cameras.
  • However, existing positioning methods may fail in some environments. For example, in a tunnel the GNSS signal is poor, so it is difficult to communicate with satellites and obtain positioning data; because the environmental features of a tunnel are self-similar (the tunnel walls look alike everywhere), it is difficult for the lidar and the camera to extract reliable environmental features for positioning; and positioning with only the wheel speed sensor and inertial sensor accumulates a large error.
  • the embodiments of this specification provide a system and method for device positioning.
  • The technical solution is as follows.
  • A device positioning system is provided, which acquires in advance the correspondence between key scenes on a fixed route and global positioning information. The system includes: a judging unit, used to judge whether a target device, which is on the fixed route, is currently in any key scene; a global positioning unit, configured to determine the global positioning information corresponding to the current key scene as the current positioning information of the target device when it is determined that the target device is currently in a key scene; and a local positioning unit, configured to, when it is determined that the target device is not currently in any key scene, determine the current displacement of the target device and obtain the starting-point positioning information from which that displacement has been monitored, and then determine the current positioning information of the target device based on the determined displacement and the starting-point positioning information.
  • A device positioning method is also provided, which acquires in advance the correspondence between key scenes on a fixed route and global positioning information.
  • The method includes: judging whether a target device, which is on the fixed route, is currently in any key scene; when it is determined that the target device is currently in a key scene, determining the global positioning information corresponding to that key scene as the current positioning information of the target device; and when it is determined that the target device is not currently in any key scene, determining the current displacement of the target device, obtaining the starting-point positioning information from which that displacement has been monitored, and determining the current positioning information of the target device based on the determined displacement and the starting-point positioning information.
  • The above technical solution locates the target device by judging whether it is currently in a key scene and, if so, using the global positioning information corresponding to that scene; in non-key scenes it locates the device from its displacement. In this way, target devices can be positioned on a fixed route even where existing positioning methods are difficult to apply.
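The decision made by this method can be sketched in Python (an illustration only; the `KeyScene` type, the feature labels, and the `match_key_scene` helper are assumptions made for the sketch, not part of the claimed method):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class KeyScene:
    name: str
    features: str                  # scene feature, simplified to a label
    position: Tuple[float, float]  # global positioning information

def match_key_scene(features: str, key_scenes) -> Optional[KeyScene]:
    # Simplified matcher: exact equality stands in for the feature
    # similarity comparison described in the specification.
    for scene in key_scenes:
        if scene.features == features:
            return scene
    return None

def locate(features, key_scenes, displacement, start_position):
    """Current positioning information of the target device.

    In a key scene, the scene's global positioning information is used
    directly; otherwise the position is the starting-point positioning
    information plus the displacement monitored since that point.
    """
    scene = match_key_scene(features, key_scenes)
    if scene is not None:             # global positioning unit
        return scene.position
    (x0, y0), (dx, dy) = start_position, displacement
    return (x0 + dx, y0 + dy)         # local positioning unit
```

For example, with a fork registered at (3.0, 0.0), matching the fork's features returns (3.0, 0.0) exactly, while an unmatched scene falls back to starting point plus displacement.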
  • Fig. 1 is a schematic diagram of a fixed route provided by an embodiment of this specification.
  • Fig. 2 is a schematic diagram of another fixed route provided by an embodiment of this specification.
  • Fig. 3 is a schematic structural diagram of a device positioning system provided by an embodiment of this specification.
  • Fig. 4 is a schematic diagram of the principle of scene feature acquisition provided by an embodiment of this specification.
  • Fig. 5 is a schematic diagram of the principle of device positioning provided by an embodiment of this specification.
  • Fig. 6 is a schematic flow chart of a device positioning method provided by an embodiment of this specification.
  • Fig. 7 is a schematic structural diagram of a device for implementing the method of an embodiment of this specification.
  • As mentioned above, the automatic driving device usually needs to obtain its position in the map in real time in order to control its moving direction. Usually a high-precision map is made in advance, and the position of the device in the high-precision map is obtained through a variety of sensor fusion methods to complete the positioning.
  • For example, a Global Navigation Satellite System (GNSS), wheel speed sensors, inertial sensors, lidar, and cameras can be utilized.
  • Different sensors can obtain different location information to help positioning.
  • For example, the GNSS obtains a position from satellites; the lidar and the camera capture environmental features, which are compared with the high-precision map to obtain a precise position; and the wheel speed sensor and the inertial sensor measure speed and acceleration to assist in positioning the automatic driving device.
  • However, existing positioning methods may fail in some environments. For example, in a tunnel the GNSS signal is poor, so it is difficult to communicate with satellites and obtain positioning data; because the environmental features of a tunnel are self-similar (the tunnel walls look alike everywhere), it is difficult for the lidar and the camera to extract reliable environmental features for positioning; and positioning with only the wheel speed sensor and inertial sensor accumulates a large error.
  • The automatic driving device itself may have various application functions, such as transportation and cleaning.
  • the automatic driving equipment may include: automatic driving transportation equipment, automatic driving cleaning equipment, and the like.
  • When such automatic driving equipment needs to clean inside a tunnel, it can usually obtain the map of the tunnel in advance and be given a cleaning task.
  • The cleaning task may specify the tunnel route to be cleaned. For the automatic driving cleaning equipment to follow the cleaning route, it must be able to position itself, so as to know its position in the tunnel and determine its moving direction.
  • the embodiment of this specification provides a device positioning system.
  • The device may specifically be one for which existing positioning methods are difficult or impossible to use: either the environment where the device is located makes existing positioning methods unreliable, or the device forgoes them to save costs.
  • the equipment can move on a fixed route, and the fixed route can specifically be a tunnel, a track, etc.
  • the fixed route may include one or more routes.
  • Fig. 1 is a schematic diagram of a fixed route provided by an embodiment of this specification. It contains three bifurcations, bifurcations 1-3.
  • Since the route is fixed, when the device moves on a single route it can simply move in the predetermined moving direction.
  • For example, self-driving sweeping equipment can move through a single tunnel to sweep it.
  • At a bifurcation, however, the device usually needs to select a subsequent moving route.
  • a fixed route can be obtained in advance. Specifically, it may be obtaining a complete map of a fixed route, for example, obtaining a complete map of a tunnel.
  • The fixed route can include key scenes with prominent scene characteristics, for example: the bifurcation scenes of the fixed route, the starting-point scene of the fixed route, the end-point scene of the fixed route, etc.
  • These key scenes can be identified by the device positioning system through scene recognition.
  • the corresponding relationship between key scenes in the fixed route and global positioning information can be determined in advance.
  • the global positioning information corresponding to the key scene may specifically be the specific position of the key scene in the fixed route, or the position information of the key scene itself.
  • For example, the global positioning information corresponding to a certain key scene may be its position along the fixed route, such as the Nth fork; or, taking a certain point as the coordinate origin, coordinates such as (4,5).
  • With the global positioning information corresponding to key scenes, the device positioning system can be assisted in locating, improving the accuracy of positioning.
  • When the device is in a key scene, the corresponding global positioning information may be directly determined as the current positioning information of the device.
  • When the device is not in a key scene, positioning may be performed according to previous positioning information and displacement; specifically, according to the displacement amount and the last determined positioning information.
  • Thus the device can be positioned regardless of whether it is in a key scene, and the global positioning information corresponding to key scenes corrects the positioning information determined from displacement, improving its accuracy.
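As an illustrative sketch (class and method names are assumptions, not from the specification), this correction mechanism amounts to dead reckoning that is reset whenever a key scene is recognized:

```python
class DeadReckoner:
    """Integrates displacement between key scenes; a recognized key
    scene replaces the estimate with its known global coordinates,
    which bounds the error accumulated from odometry drift."""

    def __init__(self, start_position):
        # Starting-point positioning information from which the
        # displacement is monitored.
        self.position = start_position

    def move(self, dx, dy):
        x, y = self.position
        self.position = (x + dx, y + dy)

    def correct(self, scene_position):
        # Global positioning information of the key scene overrides
        # the dead-reckoned estimate.
        self.position = scene_position

# Example with coordinates in the style of Fig. 2: start A(0,0), fork B(3,0).
dr = DeadReckoner((0.0, 0.0))
dr.move(2.0, 0.0)        # 2 m along the route: estimate (2.0, 0.0)
dr.move(1.1, 0.0)        # odometry overshoots slightly
dr.correct((3.0, 0.0))   # fork B recognized: reset to B's coordinates
```

The design point is that displacement error grows without bound between resets, so positioning accuracy is governed by the spacing of key scenes along the route.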
  • the device positioning system provided by the embodiment of this specification may be configured on the device, or may not be configured on the device, but assists the device in positioning through information transmission.
  • Fig. 2 is a schematic diagram of another fixed route provided by an embodiment of this specification, which includes 4 key scenes with their corresponding global positioning information: A(0,0), B(3,0), C(6,1) and D(6,0).
  • A may be a starting point of the fixed route
  • B may be a fork in the fixed route
  • C and D are both end points of the fixed route.
  • When the automatic driving device is determined to be at the starting point A, (0,0) may be directly determined as its positioning information.
  • After the device has moved 2 meters from A, the positioning information can be determined as (2,0) from the displacement of 2 meters and the moving direction.
  • When the device is determined to be at the fork B, (3,0) can be directly determined as its positioning information.
  • The automatic driving device may specifically be an automatic driving cleaning device, with a preset cleaning route from A to B and then to C. Since two different routes diverge at B, when it is determined that the automatic driving device is currently at B, the moving direction can be determined, according to the preset cleaning route, as moving toward C.
  • Fig. 3 is a schematic structural diagram of a device positioning system provided by an embodiment of this specification.
  • the device positioning system can pre-acquire the correspondence between key scenarios in fixed routes and global positioning information. Among them, the equipment that needs to be positioned can move on a fixed route.
  • When the device positioning system is configured on a device, it performs positioning for that device. When it is not configured on any device, it can locate one or more devices moving on a fixed route.
  • any device is referred to as a target device.
  • The target device may be any device moving along the fixed route, or any device on the fixed route that has the ability to move along it.
  • a device location system may include the following units.
  • the judging unit 101 is configured to judge whether the target device is currently in any critical scene.
  • the global positioning unit 102 is configured to determine the global positioning information corresponding to the current key scene as the current positioning information of the target device when it is determined that the target device is currently in any key scene.
  • The local positioning unit 103 is configured to: when it is determined that the target device is not currently in any key scene, determine the current displacement of the target device and obtain the starting-point positioning information corresponding to the determined displacement (the displacement has been monitored starting from that starting point); and determine the current positioning information of the target device based on the determined displacement and the starting-point positioning information.
  • The above system locates the target device by judging whether it is currently in a key scene, combined with the global positioning information corresponding to the key scene; in non-key scenes it locates the device from its displacement. Thus the target device can be positioned on a fixed route where existing positioning methods are difficult to use, improving the accuracy of the positioning information.
  • the target device may be configured with a device positioning system, and may also establish a data transmission channel with the device positioning system, so as to facilitate positioning through data transmission.
  • the judging unit 101 may be configured to judge whether the target device is currently in any critical scene by means of scene recognition. Specifically, it can be used to obtain the features of the scene where the target device is currently located, and determine whether the target device is currently in any key scene by judging whether the features of the current scene match the features of any key scene.
  • The features of the current scene are considered to match the features of a key scene when the two are the same, or when their similarity is greater than a preset similarity threshold.
  • the acquired correspondence between key scenes and global positioning information may specifically be a correspondence between features of key scenes and global positioning information.
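One way to realize that match rule (identical features, or similarity above a preset threshold) is sketched below; representing scene features as numeric vectors and using cosine similarity is an assumption made for illustration, not a requirement of the specification:

```python
import math

def similarity(f1, f2):
    """Cosine similarity between two numeric feature vectors."""
    dot = sum(a * b for a, b in zip(f1, f2))
    n1 = math.sqrt(sum(a * a for a in f1))
    n2 = math.sqrt(sum(b * b for b in f2))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def matches(current, key, preset_similarity=0.95):
    """Features match when identical, or when their similarity is
    greater than the preset similarity threshold."""
    return current == key or similarity(current, key) > preset_similarity
```

With this rule, identical features always match, nearly parallel feature vectors match via the threshold, and dissimilar (e.g. orthogonal) features do not.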
  • the key scenarios may include a bifurcation scenario, a starting point scenario, and an end point scenario.
  • Any scene that has recognizable and prominent scene features in the fixed route can be determined as a key scene.
  • For example, a scene with a damaged track can be determined as a damage scene, a collapse in a tunnel that blocks further progress can be determined as a collapse scene, a broken tunnel can be determined as an interruption scene, and so on.
  • In a branching scene of a track there are usually at least two tracks, and the camera can detect the change of the track ahead in the moving direction to determine whether the device is in a branching scene.
  • In a tunnel, the shape of the tunnel walls is distinctive: at a bifurcation the distance between the two tunnel walls usually increases, and the camera can detect this distance to determine whether the device is in a bifurcation scene.
  • The change between the outside and the inside of the tunnel is also significant: the light changes, and no tunnel wall is detectable outside the tunnel. The camera can therefore detect light changes or the tunnel wall to determine whether the device is in the starting-point scene.
  • The lidar can be used to detect obstacles ahead to determine whether the device is in the end-point scene.
  • In a branching scene, a unique scene feature may be determined from the branching directions of the different branch routes.
  • When the acquired scene features match multiple key scenes, the correct key scene may be further determined; this embodiment does not limit the specific method for doing so.
  • For example, the judgment may be made in combination with historical positioning information: the most recently determined historical positioning information is obtained, and the key scene closest to it is selected from the matched key scenes as the correct key scene.
  • Alternatively, the current positioning information of the target device may be determined in combination with the current displacement, and the key scene closest to that positioning information is selected as the correct key scene.
  • the judging unit 101 may include: a feature acquiring subunit 101a, configured to acquire features of a scene where the target device is currently located.
  • The feature matching subunit 101b is used to judge whether the acquired scene features match the features of any key scene: if they match no key scene, it is determined that the target device is not currently in any key scene; if they match a single key scene, it is determined that the target device is currently in that key scene; if they match multiple key scenes, one key scene is selected from the matched key scenes and the target device is determined to be in the selected key scene.
  • Specifically, the feature matching subunit 101b may be configured to: when the acquired scene features match the features of multiple key scenes, acquire the most recently determined historical positioning information, select from the matched key scenes the key scene closest to it, and determine that the target device is currently in the selected key scene.
  • The feature matching subunit 101b can also be used to: when the acquired scene features match the features of multiple key scenes, determine the current displacement of the target device and obtain the corresponding starting-point positioning information; determine the current positioning information of the target device based on the determined displacement and the starting-point positioning information; and select from the matched key scenes the key scene closest to the determined current positioning information, determining that the target device is currently in that scene.
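Both disambiguation strategies reduce to "pick the matched key scene nearest a reference position", where the reference is either the latest historical positioning information or a dead-reckoned current position. A minimal sketch, with an assumed dictionary representation of scenes:

```python
def pick_key_scene(matched_scenes, reference_position):
    """Among several matched key scenes, select the one whose global
    positioning information is closest to the reference position."""
    rx, ry = reference_position

    def dist_sq(scene):
        x, y = scene["position"]
        return (x - rx) ** 2 + (y - ry) ** 2

    return min(matched_scenes, key=dist_sq)

# The two end points C(6,1) and D(6,0) of Fig. 2 may present similar
# scene features; a dead-reckoned position of (5.8, 0.1) resolves the
# ambiguity in favour of D.
matched = [{"name": "C", "position": (6.0, 1.0)},
           {"name": "D", "position": (6.0, 0.0)}]
chosen = pick_key_scene(matched, (5.8, 0.1))
```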
  • the feature matching subunit 101b can also be used to trigger the execution of the local positioning unit 103 when it is determined that the target device is not currently in any critical scene.
  • When it is determined that the target device is currently in a key scene, execution of the global positioning unit 102 may be triggered.
  • The features of the scene where the target device is currently located may be acquired through a lidar or a camera configured on the target device.
  • For example, the distance between the target device and the tunnel wall can be scanned by lidar, so that in a bifurcation scene it can be determined that this distance gradually increases; the camera can also photograph the area ahead of the target device, so that in a track branching scene it can be determined that there are at least two tracks ahead.
  • Fig. 4 is a schematic diagram of the principle of scene feature acquisition provided by an embodiment of this specification. It includes 4 target devices, target devices 1-4.
  • the target device 1 moves in the tunnel, and the distance between the target device 1 and the tunnel walls on both sides is detected by the lidar.
  • At a bifurcation, the distance detected by the lidar generally increases. The distance between target device 1 and the tunnel walls on both sides, as detected by the lidar, can therefore be used as a scene feature: if the acquired distance increases to a preset distance, it can be determined that target device 1 is currently in a bifurcation scene.
  • the target device 2 moves on the track, and the number of tracks is detected by the camera.
  • At a track branch, the camera can detect multiple different tracks. The number of tracks detected by the camera can therefore be used as a scene feature: if more than one track is detected, it can be determined that target device 2 is currently in a branching scene.
  • Target device 3 moves in the tunnel, and a lidar or camera detects whether there is an obstacle ahead in its moving direction.
  • In an end-point scene or a collapse scene there is an impassable obstacle ahead of the moving direction. An obstacle detected by the lidar or the camera can therefore be used as a scene feature: if an obstacle is detected ahead of target device 3, it can be determined that target device 3 is currently in the end-point scene or the collapse scene, after which the specific key scene can be further determined.
  • Target device 4 moves in the tunnel, and a lidar or camera detects whether there are tunnel walls on both sides of it. In the starting-point scene, when target device 4 is outside the tunnel, no tunnel walls can be detected on either side. Whether tunnel walls are detected on both sides can therefore be used as a scene feature: if no tunnel walls are detected on either side of target device 4, it can be determined that target device 4 is currently in the starting-point scene.
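The four detectors of Fig. 4 can be summarized as one illustrative rule set; the thresholds, argument names, and priority order below are assumptions made for the sketch, not specified by the embodiments:

```python
def classify_scene(wall_distance=None, track_count=None,
                   obstacle_ahead=False, walls_present=True,
                   fork_distance=4.0):
    """Map raw sensor observations to a key-scene label (or None).

    - walls_present: no walls on either side suggests the starting-point
      scene outside the tunnel (target device 4).
    - obstacle_ahead: lidar/camera obstacle in the moving direction
      suggests the end-point or collapse scene (target device 3).
    - track_count: more than one track seen by the camera suggests a
      track branching scene (target device 2).
    - wall_distance: lidar distance to the tunnel walls widening past
      `fork_distance` suggests a tunnel bifurcation (target device 1).
    """
    if not walls_present:
        return "start"
    if obstacle_ahead:
        return "end_or_collapse"   # needs further disambiguation
    if track_count is not None and track_count > 1:
        return "fork"
    if wall_distance is not None and wall_distance >= fork_distance:
        return "fork"
    return None                    # not currently in any key scene
```

Returning `None` corresponds to the judging unit handing control to the local positioning unit, while any label hands control to the global positioning unit.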
  • In summary, the judging unit 101 can obtain the features of the scene where the target device is currently located through the sensors or other devices configured on the target device, compare and match them with the key scenes to judge whether the target device is currently in a key scene and, if so, which one, and then trigger the execution of different units for positioning according to the scene.
  • the target device needs to determine the moving direction and moving route according to the positioning information.
  • the target device may be an automatic driving device.
  • In a branching scene, the subsequent moving route and moving direction need to be determined according to the position; in an end-point scene, the automatic driving device cannot continue to move forward, so the moving direction needs to be re-determined.
  • the direction of movement needs to be determined based on positioning information.
  • the fixed route includes a starting point A, end points B and C, and a branch point D, and the fixed route includes two different routes, namely A-D-B and A-D-C.
  • the automatic driving cleaning equipment presets the cleaning route as A-D-C.
  • When the automatic driving cleaning equipment is in the bifurcation scene, it needs to determine its own position, so as to determine the moving direction and moving route.
  • the self-driving cleaning device determines that it is currently in D, it can determine that the moving direction is moving to C according to the preset cleaning route A-D-C.
  • the fixed route may be a tunnel. Therefore, on a single route in the fixed route, the moving direction may be determined according to the positioning information. For example, if it is determined that the target device is currently at a certain curve on a single route, the moving direction can be determined according to the angle of the curve.
  • the distance between the target device and the tunnel wall can be detected in real time, so that collision can be avoided.
  • the device positioning system may further include: a moving direction determining unit 104, configured to determine the moving direction of the target device on a fixed route according to the current positioning information of the target device.
  • the specific manner of determining the moving direction is not limited.
  • When the moving direction determining unit 104 determines the moving direction according to the current positioning information, the current positioning information represents the current position of the target device on the fixed route; from this the subsequent moving route of the target device can be determined, and the moving direction is then determined from that route.
  • The method for determining the subsequent moving route of the target device is not limited: it may be determined according to a preset algorithm, or it may be manually determined remotely.
  • the key scene may generally be a scene with prominent scene features, for example, a branch scene, a start point scene, an end point scene, a damage scene, and the like.
  • A bifurcation scene includes at least two routes. Therefore, in a bifurcation scene, one of the at least two routes included in the scene needs to be selected as the subsequent moving route of the target device.
  • The moving direction determining unit 104 may be configured to: when it is determined that the target device is in a branching scene, select, according to the global positioning information corresponding to that branching scene, one route from the at least two routes included in the branching scene as the moving route of the target device, and determine the target device's moving direction on the fixed route according to the selected route.
  • a specific manner of selecting a route from at least two routes included in the branching scene where the target device is currently located is not limited in this embodiment.
  • the route included in the designated route may be selected from at least two routes included in the branching scene where the target device is currently located, and determined as the subsequent moving route of the target device.
  • for a self-driving transportation device whose transportation route has been preset, after the current positioning information of the target device is determined, the route belonging to the transportation route can be directly selected and determined as the subsequent moving route of the target device.
  • a route may be selected according to a preset algorithm from the at least two routes included in the bifurcation scene where the target device is currently located. Specifically, among those routes, a route that the target device has not yet traveled may be determined as the subsequent moving route of the target device.
  • for a self-driving cleaning device that needs to clean all tunnels, tunnel routes that have already been cleaned may not be determined as the subsequent moving route, while an uncleaned tunnel route may be determined as the subsequent moving route.
  • the target device may return the current positioning information to the remote control device, and then select a route from at least two routes included in the branching scene where the target device is currently located according to the operation of the remote control device.
  • for a self-driving transportation device, the transportation demand on the track may be continuously updated, so the subsequent moving route of the device can be determined in real time through the remote control device.
  • the device positioning system can be used to periodically, irregularly or continuously locate the device, and determine the current positioning information of the device, specifically, the current position information of the device on a fixed route.
  • the device positioning system can perform positioning periodically, for example once every 2 seconds; it can also perform positioning aperiodically, for example positioning multiple times before approaching a special scene: positioning may be performed once every 10 seconds on a straight section in a tunnel, and once every 2 seconds when approaching a curve in the tunnel or another scene that requires a change of moving direction, or a key scene in the tunnel.
  • the judging unit 101 may be configured to periodically, continuously or aperiodically judge whether the target device is currently in any critical scenario.
  • the global positioning unit 102 can be used to, in the case where it is determined that the target device is currently in any key scene, determine the global positioning information corresponding to that key scene as the current positioning information of the target device.
  • the local positioning unit 103 can be used to, in the case where it is determined that the target device is not currently in any key scene, determine the current displacement of the target device and acquire the starting point positioning information corresponding to the determined displacement; the determined displacement is monitored starting from the starting point positioning information; based on the determined displacement and the starting point positioning information, the current positioning information of the target device is determined.
  • the acquisition of starting point location information corresponding to the determined displacement may be obtained from historical location information.
  • the historical positioning information may be the positioning information of the target device previously determined by the device positioning system.
  • since the target device is not currently in any key scene and cannot be precisely positioned from the global positioning information of a key scene, the displacement of the target device can be recorded through monitoring, so that the current positioning information of the target device, that is, its position in the fixed route, can be determined based on the starting point positioning information and the current displacement of the target device.
  • the determined displacement can be monitored starting from the starting point positioning information. Therefore, the current positioning information of the target device can usually be obtained by directly adding the determined displacement to the starting point positioning information.
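As an illustration, this addition of starting point positioning information and monitored displacement can be sketched as follows. The names `Position`, `route_id` and `offset_m` are illustrative assumptions (not from the specification), and a position is simplified to a one-dimensional offset along a single route:

```python
from dataclasses import dataclass

@dataclass
class Position:
    route_id: str      # which route of the fixed route the device is on
    offset_m: float    # distance along that route, in meters

def local_position(start: Position, displacement_m: float) -> Position:
    """Local positioning: add the displacement monitored since the
    starting point to the starting point positioning information."""
    return Position(start.route_id, start.offset_m + displacement_m)
```

For example, a device that started at offset 0 on route "A-B" and has moved 5 meters would be located at `Position("A-B", 5.0)`.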
  • the starting point positioning information may specifically be certain positioning information of the target device previously determined by the device positioning system.
  • the starting point positioning information can be updated. After the device positioning system determines the current positioning information of the target device, it can determine that current positioning information as the starting point positioning information, and start monitoring the displacement of the target device based on it.
  • after each positioning by the device positioning system, the starting point positioning information may be updated and monitoring of the displacement of the target device restarted.
  • the determined global positioning information can be used as the starting point positioning information to monitor the displacement of the target device.
  • the device positioning system may further include: a first monitoring unit 105, configured to monitor the displacement of the target device, and, after the current positioning information of the target device is determined, determine the current positioning information as the starting point positioning information and restart monitoring the displacement of the target device according to the determined starting point positioning information.
  • the first monitoring unit 105 can be used to re-determine the starting point positioning information after each determination of the current positioning information of the target device, and restart monitoring the displacement of the target device for use in the next positioning, specifically by the local positioning unit 103.
  • the device positioning system may further include: a second monitoring unit 106, configured to monitor the displacement of the target device, and, after the global positioning information corresponding to any key scene is determined as the current positioning information of the target device, determine the current positioning information as the starting point positioning information and restart monitoring the displacement of the target device according to the determined starting point positioning information.
  • various devices configured on the target device can be used for monitoring.
  • for example, wheel speed sensors, inertial sensors, etc.
  • the current displacement of the target device can be calculated by determining the speed and/or acceleration of the target device's movement and the duration of the target device's movement.
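A minimal sketch of that calculation, assuming the wheel speed sensor delivers timestamped speed samples and holding each speed constant until the next sample (a simple Riemann sum); this sampling format is an assumption made for illustration, not a detail from the specification:

```python
def displacement_from_wheel_speed(samples):
    """Approximate the displacement by integrating wheel-speed samples.

    `samples` is a list of (timestamp_s, speed_m_per_s) pairs sorted by
    timestamp; each speed is held constant until the next sample.
    """
    total = 0.0
    for (t0, v0), (t1, _) in zip(samples, samples[1:]):
        total += v0 * (t1 - t0)
    return total
```

A device moving at a constant 2 m/s sampled at t = 0 s, 1 s and 2 s yields a displacement of 4 meters over the two sampled intervals.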
  • the displacement of the target device can be monitored to facilitate the local positioning unit 103 to perform positioning according to the current displacement of the target device.
  • the above device positioning system can perform positioning and determine the current positioning information of the target device through the global positioning information corresponding to key scenes and by monitoring the displacement of the target device, regardless of whether the target device is currently in any key scene.
  • FIG. 5 is a schematic diagram of a device positioning principle provided by an embodiment of this specification.
  • the self-driving cleaning equipment is equipped with a positioning system, including a judgment unit, a global positioning unit and a local positioning unit.
  • the self-driving cleaning equipment can pre-obtain the tunnel map (that is, a fixed route) and pre-configure the cleaning task.
  • the specific cleaning task can be a pre-configured cleaning route, or cleaning all tunnels.
  • the automatic driving cleaning equipment can also pre-acquire the correspondence between key scenes in the tunnel map and global positioning information.
  • FIG. 5 includes the tunnel map and the key scenes A, B, C and D in it, where A is the start point scene, B is the bifurcation scene, and C and D are the end point scenes.
  • in the tunnel route from A to B, there is a curve, and the distance from A to the curve is 5 meters.
  • the self-driving cleaning equipment can move and clean in the tunnel, and can monitor in real time the features of the scene where it is currently located.
  • lidar and cameras can be used to obtain various information, including the distance between the automatic driving cleaning equipment and the tunnel wall.
  • the automatic driving cleaning equipment can also determine the displacement through the wheel speedometer and inertial sensor.
  • after the cleaning equipment determines its positioning information at A, it can start monitoring the displacement.
  • when the cleaning equipment is positioned for the third time and it is determined that the distance between the equipment and the tunnel walls is greater than the preset distance, it can be determined that the equipment is currently in a bifurcation scene. There is only one bifurcation scene B in the tunnel, so it can be determined that the equipment is currently at B.
  • since the bifurcation scene contains two routes, one of them needs to be selected as the moving route. In the case where the cleaning task includes a preconfigured cleaning route A-B-C, the route toward C can be directly selected as the moving route.
  • the route of the tunnel that has not been cleaned can be randomly selected as the moving route. For example, if the routes A-B-C and A-B-D have not been cleaned, a route can be randomly selected as the moving route; if A-B-C has been cleaned, but A-B-D has not been cleaned, A-B-D can be selected as the moving route.
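The two selection strategies described for the cleaning example (follow a preconfigured route if one exists, otherwise pick a route that has not been cleaned yet) might be sketched as follows. The function and parameter names are illustrative assumptions, and routes are represented as plain strings:

```python
def choose_route(candidate_routes, cleaned_routes=(), preset_route=None):
    """Select the next moving route at a bifurcation.

    Prefer a candidate that belongs to the preconfigured route, if one
    is set; otherwise return any candidate not yet cleaned; return
    None when every candidate has already been cleaned.
    """
    if preset_route is not None:
        for route in candidate_routes:
            if route in preset_route:
                return route
    for route in candidate_routes:
        if route not in cleaned_routes:
            return route
    return None
```

With candidates "B-C" and "B-D": a preset route containing "B-C" selects "B-C"; with no preset route and "B-C" already cleaned, "B-D" is selected.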
  • after the moving route is determined, the moving direction can be determined according to the fixed-route tunnel, and movement can continue.
  • in addition to the above self-driving equipment, other devices moving on fixed routes can also be positioned using the system described in the above system embodiment.
  • means of transportation in tunnels, specifically cars, trains, bicycles, etc., all need to be positioned in tunnels, and can all be positioned there using the system described in the above system embodiments.
  • in a bifurcation scene or another key scene, if a moving route needs to be selected, it can be selected by the controller of the means of transportation. For example, at a fork, the driver of a car can continue moving along the selected moving route after positioning.
  • the embodiment of this specification also provides a method embodiment. As shown in FIG. 6 , it is a schematic flowchart of a method for locating a device provided in the embodiment of this specification.
  • the device positioning method may specifically include the following steps.
  • S201: Determine whether the target device is currently in any key scene. If it is determined that the target device is currently in any key scene, perform S202; if it is determined that the target device is not currently in any key scene, perform S203.
  • the target device may be on a fixed route.
  • S202: Determine the global positioning information corresponding to the current key scene as the current positioning information of the target device.
  • S203: Determine the current displacement of the target device, and acquire the starting point positioning information corresponding to the determined displacement; based on the determined displacement and the starting point positioning information, determine the current positioning information of the target device.
  • the determined displacement starts to be monitored from the starting point positioning information.
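The branch between S202 and S203 can be sketched as one function, with positions simplified to (x, y) tuples as in the FIG. 2 example; the function signature is an illustrative assumption:

```python
def locate(scene_match, key_scene_positions, start_position, displacement):
    """One pass of S201-S203.

    If the current scene matches a key scene (S201), return that
    scene's global positioning information (S202); otherwise add the
    monitored displacement to the starting point positioning
    information (S203). Positions are (x, y) tuples and the
    displacement is a (dx, dy) tuple.
    """
    if scene_match is not None:                 # S201 -> S202
        return key_scene_positions[scene_match]
    sx, sy = start_position                     # S201 -> S203
    dx, dy = displacement
    return (sx + dx, sy + dy)
```

This mirrors the FIG. 2 example: at key scene B the position is read directly as (3, 0), while between key scenes a 2-meter displacement from A(0, 0) yields (2, 0).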
  • judging whether the target device is currently in any key scene may specifically include: acquiring the features of the scene where the target device is currently located, and judging whether the acquired scene features match the features of any key scene.
  • in the case where the acquired scene features match multiple key scenes, one key scene is selected from the matched key scenes, and it is determined that the target device is currently in the selected key scene.
  • selecting a key scene from the matched key scenes may specifically include: acquiring the most recently determined historical positioning information, and selecting from the matched multiple key scenes the key scene closest to that historical positioning information; or determining the current displacement of the target device, acquiring the starting point positioning information corresponding to the determined displacement, determining the current positioning information of the target device based on the determined displacement and the starting point positioning information, and selecting from the matched multiple key scenes the key scene closest to the determined current positioning information.
  • when the acquired scene features match multiple key scenes, the closest key scene can be selected from the matched key scenes by means of historical positioning information or local positioning, so that the correct key scene can be determined.
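The nearest-key-scene selection can be sketched as follows, assuming each matched key scene's global positioning information is a 2-D coordinate; the names are illustrative:

```python
import math

def pick_key_scene(matched_scenes, reference_position):
    """Disambiguate multiple matched key scenes.

    `matched_scenes` maps scene names to (x, y) global positioning
    coordinates; the scene closest to `reference_position` (the latest
    historical position, or a locally computed current position) wins.
    """
    return min(
        matched_scenes,
        key=lambda name: math.dist(matched_scenes[name], reference_position),
    )
```

For instance, if end point scenes C(6, 1) and D(6, 0) both match and the last known position was near (5.8, 0.1), D is selected.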
  • the device positioning method may further include: S204: Determine the moving direction of the target device on the fixed route according to the current positioning information of the target device.
  • S204 may be performed after determining the current location information of the target device, specifically, it may be performed after S202 and S203.
  • the key scenario may include a bifurcation scenario
  • the bifurcation scenario may include at least two routes in the fixed routes.
  • determining the moving direction of the target device on the fixed route may include: in the case of determining that the target device is in any bifurcation scene, according to the global positioning information corresponding to the bifurcation scene where the target device is currently located, selecting one route from the at least two routes included in that bifurcation scene, determining it as the moving route of the target device, and determining the moving direction of the target device on the fixed route according to the determined moving route.
  • the subsequent moving direction of the target device may be determined through the positioning information. Specifically, after the moving route is selected in the bifurcation scene, the subsequent moving direction may be determined according to the selected moving route.
  • the device positioning method may further include: S205: Monitor the displacement of the target device.
  • S205 can be executed in parallel with S201-S204. Since S205 includes monitoring the displacement of the target device, S205 can be continuously executed.
  • the device positioning method may further include: S206: After determining the current positioning information of the target device, determine the current positioning information as the starting point positioning information, and restart monitoring the displacement of the target device according to the determined starting point positioning information. S206 may be performed after the current positioning information of the target device is determined, specifically after S202 and S203.
  • the device positioning method may further include: S207: After determining the global positioning information corresponding to any key scene as the current positioning information of the target device, determine the current positioning information as the starting point positioning information, and according to the determined starting point Positioning information starts monitoring the displacement of the target device again.
  • S207 may be performed after any global positioning information is determined as the current positioning information of the target device, specifically, it may be performed after S202.
  • the above embodiment of the device positioning method can use the global positioning information corresponding to the key scene and monitor the displacement of the target device to determine the current positioning information of the target device regardless of whether the target device is currently in any key scene.
  • the embodiment of the present specification also provides a computer device, which at least includes a memory, a processor, and a computer program stored in the memory and operable on the processor, wherein a method for locating a device is implemented when the processor executes the program.
  • FIG. 7 shows a schematic diagram of a more specific hardware structure of a computer device provided by the embodiment of this specification.
  • the device may include: a processor 1010 , a memory 1020 , an input/output interface 1030 , a communication interface 1040 and a bus 1050 .
  • the processor 1010 , the memory 1020 , the input/output interface 1030 and the communication interface 1040 are connected to each other within the device through the bus 1050 .
  • the processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is used to execute related programs to realize the technical solutions provided by the embodiments of this specification.
  • the memory 1020 can be implemented in the form of ROM (Read Only Memory, read-only memory), RAM (Random Access Memory, random access memory), static storage device, dynamic storage device, etc.
  • the memory 1020 can store operating systems and other application programs. When implementing the technical solutions provided by the embodiments of this specification through software or firmware, the relevant program codes are stored in the memory 1020 and invoked by the processor 1010 for execution.
  • the input/output interface 1030 is used to connect the input/output module to realize information input and output.
  • the input/output module can be configured in the device as a component (not shown in the figure), or can be externally connected to the device to provide corresponding functions.
  • the input device may include a keyboard, mouse, touch screen, microphone, various sensors, etc.
  • the output device may include a display, a speaker, a vibrator, an indicator light, and the like.
  • the communication interface 1040 is used to connect a communication module (not shown in the figure), so as to realize the communication interaction between the device and other devices.
  • the communication module can realize communication through wired means (such as USB, network cable, etc.), and can also realize communication through wireless means (such as mobile network, WIFI, Bluetooth, etc.).
  • Bus 1050 includes a path that carries information between the various components of the device (eg, processor 1010, memory 1020, input/output interface 1030, and communication interface 1040).
  • the above device only shows the processor 1010, the memory 1020, the input/output interface 1030, the communication interface 1040 and the bus 1050, in the specific implementation process, the device may also include other components.
  • the above-mentioned device may only include components necessary to implement the solutions of the embodiments of this specification, and does not necessarily include all the components shown in the figure.
  • the embodiment of this specification also provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, a method for locating a device is implemented.
  • computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology.
  • Information may be computer readable instructions, data structures, modules of a program, or other data.
  • examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cartridge, magnetic disk storage or other magnetic storage device, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • computer-readable media excludes transitory computer-readable media, such as modulated data signals and carrier waves.
  • a typical implementing device is a computer, which may take the form of a personal computer, laptop computer, cellular phone, camera phone, smart phone, personal digital assistant, media player, navigation device, e-mail device, game console, desktop computer, tablet computer, wearable device, or a combination of any of these devices.
  • each embodiment in this specification is described in a progressive manner, the same and similar parts of each embodiment can be referred to each other, and each embodiment focuses on the differences from other embodiments.
  • the description is relatively simple, and for relevant parts, please refer to part of the description of the method embodiment.
  • the device embodiments described above are only illustrative; the modules described as separate components may or may not be physically separated, and the functions of the modules may be implemented in one or more pieces of software and/or hardware. Some or all of the modules can also be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those skilled in the art can understand and implement without creative effort.

Abstract

A system and method for device positioning. The method includes: judging whether a target device is currently in any key scene (S201), the target device being on a fixed route; in the case where it is determined that the target device is currently in any key scene, determining the global positioning information corresponding to the key scene where it is currently located as the current positioning information of the target device (S202); in the case where it is determined that the target device is not currently in any key scene, determining the current displacement of the target device and acquiring the starting point positioning information corresponding to the determined displacement, the determined displacement being monitored starting from the starting point positioning information, and determining the current positioning information of the target device based on the determined displacement and the starting point positioning information (S203).

Description

System and method for device positioning

Technical Field

The embodiments of this specification relate to the field of positioning, and in particular to systems and methods for device positioning.

Background

At present, many kinds of devices have positioning requirements. For example, a self-driving device usually needs to acquire its position on a map in real time in order to control its moving direction. Usually, a high-precision map needs to be produced in advance, and the position of the device in the high-precision map is acquired through multi-sensor fusion to complete positioning. For example, a Global Navigation Satellite System (GNSS), wheel speed sensors, inertial sensors, lidar and cameras can be used.

However, in some scenes, existing positioning methods may be difficult to apply. For example, in a tunnel, the signal is poor, so the GNSS system can hardly communicate and cannot acquire positioning data; since the environmental features in a tunnel are similar (the tunnel walls all look alike), it is also difficult for lidar and cameras to extract reliable environmental features for positioning; and positioning using only wheel speed sensors and inertial sensors has a large error.

Summary

In order to solve the technical problem that devices are difficult to position, the embodiments of this specification provide a system and method for device positioning. The technical solutions are as follows.

A device positioning system, which acquires in advance the correspondence between key scenes in a fixed route and global positioning information, the system including: a judging unit, configured to judge whether a target device is currently in any key scene, the target device being on the fixed route; a global positioning unit, configured to, in the case where it is determined that the target device is currently in any key scene, determine the global positioning information corresponding to the key scene where it is currently located as the current positioning information of the target device; and a local positioning unit, configured to, in the case where it is determined that the target device is not currently in any key scene, determine the current displacement of the target device and acquire the starting point positioning information corresponding to the determined displacement, the determined displacement being monitored starting from the starting point positioning information, and determine the current positioning information of the target device based on the determined displacement and the starting point positioning information.

A device positioning method, in which the correspondence between key scenes in a fixed route and global positioning information is acquired in advance, the method including: judging whether a target device is currently in any key scene, the target device being on the fixed route; in the case where it is determined that the target device is currently in any key scene, determining the global positioning information corresponding to the key scene where it is currently located as the current positioning information of the target device; in the case where it is determined that the target device is not currently in any key scene, determining the current displacement of the target device and acquiring the starting point positioning information corresponding to the determined displacement, the determined displacement being monitored starting from the starting point positioning information, and determining the current positioning information of the target device based on the determined displacement and the starting point positioning information.

In the above technical solution, positioning can be performed by judging whether the target device is currently in a key scene and combining the global positioning information corresponding to that key scene; in non-key scenes, positioning can also be performed by combining the displacement, so that the target device can be positioned on a fixed route even when existing positioning methods are difficult to use.
Brief Description of the Drawings

In order to describe the technical solutions in the embodiments of this specification or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments recorded in this specification, and those of ordinary skill in the art can obtain other drawings based on these drawings.

FIG. 1 is a schematic diagram of a fixed route provided by an embodiment of this specification;

FIG. 2 is a schematic diagram of another fixed route provided by an embodiment of this specification;

FIG. 3 is a schematic structural diagram of a device positioning system provided by an embodiment of this specification;

FIG. 4 is a schematic diagram of the principle of scene feature acquisition provided by an embodiment of this specification;

FIG. 5 is a schematic diagram of the principle of device positioning provided by an embodiment of this specification;

FIG. 6 is a schematic flowchart of a device positioning method provided by an embodiment of this specification;

FIG. 7 is a schematic structural diagram of a device for configuring the method of the embodiments of this specification.

Detailed Description

In order to enable those skilled in the art to better understand the technical solutions in the embodiments of this specification, the technical solutions in the embodiments of this specification will be described in detail below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of this specification, not all of them. Based on the embodiments in this specification, all other embodiments obtained by those of ordinary skill in the art shall fall within the scope of the disclosure.
At present, many kinds of devices have positioning requirements, for example, self-driving devices, unmanned devices, mobile terminal devices, and so on.

Among them, a self-driving device usually needs to acquire its position on a map in real time in order to control its moving direction. Usually, a high-precision map needs to be produced in advance, and the position of the device in the high-precision map is acquired through multi-sensor fusion to complete positioning.

For example, a Global Navigation Satellite System (GNSS), wheel speed sensors, inertial sensors, lidar and cameras can be used. Different sensors can acquire different position information to help with positioning.

Among them, the GNSS system can perform positioning via satellites; lidar and cameras can acquire the corresponding environmental features and compare them with the high-precision map to obtain a precise position; wheel speed sensors and inertial sensors can acquire speed and acceleration to assist the self-driving device in positioning.

However, in some scenes, existing positioning methods may be difficult to apply. For example, in a tunnel, the signal is poor, so the GNSS system can hardly communicate and cannot acquire positioning data; since the environmental features in a tunnel are similar (the tunnel walls all look alike), it is also difficult for lidar and cameras to extract reliable environmental features for positioning; and positioning using only wheel speed sensors and inertial sensors has a large error.

Therefore, when existing positioning methods can hardly work, a device positioning system is urgently needed to help devices perform positioning.

It should be noted that a self-driving device itself may have multiple application functions, for example, transportation, cleaning, etc.

In a specific example, self-driving devices may include self-driving transportation devices, self-driving cleaning devices, and the like.

When such a self-driving device, for example a self-driving cleaning device, needs to clean in a tunnel, it can usually acquire the map of the tunnel in advance and set a cleaning task. The specific cleaning task may be the tunnel route to be cleaned. In order for the self-driving cleaning device to know the route to be cleaned, the device needs to be able to position itself, so as to know its position in the tunnel and conveniently determine its moving direction.

In order to solve the above technical problem that various devices are difficult to position in some cases, an embodiment of this specification provides a device positioning system.

The device may specifically be a device for which existing positioning methods are difficult or impossible to use. Specifically, the environment where the device is located may make it difficult to use existing positioning methods, or the device may be unable to use existing positioning methods in order to save costs.

The device can move on a fixed route, and the fixed route may specifically be a tunnel, a track, etc.
The fixed route may include one or more routes. For example, there may be forks in a tunnel, with different forks corresponding to different routes; a track may also have forks, with different forks corresponding to different routes.

As shown in FIG. 1, which is a schematic diagram of a fixed route provided by an embodiment of this specification, it contains three forks, namely forks 1-3.

Since the route is fixed, when the device moves on a single route, it can move directly in the predetermined moving direction. For example, a self-driving cleaning device can move and clean in a single tunnel.

At a fork of the fixed route, however, the device usually needs to select its subsequent moving route.

In the device positioning system provided by the embodiments of this specification, the fixed route can be acquired in advance. Specifically, a complete map of the fixed route can be acquired, for example, a complete map of the tunnel.

The fixed route may contain key scenes with prominent scene features, for example, a bifurcation scene of the fixed route, a start point scene of the fixed route, an end point scene of the fixed route, etc. These key scenes can be identified by the device positioning system through scene recognition.

Therefore, the correspondence between key scenes in the fixed route and global positioning information can be determined in advance.

The global positioning information corresponding to a key scene may specifically be the specific position of the key scene in the fixed route, or the position information of the key scene itself.

For example, the global positioning information corresponding to a certain key scene may specifically be fork No. N in the fixed route; or, taking a certain point as the coordinate origin, the corresponding global positioning information may specifically be the coordinates (4, 5).

The global positioning information corresponding to key scenes can assist the device positioning system in positioning and improve positioning accuracy.

Specifically, it can be judged whether the device is currently in any key scene.

If it is determined that the device is currently in any key scene, the corresponding global positioning information can be directly determined as the current positioning information of the device.

If it is determined that the device is not currently in any key scene, positioning can be performed based on previous positioning information and the displacement. Specifically, positioning can be performed based on the displacement and the positioning information determined last time.

It can be seen that positioning can be performed regardless of whether the device is in a key scene, and with the help of the global positioning information corresponding to key scenes, the positioning information determined from the displacement can be corrected, thereby improving the accuracy of the positioning information.

The device positioning system provided by the embodiments of this specification may be configured on the device, or may not be configured on the device but instead assist the device in positioning through information transmission.

For ease of understanding, a specific embodiment is provided below.

As shown in FIG. 2, which is a schematic diagram of another fixed route provided by an embodiment of this specification, it includes four key scenes, respectively corresponding to global positioning information, specifically A(0,0), B(3,0), C(6,1) and D(6,0).

Here A may be the start point of the fixed route, B may be a fork in the fixed route, and C and D are both end points of the fixed route.

There is also a self-driving device on the fixed route, moving from A to B.

When it is determined that the self-driving device is at A, (0,0) can be directly determined as the positioning information of the device. When it is determined that the self-driving device has moved 2 meters from A toward B, during the next positioning, the positioning information can be determined as (2,0) based on the displacement of 2 meters and the moving direction. When it is determined that the self-driving device is at B, (3,0) can be directly determined as the positioning information of the device.

The self-driving device may specifically be a self-driving cleaning device, with a cleaning route from A to B and then to C set in advance. Since there are two different routes at B, when it is determined that the self-driving device is currently at B, the moving direction can be determined as moving toward C according to the preset cleaning route.

Of course, the specific way of determining the moving direction is not limited; the above embodiment is only for illustrative purposes.
As shown in FIG. 3, which is a schematic structural diagram of a device positioning system provided by an embodiment of this specification, the device positioning system can acquire in advance the correspondence between key scenes in a fixed route and global positioning information. The device to be positioned can move on the fixed route.

Optionally, when the device positioning system is configured on any device, it can perform positioning for the configured device. When the device positioning system is not configured on any device, it can perform positioning for one or more devices moving on the fixed route.

For ease of description, any such device is called a target device. The target device can move on the fixed route. Optionally, the target device may specifically be any device moving along the fixed route, or any device on the fixed route that has the ability to move along the fixed route.

The device positioning system may include the following units.

A judging unit 101, configured to judge whether the target device is currently in any key scene.

A global positioning unit 102, configured to, in the case where it is determined that the target device is currently in any key scene, determine the global positioning information corresponding to the key scene where it is currently located as the current positioning information of the target device.

A local positioning unit 103, configured to, in the case where it is determined that the target device is not currently in any key scene, determine the current displacement of the target device and acquire the starting point positioning information corresponding to the determined displacement, the determined displacement being monitored starting from the starting point positioning information, and determine the current positioning information of the target device based on the determined displacement and the starting point positioning information.

The above system can perform positioning by judging whether the target device is currently in a key scene and combining the global positioning information corresponding to that key scene; in non-key scenes, it can also perform positioning by combining the displacement, so that the target device can be positioned on the fixed route when existing positioning methods are difficult to use, improving the accuracy of the positioning information.

Regarding the judging unit 101: optionally, the target device may be configured with the device positioning system, or may establish a data transmission channel with the device positioning system so as to perform positioning through data transmission.

Optionally, the judging unit 101 can be used to judge, by means of scene recognition, whether the target device is currently in any key scene. Specifically, it can be used to acquire the features of the scene where the target device is currently located, and judge whether the target device is currently in any key scene by judging whether the features of the current scene match the features of any key scene.

Optionally, the features of the current scene matching the features of a key scene may specifically mean that the features of the current scene are identical to the features of any key scene, or that their similarity is greater than a preset similarity.
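That matching criterion (identical features, or similarity above a preset threshold) could be sketched as follows. The similarity measure used here, 1 / (1 + Euclidean distance) between feature vectors, and the 0.9 threshold are illustrative assumptions, not choices made by the specification:

```python
import math

def features_match(scene_features, key_scene_features, min_similarity=0.9):
    """Return True when two scenes' feature vectors match.

    Identical vectors have distance 0 and thus similarity 1.0;
    otherwise the similarity decays with Euclidean distance and must
    exceed the preset threshold.
    """
    similarity = 1.0 / (1.0 + math.dist(scene_features, key_scene_features))
    return similarity >= min_similarity
```

Identical feature vectors always match, while clearly different ones fall below the threshold.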
Optionally, the acquired correspondence between key scenes and global positioning information may specifically be the correspondence between the features of key scenes and global positioning information.

For ease of understanding, several examples of key scene features are given below. Optionally, key scenes may include bifurcation scenes, start point scenes and end point scenes. Of course, any scene in the fixed route with recognizable, prominent scene features can be determined as a key scene. For example, a damaged section of track can be determined as a damage scene, a scene in a tunnel where a collapse prevents further movement can be determined as a collapse scene, a scene where the tunnel is interrupted can be determined as an interruption scene, and so on.

As examples of key scene features: for a bifurcation scene of a track, there are usually at least two tracks, and a camera can detect the change of the track ahead in the moving direction to determine whether the device is in a bifurcation scene.

For a bifurcation scene of a tunnel, since different branch tunnels have different directions, the shape features of the tunnel walls are quite prominent: the distance between the tunnel walls on both sides of a fork usually increases, so a camera can detect the distance between the tunnel walls on both sides to determine whether the device is in a bifurcation scene.

For the start point scene of a tunnel, which is usually at the tunnel entrance, the scene change between outside and inside the tunnel is quite prominent. Specifically, it may be a change in light, or the fact that tunnel walls can hardly be detected outside the tunnel; a camera can detect the light change or the tunnel walls to determine whether the device is in the start point scene.

For the end point scene of a tunnel, which is usually at the end of the tunnel or a dead end, the way ahead in the moving direction is blocked and the device cannot continue moving forward; therefore, lidar can detect obstacles ahead to determine whether the device is in the end point scene.

It should be noted that, since different key scenes correspond to different global positioning information, different key scenes need to be distinguished so that the key scene where the device is located can be uniquely determined, and thus unique global positioning information can be determined. Therefore, the features of different key scenes may differ.

For example, different bifurcation scenes may have different scene features, and different end point scenes may have different scene features. Specifically, a unique scene feature may be determined by the branching directions of the different branch routes in a bifurcation scene, etc.

Optionally, when multiple key scenes are matched, the correct key scene can be further determined. This method flow does not limit the specific method of determining the correct key scene.

For example, the judgment can be made in combination with historical positioning information. Specifically, the most recently determined historical positioning information can be determined, and from the matched multiple key scenes, the key scene closest to the determined historical positioning information is selected and determined as the correct key scene.

Of course, the current positioning information of the target device can also be determined in combination with the current displacement, and then the key scene closest to the determined current positioning information is selected and determined as the correct key scene.

Therefore, optionally, the judging unit 101 may include: a feature acquisition subunit 101a, configured to acquire the features of the scene where the target device is currently located.

A feature matching subunit 101b, configured to judge whether the acquired scene features match the features of any key scene; in the case where it is determined that the acquired scene features do not match the features of any key scene, determine that the target device is not currently in any key scene; in the case where it is determined that the acquired scene features match the features of a single key scene, determine that the target device is currently in the matched single key scene; and in the case where it is determined that the acquired scene features match the features of multiple key scenes, select one key scene from the matched multiple key scenes and determine that the target device is currently in the selected key scene.

Optionally, the feature matching subunit 101b can be used to: in the case where it is determined that the acquired scene features match the features of multiple key scenes, acquire the most recently determined historical positioning information, select from the matched multiple key scenes the key scene closest to the determined historical positioning information, and determine that the target device is currently in the selected key scene.

Optionally, the feature matching subunit 101b can also be used to: in the case where it is determined that the acquired scene features match the features of multiple key scenes, determine the current displacement of the target device and acquire the starting point positioning information corresponding to the determined displacement; determine the current positioning information of the target device based on the determined displacement and the starting point positioning information; select from the matched multiple key scenes the key scene closest to the determined current positioning information; and determine that the target device is currently in the selected key scene.

The feature matching subunit 101b can also be used to trigger the execution of the local positioning unit 103 in the case where it is determined that the target device is not currently in any key scene.

In the case where it is determined that the target device is currently in the matched single key scene, the execution of the global positioning unit 102 can be triggered.

In the case where it is determined that the target device is currently in a single key scene selected from the matched multiple key scenes, the execution of the global positioning unit 102 can also be triggered.

Optionally, the features of the scene where the target device is currently located can be acquired through the lidar or camera configured on the target device.

For example, in the case where the key scenes include bifurcation scenes, lidar can scan the distance between the target device and the tunnel walls, so that in a bifurcation scene it can be determined that the distance between the target device and the tunnel walls gradually increases; a camera can also photograph the area ahead of the moving target device, and in a bifurcation scene of a track it can be determined that there are at least two tracks ahead of the moving target device.

For ease of understanding, as shown in FIG. 4, which is a schematic diagram of the principle of scene feature acquisition provided by an embodiment of this specification, it includes four target devices, namely target devices 1-4.

Target device 1 moves in a tunnel, and lidar detects the distance between target device 1 and the tunnel walls on both sides. Obviously, in a bifurcation scene, the distance detected by lidar usually increases. Therefore, the distance between target device 1 and the tunnel walls on both sides detected by lidar can be used as a scene feature: if the acquired distance increases to a preset distance, it can be determined that target device 1 is currently in a bifurcation scene.
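The wall-distance check for target device 1 might be sketched as follows; the 4-meter preset distance is an arbitrary illustrative value, not a figure from the specification:

```python
def in_bifurcation_scene(left_wall_m, right_wall_m, preset_distance_m=4.0):
    """Fork detection from lidar wall distances.

    At a fork the tunnel walls open up, so treat the device as being
    in a bifurcation scene once the total wall-to-wall distance
    reaches the preset threshold.
    """
    return (left_wall_m + right_wall_m) >= preset_distance_m
```

In a narrow single tunnel (e.g. 1.0 m and 1.5 m to the walls) the check is negative; where the walls open to 3.0 m and 3.5 m it becomes positive.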
Target device 2 moves on a track, and a camera detects the number of tracks. Obviously, in a bifurcation scene, the camera can usually detect multiple different tracks. Therefore, the number of tracks detected by the camera can be used as a scene feature: if the number of detected tracks is greater than 1, it can be determined that target device 2 is currently in a bifurcation scene.

Target device 3 moves in a tunnel, and lidar or a camera detects whether there is an obstacle ahead of target device 3 in its moving direction. Obviously, in an end point scene or a collapse scene, there is an impassable obstacle ahead of target device 3 in its moving direction. Therefore, the obstacle detected by lidar or the camera can be used as a scene feature: if an obstacle is detected ahead of target device 3 in its moving direction, it can be determined that target device 3 is currently in an end point scene or a collapse scene, after which the key scene where target device 3 is currently located can be further determined.

Target device 4 moves in a tunnel, and lidar or a camera detects whether there are tunnel walls on both sides of target device 4. Obviously, in the start point scene of a tunnel, when target device 4 is still outside the tunnel, no tunnel walls can be detected on either side. Therefore, whether tunnel walls are detected on both sides of target device 4 can be used as a scene feature: if it is detected that there are no tunnel walls on either side of target device 4 in its current scene, it can be determined that target device 4 is currently in the start point scene.

In the above embodiment, the judging unit 101 can acquire the features of the scene where the target device is currently located through the sensors or other devices configured on the target device itself, and then compare and match them with key scenes, judge whether the target device is currently in a key scene, and determine the key scene where the target device is located, so that different units can be triggered for positioning according to different scenes.

In an optional embodiment, the target device needs to determine its moving direction and moving route according to the positioning information. Specifically, the target device may be a self-driving device.

For example, a bifurcation scene contains multiple routes, so the subsequent moving route and moving direction need to be determined according to the position; in an end point scene, the self-driving device cannot continue moving, so the moving direction needs to be re-determined. On a single route there may be straight sections and curves, so the moving direction needs to be determined according to the positioning information.

In a specific example, the fixed route includes a start point A, end points B and C, and a fork D, and includes two different routes, A-D-B and A-D-C. A self-driving cleaning device has a preset cleaning route A-D-C. When the self-driving cleaning device is currently in a bifurcation scene, it needs to determine its own position so as to determine its moving direction and moving route. When the self-driving cleaning device determines that it is currently at D, it can determine, according to the preset cleaning route A-D-C, that the moving direction is toward C.

In another specific example, the fixed route may be a tunnel; therefore, on a single route in the fixed route, the moving direction can be determined according to the positioning information. For example, if it is determined that the target device is currently at a certain curve on a single route, the moving direction can be determined according to the angle of the curve.

It should be noted that when the target device moves in a tunnel, the distance between the target device and the tunnel walls can be detected in real time so as to avoid collisions.
Therefore, optionally, the device positioning system may further include: a moving direction determining unit 104, configured to determine the moving direction of the target device on the fixed route according to the current positioning information of the target device.

The specific way of determining the moving direction is not limited. Optionally, when the moving direction determining unit 104 determines the moving direction according to the current positioning information, the current positioning information can represent the current position of the target device in the fixed route, so that the subsequent moving route of the target device can be determined, and the moving direction can then be determined according to the determined moving route.

This embodiment does not limit the method of determining the subsequent moving route of the target device. Specifically, the subsequent moving route of the target device may be determined according to a preset algorithm, or the moving route may be determined manually and remotely.

In an optional embodiment, key scenes may generally be scenes with prominent scene features, for example, bifurcation scenes, start point scenes, end point scenes, damage scenes, etc.

A bifurcation scene may include at least two routes of the fixed route; therefore, at least in a bifurcation scene, one route needs to be selected from the at least two routes included in the bifurcation scene as the subsequent moving route of the target device.

Optionally, the moving direction determining unit 104 can be used to: in the case where it is determined that the target device is in any bifurcation scene, according to the global positioning information corresponding to the bifurcation scene where the target device is currently located, select one route from the at least two routes included in that bifurcation scene, determine it as the moving route of the target device, and determine the moving direction of the target device on the fixed route according to the determined moving route.

This embodiment does not limit the specific way of selecting a route from the at least two routes included in the bifurcation scene where the target device is currently located.

Optionally, according to a preset designated route, the route contained in the designated route may be selected from the at least two routes included in the bifurcation scene where the target device is currently located, and determined as the subsequent moving route of the target device.

For example, for a self-driving transportation device whose transportation route has been set in advance, after the current positioning information of the target device is determined, the route belonging to the transportation route can be directly selected and determined as the subsequent moving route of the target device.

Optionally, a route may be selected according to a preset algorithm from the at least two routes included in the bifurcation scene where the target device is currently located. Specifically, among those routes, a route that the target device has not yet traveled may be determined as the subsequent moving route of the target device.

For example, for a self-driving cleaning device that needs to clean all tunnels, tunnel routes that have already been cleaned may not be determined as the subsequent moving route, while an uncleaned tunnel route is determined as the subsequent moving route.

Optionally, the target device may return its current positioning information to a remote control device, and a route is then selected from the at least two routes included in the bifurcation scene where the target device is currently located according to the operation of the remote control device.

For example, for a self-driving transportation device, the transportation demand on the track may be continuously updated; therefore, the subsequent moving route of the self-driving transportation device can be determined in real time through the remote control device.
在一种可选的实施例中,设备定位系统可以用于周期性或者不定期或者持续性地针对设备进行定位,确定设备当前的定位信息,具体可以是设备当前在固定路线上的位置信息。
例如,设备定位系统可以周期性地进行定位,具体可以是每隔2秒就进行一次定位;也可以不定期地进行定位,具体可以是在接近特殊场景之前进行多次定位,具体可以是在隧道中的直行道上每隔10秒进行一次定位,而在接近隧道中的弯道等需要改变移动方向的场景,或者隧道中的关键场景时,可以每隔2秒就进行一次定位。
因此,可选地,判断单元101可以用于周期性或者持续性或者不定期判断目标设备当前是否处于任一关键场景。
针对全局定位单元102,在一种可选的实施例中,可以用于在确定目标设备当前处于任一关键场景的情况下,将当前所处的关键场景对应的全局定位信息确定为目标设备当前的定位信息。
针对局部定位单元103,在一种可选的实施例中,可以用于在确定目标设备当前不处于任一关键场景的情况下,确定目标设备当前的位移量,并获取所确定的位移量对应 的起点定位信息;所确定的位移量从起点定位信息开始监测;基于所确定的位移量和起点定位信息,确定目标设备当前的定位信息。
可选地,获取所确定的位移量对应的起点定位信息,可以从历史定位信息中获取。其中,历史定位信息可以是设备定位系统之前所确定的目标设备的定位信息。
由于确定目标设备当前不处于任一关键场景,无法根据关键场景对应的全局定位信息进行精准定位,因此,可以通过监测记录目标设备的位移量,以便于根据起点定位信息和目标设备当前的位移量,确定目标设备当前的定位信息,也就是在固定路线中的位置。
其中,可选地,所确定的位移量可以从起点定位信息开始监测。因此,通常可以直接在起点定位信息的基础上,加上所确定的位移量,可以得到目标设备当前的定位信息。
而起点定位信息具体可以是设备定位系统之前所确定的目标设备某一定位信息。
需要说明的是,由于设备定位系统可以进行多次定位,因此,起点定位信息可以进行更新。在某一次设备定位系统确定目标设备当前的定位信息之后,可以将这一当前的定位信息确定为起点定位信息,开始基于这一起点定位信息监测目标设备的位移量。
如果之前已经存在基于其他起点定位信息监测的位移量,则需要重新开始监测。
可选地,可以在每一次设备定位系统进行定位之后,都更新起点定位信息,重新开始监测目标设备的位移量。
可选地,由于可以根据关键场景对应的全局定位信息进行精准定位,为了提高定位的准确度,可以将所确定的全局定位信息作为起点定位信息,监测目标设备的位移量。
Therefore, optionally, the device positioning system may further include: a first monitoring unit 105, configured to monitor the displacement of the target device; and after the current positioning information of the target device is determined, determine the current positioning information as the start-point positioning information and restart monitoring the displacement of the target device from the determined start-point positioning information.
In this embodiment, the first monitoring unit 105 may be configured to re-determine the start-point positioning information each time after the current positioning information of the target device is determined, and restart monitoring the displacement of the target device, so that the result can be used in the next positioning pass, specifically by the local positioning unit 103.
Optionally, the device positioning system may further include: a second monitoring unit 106, configured to monitor the displacement of the target device; and after the global positioning information corresponding to any key scene is determined as the current positioning information of the target device, determine the current positioning information as the start-point positioning information and restart monitoring the displacement of the target device from the determined start-point positioning information.
The displacement of the target device may be monitored using various devices configured on the target device, for example, a wheel odometer or an inertial sensor. By determining the moving speed and/or acceleration of the target device and the duration of movement, the current displacement of the target device can be calculated.
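The calculation just described — displacement from speed and duration of movement — amounts to integrating odometry samples over time. The `(speed, duration)` sample layout below is an assumption for illustration:

```python
def displacement_from_samples(samples):
    """Integrate wheel-speed samples into a displacement.

    samples: iterable of (speed_m_per_s, duration_s) pairs, e.g. as
    reported by a wheel odometer between positioning passes. The data
    layout is an illustrative assumption. Displacement is speed
    multiplied by elapsed time, summed over the sampling intervals.
    """
    return sum(speed * duration for speed, duration in samples)
```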
In the above embodiments, monitoring the displacement of the target device makes it convenient for the local positioning unit 103 to subsequently perform positioning according to the current displacement of the target device.
Through the global positioning information corresponding to key scenes and the monitoring of the displacement of the target device, the above device positioning system can perform positioning and determine the current positioning information of the target device regardless of whether the target device is currently in any key scene.
For ease of understanding, the embodiments of this specification further provide a specific application embodiment, in which an autonomous sweeping device moves in a tunnel to perform a sweeping task. Fig. 5 is a schematic diagram of a device positioning principle provided by an embodiment of this specification.
The autonomous sweeping device is configured with a positioning system including a judging unit, a global positioning unit and a local positioning unit.
The autonomous sweeping device may acquire a tunnel map (that is, the fixed route) in advance and be configured with a sweeping task in advance. The sweeping task may specifically be a preconfigured sweeping route, or sweeping all tunnels.
The autonomous sweeping device may also acquire in advance the correspondence between key scenes in the tunnel map and global positioning information.
Fig. 5 shows the tunnel map and the key scenes A, B, C and D therein, where A is a starting-point scene, B is a fork scene, and C and D are end-point scenes. On the tunnel route from A to B there is a bend, and the distance from A to the bend is 5 meters.
The autonomous sweeping device can move and sweep in the tunnel, and can monitor in real time the scene features of the scene in which it is currently located. Specifically, information of multiple kinds, including the distance between the autonomous sweeping device and the tunnel wall, may be acquired through lidar and cameras.
Meanwhile, the autonomous sweeping device can also determine its displacement through a wheel odometer and an inertial sensor.
After the autonomous sweeping device determines its positioning information at A, monitoring of the displacement can start.
When the autonomous sweeping device is positioned again, it is determined that the device is currently not in any key scene. Further, the current displacement is determined to be 5 meters, and according to the pre-acquired tunnel map, the device is determined to be currently at the bend; therefore, the moving direction can be adjusted according to the tunnel map and movement continues.
In the third positioning pass, it is determined that the distance between the autonomous sweeping device and the tunnel wall is greater than a preset distance; therefore, it can be determined that the device is currently in a fork scene among the key scenes. Since there is only one fork scene B in the fixed-route tunnel, it can be determined that the autonomous sweeping device is currently at B.
Since the fork scene includes two routes, one of them needs to be further selected as the moving route for movement.
Where the sweeping task includes a preconfigured sweeping route A-B-C, the route toward C may be directly selected as the moving route.
Where the sweeping task is to sweep all tunnels, a tunnel route that has not yet been swept may be randomly selected as the moving route. For example, if neither route A-B-C nor A-B-D has been swept, one of them may be randomly selected as the moving route; if A-B-C has already been swept while A-B-D has not, A-B-D may be selected as the moving route.
After the moving route is determined, the moving direction can be determined according to the fixed-route tunnel, and movement continues.
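The decision the sweeper makes at fork B in this walkthrough can be traced with a small helper. The route encoding and function name are illustrative assumptions; only the two-branch fork at B and the A-B-C / A-B-D routes come from the example:

```python
# Branches available at fork scene B in the Fig. 5 example.
FORK_ROUTES = {"B": ["B-C", "B-D"]}

def next_move(scene, cleaned, task_route=None):
    """Decide which branch the sweeper takes at a fork scene.

    scene: the key scene the device is currently in (e.g. "B").
    cleaned: set of branches already swept.
    task_route: optional preconfigured sweeping route, e.g. the
    segments of A-B-C. Names are assumptions for illustration.
    """
    routes = FORK_ROUTES.get(scene, [])
    if task_route:
        # Preconfigured sweeping route: take the branch that belongs to it.
        for route in routes:
            if route in task_route:
                return route
    # Sweep-everything task: take a branch that has not been swept yet.
    for route in routes:
        if route not in cleaned:
            return route
    return None
```

With route A-B-C preconfigured, the sweeper turns toward C at B; with A-B-C already swept under a sweep-everything task, it takes B-D instead.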
It should be noted that, besides the above autonomous driving devices, other devices moving on a fixed route can also be positioned using the system described in the above system embodiments. For example, transport vehicles in a tunnel, specifically automobiles, trains, bicycles and the like, all need to be positioned in the tunnel, and can all be positioned in the tunnel using the system described in the above system embodiments.
Specifically, in a fork scene or another key scene, if a moving route needs to be selected, the selection may be made by the party controlling the transport vehicle. For example, at a fork, after positioning is performed, an automobile driver can continue moving along the selected moving route.
In addition to the above system embodiments, the embodiments of this specification further provide a method embodiment. Fig. 6 is a schematic flowchart of a device positioning method provided by an embodiment of this specification.
In this method, the correspondence between key scenes in a fixed route and global positioning information may be acquired in advance. The device positioning method may specifically include the following steps.
S201: Judge whether a target device is currently in any key scene. When it is determined that the target device is currently in any key scene, perform S202; when it is determined that the target device is currently not in any key scene, perform S203.
The target device may be on the fixed route.
S202: Determine the global positioning information corresponding to the key scene where the device is currently located as the current positioning information of the target device.
S203: Determine the current displacement of the target device and acquire the start-point positioning information corresponding to the determined displacement; and determine the current positioning information of the target device based on the determined displacement and the start-point positioning information.
The determined displacement is monitored starting from the start-point positioning information.
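Steps S201–S203 can be sketched as a single positioning pass. Positions are modeled as scalar offsets along the fixed route, and scene matching is abstracted as a callback; these interfaces are illustrative assumptions, not the specified implementation:

```python
def position_device(match_key_scene, key_scene_positions,
                    start_position_m, displacement_m):
    """One positioning pass over steps S201-S203 (illustrative sketch).

    match_key_scene: callable returning the name of the matched key
        scene, or None when no key scene is matched (S201).
    key_scene_positions: pre-acquired correspondence between key
        scenes and global positioning information, here as positions
        along the fixed route.
    start_position_m / displacement_m: start-point positioning
        information and the displacement monitored since it.
    """
    scene = match_key_scene()
    if scene is not None:
        # S202: use the key scene's global positioning information.
        return key_scene_positions[scene]
    # S203: start-point position plus monitored displacement.
    return start_position_m + displacement_m
```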
Optionally, judging whether the target device is currently in any key scene may specifically include: acquiring features of the scene in which the target device is currently located, and judging whether the acquired scene features match the features of any key scene.
When it is determined that the acquired scene features do not match the features of any key scene, it is determined that the target device is currently not in any key scene.
When it is determined that the acquired scene features match the features of a single key scene, it is determined that the target device is currently in the matched single key scene.
When it is determined that the acquired scene features match the features of multiple key scenes, one key scene is selected from the matched multiple key scenes, and it is determined that the target device is currently in the selected key scene.
In this embodiment, whether the target device is currently in a key scene can be determined by matching the features of the scene in which the target device is currently located against each pre-acquired key scene, so as to facilitate the subsequent determination of the positioning manner.
Optionally, selecting one key scene from the matched multiple key scenes may specifically include: acquiring the most recently determined historical positioning information, and selecting, from the matched multiple key scenes, the key scene closest to the determined historical positioning information; or determining the current displacement of the target device, acquiring the start-point positioning information corresponding to the determined displacement, determining the current positioning information of the target device based on the determined displacement and the start-point positioning information, and selecting, from the matched multiple key scenes, the key scene closest to the determined current positioning information.
In this embodiment, when there are multiple matched key scenes, the closest key scene can be selected from them by means of historical positioning information or local positioning, which facilitates determining the correct key scene according to the fixed route and the positions already passed.
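The two disambiguation options above (nearest to the last historical fix, or nearest to a locally computed position) reduce to a nearest-neighbor choice over the matched scenes. Names and the scalar position model are illustrative assumptions:

```python
def resolve_key_scene(matched, history_position=None, local_position=None):
    """Pick one key scene when observed features match several.

    matched: dict mapping key-scene names to their positions along the
        fixed route (their global positioning information).
    history_position: the most recently determined positioning
        information, or None.
    local_position: start-point position plus current displacement,
        or None. Names are assumptions for illustration.
    """
    # Prefer the locally computed position when available, otherwise
    # fall back to the historical fix.
    reference = local_position if local_position is not None else history_position
    if reference is None:
        raise ValueError("a reference position is required")
    # Select the matched key scene nearest the reference position.
    return min(matched, key=lambda name: abs(matched[name] - reference))
```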
Optionally, the device positioning method may further include: S204: Determine, according to the current positioning information of the target device, the moving direction of the target device on the fixed route. S204 may be performed after the current positioning information of the target device is determined, specifically after S202 and S203.
Optionally, the key scenes may include a fork scene, and the fork scene may include at least two routes of the fixed route.
Determining the moving direction of the target device on the fixed route according to the current positioning information of the target device may include: when it is determined that the target device is in any fork scene, selecting, according to the global positioning information corresponding to the fork scene where the target device is currently located, one route from the at least two routes included in that fork scene as the moving route of the target device, and determining, according to the determined moving route, the moving direction of the target device on the fixed route.
For the specific selection method, refer to the above system embodiments.
In this embodiment, the subsequent moving direction of the target device can be determined through the positioning information; specifically, after a moving route is selected in a fork scene, the subsequent moving direction is determined according to the selected moving route.
Optionally, the device positioning method may further include: S205: Monitor the displacement of the target device.
S206: After the current positioning information of the target device is determined, determine the current positioning information as the start-point positioning information, and restart monitoring the displacement of the target device from the determined start-point positioning information.
S205 may be performed in parallel with S201-S204. Since S205 includes monitoring the displacement of the target device, S205 may be performed continuously.
Optionally, S206 may be performed after the current positioning information of the target device is determined, specifically after S202 and S203.
Optionally, the device positioning method may further include: S207: After the global positioning information corresponding to any key scene is determined as the current positioning information of the target device, determine the current positioning information as the start-point positioning information, and restart monitoring the displacement of the target device from the determined start-point positioning information.
S207 may be performed after any global positioning information is determined as the current positioning information of the target device, specifically after S202.
For a specific explanation of the above method embodiment, refer to the above apparatus embodiments.
Through the global positioning information corresponding to key scenes and the monitoring of the displacement of the target device, the above device positioning method embodiment can perform positioning and determine the current positioning information of the target device regardless of whether the target device is currently in any key scene.
The embodiments of this specification further provide a computer device, which includes at least a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements a device positioning method when executing the program.
Fig. 7 is a more specific schematic diagram of the hardware structure of a computer device provided by an embodiment of this specification. The device may include a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040 and a bus 1050, where the processor 1010, the memory 1020, the input/output interface 1030 and the communication interface 1040 are communicatively connected to one another inside the device through the bus 1050.
The processor 1010 may be implemented as a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute relevant programs to implement the technical solutions provided by the embodiments of this specification.
The memory 1020 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1020 may store an operating system and other application programs; when the technical solutions provided by the embodiments of this specification are implemented by software or firmware, the relevant program code is stored in the memory 1020 and is called and executed by the processor 1010.
The input/output interface 1030 is configured to connect an input/output module to implement information input and output. The input/output module may be configured in the device as a component (not shown in the figure), or may be externally connected to the device to provide corresponding functions. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors and the like, and the output devices may include a display, a loudspeaker, a vibrator, indicator lights and the like.
The communication interface 1040 is configured to connect a communication module (not shown in the figure) to implement communication and interaction between this device and other devices. The communication module may communicate in a wired manner (for example, USB or a network cable) or in a wireless manner (for example, a mobile network, WiFi or Bluetooth).
The bus 1050 includes a path for transmitting information between the components of the device (for example, the processor 1010, the memory 1020, the input/output interface 1030 and the communication interface 1040).
It should be noted that, although only the processor 1010, the memory 1020, the input/output interface 1030, the communication interface 1040 and the bus 1050 are shown for the above device, in a specific implementation the device may further include other components necessary for normal operation. In addition, those skilled in the art can understand that the above device may include only the components necessary for implementing the solutions of the embodiments of this specification, rather than all the components shown in the figure.
The embodiments of this specification further provide a computer-readable storage medium on which a computer program is stored, where the program implements a device positioning method when executed by a processor.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
From the description of the above implementations, those skilled in the art can clearly understand that the embodiments of this specification can be implemented by means of software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the embodiments of this specification, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a storage medium, such as a ROM/RAM, a magnetic disk or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform the methods described in the embodiments of this specification or in certain parts of the embodiments.
The systems, apparatuses, modules or units set forth in the above embodiments may specifically be implemented by computer chips or entities, or by products with certain functions. A typical implementation device is a computer, and the specific form of the computer may be a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an e-mail transceiver, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the apparatus embodiments are substantially similar to the method embodiments, they are described relatively simply, and for relevant parts, reference may be made to the descriptions of the method embodiments. The apparatus embodiments described above are merely illustrative; the modules described as separate components may or may not be physically separate, and when the solutions of the embodiments of this specification are implemented, the functions of the modules may be implemented in one or more pieces of software and/or hardware. Some or all of the modules may also be selected according to actual needs to achieve the purpose of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement them without creative effort.
The above are merely specific implementations of the embodiments of this specification. It should be noted that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the embodiments of this specification, and these improvements and refinements shall also be regarded as falling within the protection scope of the embodiments of this specification.

Claims (15)

  1. A device positioning system, where a correspondence between key scenes in a fixed route and global positioning information is acquired in advance, the system comprising:
    a judging unit, configured to judge whether a target device is currently in any key scene, the target device being on the fixed route;
    a global positioning unit, configured to, when it is determined that the target device is currently in any key scene, determine the global positioning information corresponding to the key scene where the device is currently located as the current positioning information of the target device; and
    a local positioning unit, configured to, when it is determined that the target device is currently not in any key scene, determine a current displacement of the target device and acquire start-point positioning information corresponding to the determined displacement, the determined displacement being monitored starting from the start-point positioning information; and determine the current positioning information of the target device based on the determined displacement and the start-point positioning information.
  2. The system according to claim 1, wherein the judging unit comprises:
    a feature acquiring subunit, configured to acquire features of a scene in which the target device is currently located; and
    a feature matching subunit, configured to judge whether the acquired scene features match features of any key scene; when it is determined that the acquired scene features do not match the features of any key scene, determine that the target device is currently not in any key scene; when it is determined that the acquired scene features match the features of a single key scene, determine that the target device is currently in the single key scene; and when it is determined that the acquired scene features match the features of multiple key scenes, select one key scene from the multiple key scenes and determine that the target device is currently in the selected key scene.
  3. The system according to claim 2, wherein the feature matching subunit is configured to:
    when it is determined that the acquired scene features match the features of multiple key scenes, acquire most recently determined historical positioning information, select, from the multiple key scenes, the key scene closest to the determined historical positioning information, and determine that the target device is currently in the selected key scene; or
    when it is determined that the acquired scene features match the features of multiple key scenes, determine the current displacement of the target device and acquire start-point positioning information corresponding to the determined displacement; determine the current positioning information of the target device based on the determined displacement and the start-point positioning information; and select, from the multiple key scenes, the key scene closest to the determined current positioning information, and determine that the target device is currently in the selected key scene.
  4. The system according to claim 1, further comprising:
    a moving direction determining unit, configured to determine, according to the current positioning information of the target device, a moving direction of the target device on the fixed route.
  5. The system according to claim 4, wherein the key scenes comprise a fork scene, the fork scene comprising at least two routes of the fixed route; and the moving direction determining unit is configured to:
    when it is determined that the target device is in any fork scene, select, according to the global positioning information corresponding to the fork scene where the target device is currently located, one route from the at least two routes comprised in that fork scene as the moving route of the target device, and determine, according to the determined moving route, the moving direction of the target device on the fixed route.
  6. The system according to claim 1, further comprising:
    a first monitoring unit, configured to monitor the displacement of the target device; and after the current positioning information of the target device is determined, determine the current positioning information as start-point positioning information and restart monitoring the displacement of the target device from the determined start-point positioning information.
  7. The system according to claim 1, further comprising:
    a second monitoring unit, configured to monitor the displacement of the target device; and after the global positioning information corresponding to any key scene is determined as the current positioning information of the target device, determine the current positioning information as start-point positioning information and restart monitoring the displacement of the target device from the determined start-point positioning information.
  8. A device positioning method, where a correspondence between key scenes in a fixed route and global positioning information is acquired in advance, the method comprising:
    judging whether a target device is currently in any key scene, the target device being on the fixed route;
    when it is determined that the target device is currently in any key scene, determining the global positioning information corresponding to the key scene where the device is currently located as the current positioning information of the target device; and
    when it is determined that the target device is currently not in any key scene, determining a current displacement of the target device and acquiring start-point positioning information corresponding to the determined displacement, the determined displacement being monitored starting from the start-point positioning information; and determining the current positioning information of the target device based on the determined displacement and the start-point positioning information.
  9. The method according to claim 8, wherein judging whether the target device is currently in any key scene comprises:
    acquiring features of a scene in which the target device is currently located;
    judging whether the acquired scene features match features of any key scene;
    when it is determined that the acquired scene features do not match the features of any key scene, determining that the target device is currently not in any key scene;
    when it is determined that the acquired scene features match the features of a single key scene, determining that the target device is currently in the single key scene; and
    when it is determined that the acquired scene features match the features of multiple key scenes, selecting one key scene from the multiple key scenes and determining that the target device is currently in the selected key scene.
  10. The method according to claim 9, wherein selecting one key scene from the multiple key scenes comprises:
    acquiring most recently determined historical positioning information, and selecting, from the multiple key scenes, the key scene closest to the determined historical positioning information; or
    determining the current displacement of the target device and acquiring start-point positioning information corresponding to the determined displacement; determining the current positioning information of the target device based on the determined displacement and the start-point positioning information; and selecting, from the multiple key scenes, the key scene closest to the determined current positioning information.
  11. The method according to claim 8, further comprising:
    determining, according to the current positioning information of the target device, a moving direction of the target device on the fixed route.
  12. The method according to claim 11, wherein the key scenes comprise a fork scene, the fork scene comprising at least two routes of the fixed route; and
    determining, according to the current positioning information of the target device, the moving direction of the target device on the fixed route comprises:
    when it is determined that the target device is in any fork scene, selecting, according to the global positioning information corresponding to the fork scene where the target device is currently located, one route from the at least two routes comprised in that fork scene as the moving route of the target device, and determining, according to the determined moving route, the moving direction of the target device on the fixed route.
  13. The method according to claim 8, further comprising:
    monitoring the displacement of the target device; and
    after the current positioning information of the target device is determined, determining the current positioning information as start-point positioning information, and restarting monitoring the displacement of the target device from the determined start-point positioning information.
  14. The method according to claim 8, further comprising:
    monitoring the displacement of the target device; and
    after the global positioning information corresponding to any key scene is determined as the current positioning information of the target device, determining the current positioning information as start-point positioning information, and restarting monitoring the displacement of the target device from the determined start-point positioning information.
  15. An autonomous driving device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method according to any one of claims 8 to 14 when executing the program.
PCT/CN2022/071041 2021-09-28 2022-01-10 设备定位的系统和方法 WO2023050646A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111144554.5 2021-09-28
CN202111144554.5A CN115236713A (zh) 2021-09-28 2021-09-28 一种设备定位系统和方法

Publications (1)

Publication Number Publication Date
WO2023050646A1 true WO2023050646A1 (zh) 2023-04-06

Family

ID=83665815

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/071041 WO2023050646A1 (zh) 2021-09-28 2022-01-10 设备定位的系统和方法

Country Status (2)

Country Link
CN (1) CN115236713A (zh)
WO (1) WO2023050646A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004042148A * 2002-07-09 2004-02-12 Mitsubishi Heavy Ind Ltd Mobile robot
CN103576686A * 2013-11-21 2014-02-12 University of Science and Technology of China Method for autonomous robot guidance and obstacle avoidance
CN106289285A * 2016-08-20 2017-01-04 Nanjing University of Science and Technology Scene-associated robot reconnaissance map and construction method
EP3413270A1 * 2017-06-07 2018-12-12 Thomson Licensing Device and method for editing a virtual reality scene represented in a curved shape form
CN110514181A * 2018-05-22 2019-11-29 Hangzhou Ezviz Software Co., Ltd. Electronic device positioning method and apparatus
CN111220153A * 2020-01-15 2020-06-02 Xi'an Jiaotong University Positioning method based on visual topological nodes and inertial navigation
CN112435333A * 2020-10-14 2021-03-02 Tencent Technology (Shenzhen) Co., Ltd. Road scene generation method and related apparatus


Also Published As

Publication number Publication date
CN115236713A (zh) 2022-10-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22874049

Country of ref document: EP

Kind code of ref document: A1