CN114577198B - High-reflection object positioning method and device and terminal equipment - Google Patents

High-reflection object positioning method and device and terminal equipment

Info

Publication number
CN114577198B
CN114577198B (application CN202210052269.9A)
Authority
CN
China
Prior art keywords
data
laser
algorithm
information
optimized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210052269.9A
Other languages
Chinese (zh)
Other versions
CN114577198A (en)
Inventor
丁武
郝金龙
李林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning Huadun Safety Technology Co ltd
Original Assignee
Liaoning Huadun Safety Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning Huadun Safety Technology Co ltd filed Critical Liaoning Huadun Safety Technology Co ltd
Priority to CN202210052269.9A priority Critical patent/CN114577198B/en
Publication of CN114577198A publication Critical patent/CN114577198A/en
Application granted granted Critical
Publication of CN114577198B publication Critical patent/CN114577198B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application provides a high-reflectivity object positioning method, a device and terminal equipment, applicable to the technical field of artificial intelligence. The method comprises the following steps: acquiring environment data and constructing an initial environment map based on the environment data; acquiring sensor data, creating a laser point cloud of high-reflection objects according to the laser intensity of the sensor data, and performing de-distortion processing and position estimation processing on the laser point cloud to obtain position estimation information; and performing position optimization on the position estimation information and mapping the optimized position information to the initial environment map to obtain a standard environment map. The invention also provides a high-reflection object positioning device and terminal equipment. The invention can solve the problem of inaccurate positioning of high-reflection objects.

Description

High-reflection object positioning method and device and terminal equipment
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to a high-reflectivity object positioning method, a high-reflectivity object positioning device and terminal equipment.
Background
At present, most indoor mobile robots use a mapping and positioning algorithm based on a two-dimensional laser sensor. However, in environments containing high-reflection objects such as glass and mirrors, the characteristics of the laser sensor cause the mobile robot to lose laser data, so that the high-reflection object information cannot be accurately described on the map created by the SLAM algorithm, and the mobile robot may encounter danger during autonomous navigation. Therefore, a method is needed that can detect and identify high-reflection objects and accurately position them in the actual environment.
Disclosure of Invention
In view of this, the embodiments of the present application provide a method, an apparatus, and a terminal device for positioning a high-reflectivity object, which can solve the problem of inaccurate positioning of the high-reflectivity object.
A first aspect of an embodiment of the present application provides a method for positioning a high-reflectivity object, including:
acquiring environment data, and constructing an initial environment map based on the environment data;
acquiring sensor data, creating a laser point cloud of a high-reflection object according to the laser intensity of the sensor data, and performing de-distortion processing and position estimation processing on the laser point cloud to obtain position estimation information;
and performing position optimization on the position estimation information, and mapping the optimized position information to the initial environment map to obtain a standard environment map.
In detail, the acquiring the environment data, constructing an initial environment map based on the environment data, includes:
receiving environment data fed back by a preset sensor, and detecting position coordinates of obstacles in the environment data;
and constructing a grid map according to the position coordinates of the obstacle, and taking the grid map as an initial environment map.
In detail, the acquiring the sensor data, creating a laser point cloud of the high-reflection object according to the laser intensity of the sensor data includes:
acquiring a laser beam data set reflected by an object by using the sensor, wherein the laser beam data set is used as the sensor data;
detecting the laser intensity of a laser beam in the sensor data;
and marking the laser beam with the laser intensity being greater than or equal to a preset intensity threshold value to obtain marking laser point information, and collecting all marking laser point information to obtain the laser point cloud.
In detail, the performing the de-distortion process and the position estimation process on the laser point cloud to obtain position estimation information includes:
acquiring a preset positioning mapping algorithm, and binding data in the laser point cloud to a front end key frame node of the positioning mapping algorithm;
performing de-distortion processing on the front-end key frame node by using a preset de-distortion algorithm to obtain a de-distorted data frame;
and performing inter-frame motion estimation and local map drawing on the de-distorted data frame by using the positioning mapping algorithm to obtain position estimation information comprising contour information and position information.
In detail, the performing the position optimization on the position estimation information includes:
performing local optimization and global optimization on the position estimation information by using the positioning mapping algorithm to obtain an optimized key frame;
and traversing the optimized key frame, and determining the optimized key frame containing the high-reflection object point cloud information as optimized position information.
In detail, the mapping the optimized location information to the initial environment map to obtain a standard environment map includes:
mapping the high-reflection object point cloud information into the initial environment map based on the position of the optimized key frame in the initial environment map;
and marking the high-reflection object in the initial environment map to obtain the standard environment map.
In detail, after mapping the optimized location information to the initial environment map to obtain a standard environment map, the method further includes:
and acquiring real-time laser data received by the sensor, and planning a path of a target object by using the real-time laser data and the standard environment map.
A second aspect of an embodiment of the present application provides a high-reflectivity object positioning device, including:
the initial map construction module is used for acquiring environment data and constructing an initial environment map based on the environment data;
the position estimation module is used for acquiring sensor data, creating laser point clouds of the high-reflection object according to the laser intensity of the sensor data, and carrying out de-distortion processing and position estimation processing on the laser point clouds to obtain position estimation information;
and the high-reflection object positioning module is used for carrying out position optimization on the position estimation information, and mapping the optimized position information to the initial environment map to obtain a standard environment map.
A third aspect of the embodiments of the present application provides a terminal device, the terminal device comprising a memory, a processor, the memory having stored thereon a computer program executable on the processor, the processor implementing the steps of the high-reflectivity object positioning method according to any one of the first aspects when the computer program is executed.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the high-reflectivity object positioning method according to any one of the above first aspects.
A fifth aspect of the embodiments of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to perform the high-reflectivity object positioning method according to any one of the above first aspects.
Compared with the prior art, the embodiment of the application has the beneficial effects that:
according to the invention, an initial environment map is constructed through environment data, and based on the reflection characteristics of the high-reflection object, the laser point cloud of the high-reflection object can be accurately created according to the laser intensity of the sensor data, the position estimation information containing the high-reflection object information can be obtained through de-distortion processing and position estimation processing of the laser point cloud, the position of the position estimation information is optimized, and the optimized position information is mapped to the initial environment map, so that the accurate positioning of the high-reflection object can be realized. Therefore, the high-reflection object positioning method, the high-reflection object positioning device and the terminal equipment can solve the problem of inaccurate positioning of the high-reflection object.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings may be obtained from these drawings by a person of ordinary skill in the art without inventive effort.
Fig. 1 is a schematic implementation flow chart of a high-reflectivity object positioning method provided in an embodiment of the present application;
fig. 2 is a schematic implementation flow chart of a high-reflectivity object positioning method according to an embodiment of the present application;
fig. 3 is a schematic implementation flow chart of a high-reflectivity object positioning method according to an embodiment of the present application;
fig. 4 is a schematic implementation flow chart of a high-reflectivity object positioning method according to an embodiment of the present application;
fig. 5 is a schematic implementation flow chart of a high-reflectivity object positioning method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a high-reflectivity object positioning device according to an embodiment of the present application;
fig. 7 is a schematic diagram of a terminal device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to illustrate the technical solutions described in the present application, the following description is made by specific examples.
Fig. 1 shows a flowchart of an implementation of a high-reflectivity object positioning method according to an embodiment of the present application, which is described in detail below:
s1, acquiring environment data, and constructing an initial environment map based on the environment data.
In the embodiment of the invention, the environment data refer to various types of raw data in the actual environment, including laser scanning data, video image data and the like. For example, image data in the actual environment may be acquired through a camera arranged in a sweeping robot.
In detail, referring to fig. 2, the acquiring the environment data, constructing an initial environment map based on the environment data includes:
S10, receiving environment data fed back by a preset sensor, and detecting position coordinates of obstacles in the environment data;
S11, constructing a grid map according to the position coordinates of the obstacle, and taking the grid map as an initial environment map.
In an alternative embodiment of the present invention, the preset sensor may be a laser sensor. Taking a sweeping robot as an example, the actual position of each obstacle detected by the laser data is calculated, the position coordinates (x0, y0) of the obstacle on the grid map are calculated from the actual position of the obstacle, the position (x, y) of the sweeping robot in the grid map is calculated, the set of non-obstacle grid points between the two coordinates is calculated by using the Bresenham algorithm, and the obstacle and non-obstacle grid point sets are summarized to obtain the initial environment map.
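As an illustration only, this grid-update step can be sketched in Python as follows. The grid resolution, the occupancy values (-1 unknown, 0 free, 100 occupied) and the coordinate handling are assumptions made for the sketch and are not specified by the embodiment.

```python
import numpy as np

def bresenham(x0, y0, x1, y1):
    """Integer grid cells on the line from (x0, y0) to (x1, y1), endpoint excluded."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx - dy
    x, y = x0, y0
    while (x, y) != (x1, y1):
        cells.append((x, y))
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return cells

def update_grid(grid, robot_xy, obstacles_xy, resolution=0.05):
    """Mark the cells between the robot and each detected obstacle as free (non-obstacle)
    and the obstacle endpoint cell as occupied."""
    rx, ry = (int(round(c / resolution)) for c in robot_xy)
    for ox_m, oy_m in obstacles_xy:
        ox, oy = int(round(ox_m / resolution)), int(round(oy_m / resolution))
        for cx, cy in bresenham(rx, ry, ox, oy):   # non-obstacle grid point set along the beam
            grid[cy, cx] = 0                       # free cell
        grid[oy, ox] = 100                         # occupied cell (obstacle)
    return grid

# Example: a 100 x 100 cell map, initially unknown (-1), robot at (1.0 m, 1.0 m).
grid = np.full((100, 100), -1, dtype=np.int16)
grid = update_grid(grid, robot_xy=(1.0, 1.0), obstacles_xy=[(2.5, 1.0), (1.0, 3.0)])
```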
S2, acquiring sensor data, creating laser point clouds of the high-reflection object according to the laser intensity of the sensor data, and performing de-distortion processing and position estimation processing on the laser point clouds to obtain position estimation information.
In the embodiment of the invention, the sensor data may be the laser data returned after a laser beam irradiates an object. The laser point cloud of the high-reflection object refers to the set of scanning points of the high-reflection object; for example, the three-dimensional coordinates of the reflection points of a high-reflection object (such as glass) are obtained by scanning it with a laser radar, and each reflection point is distributed in three-dimensional space as a point according to its three-dimensional coordinates.
In detail, referring to fig. 3, the acquiring sensor data, creating a laser point cloud of a high-reflection object according to the laser intensity of the sensor data includes:
S200, acquiring a laser beam data set reflected by an object by using the sensor, and taking the laser beam data set as the sensor data;
S201, detecting the laser intensity of a laser beam in the sensor data;
and S202, marking the laser beam with the laser intensity larger than or equal to a preset intensity threshold value to obtain marking laser point information, and collecting all marking laser point information to obtain the laser point cloud.
In an alternative embodiment of the invention, taking a sweeping robot as an example, laser beams are emitted in different directions, the laser sensor receives the laser beams reflected from those directions, the laser intensity of the reflected beams is monitored, and a laser point cloud is created from the beams whose laser intensity reaches the preset intensity threshold. Because the laser intensity of light reflected by a high-reflection object differs obviously from that reflected by a common object, the high-reflection object can be accurately determined from the laser intensity, which improves the accuracy of positioning the high-reflection object.
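A minimal sketch of the intensity-based marking of steps S200 to S202 is given below; the beam layout (angle, range, intensity) and the threshold value are illustrative assumptions rather than values fixed by the embodiment.

```python
import math

def build_high_reflectivity_cloud(beams, intensity_threshold=2000.0):
    """Mark the beams whose returned intensity reaches the preset threshold and
    convert them into 2D points in the sensor frame (the marked laser point cloud)."""
    cloud = []
    for angle_rad, range_m, intensity in beams:
        if intensity >= intensity_threshold:   # reflections from high-reflection objects
            cloud.append((range_m * math.cos(angle_rad),
                          range_m * math.sin(angle_rad)))
    return cloud

# Example scan of three beams; only the middle beam is intense enough to be marked.
scan = [(0.00, 2.1, 350.0), (0.01, 1.8, 5200.0), (0.02, 2.2, 410.0)]
print(build_high_reflectivity_cloud(scan))   # one point at roughly (1.80, 0.02)
```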
In the embodiment of the invention, because the laser scan is taken while the robot is moving, the laser data at each angle are not obtained instantaneously, and the robot is at a different position each time a beam is emitted, which produces motion distortion.
In detail, referring to fig. 4, the performing the de-distortion process and the position estimation process on the laser point cloud to obtain position estimation information includes:
S210, acquiring a preset positioning mapping algorithm, and binding data in the laser point cloud to a front-end key frame node of the positioning mapping algorithm;
S211, performing de-distortion processing on the front-end key frame node by using a preset de-distortion algorithm to obtain a de-distorted data frame;
S212, performing inter-frame motion estimation and local map drawing on the de-distorted data frame by using the positioning mapping algorithm to obtain position estimation information comprising contour information and position information.
In an optional embodiment of the present invention, the preset positioning mapping algorithm is a two-dimensional laser SLAM algorithm. The core of the two-dimensional laser SLAM algorithm is that the mobile robot obtains real-time data of its surrounding environment through the laser sensor, obtains its odometry information through a wheel encoder and an IMU, estimates its most probable position from the map existing at the previous moment, and, after the position is estimated, updates its grid map data with the laser data of the current frame. The two-dimensional laser SLAM algorithm comprises a front-end algorithm and a back-end algorithm; the front-end algorithm uses the sensor information to perform inter-frame motion estimation and local landmark mapping to obtain the position estimation information (including contour information and position information) of the object (the high-reflection object).
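The embodiment does not name the specific de-distortion algorithm. One common approach, assumed here purely for illustration, is to interpolate the robot pose obtained from the wheel-encoder/IMU odometry over the scan duration and re-project each beam into the frame of the scan-start pose:

```python
import math

def undistort_scan(points, timestamps, pose_start, pose_end, t_start, t_end):
    """Re-express every beam in the frame of the scan-start pose, compensating the robot
    motion that happened while the scan was taken. Poses are (x, y, theta) in the odometry
    frame; linear interpolation between the start and end poses is assumed."""
    corrected = []
    for (px, py), t in zip(points, timestamps):
        a = (t - t_start) / (t_end - t_start)                   # interpolation factor in [0, 1]
        x = pose_start[0] + a * (pose_end[0] - pose_start[0])
        y = pose_start[1] + a * (pose_end[1] - pose_start[1])
        th = pose_start[2] + a * (pose_end[2] - pose_start[2])
        # point in the odometry frame at the instant the beam was measured
        wx = x + px * math.cos(th) - py * math.sin(th)
        wy = y + px * math.sin(th) + py * math.cos(th)
        # back into the scan-start frame so the whole data frame shares a single pose
        dx, dy = wx - pose_start[0], wy - pose_start[1]
        c, s = math.cos(-pose_start[2]), math.sin(-pose_start[2])
        corrected.append((c * dx - s * dy, s * dx + c * dy))
    return corrected

# Example: a scan taken over 0.1 s while the robot moves 0.05 m forward.
pts = [(1.0, 0.0), (0.0, 1.0)]
print(undistort_scan(pts, [0.0, 0.1], (0.0, 0.0, 0.0), (0.05, 0.0, 0.0), 0.0, 0.1))
```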
And S3, performing position optimization on the position estimation information, and mapping the optimized position information to the initial environment map to obtain a standard environment map.
In the embodiment of the invention, the position estimation information can be optimized by using the back-end algorithm of the two-dimensional laser SLAM algorithm. The back-end algorithm locally optimizes the result of the front-end algorithm (comprising contour information and position information), performs global optimization with the pose key frames obtained from the local optimization as nodes of a pose graph, and outputs the pose trajectory and the global map; based on loop closure detection, it judges whether the current position has been visited before by detecting the similarity between the current scene and historical scenes, thereby correcting deviations of the pose trajectory.
In detail, referring to fig. 5, the performing position optimization on the position estimation information includes:
S30, performing local optimization and global optimization on the position estimation information by using the positioning mapping algorithm to obtain an optimized key frame;
S31, traversing the optimized key frame, and determining the optimized key frame containing the high-reflection object point cloud information as optimized position information.
In the embodiment of the invention, the result of the front-end algorithm is attached to the mapping key frames that participate in the back-end optimization of the SLAM algorithm. After the mapping process is finished, all key frames have already been optimized at the back end to obtain optimized key frames, and at this point all optimized key frames are considered to be at their optimal positions. All optimized key frames are traversed, and the optimized key frames containing high-reflection object point cloud information are found out to serve as the optimized position information.
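Selecting the optimized position information thus reduces to a filter over the optimized key frames. The key-frame structure below (an optimized pose plus any attached high-reflection object points) is an assumed representation used only for this sketch:

```python
from dataclasses import dataclass, field

@dataclass
class KeyFrame:
    pose: tuple                                            # optimized (x, y, theta) from the back end
    high_refl_points: list = field(default_factory=list)   # high-reflection points in the key-frame frame

def select_optimized_positions(optimized_keyframes):
    """Traverse all optimized key frames and keep those containing
    high-reflection object point cloud information."""
    return [kf for kf in optimized_keyframes if kf.high_refl_points]

# Example: only the second key frame carries high-reflection point cloud information.
frames = [KeyFrame(pose=(0.0, 0.0, 0.0)),
          KeyFrame(pose=(1.2, 0.4, 0.1), high_refl_points=[(0.5, 0.0), (0.6, 0.0)])]
print(select_optimized_positions(frames))
```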
In an optional embodiment of the present invention, mapping the optimized location information to the initial environment map to obtain a standard environment map includes:
mapping the high-reflection object point cloud information into the initial environment map based on the position of the optimized key frame in the initial environment map;
and marking the high-reflection object in the initial environment map to obtain the standard environment map.
Further, the high-reflection object point cloud data contained in the optimized key frame is mapped into the established map based on the position information of the optimized key frame in the map, and the mapped high-reflection point cloud data is marked, so that a standard environment map containing the contours of high-reflection objects (such as glass) can be obtained.
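A sketch of this mapping-and-marking step, reusing the KeyFrame structure assumed above: each high-reflection point is transformed by its key frame's optimized pose and written into the grid with a dedicated marker value. The marker value (200) and the grid resolution are illustrative assumptions.

```python
import math
import numpy as np

HIGH_REFLECTIVITY_CELL = 200   # assumed marker value for glass/mirror cells

def mark_high_reflectivity(grid, optimized_keyframes, resolution=0.05):
    """Map the high-reflection point cloud of every optimized key frame into the initial
    environment map and mark those cells, yielding the standard environment map."""
    rows, cols = len(grid), len(grid[0])
    for kf in optimized_keyframes:
        x, y, th = kf.pose
        c, s = math.cos(th), math.sin(th)
        for px, py in kf.high_refl_points:
            wx = x + c * px - s * py       # key-frame point expressed in the map frame
            wy = y + s * px + c * py
            gx, gy = int(round(wx / resolution)), int(round(wy / resolution))
            if 0 <= gy < rows and 0 <= gx < cols:
                grid[gy][gx] = HIGH_REFLECTIVITY_CELL
    return grid

# Example: one optimized key frame marks the cell at (1.5 m, 1.0 m).
grid = np.full((100, 100), -1, dtype=np.int16)
kf = KeyFrame(pose=(1.0, 1.0, 0.0), high_refl_points=[(0.5, 0.0)])
mark_high_reflectivity(grid, [kf])
```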
In another optional embodiment of the present invention, after mapping the optimized location information to the initial environment map to obtain a standard environment map, the method further includes:
and acquiring real-time laser data received by the sensor, and planning a path of a target object by using the real-time laser data and the standard environment map.
In the embodiment of the invention, the target object may be a mobile robot or the like. During autonomous navigation, the mobile robot needs to register real-time laser data against the created standard environment map in order to determine its position in the environment. In this registration, the high-reflection object contour information does not participate, because real-time laser data affected by a high-reflection object is invalid, and registering such data against a contour that was never actually detected would cause the position of the mobile robot to deviate. The high-reflection object contour information in the standard environment map only participates in the path planning of the mobile robot, so that the planned path does not conflict with high-reflection object areas, which improves the safety of the mobile robot. The invention can use the BFS algorithm, the DFS algorithm, the Dijkstra algorithm, the A* (Astar) algorithm and the like to plan the path of the mobile robot.
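As one concrete illustration of this rule, a breadth-first search planner can treat both ordinary obstacle cells and the marked high-reflection cells as blocked, so the planned path never enters a high-reflection object area; the cell values follow the conventions assumed in the earlier sketches.

```python
from collections import deque

def bfs_path(grid, start, goal, blocked=(100, 200)):
    """4-connected breadth-first search on the standard environment map; cells whose value
    is in `blocked` (obstacles and marked high-reflection cells) are never entered."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct the path back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        cx, cy = cell
        for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
            if 0 <= nx < cols and 0 <= ny < rows and (nx, ny) not in parent \
                    and grid[ny][nx] not in blocked:
                parent[(nx, ny)] = cell
                queue.append((nx, ny))
    return None                               # no collision-free path exists

# Example: 100 is an ordinary obstacle cell, 200 a marked high-reflection cell.
demo = [[0, 0,   0,   0],
        [0, 100, 200, 0],
        [0, 0,   0,   0]]
print(bfs_path(demo, start=(0, 0), goal=(3, 2)))
```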
According to the invention, an initial environment map is constructed from environment data. Based on the reflection characteristics of high-reflection objects, the laser point cloud of a high-reflection object can be accurately created according to the laser intensity of the sensor data; position estimation information containing the high-reflection object information is obtained through de-distortion processing and position estimation processing of the laser point cloud; the position estimation information is then optimized, and the optimized position information is mapped to the initial environment map, so that accurate positioning of the high-reflection object is realized. Therefore, the high-reflection object positioning method provided by the invention can solve the problem of inaccurate positioning of high-reflection objects.
Corresponding to the method of the above embodiment, fig. 6 shows a block diagram of the high-reflection object positioning device provided in the embodiment of the present application, and for convenience of explanation, only the portion relevant to the embodiment of the present application is shown. The high-reflectivity object positioning device illustrated in fig. 6 may be the execution subject of the high-reflectivity object positioning method provided in the first embodiment.
Referring to fig. 6, the high-reflection object positioning apparatus includes:
the initial map construction module 61 is configured to acquire environment data, and construct an initial environment map based on the environment data;
the position estimation module 62 is configured to obtain sensor data, create a laser point cloud of the high-reflectivity object according to the laser intensity of the sensor data, and perform de-distortion processing and position estimation processing on the laser point cloud to obtain position estimation information;
the high-reflectivity object positioning module 63 is configured to perform position optimization on the position estimation information, and map the optimized position information to the initial environment map to obtain a standard environment map.
For the process by which each module in the high-reflection object positioning device provided in this embodiment of the present application implements its functions, reference may be made to the description of the first embodiment shown in fig. 1, which is not repeated here.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance. It will also be understood that, although the terms "first," "second," etc. may be used in this document to describe various elements in some embodiments of the present application, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first table may be named a second table, and similarly, a second table may be named a first table without departing from the scope of the various described embodiments. The first table and the second table are both tables, but they are not the same table.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The high-reflectivity object positioning method provided by the embodiments of the application can be applied to terminal devices such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks and personal digital assistants (PDA); the embodiments of the application do not limit the specific type of the terminal device.
For example, the terminal device may be a station (ST) in a WLAN, a cellular telephone, a cordless telephone, a Session Initiation Protocol (SIP) telephone, a wireless local loop (WLL) station, a personal digital assistant (PDA) device, a handheld device with wireless communication capability, a computing device or other processing device connected to a wireless modem, an in-vehicle device, an Internet-of-Vehicles terminal, a computer, a laptop computer, a handheld communication device, a handheld computing device, a satellite radio, a wireless modem card, a television set-top box (STB), customer premises equipment (CPE) and/or other devices for communicating over a wireless system, as well as a terminal in a next-generation communication system, such as a mobile terminal in a 5G network or a mobile terminal in a future evolved public land mobile network (PLMN), etc.
By way of example and not limitation, when the terminal device is a wearable device, the wearable device may also be a general term for devices designed by applying wearable technology to everyday wear, such as glasses, gloves, watches, clothing and shoes. A wearable device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories. A wearable device is not merely a hardware device; it can also realize powerful functions through software support, data interaction and cloud interaction. Broadly speaking, wearable intelligent devices include full-featured, large-sized devices that can realize complete or partial functions without relying on a smart phone, such as smart watches or smart glasses, as well as devices that focus only on a certain type of application function and need to be used together with other devices such as a smart phone, for example various smart bracelets and smart jewelry for physical sign monitoring.
Fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 7, the terminal device 7 of this embodiment includes: at least one processor 70 (only one shown in fig. 7), a memory 71, said memory 71 having stored therein a computer program 72 executable on said processor 70. The processor 70, when executing the computer program 72, performs the steps of the various high-reflectivity object positioning method embodiments described above, such as those shown in fig. 1. Alternatively, the processor 70, when executing the computer program 72, performs the functions of the modules/units of the apparatus embodiments described above, such as the functions of the modules 61 to 63 shown in fig. 6.
The terminal device 7 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. The terminal device may include, but is not limited to, the processor 70 and the memory 71. It will be appreciated by those skilled in the art that fig. 7 is merely an example of the terminal device 7 and does not constitute a limitation of the terminal device 7, which may include more or fewer components than illustrated, combine certain components, or use different components; for example, the terminal device may further include input and output devices, a network access device, a bus, etc.
The processor 70 may be a central processing unit (CPU), or may be another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 71 may, in some embodiments, be an internal storage unit of the terminal device 7, such as a hard disk or memory of the terminal device 7. The memory 71 may also be an external storage device of the terminal device 7, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the terminal device 7. Further, the memory 71 may include both an internal storage unit and an external storage device of the terminal device 7. The memory 71 is used for storing an operating system, application programs, a boot loader (BootLoader), data and other programs, such as the program code of the computer program. The memory 71 may also be used for temporarily storing data that has been transmitted or is to be transmitted.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The embodiment of the application also provides a terminal device, which comprises at least one memory, at least one processor and a computer program stored in the at least one memory and capable of running on the at least one processor, wherein the processor executes the computer program to enable the terminal device to realize the steps in any of the method embodiments.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps that may implement the various method embodiments described above.
The present embodiments provide a computer program product which, when run on a terminal device, causes the terminal device to perform steps that enable the respective method embodiments described above to be implemented.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods of the above embodiments of the present application may be implemented by a computer program instructing related hardware. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (6)

1. A method of locating a high-reflectivity object, comprising:
acquiring environment data, and constructing an initial environment map based on the environment data;
the obtaining the environment data, constructing an initial environment map based on the environment data, includes: receiving environment data fed back by a preset sensor, and detecting position coordinates of obstacles in the environment data; constructing a grid map according to the position coordinates of the obstacle, and taking the grid map as an initial environment map;
the sensor is a laser sensor; the sweeping robot calculates the actual position of each obstacle detected by the laser data, calculates the position coordinates of the obstacle on the grid map according to the actual position of the obstacle, calculates the position of the sweeping robot in the grid map, calculates a non-obstacle grid point set between the two coordinates by using a Bresenham algorithm, and summarizes the obstacle and non-obstacle grid point sets to obtain the initial environment map;
the environment data refer to various types of original data in an actual environment, including laser scanning data and video image data;
acquiring sensor data, creating a laser point cloud of a high-reflection object according to the laser intensity of the sensor data, and performing de-distortion processing and position estimation processing on the laser point cloud to obtain position estimation information;
the sensor data are returned laser data after the object is irradiated by the laser beam; the laser point cloud of the high-reflection object refers to a set of scanning points of the high-reflection object;
the performing de-distortion processing and position estimation processing on the laser point cloud to obtain position estimation information includes: acquiring a preset positioning mapping algorithm, and binding data in the laser point cloud to a front-end key frame node of the positioning mapping algorithm; performing de-distortion processing on the front-end key frame node by using a preset de-distortion algorithm to obtain a de-distorted data frame; and performing inter-frame motion estimation and local map drawing on the de-distorted data frame by using the positioning mapping algorithm to obtain position estimation information comprising contour information and position information;
because the laser scan is taken while the robot is moving, the laser data at each angle are not obtained instantaneously, and the robot is at a different position each time a beam is emitted, which produces motion distortion;
performing position optimization on the position estimation information, and mapping the optimized position information to the initial environment map to obtain a standard environment map;
the performing position optimization on the position estimation information includes: performing local optimization and global optimization on the position estimation information by using the positioning mapping algorithm to obtain an optimized key frame; traversing the optimized key frame, and determining the optimized key frame containing the high-reflection object point cloud information as optimized position information;
the preset positioning mapping algorithm is a two-dimensional laser SLAM algorithm; the two-dimensional laser SLAM algorithm comprises a front-end algorithm and a back-end algorithm, wherein the front-end algorithm uses the sensor information to perform inter-frame motion estimation and local landmark mapping to obtain the position estimation information of the high-reflection object; the position estimation information is optimized by using the back-end algorithm of the two-dimensional laser SLAM algorithm; the back-end algorithm locally optimizes the result of the front-end algorithm, performs global optimization with the pose key frames obtained from the local optimization as nodes of a pose graph, outputs a pose trajectory and a global map, and judges, based on loop closure detection, whether the current position has been visited before by detecting the similarity between the current scene and historical scenes, thereby correcting deviations of the pose trajectory;
the result of the front-end algorithm is attached to the mapping key frames that participate in the back-end optimization of the SLAM algorithm; after the mapping process is finished, all key frames have been optimized at the back end to obtain the optimized key frames, and at this point all the optimized key frames are considered to be at their optimal positions; all the optimized key frames are traversed, and the optimized key frames containing high-reflection object point cloud information are found out to serve as the optimized position information;
the mapping the optimized position information to the initial environment map to obtain a standard environment map comprises the following steps: mapping the high-reflection object point cloud information into the initial environment map based on the position of the optimized key frame in the initial environment map; and marking the high-reflection object in the initial environment map to obtain the standard environment map;
and mapping the high-reflection object point cloud data contained in the optimized key frame into the established map based on the position information of the optimized key frame in the map, and marking the mapped high-reflection point cloud data to obtain the standard environment map containing the contour of the high-reflection object.
2. The method for positioning a high-reflectivity object according to claim 1, wherein the acquiring sensor data creates a laser point cloud of the high-reflectivity object according to the laser intensity of the sensor data, comprising:
acquiring a laser beam data set reflected by an object by using the sensor, wherein the laser beam data set is used as the sensor data;
detecting the laser intensity of a laser beam in the sensor data;
and marking the laser beam with the laser intensity being greater than or equal to a preset intensity threshold value to obtain marking laser point information, and collecting all marking laser point information to obtain the laser point cloud.
3. The high-reflectivity object positioning method according to claim 1, wherein after mapping the optimized position information to the initial environment map to obtain a standard environment map, the method further comprises:
and acquiring real-time laser data received by the sensor, and planning a path of a target object by using the real-time laser data and the standard environment map.
4. A high-reflectivity object positioning device, comprising:
the initial map construction module is used for acquiring environment data and constructing an initial environment map based on the environment data;
the obtaining the environment data, constructing an initial environment map based on the environment data, includes: receiving environment data fed back by a preset sensor, and detecting position coordinates of obstacles in the environment data; constructing a grid map according to the position coordinates of the obstacle, and taking the grid map as an initial environment map;
the sensor is a laser sensor; the sweeping robot calculates the actual position of each obstacle detected by the laser data, calculates the position coordinates of the obstacle on the grid map according to the actual position of the obstacle, calculates the position of the sweeping robot in the grid map, calculates a non-obstacle grid point set between the two coordinates by using a Bresenham algorithm, and summarizes the obstacle and non-obstacle grid point sets to obtain the initial environment map;
the environment data refer to various types of original data in an actual environment, including laser scanning data and video image data;
the position estimation module is used for acquiring sensor data, creating laser point clouds of the high-reflection object according to the laser intensity of the sensor data, and carrying out de-distortion processing and position estimation processing on the laser point clouds to obtain position estimation information;
the sensor data are returned laser data after the object is irradiated by the laser beam; the laser point cloud of the high-reflection object refers to a set of scanning points of the high-reflection object;
the performing de-distortion processing and position estimation processing on the laser point cloud to obtain position estimation information includes: acquiring a preset positioning mapping algorithm, and binding data in the laser point cloud to a front-end key frame node of the positioning mapping algorithm; performing de-distortion processing on the front-end key frame node by using a preset de-distortion algorithm to obtain a de-distorted data frame; and performing inter-frame motion estimation and local map drawing on the de-distorted data frame by using the positioning mapping algorithm to obtain position estimation information comprising contour information and position information; because the laser scan is taken while the robot is moving, the laser data at each angle are not obtained instantaneously, and the robot is at a different position each time a beam is emitted, which produces motion distortion;
the high-reflection object positioning module is used for carrying out position optimization on the position estimation information, and mapping the optimized position information to the initial environment map to obtain a standard environment map;
the performing position optimization on the position estimation information includes: performing local optimization and global optimization on the position estimation information by using the positioning mapping algorithm to obtain an optimized key frame; traversing the optimized key frame, and determining the optimized key frame containing the high-reflection object point cloud information as optimized position information;
the preset positioning mapping algorithm is a two-dimensional laser SLAM algorithm; the two-dimensional laser SLAM algorithm comprises a front-end algorithm and a back-end algorithm, wherein the front-end algorithm uses the sensor information to perform inter-frame motion estimation and local landmark mapping to obtain the position estimation information of the high-reflection object; the position estimation information is optimized by using the back-end algorithm of the two-dimensional laser SLAM algorithm; the back-end algorithm locally optimizes the result of the front-end algorithm, performs global optimization with the pose key frames obtained from the local optimization as nodes of a pose graph, outputs a pose trajectory and a global map, and judges, based on loop closure detection, whether the current position has been visited before by detecting the similarity between the current scene and historical scenes, thereby correcting deviations of the pose trajectory;
the result of the front-end algorithm is attached to the mapping key frames that participate in the back-end optimization of the SLAM algorithm; after the mapping process is finished, all key frames have been optimized at the back end to obtain the optimized key frames, and at this point all the optimized key frames are considered to be at their optimal positions; all the optimized key frames are traversed, and the optimized key frames containing high-reflection object point cloud information are found out to serve as the optimized position information;
the mapping the optimized position information to the initial environment map to obtain a standard environment map comprises the following steps: mapping the high-reflection object point cloud information into the initial environment map based on the position of the optimized key frame in the initial environment map; and marking the high-reflection object in the initial environment map to obtain the standard environment map;
and mapping the high-reflection object point cloud data contained in the optimized key frame into the established map based on the position information of the optimized key frame in the map, and marking the mapped high-reflection point cloud data to obtain the standard environment map containing the contour of the high-reflection object.
5. A terminal device, characterized in that it comprises a memory, a processor, on which a computer program is stored which is executable on the processor, the processor executing the computer program to carry out the steps of the method according to any one of claims 1 to 3.
6. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 3.
CN202210052269.9A 2022-01-18 2022-01-18 High-reflection object positioning method and device and terminal equipment Active CN114577198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210052269.9A CN114577198B (en) 2022-01-18 2022-01-18 High-reflection object positioning method and device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210052269.9A CN114577198B (en) 2022-01-18 2022-01-18 High-reflection object positioning method and device and terminal equipment

Publications (2)

Publication Number Publication Date
CN114577198A CN114577198A (en) 2022-06-03
CN114577198B true CN114577198B (en) 2024-02-02

Family

ID=81770838

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210052269.9A Active CN114577198B (en) 2022-01-18 2022-01-18 High-reflection object positioning method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN114577198B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109407073A (en) * 2017-08-15 2019-03-01 百度在线网络技术(北京)有限公司 Reflected value map constructing method and device
CN113432600A (en) * 2021-06-09 2021-09-24 北京科技大学 Robot instant positioning and map construction method and system based on multiple information sources

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108665541B (en) * 2018-04-09 2019-06-07 北京三快在线科技有限公司 A kind of ground drawing generating method and device and robot based on laser sensor
US11041957B2 (en) * 2018-06-25 2021-06-22 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods for mitigating effects of high-reflectivity objects in LiDAR data
CN111708043B (en) * 2020-05-13 2023-09-26 阿波罗智能技术(北京)有限公司 Positioning method and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109407073A (en) * 2017-08-15 2019-03-01 百度在线网络技术(北京)有限公司 Reflected value map constructing method and device
CN113432600A (en) * 2021-06-09 2021-09-24 北京科技大学 Robot instant positioning and map construction method and system based on multiple information sources

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
2D Laser SLAM and High-Precision Positioning System Fusing Reflective Pillars; Zhou Kaiyue et al.; Modern Computer (Issue 04); pp. 3-7 *

Also Published As

Publication number Publication date
CN114577198A (en) 2022-06-03

Similar Documents

Publication Publication Date Title
US11638997B2 (en) Positioning and navigation method for a robot, and computing device thereof
EP3779360B1 (en) Indoor positioning method, indoor positioning system, indoor positioning device, and computer readable medium
US9014970B2 (en) Information processing device, map update method, program, and information processing system
CN111028358B (en) Indoor environment augmented reality display method and device and terminal equipment
CN112528831A (en) Multi-target attitude estimation method, multi-target attitude estimation device and terminal equipment
JP2002048513A (en) Position detector, method of detecting position, and program for detecting position
CN112284400B (en) Vehicle positioning method and device, electronic equipment and computer readable storage medium
CN104378735A (en) Indoor positioning method, client side and server
Caldini et al. Smartphone-based obstacle detection for the visually impaired
Houben et al. Park marking-based vehicle self-localization with a fisheye topview system
CN111353453A (en) Obstacle detection method and apparatus for vehicle
WO2015168460A1 (en) Dead reckoning system based on locally measured movement
CN113984068A (en) Positioning method, positioning apparatus, and computer-readable storage medium
CN117908536A (en) Robot obstacle avoidance method, terminal equipment and computer readable storage medium
He et al. WiFi based indoor localization with adaptive motion model using smartphone motion sensors
CN114577198B (en) High-reflection object positioning method and device and terminal equipment
CN114384486A (en) Data processing method and device
CN113297259B (en) Robot and environment map construction method and device thereof
CN112880675B (en) Pose smoothing method and device for visual positioning, terminal and mobile robot
JP7125927B2 (en) Information terminal device, method and program
CN109711363B (en) Vehicle positioning method, device, equipment and storage medium
CN114187509A (en) Object positioning method and device, electronic equipment and storage medium
Cheng et al. Two-Phase Positioning System Based on the Fusion of Wi-Fi Signal Strength and Pose Estimation
CN112364115A (en) Target acquisition method, device, terminal equipment and storage medium
CN111383337A (en) Method and device for identifying objects

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A high-reflection object positioning method, device, and terminal equipment

Granted publication date: 20240202

Pledgee: Bank of China Shenyang Heping Branch

Pledgor: Liaoning Huadun Safety Technology Co.,Ltd.

Registration number: Y2024210000109

PE01 Entry into force of the registration of the contract for pledge of patent right