CN112631303A - Robot positioning method and device and electronic equipment - Google Patents

Robot positioning method and device and electronic equipment

Info

Publication number
CN112631303A
CN112631303A (application CN202011570106.7A)
Authority
CN
China
Prior art keywords
robot
data
pose information
image data
laser
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011570106.7A
Other languages
Chinese (zh)
Other versions
CN112631303B (en)
Inventor
龚汉越
支涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yunji Technology Co Ltd
Original Assignee
Beijing Yunji Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yunji Technology Co Ltd
Priority claimed from CN202011570106.7A
Publication of CN112631303A
Application granted
Publication of CN112631303B
Legal status: Active

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0278 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using satellite positioning signals, e.g. GPS

Abstract

The invention provides a robot positioning method, a robot positioning device and an electronic device. Initial positioning is performed using image data collected by the robot to obtain its approximate position; the laser data collected by the robot is then used to correct the initial pose information, yielding an accurate position for the robot. With the position accurately determined, navigation and path planning can be performed on the basis of the accurate positioning information, so that the service robot reaches its service location precisely, improving both the quality of the service it provides and the user experience.

Description

Robot positioning method and device and electronic equipment
Technical Field
The invention relates to the field of robots, in particular to a robot positioning method and device and electronic equipment.
Background
With the development of science and technology and the improvement of people's living standards, service robots are gradually entering people's lives and providing various humanized services. Compared with other types of robots, the working environment of a service robot is relatively complex, so positioning in a complex environment is a prerequisite for a service robot to provide high-quality service. Positioning means determining the robot's coordinates in the world coordinate system of its motion environment.
If the robot is positioned inaccurately, navigation and path planning based on that positioning information are also inaccurate, so the service robot cannot reliably reach its service location and the user experience suffers.
Disclosure of Invention
In view of this, the present invention provides a robot positioning method, a robot positioning device and an electronic device, so as to solve the problem that, when the robot is positioned inaccurately, navigation and path planning based on the positioning information are also inaccurate, the service robot cannot reliably reach its service location, and user experience suffers.
In order to solve the technical problems, the invention adopts the following technical scheme:
a robot positioning method, comprising:
acquiring image data acquired by image acquisition equipment of the robot;
calling a pre-trained data processing model to process the image data to obtain initial pose information of the robot; the data processing model is obtained based on training data; the training data comprises image data samples and pose information corresponding to the image data samples;
acquiring laser data acquired by laser acquisition equipment of the robot;
and correcting the initial pose information based on the laser data to obtain the current pose information of the robot.
Optionally, correcting the initial pose information based on the laser data to obtain current pose information of the robot includes:
determining a target position corresponding to the initial pose information;
acquiring standard laser data and pose information of a plurality of positions in the preset range of the target position;
and determining the current pose information of the robot based on the laser data, the standard laser data of the plurality of positions and the pose information.
Optionally, determining the current pose information of the robot based on the laser data, the standard laser data of the plurality of positions, and the pose information comprises:
comparing the laser data with each standard laser data to obtain comparison scoring values;
screening out the maximum comparison score value from all the obtained comparison score values, and determining target standard laser data corresponding to the maximum comparison score value;
and determining the pose information corresponding to the target standard laser data as the current pose information of the robot.
Optionally, acquiring image data acquired by an image acquisition device of the robot includes:
acquiring RGB data acquired by first image acquisition equipment of the robot;
acquiring depth image data acquired by second image acquisition equipment of the robot;
converting the RGB data and the depth image data into an image data matrix.
Optionally, the training procedure of the data processing model includes:
acquiring a robot walking track of which the walking track evaluation result meets a preset condition;
extracting image data and pose information at a plurality of different moments from the walking track of the robot;
determining image data at a plurality of different moments as image data samples, and determining pose information at a plurality of different moments as pose information corresponding to the image data samples;
and training a data processing model by using the image data sample and the pose information corresponding to the image data sample, and stopping training until the loss function value of the data processing model is smaller than a preset threshold value.
A robotic positioning device, comprising:
the first data acquisition module is used for acquiring image data acquired by image acquisition equipment of the robot;
the pose determining module is used for calling a pre-trained data processing model to process the image data to obtain initial pose information of the robot; the data processing model is obtained based on training data; the training data comprises image data samples and pose information corresponding to the image data samples;
the second data acquisition module is used for acquiring laser data acquired by laser acquisition equipment of the robot;
and the pose adjusting module is used for correcting the initial pose information based on the laser data to obtain the current pose information of the robot.
Optionally, the pose adjustment module includes:
the position determining submodule is used for determining a target position corresponding to the initial pose information;
the data acquisition submodule is used for acquiring standard laser data and pose information of a plurality of positions in the preset range of the target position;
and the pose determination submodule is used for determining the current pose information of the robot based on the laser data, the standard laser data of the positions and the pose information.
Optionally, the pose determination sub-module comprises:
the comparison unit is used for comparing the laser data with each standard laser data to obtain comparison scoring values;
the screening unit is used for screening out the maximum comparison score value from all the obtained comparison score values and determining the target standard laser data corresponding to the maximum comparison score value;
and the determining unit is used for determining the pose information corresponding to the target standard laser data as the current pose information of the robot.
Optionally, the first data obtaining module is specifically configured to:
the method comprises the steps of obtaining RGB data collected by first image collecting equipment of the robot, obtaining depth image data collected by second image collecting equipment of the robot, and converting the RGB data and the depth image data into an image data matrix.
An electronic device, comprising: a memory and a processor;
wherein the memory is used for storing programs;
the processor calls a program and is used to:
acquiring image data acquired by image acquisition equipment of the robot;
calling a pre-trained data processing model to process the image data to obtain initial pose information of the robot; the data processing model is obtained based on training data; the training data comprises image data samples and pose information corresponding to the image data samples;
acquiring laser data acquired by laser acquisition equipment of the robot;
and correcting the initial pose information based on the laser data to obtain the current pose information of the robot.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides a robot positioning method, a robot positioning device and electronic equipment. According to the invention, the initial positioning is carried out through the image data collected by the robot to obtain the approximate position of the robot, then the laser data collected by the robot is used for correcting the initial pose information to obtain the accurate positioning of the robot, the position of the robot is accurately determined, and further the navigation and path planning can be carried out based on the accurate positioning information, so that the service robot can accurately reach the service place, and the quality and the user experience of the service provided by the service robot are improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a flowchart of a method of positioning a robot according to an embodiment of the present invention;
FIG. 2 is a flowchart of another method for positioning a robot according to an embodiment of the present invention;
fig. 3 is a scene diagram of image data according to an embodiment of the present invention;
fig. 4 is a scene schematic diagram of a CNN model according to an embodiment of the present invention;
FIG. 5 is a scene diagram of an LSTM model according to an embodiment of the present invention;
fig. 6 is a schematic view of a robot positioning scenario according to an embodiment of the present invention;
FIG. 7 is a schematic view of another robot positioning scenario provided by an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a robot positioning device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
To achieve positioning of the robot, the inventors found that two methods can be employed. The first is to use an image acquisition device arranged on the robot to collect image data, and to process that image data with visual simultaneous localization and mapping (VSLAM) to obtain the robot's positioning information. However, this method is limited by factors such as sensor pose calibration and differences in illumination between map acquisition and operation time, and so has certain limitations in its use environment. For example, the standard map used by VSLAM may record a chair at a location from which the chair is later removed, changing the environment. Or the data used in the standard map may have been collected on a cloudy day, while in actual operation images are collected under good illumination, such as on a sunny day; this too can cause inaccurate positioning.
Another method is to collect laser data with a laser radar arranged on the robot and process it with SLAM (simultaneous localization and mapping). The laser radar is an active optical sensor: it emits a laser beam toward a target while moving along a specific measurement path, and the receiver in the lidar sensor detects and analyzes the laser light reflected from the target. The receiver records the precise time from the laser pulse leaving the system to returning to it, from which the range between the sensor and the target is calculated. These range measurements, together with position information (GPS and INS), are converted into actual three-dimensional points of the target in object space. In other words, the laser data comprises distance values between the robot and the edges of its surrounding environment; but in the standard map used by SLAM there may be several positions that correspond to the same laser data, so the robot's current position cannot be determined unambiguously.
To solve these technical problems, the inventors found that image data is better suited to coarse positioning and laser data to accurate positioning. If coarse positioning is first performed with the image data to obtain a preliminary positioning result, and accurate positioning is then performed with the laser data near that preliminary result, the inaccuracy of image-based positioning caused by environmental factors can be avoided, and so can the inaccuracy of laser-only positioning caused by its single positioning condition. This improves both the quality of service provided by the service robot and the user experience.
More specifically, according to the invention, the robot's initial pose information is determined from image data acquired by an image acquisition device; laser data acquired by a laser acquisition device is then obtained, and the initial pose information is corrected based on the laser data to obtain the robot's current pose information. Initial positioning via the collected image data gives the approximate position of the robot, and the collected laser data then corrects the initial pose information to give an accurate position. With the position accurately determined, navigation and path planning can be performed on the basis of the accurate positioning information, so that the service robot reaches its service location precisely, improving both service quality and user experience.
On the basis of the above, an embodiment of the present invention provides a robot positioning method, and with reference to fig. 1, the method may include:
and S11, acquiring image data acquired by the image acquisition equipment of the robot.
In practical applications, the image acquisition device is arranged on the robot and may include a first image acquisition device and a second image acquisition device. The first image acquisition device may be an ordinary camera that collects ordinary image data, i.e. RGB data, of the robot's surroundings. The second image acquisition device may be a depth camera that collects depth image data (a depth point cloud) of the surroundings.
After the RGB data collected by the first image collecting device and the depth image data collected by the second image collecting device are obtained, data conversion is carried out on the RGB data and the depth image data, and the RGB data and the depth image data are converted into an image data matrix, wherein the image data matrix comprises the RGB data and the depth image data. The specific conversion process may be:
Pixel-aligned RGB and depth image data are obtained, and the depth data is appended to the RGB data (three channels of image data) to expand it into four-dimensional RGBD data (RGB plus depth), yielding the image data matrix.
In summary, step S11 may include:
the method comprises the steps of obtaining RGB data collected by first image collecting equipment of the robot, obtaining depth image data collected by second image collecting equipment of the robot, and converting the RGB data and the depth image data into an image data matrix.
And S12, calling a pre-trained data processing model to process the image data to obtain the initial pose information of the robot.
The data processing model is trained based on training data. The training data comprises image data samples and pose information corresponding to the image data samples.
In practical application, the historical track data of the robot can be used for training the model to obtain a data processing model.
To improve positioning accuracy, the historical tracks used should be accurate ones. Whether a historical track is accurate can be judged by monitoring the image data and pose information in real time while the robot walks and manually confirming each item of image data and pose information, ensuring the track is correct. Alternatively, after the robot completes a service, such as a meal delivery, the track can be treated as correct if the user gives a five-star review or confirms the walking track. Image data and pose information can then be extracted from such historical tracks.
Specifically, the training process of the data processing model may include:
and S21, acquiring the robot walking track of which the walking track evaluation result meets the preset condition.
The walking-track evaluation result may be a five-star favorable review, a confirmation of the walking track, or manual confirmation of each item of image data and pose information during walking. Any of these methods yields an accurate walking track.
And S22, extracting image data and pose information at a plurality of different moments from the robot walking track.
Specifically, the walking track of the robot includes image data and pose information at each time. In this embodiment, image data and pose information at each time are acquired.
If the robot is operating in the environment at time t_x, the pose information of the robot at this moment is p_tx = (x, y, θ), including position coordinates and an orientation angle, with particular reference to fig. 3.
The image data at time t_x, namely the RGB data collected by the first image acquisition device and the depth image data collected by the second image acquisition device, is converted into an image data matrix.
And S23, determining the image data at a plurality of different moments as image data samples, and determining the position and orientation information at a plurality of different moments as the position and orientation information corresponding to the image data samples.
In practical applications, the image data at each single moment can be taken as one image data sample, with the pose information handled in the same way. Alternatively, the image data over a period of time, such as 10 s, may form one image data sample, again with the pose information handled likewise. The image data samples and their corresponding pose information constitute the training data.
When an image data sample is data from a single moment, the image data matrix acquired in this embodiment is also data from a single moment. When an image data sample covers a period of time, the acquired image data matrix also covers a period of time, such as 10 s of data each.
And S24, training a data processing model by using the image data sample and the pose information corresponding to the image data sample, and stopping training until the loss function value of the data processing model is smaller than a preset threshold value.
In this embodiment, the data processing model may be a mainstream neural network such as a CNN (Convolutional Neural Network), an LSTM (Long Short-Term Memory network), VGG, or ResNet50 (a residual neural network).
Specifically, with reference to fig. 4 and 5, the data processing model of fig. 4 is a CNN model and that of fig. 5 is an LSTM model; whichever model is used, its input and output are the same: the input is an image data matrix and the output is pose information.
And after the data processing model is selected, training the model by using the training data, and stopping training if the loss function value of the data processing model is smaller than a preset threshold value in the training process to obtain the data processing model meeting the requirements.
After a data processing model meeting the requirements is obtained, the image data matrix of the robot at the current moment is input into the data processing model to obtain the robot's initial pose information.
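As a hedged sketch of such a pose-regression model (the embodiment names CNN, LSTM, VGG and ResNet50 as candidates but fixes no architecture, so the layer sizes and names below are illustrative assumptions), a minimal CNN variant in PyTorch might look like:

    import torch
    import torch.nn as nn

    class PoseNet(nn.Module):
        """Illustrative CNN mapping a 4-channel RGBD matrix to a pose (x, y, theta)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(4, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(64, 3)  # outputs (x, y, theta)

        def forward(self, rgbd: torch.Tensor) -> torch.Tensor:
            return self.head(self.features(rgbd).flatten(1))

    def train_until_threshold(model, loader, threshold=1e-3, lr=1e-4):
        """Train on (rgbd, pose) batches and stop once the loss function value
        is smaller than the preset threshold, as in step S24."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        while True:
            for rgbd, pose in loader:
                opt.zero_grad()
                loss = loss_fn(model(rgbd), pose)
                loss.backward()
                opt.step()
                if loss.item() < threshold:
                    return model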
It should be noted that positioning from image data carries a certain error, roughly within a range of 1.5 m, so only a fuzzy current position of the robot is obtained; to obtain the robot's accurate pose, further selection and filtering are needed. At this point, the laser data can be used for accurate positioning.
And S13, acquiring laser data acquired by the laser acquisition equipment of the robot.
In practical applications, the robot is provided with a laser acquisition device, which may be a laser radar (lidar). The lidar can collect laser data around the robot, specifically distance values between the robot and the edges of its surroundings.
The laser data q_tx is an array of laser range readings (a point cloud) used for matching (referring to fig. 6, the darker colored points are locations determined by the laser data).
And S14, correcting the initial pose information based on the laser data to obtain the current pose information of the robot.
In practical applications, step S14 may include:
determining a target position corresponding to the initial pose information, acquiring standard laser data and pose information of a plurality of positions in a preset range of the target position, and determining the current pose information of the robot based on the laser data, the standard laser data of the plurality of positions and the pose information.
Specifically, a standard map is stored in the robot. The standard map carries laser data and pose information for different positions. For example, for a museum, the standard map includes laser data for each hall, such as the dinosaur hall and the astronomy hall, and for each position within each hall, such as the southeast corner and the northwest corner.
The image data matrix gives a rough target position of the robot, and a plurality of positions within a preset range of that target position are then determined. If the target position is the northwest corner of the astronomy hall, standard laser data and pose information can be acquired for the positions around it, such as the different positions within 2 meters of the initial pose information.
And then determining the current pose information of the robot based on the laser data, the standard laser data of the plurality of positions and the pose information.
More specifically, the laser data is compared with each standard laser data to obtain comparison score values, the maximum comparison score value is screened out from all the obtained comparison score values, target standard laser data corresponding to the maximum comparison score value is determined, and pose information corresponding to the target standard laser data is determined as current pose information of the robot.
In practical applications, the laser data is compared with each piece of standard laser data; each comparison result is a score value, called a comparison score value in this embodiment. The maximum comparison score value is then screened out, the target standard laser data corresponding to it is determined, the pose information corresponding to that target standard laser data is obtained, and that pose information is determined as the current pose information of the robot.
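The embodiment does not define the comparison scoring function itself; as one illustrative assumption (a negative mean absolute range difference, with hypothetical helper names), the screening for the maximum comparison score value could be sketched as:

    import numpy as np

    def match_score(scan: np.ndarray, standard: np.ndarray) -> float:
        """Hypothetical comparison score for two equal-length range arrays:
        higher when the live scan agrees with the standard laser data."""
        return -float(np.mean(np.abs(scan - standard)))

    def correct_pose(scan: np.ndarray, candidates: list) -> tuple:
        """candidates holds (standard_scan, pose) pairs for the positions within
        the preset range of the target position; return the pose whose standard
        laser data yields the maximum comparison score value."""
        best_scan, best_pose = max(candidates, key=lambda c: match_score(scan, c[0]))
        return best_pose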
Refer to fig. 6 and fig. 7. In fig. 6, the light dots are positions determined from the image data and the dark dots are positions determined from the laser data; adjusting a light dot to its corresponding dark dot corrects the robot's current position.
In this embodiment, the robot's initial pose information is first determined from the image data acquired by the image acquisition device; laser data acquired by the laser acquisition device is then obtained, and the initial pose information is corrected based on the laser data to obtain the robot's current pose information. Initial positioning via the collected image data gives the approximate position of the robot, and the collected laser data then corrects the initial pose information to give an accurate position. With the position accurately determined, navigation and path planning can be performed on the basis of the accurate positioning information, so that the service robot reaches its service location precisely, improving both service quality and user experience.
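Taken together, the coarse-to-fine flow of this embodiment might be sketched as follows, reusing the illustrative helpers from the sketches above; robot and standard_map are hypothetical interfaces, not defined by the patent:

    import torch  # to_rgbd_matrix, PoseNet and correct_pose come from the sketches above

    def localize(robot, pose_net, standard_map):
        """Coarse image-based positioning followed by laser-based correction."""
        rgbd = to_rgbd_matrix(robot.capture_rgb(), robot.capture_depth())
        x = torch.from_numpy(rgbd).float().permute(2, 0, 1).unsqueeze(0)  # (1, 4, H, W)
        initial_pose = pose_net(x)[0].tolist()  # approximate pose, error roughly 1.5 m
        # Candidate positions within the preset range (e.g. 2 m) of the target position.
        candidates = standard_map.positions_near(initial_pose, radius_m=2.0)
        return correct_pose(robot.lidar_scan(), candidates)  # current pose information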
In addition, in the invention, the robot can use historical data as a basis for positioning, which alleviates the problem of positioning errors growing ever larger as the environment changes.
Alternatively, on the basis of the embodiment of the robot positioning method, another embodiment of the present invention provides a robot positioning device, and with reference to fig. 8, the robot positioning device may include:
the first data acquisition module 11 is used for acquiring image data acquired by image acquisition equipment of the robot;
the pose determining module 12 is configured to call a pre-trained data processing model to process the image data, so as to obtain initial pose information of the robot; the data processing model is obtained based on training data; the training data comprises image data samples and pose information corresponding to the image data samples;
the second data acquisition module 13 is used for acquiring laser data acquired by laser acquisition equipment of the robot;
and a pose adjusting module 14, configured to correct the initial pose information based on the laser data, so as to obtain current pose information of the robot.
Further, the pose adjustment module includes:
the position determining submodule is used for determining a target position corresponding to the initial pose information;
the data acquisition submodule is used for acquiring standard laser data and pose information of a plurality of positions in the preset range of the target position;
and the pose determination submodule is used for determining the current pose information of the robot based on the laser data, the standard laser data of the positions and the pose information.
Further, the pose determination sub-module includes:
the comparison unit is used for comparing the laser data with each standard laser data to obtain comparison scoring values;
the screening unit is used for screening out the maximum comparison score value from all the obtained comparison score values and determining the target standard laser data corresponding to the maximum comparison score value;
and the determining unit is used for determining the pose information corresponding to the target standard laser data as the current pose information of the robot.
Further, the first data obtaining module is specifically configured to:
the method comprises the steps of obtaining RGB data collected by first image collecting equipment of the robot, obtaining depth image data collected by second image collecting equipment of the robot, and converting the RGB data and the depth image data into an image data matrix.
Further, the system also comprises a model training module; the model training module comprises:
the track acquisition submodule is used for acquiring the walking track of the robot, the walking track evaluation result of which meets the preset condition;
the data extraction submodule is used for extracting image data and pose information at a plurality of different moments from the walking track of the robot;
the sample determining submodule is used for determining the image data at a plurality of different moments as image data samples and determining the pose information at a plurality of different moments as the pose information corresponding to the image data samples;
and the training submodule is used for training a data processing model by using the image data sample and the pose information corresponding to the image data sample, and stopping training until the loss function value of the data processing model is smaller than a preset threshold value.
In this embodiment, the robot's initial pose information is first determined from the image data acquired by the image acquisition device; laser data acquired by the laser acquisition device is then obtained, and the initial pose information is corrected based on the laser data to obtain the robot's current pose information. Initial positioning via the collected image data gives the approximate position of the robot, and the collected laser data then corrects the initial pose information to give an accurate position. With the position accurately determined, navigation and path planning can be performed on the basis of the accurate positioning information, so that the service robot reaches its service location precisely, improving both service quality and user experience.
It should be noted that, for the working processes of each module, sub-module, and unit in this embodiment, please refer to the corresponding description in the above embodiments, which is not described herein again.
Optionally, on the basis of the embodiments of the robot positioning method and apparatus, another embodiment of the present invention provides an electronic device, including: a memory and a processor;
wherein the memory is used for storing programs;
the processor calls a program and is used to:
acquiring image data acquired by image acquisition equipment of the robot;
calling a pre-trained data processing model to process the image data to obtain initial pose information of the robot; the data processing model is obtained based on training data; the training data comprises image data samples and pose information corresponding to the image data samples;
acquiring laser data acquired by laser acquisition equipment of the robot;
and correcting the initial pose information based on the laser data to obtain the current pose information of the robot.
Further, based on the laser data, correcting the initial pose information to obtain the current pose information of the robot, including:
determining a target position corresponding to the initial pose information;
acquiring standard laser data and pose information of a plurality of positions in the preset range of the target position;
and determining the current pose information of the robot based on the laser data, the standard laser data of the plurality of positions and the pose information.
Further, determining current pose information of the robot based on the laser data, the standard laser data for the plurality of positions, and the pose information, includes:
comparing the laser data with each standard laser data to obtain comparison scoring values;
screening out the maximum comparison score value from all the obtained comparison score values, and determining target standard laser data corresponding to the maximum comparison score value;
and determining the pose information corresponding to the target standard laser data as the current pose information of the robot.
Further, acquiring image data acquired by an image acquisition device of the robot includes:
acquiring RGB data acquired by first image acquisition equipment of the robot;
acquiring depth image data acquired by second image acquisition equipment of the robot;
converting the RGB data and the depth image data into an image data matrix.
Further, the training procedure of the data processing model comprises:
acquiring a robot walking track of which the walking track evaluation result meets a preset condition;
extracting image data and pose information at a plurality of different moments from the walking track of the robot;
determining image data at a plurality of different moments as image data samples, and determining pose information at a plurality of different moments as pose information corresponding to the image data samples;
and training a data processing model by using the image data sample and the pose information corresponding to the image data sample, and stopping training until the loss function value of the data processing model is smaller than a preset threshold value.
In this embodiment, the robot's initial pose information is first determined from the image data acquired by the image acquisition device; laser data acquired by the laser acquisition device is then obtained, and the initial pose information is corrected based on the laser data to obtain the robot's current pose information. Initial positioning via the collected image data gives the approximate position of the robot, and the collected laser data then corrects the initial pose information to give an accurate position. With the position accurately determined, navigation and path planning can be performed on the basis of the accurate positioning information, so that the service robot reaches its service location precisely, improving both service quality and user experience.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A robot positioning method, comprising:
acquiring image data acquired by image acquisition equipment of the robot;
calling a pre-trained data processing model to process the image data to obtain initial pose information of the robot; the data processing model is obtained based on training data; the training data comprises image data samples and pose information corresponding to the image data samples;
acquiring laser data acquired by laser acquisition equipment of the robot;
and correcting the initial pose information based on the laser data to obtain the current pose information of the robot.
2. The robot positioning method according to claim 1, wherein the correcting the initial pose information based on the laser data to obtain the current pose information of the robot comprises:
determining a target position corresponding to the initial pose information;
acquiring standard laser data and pose information of a plurality of positions in the preset range of the target position;
and determining the current pose information of the robot based on the laser data, the standard laser data of the plurality of positions and the pose information.
3. The robot positioning method of claim 2, wherein determining the current pose information of the robot based on the laser data, the standard laser data for the plurality of locations, and the pose information comprises:
comparing the laser data with each standard laser data to obtain comparison scoring values;
screening out the maximum comparison score value from all the obtained comparison score values, and determining target standard laser data corresponding to the maximum comparison score value;
and determining the pose information corresponding to the target standard laser data as the current pose information of the robot.
4. The robot positioning method according to claim 1, wherein acquiring image data acquired by an image acquisition device of the robot comprises:
acquiring RGB data acquired by first image acquisition equipment of the robot;
acquiring depth image data acquired by second image acquisition equipment of the robot;
converting the RGB data and the depth image data into an image data matrix.
5. The robot positioning method according to claim 1, wherein the training procedure of the data processing model comprises:
acquiring a robot walking track of which the walking track evaluation result meets a preset condition;
extracting image data and pose information at a plurality of different moments from the walking track of the robot;
determining image data at a plurality of different moments as image data samples, and determining pose information at a plurality of different moments as pose information corresponding to the image data samples;
and training a data processing model by using the image data sample and the pose information corresponding to the image data sample, and stopping training until the loss function value of the data processing model is smaller than a preset threshold value.
6. A robot positioning device, comprising:
the first data acquisition module is used for acquiring image data acquired by image acquisition equipment of the robot;
the pose determining module is used for calling a pre-trained data processing model to process the image data to obtain initial pose information of the robot; the data processing model is obtained based on training data; the training data comprises image data samples and pose information corresponding to the image data samples;
the second data acquisition module is used for acquiring laser data acquired by laser acquisition equipment of the robot;
and the pose adjusting module is used for correcting the initial pose information based on the laser data to obtain the current pose information of the robot.
7. The robot positioning device according to claim 6, wherein the pose adjustment module includes:
the position determining submodule is used for determining a target position corresponding to the initial pose information;
the data acquisition submodule is used for acquiring standard laser data and pose information of a plurality of positions in the preset range of the target position;
and the pose determination submodule is used for determining the current pose information of the robot based on the laser data, the standard laser data of the positions and the pose information.
8. The robotic positioning device of claim 7, wherein the pose determination sub-module comprises:
the comparison unit is used for comparing the laser data with each standard laser data to obtain comparison scoring values;
the screening unit is used for screening out the maximum comparison score value from all the obtained comparison score values and determining the target standard laser data corresponding to the maximum comparison score value;
and the determining unit is used for determining the pose information corresponding to the target standard laser data as the current pose information of the robot.
9. The robot positioning device of claim 6, wherein the first data acquisition module is specifically configured to:
the method comprises the steps of obtaining RGB data collected by first image collecting equipment of the robot, obtaining depth image data collected by second image collecting equipment of the robot, and converting the RGB data and the depth image data into an image data matrix.
10. An electronic device, comprising: a memory and a processor;
wherein the memory is used for storing programs;
the processor calls a program and is used to:
acquiring image data acquired by image acquisition equipment of the robot;
calling a pre-trained data processing model to process the image data to obtain initial pose information of the robot; the data processing model is obtained based on training data; the training data comprises image data samples and pose information corresponding to the image data samples;
acquiring laser data acquired by laser acquisition equipment of the robot;
and correcting the initial pose information based on the laser data to obtain the current pose information of the robot.
CN202011570106.7A 2020-12-26 2020-12-26 Robot positioning method and device and electronic equipment Active CN112631303B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011570106.7A CN112631303B (en) 2020-12-26 2020-12-26 Robot positioning method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011570106.7A CN112631303B (en) 2020-12-26 2020-12-26 Robot positioning method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112631303A (en) 2021-04-09
CN112631303B CN112631303B (en) 2022-12-20

Family

ID=75325296

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011570106.7A Active CN112631303B (en) 2020-12-26 2020-12-26 Robot positioning method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112631303B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107246868A (en) * 2017-07-26 2017-10-13 上海舵敏智能科技有限公司 A kind of collaborative navigation alignment system and navigation locating method
CN109084732A (en) * 2018-06-29 2018-12-25 北京旷视科技有限公司 Positioning and air navigation aid, device and processing equipment
CN109431381A (en) * 2018-10-29 2019-03-08 北京石头世纪科技有限公司 Localization method and device, electronic equipment, the storage medium of robot
US20200301015A1 (en) * 2019-03-21 2020-09-24 Foresight Ai Inc. Systems and methods for localization
CN110866496A (en) * 2019-11-14 2020-03-06 合肥工业大学 Robot positioning and mapping method and device based on depth image

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113515126A (en) * 2021-07-12 2021-10-19 北京经纬恒润科技股份有限公司 Vehicle positioning method and device
CN115577755A (en) * 2022-11-28 2023-01-06 中环服(成都)科技有限公司 Robot posture correction method, apparatus, computer device, and storage medium
CN117066702A (en) * 2023-08-25 2023-11-17 上海频准激光科技有限公司 Laser marking control system based on laser
CN117066702B (en) * 2023-08-25 2024-04-19 上海频准激光科技有限公司 Laser marking control system based on laser

Also Published As

Publication number Publication date
CN112631303B (en) 2022-12-20

Similar Documents

Publication Publication Date Title
CN109949372B (en) Laser radar and vision combined calibration method
CN110599541B (en) Method and device for calibrating multiple sensors and storage medium
CN108020825B (en) Fusion calibration system and method for laser radar, laser camera and video camera
CN112631303B (en) Robot positioning method and device and electronic equipment
CN109977813A (en) A kind of crusing robot object localization method based on deep learning frame
CN111191625A (en) Object identification and positioning method based on laser-monocular vision fusion
CN111308448A (en) Image acquisition equipment and radar external parameter determination method and device
KR100939079B1 (en) System for mesurement of the snowfall and method for mesurement of the snowfall
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
CN104102069A (en) Focusing method and device of imaging system, and imaging system
CN113935379B (en) Human body activity segmentation method and system based on millimeter wave radar signals
CN113313765B (en) Positioning method, positioning device, electronic equipment and storage medium
CN113034526B (en) Grabbing method, grabbing device and robot
CN117036401A (en) Distribution network line inspection method and system based on target tracking
CN111985266A (en) Scale map determination method, device, equipment and storage medium
CN114964032A (en) Blind hole depth measuring method and device based on machine vision
CN112601021B (en) Method and system for processing monitoring video of network camera
US20230045287A1 (en) A method and system for generating a colored tridimensional map
CN113932712A (en) Melon and fruit vegetable size measuring method based on depth camera and key points
KR20230061612A (en) Object picking automation system using machine learning and method for controlling the same
CN113792645A (en) AI eyeball fusing image and laser radar
CN113240670A (en) Image segmentation method for object to be operated in live-wire operation scene
KR102548786B1 (en) System, method and apparatus for constructing spatial model using lidar sensor(s)
CN113313764B (en) Positioning method, positioning device, electronic equipment and storage medium
CN113269824B (en) Image-based distance determination method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 702, 7th floor, NO.67, Beisihuan West Road, Haidian District, Beijing 100080
Applicant after: Beijing Yunji Technology Co.,Ltd.

Address before: Room 702, 7th floor, NO.67, Beisihuan West Road, Haidian District, Beijing 100080
Applicant before: BEIJING YUNJI TECHNOLOGY Co.,Ltd.

GR01 Patent grant