CN115294004A - Return control method and device, readable medium and self-moving equipment


Info

Publication number: CN115294004A
Authority: CN (China)
Prior art keywords: point cloud, image, base, self-moving, cloud data
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202210992731.3A
Other languages: Chinese (zh)
Inventors: 张泫舜, 王雷
Current Assignee: Ecoflow Technology Ltd
Original Assignee: Ecoflow Technology Ltd

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 - Control of position or course in two dimensions
    • G05D 1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0231 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D 1/0246 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20212 - Image combination
    • G06T 2207/20221 - Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application belongs to the technical field of self-moving devices, and specifically relates to a return control method and apparatus, a readable medium, and a self-moving device. The method comprises: acquiring a visual image and a point cloud image in real time when the self-moving device is determined, according to its position, to be located in a designated area; fusing the point cloud image with the visual image to obtain a fused image; registering the fused point cloud data with reference point cloud data of a base to obtain a registration result; and adjusting movement control information of the self-moving device according to the registration result so that the self-moving device docks with the base. Matching the visual image with the point cloud image improves the precision with which the self-moving device aligns with the base, so the device can return accurately.

Description

Return control method and device, readable medium and self-moving equipment
Technical Field
The application belongs to the technical field of self-moving equipment, and particularly relates to a return control method and device, a readable medium and self-moving equipment.
Background
With the development of technology, self-moving devices are used more and more widely. When a self-moving device needs to be charged, it is generally controlled to return to a base (also referred to as a charging pile) for charging (also referred to as recharging); likewise, when the device does not need to work, it is controlled to return to the base to stand by. By providing the base, recharging or standby can be carried out once the self-moving device identifies the base.
Generally, for a self-moving device working in an outdoor environment, taking an outdoor mowing robot as an example, when the robot needs to recharge because its battery is low or its mowing work is finished, the many environmental factors outdoors may prevent it from accurately identifying the position of the base, so the robot cannot return accurately.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present application and therefore may include information that does not constitute prior art known to a person of ordinary skill in the art.
Disclosure of Invention
The application aims to provide a return control method and apparatus, a readable medium, and a self-moving device, so as to improve, at least to some extent, the accuracy with which the self-moving device returns to its base.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
According to an aspect of an embodiment of the present application, there is provided a return control method including:
acquiring the position of a self-moving device;
acquiring a visual image and a point cloud image in real time when the self-moving device is determined, according to the position, to be located in a designated area;
fusing the point cloud image with the visual image to obtain a fused image, wherein the fused image comprises fused point cloud data;
registering the fused point cloud data with reference point cloud data of a base to obtain a registration result;
adjusting movement control information of the self-moving device according to the registration result so that the self-moving device docks with the base; wherein the movement control information includes a posture and a movement distance of the self-moving device.
According to an aspect of an embodiment of the present application, there is provided a return control apparatus including:
a first acquisition module, configured to acquire the position of a self-moving device;
a second acquisition module, configured to acquire a visual image and a point cloud image in real time when the self-moving device is determined, according to the position, to be located in a designated area;
a fusion module, configured to fuse the point cloud image with the visual image to obtain a fused image, the fused image comprising fused point cloud data;
a registration module, configured to register the fused point cloud data with reference point cloud data of a base to obtain a registration result;
an adjustment module, configured to adjust movement control information of the self-moving device according to the registration result so that the self-moving device docks with the base; wherein the movement control information includes a posture and a movement distance of the self-moving device.
According to an aspect of the embodiments of the present application, there is provided a computer-readable medium on which a computer program is stored, and the computer program, when executed by a processor, implements the return control method in the above technical solution.
According to an aspect of an embodiment of the present application, there is provided a self-moving device including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to execute the return control method in the above technical solution by executing the executable instructions.
According to an aspect of an embodiment of the present application, there is provided a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the return control method in the above technical solution.
According to the technical solution of the embodiments of the present application, after the position of the self-moving device is determined to lie in the designated area, the acquired visual image and point cloud image are fused to obtain a fused image, the fused point cloud data in the fused image is registered with the reference point cloud data of the base, and the movement information of the self-moving device is continuously adjusted according to the registration result so that the device moves to the position of the base and docks with it. In this way, the area where the base is located is first determined roughly from the visual image, and the point cloud data corresponding to that area is then fused with the visual image to determine the specific position of the base more precisely. Matching the visual image with the point cloud data therefore improves the precision with which the self-moving device aligns with the base, enabling an accurate return and improving the reliability and accuracy of the device's return.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 schematically illustrates a flowchart of steps of a return control method provided in an embodiment of the present application.
Fig. 2 schematically shows a specific flowchart for implementing step S105 in an embodiment of the present application.
Fig. 3 schematically shows a specific flowchart for implementing step S104 in an embodiment of the present application.
Fig. 4 schematically shows a flowchart of steps of a return control method according to another embodiment of the present application.
Fig. 5 schematically shows a flowchart of steps of a return control method provided in another embodiment of the present application.
Fig. 6 schematically shows a block diagram of a return control device provided in an embodiment of the present application.
FIG. 7 schematically illustrates a block diagram of a computer system suitable for use with a self-moving device that implements embodiments of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
In the related art, when recharging, the self-moving device is generally moved under RTK (Real Time Kinematic) control so that it can return to the base. However, occlusion by buildings or by thick cloud cover can make the RTK pose jump and become inaccurate, so the return of the self-moving device fails and the reliability of the return is low.
To solve this technical problem, the present application provides a return control method: after the position of the self-moving device is determined to lie in the designated area, the acquired visual image and point cloud image are fused to obtain a fused image, the fused point cloud data in the fused image is registered with the reference point cloud data of the base, and the movement information of the self-moving device is continuously adjusted according to the registration result so that the device moves to the position of the base and docks with it. In this way, the area where the base is located is first determined roughly from the visual image, and the point cloud data corresponding to that area is then fused with the visual image to determine the specific position of the base more precisely. Matching the visual image with the point cloud data therefore improves the precision with which the self-moving device aligns with the base, enabling an accurate return and improving the reliability and accuracy of the device's return.
The return control method and apparatus, readable medium, and self-moving device provided by the present application are described in detail below with reference to specific embodiments.
The method of this embodiment can be applied to a scenario in which the self-moving device recharges, or a scenario in which the device needs to return to the base to stand by after executing its task; in either case the method enables the device to return accurately to the position of the base. It should be noted that this embodiment mainly takes the recharging scenario as an example.
Specifically, referring to fig. 1, fig. 1 schematically illustrates a flowchart of the steps of a return control method provided in an embodiment of the present application. The execution subject of the return control method may be a controller, and the method may mainly include the following steps S101 to S105.
Step S101, acquiring the position of the self-moving device.
When the battery of the self-moving device is low, or when a recharge instruction or a return instruction sent by a mobile terminal is received, the self-moving device is controlled to return to the base. The recharge instruction or return instruction is used to instruct the self-moving device to return to the base. While the self-moving device is being controlled to return, its position is acquired in real time, and this position is used to determine whether the device has entered the designated area around the base.
The self-moving device may be a device with an assisted self-moving function, a semi-autonomous device, or a fully autonomous device. For example, the self-moving device may be a robot, an unmanned aerial vehicle, a smart vehicle, or the like, such as a mowing robot, a meal delivery robot, a minesweeping robot, or a cleaning robot; the type of robot is not limited in the present application.
Step S102, acquiring a visual image and a point cloud image in real time when the self-moving device is determined, according to the position, to be located in the designated area.
The designated area is an area within a preset distance of the base, and it can be set according to the position of the base in the actual scene. For example, the designated area may be the area within a preset distance of 5 meters, 6 meters, or 10 meters from the base. Inside the designated area, the image acquisition device is more likely to capture visual images and point cloud images containing the base in the travelling direction of the self-moving device; that is, the success rate of acquiring a visual image containing the base is improved. At the same time, acquiring the visual image and the point cloud image only after the device enters the designated area reduces the acquisition of useless images, which lowers the time cost of subsequent processing and improves acquisition efficiency. Therefore, when the position of the self-moving device is detected to be inside the designated area, the image acquisition device can be started to capture images in the travelling direction of the device. The image acquisition device can include a visual camera and a laser camera that are extrinsically calibrated with respect to each other, so the visual image captured by the visual camera and the point cloud data captured by the laser camera are synchronized; in other words, during the recharge drive, every pixel in a captured visual image can be matched to corresponding point cloud data.
When the self-moving device is detected not to be in the designated area, it can be controlled to travel into the designated area according to its position. After the device enters the designated area, the visual image and the point cloud image are acquired in real time, and the two correspond to each other one to one.
Step S103, fusing the point cloud image with the visual image to obtain a fused image, wherein the fused image comprises fused point cloud data.
Specifically, fusing the point cloud image with the visual image means converting the data points of the point cloud image and the pixels of the visual image into the same reference coordinate system. For example, the laser coordinate system of the laser camera may be taken as the reference coordinate system, i.e. the pixels of the visual image are converted into the laser coordinate system through a transformation matrix. Alternatively, the camera coordinate system of the visual camera may be taken as the reference coordinate system, i.e. the data points of the point cloud image are converted into the camera coordinate system through a transformation matrix. Or, the coordinate system of a designated sensor may be taken as the reference coordinate system, which then also serves as the global coordinate system (world coordinate system): the external parameter matrix between the sensor and the laser camera is acquired and used to convert the data points of the point cloud image into the global coordinate system, and likewise the external parameter matrix between the sensor and the visual camera is acquired and used to convert the pixels of the visual image into the global coordinate system.
In some optional embodiments, when the point cloud image and the visual image are fused to obtain the fused image, the fused point cloud data may be obtained mainly in the following manner, specifically:
acquiring an external parameter matrix of the image acquisition device.
The image acquisition device comprises a visual camera and a laser camera; the visual camera may be an RGB camera, the visual image is captured by the visual camera, and the point cloud data corresponding to the visual image is captured by the laser camera. The visual camera and the laser camera are then extrinsically calibrated to determine the external parameter matrix between them, i.e. the external parameter matrix of the image acquisition device. Acquiring this external parameter matrix makes the fusion of the visual image and the point cloud image possible.
Projecting the visual image into the point cloud image according to the external parameter matrix to obtain the fused image; or
projecting the point cloud image into the visual image according to the external parameter matrix to obtain the fused image.
In the embodiment of the application, according to the spatial relationship between the point cloud image and the visual image and in combination with the external parameter matrix, the pixels of the visual image can be projected into the point cloud image to obtain the projection point of each pixel in the point cloud image; the projection points that fall outside the size range of the point cloud image are then removed according to the relationship between the projection-point coordinates and the size of the point cloud image, yielding the fused image, which comprises the fused point cloud data.
In the embodiment of the application, the point cloud image can also be projected into the visual image according to the external parameter matrix to obtain a fused image comprising the fused point cloud data. This can be realized specifically by formula (1) and formula (2):
u = g_fx * (pt.x - t_depth_rgb_x) / pt.z + g_cx    formula (1);
v = g_fy * (pt.y - t_depth_rgb_x) / pt.z + g_cy    formula (2);
where u and v represent pixel coordinates in the camera coordinate system of the visual camera; g_fx and g_fy represent the camera focal lengths in the x and y directions, respectively; pt represents a point of the point cloud data, and pt.x, pt.y and pt.z represent its x, y and z coordinates; t_depth_rgb_x comes from the external parameter matrix of the image acquisition device; and g_cx and g_cy represent the offsets in the x and y directions among the camera parameters. In formulas (1) and (2), t_depth_rgb_x represents the extrinsic transformation along the x translation direction, i.e. the point cloud data in the laser coordinate system of the laser camera is translated into the camera coordinate system of the visual camera, thereby fusing the point cloud image with the visual image.
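As an illustration of formulas (1) and (2), the following is a minimal numpy sketch of projecting laser points into the visual camera image. The function name, the image-bound check, and the per-axis translation components t_depth_rgb_x and t_depth_rgb_y are assumptions introduced for the sketch (the patent writes t_depth_rgb_x in both formulas), not part of the disclosure.

```python
import numpy as np

def project_points_to_image(points, g_fx, g_fy, g_cx, g_cy,
                            t_depth_rgb_x, t_depth_rgb_y,
                            width, height):
    """Project laser points (N x 3, laser frame) into the visual camera image.

    Follows the spirit of formulas (1) and (2): a translation-only extrinsic
    between the laser camera and the visual camera, followed by a pinhole
    projection; projections outside the image bounds are discarded.
    (Hypothetical helper; names mirror the patent's notation.)
    """
    pts = np.asarray(points, dtype=np.float64)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]

    # Formula (1): u = g_fx * (pt.x - t_depth_rgb_x) / pt.z + g_cx
    u = g_fx * (x - t_depth_rgb_x) / z + g_cx
    # Formula (2): v = g_fy * (pt.y - t_depth_rgb_y) / pt.z + g_cy
    # (the patent writes t_depth_rgb_x here as well; the per-axis split is assumed)
    v = g_fy * (y - t_depth_rgb_y) / z + g_cy

    # Keep only points with positive depth whose projection lands inside the image.
    valid = (z > 0) & (u >= 0) & (u < width) & (v >= 0) & (v < height)
    return u[valid], v[valid], pts[valid]
```

Each retained point then carries both its 3D coordinates and the pixel it projects to, which is the per-point association the fused image relies on.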
In the embodiment of the present application, whether the visual image contains the base may be detected by a target detection algorithm. The target detection algorithm may include one or more of R-CNN (Region-based Convolutional Neural Network), SPP (Spatial Pyramid Pooling), YOLO (You Only Look Once, a regression method based on deep learning), and other target detection algorithms; the choice of algorithm is not limited here. For example, the pixels where the base is detected may be marked in the visual image. Because the visual image and the point cloud image correspond one to one, the pixels marked as base in the visual image and the corresponding base point cloud data of the point cloud image can be fused by projection to obtain the fused point cloud data, which establishes a mapping of the base between the visual image and the point cloud data. Because not all point cloud data of the point cloud image is used during fusion, but only the point cloud data containing the base, while point cloud data corresponding to non-base objects is filtered out, the amount of data to be computed is reduced, and filtering out the non-base point cloud data improves the accuracy of point cloud registration.
In the embodiment of the application, what is registered is therefore not all of the fused point cloud data but only the point cloud data corresponding to the base in the visual image; the point cloud data corresponding to non-base objects is filtered out. This reduces the amount of computation, and filtering out the non-base point cloud data further improves the matching precision.
In the embodiment of the application, when the base is detected in the visual image, a base label can be set on the image region where the base is located, so that all pixels belonging to the base in that region carry the same label. By fusing the point cloud data with the visual image, the point cloud data belonging to the base can be identified from the labelled pixels, which distinguishes the point cloud data of the base from the point cloud data of other environmental information and allows the non-base point cloud data to be filtered out. Since the fused visual image carries the spatial coordinates of the point cloud data, coarse positioning can be performed through scene segmentation and object detection on the visual image, and the self-moving device moves toward the base according to the coarse positioning result.
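A minimal sketch of this label-based filtering, assuming the detector provides a boolean mask over the visual image marking base pixels; the names (filter_base_points, base_mask) are illustrative and not from the patent.

```python
import numpy as np

def filter_base_points(fused_points, pixel_uv, base_mask):
    """Keep only fused points whose corresponding pixel carries the base label.

    fused_points: (N, 3) array of fused point cloud data.
    pixel_uv:     (N, 2) pixel coordinates (u, v) of each fused point in the visual image.
    base_mask:    (H, W) boolean array from the object detector, True on base pixels.
    (Hypothetical helper for illustration.)
    """
    u = pixel_uv[:, 0].astype(int)
    v = pixel_uv[:, 1].astype(int)
    is_base = base_mask[v, u]        # look up each projected point's label
    return fused_points[is_base]     # non-base points are filtered out
```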
Step S104, registering the fused point cloud data with the reference point cloud data of the base to obtain a registration result.
Here, the reference point cloud data represents point cloud data that serves as a standard reference for the base.
In the embodiment of the application, after the fused point cloud data is registered with the reference point cloud data of the base, the resulting rotation matrix and translation vector give the final pose of the self-moving device. The registration includes coarse registration and fine registration. Coarse registration refers to a relatively rough registration performed when the pose change between the fused point cloud data and the reference point cloud data is completely unknown; its main purpose is to provide a good initial pose for fine registration, and the pose change can generally be predicted through an Inertial Measurement Unit (IMU) or through an odometer. Fine registration, in turn, further optimizes the initial pose given by coarse registration to obtain a more accurate final pose.
The registration of the fused point cloud data with the reference point cloud data of the base specifically comprises the following process:
For example, a Normal Distributions Transform (NDT) algorithm may first be used to register the fused point cloud data with the reference point cloud data of the base, realizing coarse registration of the pose of the self-moving device, i.e. obtaining an initial estimated pose. The initial estimated pose is then finely registered using an Iterative Closest Point (ICP) algorithm so that the final error is optimal, completing the fine registration of the device's pose. The registration result finally obtained is the optimal pose of the self-moving device after fine registration, i.e. the pose change between the device's current pose and the pose it would have when directly facing the base, for example, that the device is currently deflected by 10° in yaw relative to the pose in which it faces the base.
Then, because the fused point cloud data carries the information of the visual image, the point cloud data can be filtered more precisely; registering the fused point cloud data with the reference point cloud data of the base therefore improves accuracy and reduces interference, so that the exact position of the self-moving device can be determined and the device can be navigated accurately to recharge.
Step S105, adjusting the movement control information of the self-moving device according to the registration result so that the self-moving device docks with the base; wherein the movement control information includes a posture and a movement distance of the self-moving device.
The posture represents the orientation angles of the self-moving device, for example its roll, pitch and yaw angles. The movement distance may be the distance from the self-moving device to the base.
By continuously adjusting the movement control information of the self-moving device according to the registration result and the visual image, a more accurate pose of the device can be obtained, which improves the precision with which the device aligns with the base and enables an accurate return.
According to the technical solution provided by the embodiment of the application, after the position of the self-moving device is determined to lie in the designated area, the acquired visual image and point cloud image are fused to obtain a fused image, the fused point cloud data in the fused image is registered with the reference point cloud data of the base, and the movement information of the self-moving device is continuously adjusted according to the registration result so that the device moves to the position of the base and docks with it. In this way, the area where the base is located is first determined roughly from the visual image, and the point cloud data corresponding to that area is then fused with the visual image to determine the specific position of the base more precisely. Matching the visual image with the point cloud data therefore improves the precision with which the self-moving device aligns with the base, enabling an accurate return and improving the reliability and accuracy of the return.
In some optional embodiments, referring to fig. 2, fig. 2 schematically shows a specific flowchart for implementing step S105 in an embodiment of the present application. Step S105, adjusting the movement control information of the self-moving device according to the registration result so that the self-moving device docks with the base, may specifically include the following steps S201 to S204.
Step S201, determining a target distance and a target angle from the base to the self-moving device according to the registration result.
The target distance is the distance, in the visual image, between the center of the base and the center of the visual image. The target angle is the angle, in the visual image, between the line connecting the center of the base with the center of the visual image and the perpendicular bisector through the center of the base.
When the fused point cloud data is registered against the reference point cloud data of the base, the pose change of the self-moving device relative to the pose in which it directly faces the base can be obtained. From this pose change, the target distance and the target angle from the base to the self-moving device can be further determined, and the movement distance and the posture of the device are then adjusted according to the obtained target distance and target angle, respectively.
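For illustration, assuming the registration result is expressed as a 4x4 homogeneous transform (rotation plus translation, as in step S104) with z as the vertical axis, the remaining travel distance and heading correction could be read off roughly as follows; the function name and the axis convention are assumptions.

```python
import numpy as np

def target_distance_and_angle(transform):
    """Derive a target distance and yaw correction from a 4x4 registration result.

    The translation component is taken as how far the device still has to move
    toward the base, and the yaw of the rotation component as how far it still
    has to turn (z-up convention assumed).
    """
    translation = transform[:3, 3]
    distance = float(np.linalg.norm(translation[:2]))            # planar distance
    yaw = float(np.arctan2(transform[1, 0], transform[0, 0]))    # heading error, radians
    return distance, np.degrees(yaw)
```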
Step S202, adjusting the posture of the self-moving device according to the target angle.
The posture of the self-moving device is adjusted according to the obtained target angle. For example, if the registration result shows that the target angle between the base and the device is 45°, the device's posture is adjusted so that it deflects 45° toward the base.
Step S203, adjusting the movement distance of the self-moving device according to the target distance.
The movement distance of the self-moving device is adjusted according to the obtained target distance. For example, if the registration result shows that the target distance between the base and the device is 10 cm, the device is controlled to move 10 cm toward the base. It should be noted that these figures are only for illustration and do not represent actual parameters.
Step S204, when it is detected in the fused image that the self-moving device coincides with the base, determining that the self-moving device has docked with the base.
When it is detected that the perpendicular bisector through the center point of the base in the fused image coincides with the perpendicular bisector through the center point of the fused image, the self-moving device is considered to coincide with the base. Once this coincidence is detected, the self-moving device is considered docked with the base.
Thus, the target distance and target angle from the base to the self-moving device are determined from the registration result in the fused image; the posture of the device is adjusted through the target angle to obtain a more accurate posture, and its movement distance is adjusted through the target distance to obtain a more accurate position, so that the device aligns with the base more precisely when they coincide. This improves the precision with which the self-moving device matches the base and thus benefits the reliability and accuracy of its return.
In some optional embodiments, referring to fig. 3, fig. 3 schematically shows a specific flowchart for implementing step S104 in an embodiment of the present application. The registering of the fused point cloud data and the base reference point cloud data to obtain a registration result may specifically include the following steps S301 to S303.
Step S301, extracting features from the fused point cloud data to obtain the fused point cloud data of the base.
The fused point cloud data alone does not distinguish the point cloud data of the base from the point cloud data of other environmental information. Therefore, feature extraction can be performed on the fused point cloud data using a target detection algorithm to obtain the fused point cloud data belonging to the base and filter out the point cloud data of irrelevant environmental information; for the specific target detection algorithms, refer to step S103, which is not repeated here. Alternatively, before the fused point cloud data is obtained, labels can be set on the pixels belonging to the base in the visual image by the target detection algorithm, so that the fused point cloud data of the base can be selected by identifying those labels in the fused point cloud data.
Step S302, performing coarse registration calculation on the fused point cloud data of the base and the reference point cloud data of the base.
Methods for coarse registration may include, but are not limited to, feature-based methods, methods based on the random sample consensus (RANSAC) framework, and global registration methods. In the feature-based coarse registration method, a signature is first built for each point of the fused point cloud data and of the reference point cloud data according to a local feature descriptor such as the Point Signature (PS) operator; points with identical or similar signatures in the two clouds are then searched for as corresponding points, and the initial pose of the self-moving device is obtained by matching them.
Step S303, performing fine registration calculation on the result obtained by the coarse registration calculation to obtain the registration result.
After coarse registration, the initial pose obtained from it is refined through the ICP (Iterative Closest Point) algorithm to obtain the registration result, i.e. the final pose of the self-moving device.
Thus, coarse registration calculation is performed on the current point cloud features and the reference point cloud features, and fine registration calculation is then performed on the coarse result, which filters precisely, reduces interference, and improves the accuracy of point cloud matching.
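The feature-based coarse step can also be sketched with Open3D (0.12+ API). Since Open3D provides FPFH rather than the Point Signature descriptor named above, FPFH is used here as a stand-in local descriptor inside a RANSAC correspondence search; the voxel size and thresholds are illustrative assumptions.

```python
import open3d as o3d

def coarse_register_fpfh(source, target, voxel=0.05):
    """Coarse registration via local-descriptor matching under RANSAC.

    FPFH features stand in for the point-signature descriptor; corresponding
    points are found by feature matching and a coarse pose is estimated,
    which is then handed to ICP for fine registration.
    """
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return down, fpfh

    src_down, src_fpfh = preprocess(source)
    tgt_down, tgt_fpfh = preprocess(target)

    dist = voxel * 1.5
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh, True, dist,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(dist)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation  # initial pose for the subsequent ICP refinement
```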
In some alternative embodiments, the reference point cloud data of the base may be obtained by:
acquiring historical point cloud data of a base;
creating a point cloud map according to historical point cloud data of the base;
and recording the pose of the base in the point cloud map to obtain the reference point cloud data of the base.
The historical point cloud data of the base refers to the point cloud data corresponding to the base that was collected at a historical time and extracted from a depth image; the depth image may be captured while the self-moving device directly faces the base. In some embodiments, the reference point cloud data may also include point cloud data extracted from the three-dimensional model of the base.
Thus, a point cloud map, which may be a grid map, is created from the historical point cloud data of the base, and the pose of the base in the point cloud map is recorded to obtain the reference point cloud data of the base. This makes it possible to register the fused point cloud data with the reference point cloud data of the base and, in turn, to determine the accurate pose of the self-moving device.
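A minimal sketch of assembling the reference point cloud data, assuming the historical scans are already expressed in the map frame; the voxel size and the returned dictionary layout are assumptions for illustration.

```python
import numpy as np
import open3d as o3d

def build_base_reference(historical_scans, base_pose, voxel=0.02):
    """Merge historical base scans into one reference cloud and record the base pose.

    historical_scans: list of (N_i, 3) numpy arrays, each already in the map frame.
    base_pose:        4x4 pose of the base recorded in the point cloud map.
    """
    merged = o3d.geometry.PointCloud(
        o3d.utility.Vector3dVector(np.vstack(historical_scans)))
    merged = merged.voxel_down_sample(voxel)   # keep the reference map compact
    return {"reference_cloud": merged, "base_pose": base_pose}
```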
In some optional embodiments, after the position of the self-moving device is acquired, when the device is determined, according to the position, not to be located in the designated area, the device is controlled to enter the designated area based on a positioning signal.
The positioning signal may be transmitted by RTK or GNSS, and the self-moving device is guided into the designated area based on this signal. Controlling the device into the designated area in this way helps maintain the precision of the image acquisition device and yields better acquisition results.
In some alternative embodiments, referring to fig. 4, fig. 4 schematically shows a flowchart of steps of a return control method provided in another embodiment of the present application. After acquiring the visual image in real time, steps S401 to S402 may be mainly included as follows.
Step S401, performing object detection on the fused image to obtain the category to which each of one or more objects contained in the fused image belongs.
After the fused image is obtained, object detection is carried out on it using an image detection algorithm to obtain the category of each object in the fused image. Obtaining the category of each object makes it easier to classify the contents of the fused image and so to distinguish the base from non-base objects.
Step S402, adding a category label corresponding to the object to the fused point cloud data corresponding to that object.
After the category of each object in the fused image is detected, i.e. whether each object belongs to the base category or a non-base category, a category label is added to the corresponding points in the fused point cloud data. The category label may be a category number, a category name, or the like, which is not limited here.
After the fused image is obtained, object detection therefore needs to be performed on it to obtain the categories of the objects it contains, and the category label of each object is added to the corresponding fused point cloud data, so that whether the fused image contains the base can be determined from the category labels and non-base objects can be screened out. This makes it convenient to subsequently use only the point cloud data containing the base and to filter out the point cloud data corresponding to non-base objects, which reduces the amount of computation; filtering out the non-base point cloud data further improves the matching precision.
To facilitate understanding of the technical solution of the present application, referring to fig. 5, fig. 5 schematically shows a flowchart of the steps of a return control method according to still another embodiment of the present application. The return control method comprises the following steps:
Step S501, acquiring the position of the self-moving device.
The controller receives a recharge instruction sent by a mobile terminal, the recharge instruction being used to instruct the self-moving device to return to the charging apparatus (namely the base). After receiving the recharge instruction, the controller first obtains the current position of the self-moving device; with this position, the device can subsequently be controlled to return accurately to the position of the base and dock with it.
Step S502, judging whether the self-moving equipment is located in the designated area.
The designated area is an area within a preset distance of the base; within it a clear visual image can be captured, from which the category of each object can be accurately identified. After the current position of the self-moving device is obtained, it is determined whether the position is within the designated area. If the self-moving device is not in the designated area, it is controlled to enter the designated area.
Step S503, if the self-moving equipment is located in the designated area, acquiring the visual image in real time, and detecting and identifying the visual image to obtain the category of each object in the visual image.
The visual image is acquired in real time after the self-moving device enters the designated area. After the visual image is acquired, it is detected and identified to obtain the category to which each object in the image belongs. Determining first whether the device is located in the designated area therefore ensures that a clear visual image can be captured, which makes it easier to identify the category of each object in the image.
Step S504, detecting whether the visual image includes a base.
Whether the visual image contains the base can be detected, for example, through an OpenCV-based algorithm. Of course, other algorithms may also be used, as long as they can detect whether the visual image contains the base; this is not limited here.
Whether the visual image contains the base is determined according to the category of each object. By identifying the category to which each object in the visual image belongs, non-base objects can be conveniently screened out and the visual images containing the base retained. This makes it convenient to subsequently use only the point cloud data containing the base and to filter out the point cloud data corresponding to non-base objects, which reduces the amount of computation; filtering out the non-base point cloud data further improves the matching precision.
Step S505, when the visual image is detected to contain the base, acquiring the point cloud data corresponding to the visual image, and fusing the point cloud data with the visual image to obtain the fused point cloud data.
When the visual image is detected to contain the base, the point cloud data corresponding to that visual image is acquired; that is, this embodiment does not use the point cloud data of all images, but only the point cloud data of the visual images containing the base. Only the point cloud data of the base-containing visual image is acquired, and the point cloud data of non-base visual images is not, so the point cloud data corresponding to non-base objects is filtered out. After the point cloud data corresponding to the visual image is obtained, it is fused with the visual image to obtain the fused point cloud data, so that the specific position of the base can subsequently be determined more precisely and the precision of docking the self-moving device with the base can be improved.
Step S506, registering the fused point cloud data with the base reference point cloud data to obtain a registration result;
and step S507, adjusting the moving state of the self-moving equipment according to the registration result and the visual image so as to enable the self-moving equipment to be in butt joint with the base.
It should be noted that, there are reasons such as environmental factors that cause the distance between the point cloud data to be too large when the point cloud data after fusion is registered, that is, the registration is unsuccessful. And when the registration is unsuccessful, continuously adjusting the moving state of the mobile equipment according to the registration result and the visual image, controlling the position of the mobile equipment to be finely adjusted, and re-executing the step S503 to the step S06 after fine adjustment. And when the registration is successful, executing the step S507, thereby realizing accurate navigation self-mobile device recharging and improving the recharging reliability and accuracy.
Therefore, the area range where the base is located is preliminarily determined through the visual image, and then the point cloud data corresponding to the area range where the base is located is fused with the visual image so as to further accurately determine the specific position of the base. The matching precision of the self-moving equipment and the base is improved in a mode of matching the visual image with the point cloud data, so that the accurate navigation self-moving equipment recharging can be realized, and the recharging reliability and accuracy are improved. In addition, because the point cloud data of all images are not adopted, but only the point cloud data containing the bases are adopted, and the point cloud data corresponding to the non-bases are filtered, the calculation of the data volume is reduced, and the non-base point cloud data are filtered, so that the matching precision is further improved.
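Pulled together, steps S501 to S507 could be organized as the loop below. The robot API used here (get_position, capture, detect_base, fuse, register, move_by, docked_with_base, and the designated_area object) is entirely hypothetical and only illustrates the control flow, not an actual interface of the self-moving device.

```python
def return_to_base(robot, base_reference_cloud, designated_area):
    """Recharge loop sketched from steps S501-S507 (hypothetical robot API)."""
    while not robot.docked_with_base():
        position = robot.get_position()                       # S501
        if not designated_area.contains(position):            # S502
            robot.move_toward(designated_area)                 # guided by RTK/GNSS positioning
            continue
        image, cloud = robot.capture()                         # S503: visual + point cloud images
        base_mask = robot.detect_base(image)                   # S504: object detection on the image
        if base_mask is None:
            robot.adjust_heading()                             # keep looking for the base
            continue
        fused = robot.fuse(cloud, image, base_mask)            # S505: keep base points only
        result = robot.register(fused, base_reference_cloud)   # S506: coarse + fine registration
        if not result.converged:                               # registration failed:
            robot.fine_tune_position()                         #   nudge and retry from S503
            continue
        robot.move_by(result.distance, result.angle)           # S507: adjust posture and distance
```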
It should be noted that although the steps of the methods in this application are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order or that all of the depicted steps must be performed to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Embodiments of the apparatus of the present application are described below, which may be used to implement the return control method in the above-described embodiments of the present application. Fig. 6 schematically shows a block diagram of a return control device provided in an embodiment of the present application. As shown in fig. 6, return control apparatus 600 includes:
a first obtaining module 601, configured to obtain the position of the self-moving device;
a second obtaining module 602, configured to obtain a visual image and a point cloud image in real time when the self-moving device is determined, according to the position, to be located in the designated area;
a fusion module 603, configured to fuse the point cloud image with the visual image to obtain a fused image, where the fused image includes fused point cloud data;
a registration module 604, configured to register the fused point cloud data with the reference point cloud data of the base to obtain a registration result;
an adjusting module 605, configured to adjust the movement control information of the self-moving device according to the registration result, so that the self-moving device docks with the base; wherein the movement control information includes a posture and a movement distance of the self-moving device.
In some embodiments of the present application, based on the above technical solution, the adjusting module 605 is further configured to determine a target distance and a target angle from the base to the self-moving device according to the registration result; adjust the posture of the self-moving device according to the target angle; adjust the movement distance of the self-moving device according to the target distance; and, when the self-moving device is detected to coincide with the base in the fused image, determine that the self-moving device is docked with the base.
In some embodiments of the present application, based on the above technical solution, the fusion module 603 is further configured to acquire the external parameter matrix of the image acquisition device; and project the pixels of the visual image into the point cloud image according to the external parameter matrix to obtain the fused image, or project the point cloud data of the point cloud image into the visual image according to the external parameter matrix to obtain the fused image.
In some embodiments of the present application, based on the above technical solution, the registration module 604 is further configured to perform feature extraction on the fused point cloud data to obtain the fused point cloud data of the base; perform coarse registration calculation on the fused point cloud data of the base and the reference point cloud data of the base; and perform fine registration calculation on the result of the coarse registration calculation to obtain the registration result.
In some embodiments of the application, based on the above technical solution, the return control apparatus further includes a creation module configured to acquire historical point cloud data of the base; create a point cloud map according to the historical point cloud data of the base; and record the pose of the base in the point cloud map to obtain the reference point cloud data of the base.
In some embodiments of the application, based on the above technical solution, the return control apparatus further includes a positioning module configured to control the self-moving device to enter the designated area based on a positioning signal when it is determined, according to the position, that the self-moving device is not located in the designated area.
In some embodiments of the application, based on the above technical solution, the second obtaining module is further configured to perform object detection on the fused image to determine the one or more objects contained in the fused image, and to add the category label corresponding to each object to the fused point cloud data corresponding to that object.
The specific details of the return control device provided in each embodiment of the present application have been described in detail in the corresponding method embodiment, and are not described herein again.
Fig. 7 schematically shows a computer system structure block diagram of an autonomous mobile device for implementing an embodiment of the present application.
It should be noted that the computer system 700 of the self-moving device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the use range of the embodiment of the present application.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU) 701 that can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. Various programs and data necessary for system operation are also stored in the RAM 703. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An Input/Output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the input/output interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), a speaker, and the like; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a local area network card or a modem. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the input/output interface 705 as necessary. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as necessary, so that a computer program read from it is installed into the storage section 708 as needed.
In particular, according to embodiments of the present application, the processes described in the various method flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. When executed by the Central Processing Unit (CPU) 701, the computer program performs the various functions defined in the system of the present application.
It should be noted that the computer-readable medium shown in the embodiments of the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this application, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer-readable signal medium, by contrast, may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless or wired media, or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to the embodiments of the present application, the features and functions of two or more modules or units described above may be embodied in a single module or unit. Conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a touch terminal, a network device, or the like) to execute the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A return control method, characterized by comprising the following steps:
acquiring a position of a self-moving device;
acquiring a visual image and a point cloud image in real time when it is determined, according to the position, that the self-moving device is located in a designated area;
fusing the point cloud image and the visual image to obtain a fused image, wherein the fused image comprises fused point cloud data;
registering the fused point cloud data with reference point cloud data of a base to obtain a registration result;
adjusting movement control information of the self-moving device according to the registration result, so that the self-moving device docks with the base; wherein the movement control information comprises a posture and a movement distance of the self-moving device.
2. The return control method according to claim 1, wherein the adjusting movement control information of the self-moving device according to the registration result so that the self-moving device docks with the base comprises:
determining a target distance and a target angle from the base to the self-moving device according to the registration result;
adjusting the posture of the self-moving device according to the target angle;
adjusting the movement distance of the self-moving device according to the target distance;
determining that the self-moving device is docked with the base when it is detected, in the fused image, that the self-moving device coincides with the base.
3. The return control method according to claim 1, wherein the fusing the point cloud image and the visual image to obtain a fused image comprises:
acquiring an extrinsic parameter matrix of an image acquisition device;
projecting pixel points in the visual image into the point cloud image according to the extrinsic parameter matrix to obtain the fused image; or
projecting the point cloud data in the point cloud image into the visual image according to the extrinsic parameter matrix to obtain the fused image.
4. The return control method according to claim 1, wherein the registering the fused point cloud data with the reference point cloud data of the base to obtain a registration result comprises:
performing feature extraction on the fused point cloud data to obtain fused point cloud data of the base;
performing coarse registration calculation on the fused point cloud data of the base and the reference point cloud data of the base;
and performing fine registration calculation on the result obtained by the coarse registration calculation to obtain the registration result.
5. The return control method according to any one of claims 1 to 4, wherein before the acquiring of the position of the self-moving device, the method further comprises:
acquiring historical point cloud data of the base;
creating a point cloud map according to historical point cloud data of the base;
and recording the pose of the base in the point cloud map so as to obtain the reference point cloud data of the base.
6. The return control method according to claim 1, further comprising:
controlling the self-moving device to enter the designated area based on a positioning signal when it is determined, according to the position, that the self-moving device is not located in the designated area.
7. The return control method according to claim 1, wherein after the obtaining of the fused image, the method further comprises:
carrying out object detection on the fused image, and determining one or more objects contained in the fused image;
and adding a category label corresponding to the object to the fused point cloud data corresponding to the object.
8. A return control device, characterized in that the return control device comprises:
a first acquisition module, configured to acquire a position of a self-moving device;
a second acquisition module, configured to acquire a visual image and a point cloud image in real time when it is determined, according to the position, that the self-moving device is located in a designated area;
a fusion module, configured to fuse the point cloud image and the visual image to obtain a fused image, the fused image comprising fused point cloud data;
a registration module, configured to register the fused point cloud data with reference point cloud data of a base to obtain a registration result;
an adjustment module, configured to adjust movement control information of the self-moving device according to the registration result so that the self-moving device docks with the base, wherein the movement control information comprises a posture and a movement distance of the self-moving device.
9. A computer-readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the return control method according to any one of claims 1 to 7.
10. A self-moving device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the return control method according to any one of claims 1 to 7 via execution of the executable instructions.
CN202210992731.3A 2022-08-18 2022-08-18 Return control method and device, readable medium and self-moving equipment Pending CN115294004A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210992731.3A CN115294004A (en) 2022-08-18 2022-08-18 Return control method and device, readable medium and self-moving equipment

Publications (1)

Publication Number Publication Date
CN115294004A true CN115294004A (en) 2022-11-04

Family

ID=83830548

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210992731.3A Pending CN115294004A (en) 2022-08-18 2022-08-18 Return control method and device, readable medium and self-moving equipment

Country Status (1)

Country Link
CN (1) CN115294004A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination