WO2023109347A1 - Relocation method for self-moving device, device and storage medium
(published in French as: Procédé de relocalisation pour dispositif automoteur, dispositif et support de stockage)

Info

Publication number
WO2023109347A1
WO2023109347A1 (application PCT/CN2022/129455; priority CN2022129455W)
Authority
WO
WIPO (PCT)
Prior art keywords
information
environment information
area
map
area identification
Prior art date
Application number
PCT/CN2022/129455
Other languages
English (en)
Chinese (zh)
Inventor
罗绍涵
孙佳佳
曹蒙
Original Assignee
追觅创新科技(苏州)有限公司
Application filed by 追觅创新科技(苏州)有限公司 filed Critical 追觅创新科技(苏州)有限公司
Publication of WO2023109347A1 publication Critical patent/WO2023109347A1/fr

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0221Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process

Definitions

  • the disclosure belongs to the technical field of artificial intelligence, and specifically relates to a relocation method, device and storage medium of a self-mobile device.
  • self-mobile devices can achieve autonomous positioning and navigation with the help of Simultaneous Localization and Mapping (SLAM) technology.
  • during operation, however, the self-mobile device may be hijacked, for example by being moved, suspended in the air, or dragged over a large area; in this case, an uncontrollable drift error occurs in the positioning of the self-mobile device, and relocation is required.
  • the traditional relocation method includes: searching for the original starting position from the hijacked position of the self-mobile device, thereby completing the relocation of the self-mobile device.
  • the present disclosure provides a relocation method, device and storage medium of a self-mobile device, which can solve the problem of low relocation efficiency of the self-mobile device due to cumbersome relocation methods.
  • the disclosure provides the following technical solutions:
  • a method for relocating a self-mobile device, comprising: in response to an instruction for relocating the self-mobile device in a working area, acquiring the current environment information collected by the self-mobile device at its current location;
  • the identifying the current environment information to obtain the area identification information corresponding to the current environment information includes:
  • the area identification model is obtained by training a preset neural network model using training data; the training data includes sample environment information and the area label corresponding to the sample environment information.
  • the training data further includes the classification label of the first obstacle corresponding to the sample environment information, and the classification label of the first obstacle is used, in combination with the area label, for joint training of the neural network model to obtain the area identification model;
  • the inputting of the current environment information into the pre-trained area identification model to obtain the area identification information includes: inputting the current environment information into the area identification model to obtain the classification result of the first obstacle corresponding to the current environment information and the area identification information;
  • the training data also includes the first characteristic information of the first obstacle; correspondingly, the inputting of the current environment information into the pre-trained area recognition model to obtain the area identification information includes: inputting the current environment information and the first feature information into the area identification model to obtain the area identification information;
  • the first obstacle is used to indicate area identification information.
  • the identifying the current environment information to obtain the area identification information corresponding to the current environment information further includes:
  • if the current environment information includes information matching the second feature information, it is determined that the area identification information corresponding to the current environment information is the area identification information indicated by the first obstacle.
  • the second characteristic information includes contour information of the first obstacle; the second characteristic information further includes size information and/or distance information of the first obstacle.
  • the area identification information is the position coordinates of the first obstacle in the area map; determining the local area map indicated by the area identification information in the area map of the working area includes:
  • in the area map, a local area map of a preset shape and a preset size is determined based on the area identification information; or,
  • the local area map to which the area identification information belongs is determined in the segmented area map, where the area map is pre-divided into a plurality of local area maps.
  • determining the local area map indicated by the area identification information in the area map of the working area includes:
  • matching the current environment information with the template environment information to determine the location of the mobile device includes:
  • inputting the current environment information and the template environment information into a pre-trained relocation neural network to obtain the location; the relocation neural network is used to determine whether the current environment information matches the template environment information, and the location corresponding to the matched template environment information is determined as the location of the self-mobile device.
  • the template environment information includes characteristic information of the second obstacle.
  • the second aspect provides an electronic device, the device includes a processor and a memory; a program is stored in the memory, and the program is loaded and executed by the processor to implement the relocation method of the self-mobile device described in the first aspect.
  • a third aspect provides a computer-readable storage medium, where a program is stored in the storage medium, and when the program is executed by a processor, the program is used to implement the self-mobile device relocation method provided in the first aspect.
  • the beneficial effect of the present disclosure is that: in response to the instruction to relocate the self-mobile device in the working area, the current environment information collected by the self-mobile device at its current location is acquired; the current environment information is identified to obtain the corresponding area identification information; the local area map indicated by the area identification information is determined in the area map of the working area; the template environment information corresponding to at least one map position in the local area map is obtained; and the current environment information is matched with the template environment information to determine the location of the self-mobile device in the local area map.
  • the problem of low relocation efficiency caused by cumbersome relocation methods of the self-mobile device can be solved.
  • when the self-mobile device needs to be relocated, the area identification information is identified, and the local area map indicated by the area identification information is used to relocate the current location of the self-mobile device.
  • the self-mobile device does not need to search for a certain position in the entire working area, but can realize relocation by using a local area map, which can improve the efficiency of relocation.
  • the self-mobile device does not need to move to the original starting position, but only needs to match the current environment information of the current location with the template environment information of the local area map, which can save resources of the self-mobile device and further improve the efficiency of relocation.
  • the neural network model is jointly trained using the classification label of the first obstacle and the area label to obtain the area identification model; the trained area identification model is more accurate, which can improve the accuracy of identifying the area identification information.
  • the area identification model can compare the current environment information with the first feature information to determine whether there is a first obstacle; since the first obstacle can indicate the area identification information, the computational difficulty of the network model can be reduced and the computing resources of the self-mobile device can be saved.
  • FIG. 1 is a schematic structural diagram of a mobile device provided by an embodiment of the present disclosure
  • FIG. 2 is a flowchart of a relocation method of a self-mobile device provided by an embodiment of the present disclosure;
  • Fig. 3 is a block diagram of an apparatus for relocating a self-mobile device provided by an embodiment of the present disclosure;
  • Fig. 4 is a block diagram of an electronic device provided by an embodiment of the present disclosure.
  • orientation words such as "up, down, top, bottom" usually refer to the directions shown in the drawings, or to the vertical direction or the direction of gravity; similarly, for the convenience of understanding and description, "inner and outer" refer to the inner and outer sides relative to the outline of each component itself, but the above-mentioned orientation words are not used to limit the present disclosure.
  • FIG. 1 it is a schematic structural diagram of a self-moving device provided by an embodiment of the present disclosure.
  • the self-moving device may be, for example, a sweeping robot or a floor-washing robot; this embodiment does not limit the type of the self-moving device. It can be seen from FIG. 1 that the self-moving device at least includes: a driving component 110, a moving component 120, a controller 130 and a first sensor 140.
  • the driving component 110 is connected with the moving component 120 and is used to drive the moving component 120 to run, so as to drive the mobile device to move.
  • the driving component 110 is connected with the controller 130 and is used to drive the moving component 120 to run in response to an instruction issued by the controller 130 .
  • the driving assembly 110 may be implemented as a DC motor, a servo motor, a stepping motor, etc., and this embodiment does not limit the implementation of the driving assembly 110 .
  • the first sensor 140 is used to collect current environment information.
  • the first sensor 140 may be a camera equipped with a color system (Red Green Blue, RGB) detection function, an infrared sensor, or a laser radar sensor, etc.
  • the type of the first sensor 140 is not limited in this embodiment.
  • the first sensor 140 may be installed on the casing of the self-mobile device, and used to collect the environment where the self-mobile device is located.
  • the collection range of the first sensor 140 includes, but is not limited to: the area directly in front of, obliquely above, and/or obliquely below the traveling direction of the self-mobile device; and/or the area on the left side of the traveling direction; and/or the area on the right side of the traveling direction; and/or the area behind the traveling direction, etc.; this embodiment does not limit the collection range of the first sensor 140.
  • the number of first sensors 140 may be one or at least two; where there are at least two first sensors 140, the types of the different first sensors 140 may be the same or different. This embodiment does not limit the number or implementation of the first sensors 140.
  • the first sensor 140 is connected with the controller 130 to send the collected current environment information to the controller 130 .
  • the controller 130 is used to relocate the mobile device.
  • the controller 130 may be implemented as a single-chip microcomputer or a processor, and this embodiment does not limit the implementation manner of the controller 130 .
  • the controller 130 is configured to: respond to an instruction to relocate the mobile device in the working area, obtain the current environment information collected from the mobile device based on the current location; identify the current environment information, and obtain The regional identification information corresponding to the current environmental information; determine the local area map indicated by the regional identification information in the regional map of the working area; obtain the template environmental information corresponding to at least one map position in the local area map; compare the current environmental information with the template environmental information Match to determine the location from the mobile device in the local area map.
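The controller's relocation flow above (identify area, pick the indicated local area map, match against its templates) can be sketched as follows. This is an illustrative outline only: all function and variable names (`relocate`, `area_model`, `match_fn`, the toy similarity score) are hypothetical, not from the disclosure.

```python
def relocate(current_env, area_model, area_maps, templates, match_fn):
    """Sketch of the controller 130 flow: identify area identification
    info, select the local area map it indicates, then match the current
    environment info against that map's template environment info."""
    area_id = area_model(current_env)            # identify area identification info
    local_map = area_maps[area_id]               # local area map indicated by it
    best_pos, best_score = None, float("-inf")
    for pos, template in templates[area_id]:     # match against templates
        score = match_fn(current_env, template)
        if score > best_score:
            best_pos, best_score = pos, score
    return local_map, best_pos

# toy run: 'environment info' is a scalar feature; similarity is
# the negative absolute difference (an assumption for illustration)
maps = {"restaurant": "restaurant-map"}
tmpl = {"restaurant": [((1, 2), 0.8), ((3, 4), 0.3)]}
m, pos = relocate(0.75, lambda e: "restaurant", maps, tmpl,
                  lambda e, t: -abs(e - t))
```

Only the templates of one local area map are scanned, which is why the search is cheaper than traversing the entire working area.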
  • the instruction for relocating the mobile device within the working area is generated by the mobile device based on the sensing data of the second sensor.
  • the mobile device is also provided with a second sensor 150 , which is connected to the controller 130 and used to send sensing data to the controller 130 .
  • the controller 130 determines whether the self-mobile device is hijacked based on the sensing data; if it is determined that the self-mobile device is hijacked, a relocation instruction is generated after the self-mobile device breaks free of the hijacking; if it is determined that the self-mobile device has not been hijacked, the step of determining whether the self-mobile device is hijacked is performed again until the self-mobile device finishes working.
  • being hijacked refers to an abnormal movement of the mobile device
  • the abnormal movement refers to a movement that is not initiated by the self-mobile device itself and that the self-mobile device cannot sense through its own motion components. Therefore, when the self-mobile device is hijacked, it cannot locate, or cannot accurately locate, its own position. For example, when the self-mobile device is moved, suspended in the air during movement, or dragged over a large area, the self-mobile device is hijacked.
  • breaking free of the hijacking refers to the end of the abnormal movement of the self-mobile device; for example, when the self-mobile device returns to the ground after being moved, or stops being dragged after being dragged, the self-mobile device has broken free of the hijacking.
  • the sensing data includes, but is not limited to: height data, displacement data, and/or angle data, etc. from the mobile device.
  • the second sensor 150 includes, but is not limited to, a gyroscope, a displacement sensor, or an image collector etc., this embodiment does not limit the implementation manner of the second sensor 150 .
  • determining whether the self-mobile device is hijacked based on the sensing data includes: comparing the change of the sensing data with the template change in the hijacked state; if the change of the sensing data matches the template change in the hijacked state, it is determined that the self-mobile device is hijacked; if the change of the sensing data does not match the template change in the hijacked state, it is determined that the self-mobile device is not hijacked.
  • determining whether the self-mobile device is hijacked based on the sensing data includes: determining whether the variation value of the sensing data is within the variation range of the hijacked state; if so, determining that the self-mobile device is hijacked; if not, determining that the self-mobile device is not hijacked.
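The range-based check in the bullet above amounts to testing whether a sensed variation falls inside a pre-stored hijacked-state range. A minimal sketch (the function name, units, and threshold values are assumptions, not from the disclosure):

```python
def is_hijacked(variation, hijack_range):
    """Return True if the variation value of the sensing data lies
    within the pre-stored variation range of the hijacked state."""
    low, high = hijack_range
    return low <= variation <= high

# e.g. a height change of 4 cm, with an assumed hijacked-state
# range of 2-50 cm, indicates the device has been lifted
print(is_hijacked(4.0, (2.0, 50.0)))  # True
```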
  • the sensing data includes but not limited to: contact data, etc.
  • the second sensor 150 includes but not limited to a pressure sensor, or a contact sensor, etc., and this embodiment does not limit the implementation of the second sensor 150 .
  • determining whether the self-mobile device is hijacked based on the sensing data includes: determining whether the sensing data indicates that there is an object approaching the self-mobile device; if so, determining that the self-mobile device is hijacked; if not, determining that the self-mobile device is not hijacked.
  • the command to relocate the mobile device within the working area is sent from a control device communicatively connected to the mobile device.
  • the control device may be a mobile phone, a remote controller, or a wearable device, etc., and this embodiment does not limit the type of the control device.
  • the instruction to relocate the self-mobile device within the working area is generated when the self-mobile device receives a trigger operation acting on the relocation control.
  • a relocation control is also provided on the self-mobile device, and the relocation control may be a physical button or a virtual control displayed through a touch screen. This embodiment does not limit the implementation of the relocation control.
  • the above manner of obtaining the relocation instruction is only illustrative. In actual implementation, the manner of obtaining the relocation instruction from the mobile device may also be other manners, which will not be listed here in this embodiment.
  • the self-moving device may also include other components, such as a power supply component, a shock absorption component, etc., which will not be listed here in this embodiment.
  • the original starting position is searched from the hijacked position of the mobile device.
  • the search method is usually: the self-mobile device moves randomly in the working area and receives, through an infrared receiving device, the infrared signal emitted at the original starting position (such as the charging stand); when the infrared signal is received, the self-mobile device moves towards the source of the infrared signal until it reaches the original starting position, and reads the map position of the original starting position stored in the area map of the working area to complete the relocation.
  • the traditional relocation method cannot relocate the self-mobile device at the position where it breaks free of the hijacking; it needs to search for the original starting position in the entire working area, so the efficiency of relocation is low.
  • the current location of the mobile device is relocated by identifying the area identification information when the mobile device needs to be relocated, and using the local area map indicated by the area identification information.
  • the self-mobile device does not need to search for a certain position in the entire working area, but can realize relocation by using a local area map, which can improve the efficiency of relocation.
  • the self-mobile device does not need to move to the original starting position, but only needs to match the current environment information of the current location with the template environment information of the local area map, which can save resources of the self-mobile device and further improve the efficiency of relocation.
  • FIG. 2 shows a method for relocating a self-mobile device provided in this embodiment.
  • the method is used in the controller 130 shown in FIG. 1 as an example for illustration.
  • the method includes at least the following steps:
  • Step 201, in response to an instruction to relocate the self-mobile device within the working area, acquire the current environment information collected by the self-mobile device at its current location.
  • the instruction for relocating the self-mobile device is generated by the self-mobile device based on the sensing data of the second sensor; or, it is sent by a control device communicatively connected to the self-mobile device; or, it is generated when a trigger operation acting on the relocation control is received. This embodiment does not limit the manner of obtaining the instruction for relocating the self-mobile device.
  • the current environment information is obtained from the current environment where the mobile device is located.
  • the current environment information may be image data and/or point cloud data, and the current environment information may be three-dimensional data or two-dimensional data, and this embodiment does not limit the implementation manner of the current environment information.
  • the current environment information may be collected by the controller controlling the first sensor when the self-mobile device obtains a relocation instruction; or, it may be collected continuously after the first sensor is powered on. This embodiment does not limit the timing of collection.
  • Step 202, identify the current environment information to obtain the area identification information corresponding to the current environment information.
  • the area identification information is used to uniquely indicate a certain local area in the working area.
  • the area identification information is the position coordinates of the first obstacle in the area map.
  • the first obstacle is used to indicate area identification information.
  • the first obstacle refers to an obstacle that can indicate the attribute of the local area.
  • the first obstacle is a dining table, and the attribute of the local area indicated by the dining table is a restaurant; another example: the first obstacle is a toilet, and the attribute of the local area indicated by the toilet is a bathroom; another example: the first obstacle is a bed, the bed The attribute of the indicated local area is a bedroom, and this embodiment does not list all implementations of the first obstacle here.
  • the area identification information is an area identification of the local area, and the area identification may be an attribute or a label of the local area.
  • the area identification information is a restaurant, a bathroom, and/or a bedroom.
  • for example, labels 1, 2, 3, etc. are pre-set as area identification information for each local area map in the area map.
  • this embodiment takes the attributes of the local area including restaurant, bathroom, and bedroom as examples for illustration.
  • the attributes of the local area can also be divided in other ways, for example: dividing the attributes of the local area into an office area, a coffee break area, etc.; this embodiment does not limit the way the attributes of the local area are divided.
  • the way to identify the current environmental information and obtain the area identification information corresponding to the current environmental information includes but is not limited to at least one of the following:
  • the first identification method is to input the current environment information into the pre-trained area identification model to obtain the area identification information.
  • the region recognition model is obtained by using training data to train a preset neural network model.
  • the realization of the training data includes but is not limited to the following situations:
  • the training data only includes the sample environment information and the region labels corresponding to the sample environment information.
  • the training process of the area recognition model includes: inputting the sample environment information into the preset first neural network model to obtain the first training result; inputting the first training result and the area label corresponding to the sample environment information into the first loss function, Obtaining a first loss result; training the first neural network model based on the first loss result to reduce the difference between the first training result and the corresponding region label until the neural network model converges to obtain a region recognition model.
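As a stand-in for the first training scheme, the sketch below trains a softmax classifier that maps environment features to region labels by minimizing cross-entropy (the "first loss") with gradient descent. The toy features, learning rate, and epoch count are illustrative assumptions; the disclosure does not fix the model architecture.

```python
import numpy as np

def train_region_model(X, y, n_regions, lr=0.5, epochs=300):
    """Minimal first-scheme sketch: fit a linear softmax classifier
    on (sample environment info, region label) pairs."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_regions))
    b = np.zeros(n_regions)
    onehot = np.eye(n_regions)[y]
    for _ in range(epochs):
        logits = X @ W + b
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        grad = (p - onehot) / len(X)      # gradient of cross-entropy w.r.t. logits
        W -= lr * X.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

def predict_region(W, b, x):
    """Return the predicted region label for one feature vector."""
    return int(np.argmax(x @ W + b))

# toy 'environment features': two separable clusters = two regions
X = np.array([[0.0, 1.0], [0.1, 0.9], [1.0, 0.0], [0.9, 0.1]])
y = np.array([0, 0, 1, 1])
W, b = train_region_model(X, y, n_regions=2)
```

Training stops here after a fixed epoch budget; the disclosure's "until the neural network model converges" would correspond to monitoring the loss instead.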
  • when the area label is the position coordinates of the first obstacle, the area identification information is the position coordinates of the first obstacle; when the area label is the area attribute of the first obstacle, the area identification information is the area attribute of the first obstacle.
  • Inputting the current environment information into the pre-trained area recognition model to obtain area identification information includes: inputting the current environment information into the pre-trained area identification model to obtain the area identification information corresponding to the current environment information.
  • the first neural network model may be a convolutional neural network (Convolutional Neural Network, CNN), a recursive neural network (Recursive Neural Network, RNN), or a feedforward neural network (Feedforward Neural Network, FNN); this embodiment does not limit the implementation of the first neural network model.
  • the training data not only includes the sample environment information and the area label corresponding to the sample environment information, but also includes the classification label of the first obstacle corresponding to the sample environment information, wherein the classification label of the first obstacle is used, in combination with the area label, to jointly train the neural network model to obtain the area recognition model.
  • the classification label of the first obstacle is the attribute label of the first obstacle.
  • the training process of the area recognition model includes: inputting the sample environment information into the preset second neural network model to obtain the second training result, the second training result includes the area prediction result and the classification prediction result; the second training result , the region label and the classification label are input into the second loss function to obtain the second loss result; the second neural network model is trained based on the second loss result to reduce the difference between the second training result and the corresponding region label and classification label value until the second neural network model converges to obtain the region recognition model.
  • the second neural network model includes two network branches, one of which is used to calculate the region prediction result, and the other branch is used to calculate the classification result.
  • in this way, the trained area identification model is more accurate, which can improve the accuracy of identifying the area identification information.
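The joint training described above combines one loss term per branch. A sketch of such a "second loss" is below; the equal-style weighting `w` is an illustrative assumption, since the disclosure does not specify how the two terms are combined.

```python
import numpy as np

def cross_entropy(probs, label):
    """Negative log-likelihood of the correct label."""
    return -np.log(probs[label])

def joint_loss(region_probs, class_probs, region_label, class_label, w=0.5):
    """Sketch of a joint loss: the region branch and the obstacle
    classification branch each contribute a cross-entropy term."""
    return (cross_entropy(region_probs, region_label)
            + w * cross_entropy(class_probs, class_label))

region_probs = np.array([0.7, 0.2, 0.1])   # predicted region distribution
class_probs = np.array([0.6, 0.4])         # predicted obstacle-class distribution
loss = joint_loss(region_probs, class_probs, region_label=0, class_label=0)
```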
  • inputting the current environment information into the pre-trained area recognition model to obtain area identification information includes: inputting the current environment information into the area identification model to obtain the classification result and area identification information of the first obstacle corresponding to the current environment information.
  • the training data not only includes the sample environment information and the area label corresponding to the sample environment information, but also includes the first characteristic information of the first obstacle corresponding to the sample environment information.
  • the first feature information may be contour information of the first obstacle, or a feature vector of the first obstacle, and this embodiment does not limit the implementation manner of the first feature information.
  • the training process of the region recognition model includes: inputting the sample environment information and the first feature information into the preset third neural network model to obtain the third training result; inputting the third training result and the region label into the third loss function , to obtain the third loss result; based on the third loss result, the third neural network model is trained to reduce the difference between the third training result and the corresponding region label, until the third neural network model converges, and the region recognition model is obtained .
  • the third neural network model is used to compare the sample environment information with the first feature information to determine whether the sample environment information matches the first feature information, thereby determining whether there is a first obstacle in the sample environment information. Since the first obstacle can indicate the area identification information, the computational difficulty of the network model can be reduced and the computing resources of the self-mobile device can be saved.
  • inputting the current environment information into the pre-trained area recognition model to obtain the area identification information includes: inputting the current environment information and the first feature information of the first obstacle into the area identification model to obtain the area identification information.
  • the first characteristic information of the first obstacle is pre-stored in the self-mobile device.
  • the second identification method is to obtain the second feature information of the first obstacle; match the acquired current environment information with the second feature information; if the current environment information includes information matching the second feature information, Then it is determined that the area identification information corresponding to the current environment information is the area identification information indicated by the first obstacle.
  • the second feature information is the same as or different from the first feature information.
  • the second feature information may be the outline information of the first obstacle, the size information and/or distance information of the first obstacle, or other information that can be used to describe the characteristics of the first obstacle; this embodiment does not limit the second feature information.
  • for example, the first obstacle is a dining table, and the second feature information is the shape and size of the dining table. If the current environment information includes information matching the shape and size of the dining table, it is determined that the area identification information corresponding to the current environment information is the area identification information indicated by the dining table, for example, a restaurant.
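The dining-table example above can be sketched as a tolerance match between observed measurements and stored second feature information. The dictionary keys, dimensions, and 10% tolerance are hypothetical values for illustration only.

```python
def matches_feature(observed, template, tol=0.1):
    """Second identification way (sketch): the current environment info
    matches the second feature information if every stored measurement
    agrees within a relative tolerance (tolerance is an assumption)."""
    return all(abs(observed[k] - template[k]) <= tol * template[k]
               for k in template)

# assumed second feature information for a dining table
dining_table = {"width_m": 1.6, "depth_m": 0.9, "height_m": 0.75}
observed = {"width_m": 1.55, "depth_m": 0.92, "height_m": 0.74}
if matches_feature(observed, dining_table):
    area_id = "restaurant"   # area identification info indicated by the table
```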
  • Step 203, determine the local area map indicated by the area identification information in the area map of the working area.
  • the local area map may change based on changes in area identification information.
  • the area identification information is the position coordinates of the first obstacle in the area map.
  • the manner of determining the local area map indicated by the area identification information in the area map of the working area includes, but is not limited to, at least one of the following:
  • the first manner: in the area map, a local area map with a preset shape and a preset size is determined based on the area identification information.
  • the preset shape and preset size are pre-stored in the self-moving device.
  • the preset shape may be a circle, a rectangle, or an irregular shape, and this embodiment does not limit the implementation of the preset shape.
  • determining a local area map with a preset shape and a preset size based on the area identification information includes: taking the position coordinates of the first obstacle as the centroid of the local area map, and generating a local area map with the preset shape and the preset size.
  • the first obstacle is the charging stand
  • the area identification information is the location coordinates of the charging stand.
  • the default shape is a circle
  • the default size is a radius of 2 meters.
  • the local area map is a circular area on the area map with the location coordinates of the charging stand as the center and a radius of 2 meters.
  • the position coordinates of the first obstacle may also be located at the edge of the local area map or at another position. This embodiment does not limit the method of determining the local area map with a preset shape and a preset size based on the area identification information.
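The first manner (a circular local area map of preset radius centred on the first obstacle, such as the charging stand) can be sketched as follows. The 2-metre radius follows the example above; the coordinates and function names are assumptions for illustration:

```python
def in_local_map(point, anchor, radius=2.0):
    """True if a map point lies inside the circular local area map
    centred on the anchor (e.g. the charging stand coordinates)."""
    dx, dy = point[0] - anchor[0], point[1] - anchor[1]
    return dx * dx + dy * dy <= radius * radius

def crop_local_map(map_points, anchor, radius=2.0):
    """Keep only the points of the area map that fall inside the
    local area map, yielding the cropped local area map."""
    return [p for p in map_points if in_local_map(p, anchor, radius)]
```

With the charging stand at the origin, a point 3 metres away is excluded from the local area map while points within 2 metres are retained.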
  • the second manner: in an area map that has been segmented, determine the local area map to which the area identification information belongs, the area map being pre-divided into a plurality of local area maps.
  • the manner of dividing the area map into multiple local area maps includes, but is not limited to: dividing the area map according to attributes, or dividing the area map according to a preset segmentation size.
  • the area map may also be divided in other manners, which are not listed one by one in this embodiment.
  • For example: the area map is divided into a bedroom area map and a bathroom area map.
  • the area identification information is the location coordinates of the charging stand, the charging stand is located in the bedroom area, and the local area map to which the area identification information belongs is the bedroom area map.
  • the third manner: determining the local area map indicated by the area identification information in the area map of the working area includes: determining the local area map corresponding to the area identification information based on the correspondence between the area identification information and the local area maps.
  • the area identification information is the label of each local area map.
  • the local area map corresponding to the label is found from the correspondence.
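A hypothetical sketch of the second and third manners is given below. The room labels and the rectangle bounds (x_min, y_min, x_max, y_max) standing in for pre-divided local area maps are invented for illustration:

```python
# Hypothetical correspondence between area identification information
# (a label) and pre-divided local area maps.
LOCAL_MAPS = {
    "bedroom": (0.0, 0.0, 4.0, 5.0),
    "bathroom": (4.0, 0.0, 6.0, 3.0),
}

def local_map_for_label(label):
    """Third manner: look up the local area map by its label in the
    stored correspondence."""
    return LOCAL_MAPS.get(label)

def local_map_containing(point):
    """Second manner: find the pre-divided local area map that contains
    a coordinate (e.g. the charging stand located in the bedroom)."""
    for label, (x0, y0, x1, y1) in LOCAL_MAPS.items():
        if x0 <= point[0] <= x1 and y0 <= point[1] <= y1:
            return label
    return None
```

So a charging stand at (1, 1) would resolve to the bedroom local area map, matching the example above.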
  • Step 204: acquire template environment information corresponding to at least one map position in the local area map.
  • the local area map includes template environment information corresponding to multiple map locations, and the multiple template environment information all have location coordinates corresponding to the map locations.
  • the controller obtains the template environment information corresponding to at least one map position.
  • Step 205: match the current environment information with the template environment information to determine the location of the self-moving device in the local area map.
  • the template environment information includes characteristic information of the second obstacle.
  • the manner of matching the current environment information with the template environment information includes but not limited to at least one of the following:
  • the first is to input the current environment information and the template environment information into the pre-trained relocation neural network to obtain the position of the self-mobile device in the local area map.
  • the relocation neural network is used to determine whether the current environment information matches the template environment information, and determine the position corresponding to the matched template environment information as the position of the self-mobile device in the local area map.
  • the second obstacle can be any obstacle in the working area, such as: tables, chairs, carpets, walls, etc., and this embodiment does not limit the type of the second obstacle.
  • the feature information of the second obstacle may be the shape, size, or feature vector of the second obstacle, and this embodiment does not limit the implementation manner of the feature information of the second obstacle.
  • the training process of the relocation neural network includes: inputting the sample environment information and each piece of template environment information into a preset neural network model to obtain a similarity result; comparing the similarity result with the true similarity result; and training the neural network model based on the comparison result to obtain the relocation neural network.
  • matching the current environment information with the template environment information to determine the position of the self-moving device in the local area map includes: inputting the current environment information and the template environment information into the pre-trained relocation neural network to obtain multiple similarity results,
  • and determining the map position corresponding to the template environment information ranked first among the similarity results as the position of the self-moving device in the local area map.
  • the second method is to calculate the similarity between the current environment information and each piece of template environment information, and determine the map position corresponding to the template environment information with the greatest similarity as the position of the self-moving device in the local area map.
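The second matching manner (direct similarity between the current environment information and each template, taking the map position of the most similar template) can be sketched as below, assuming the environment information has been reduced to feature vectors. The cosine measure is one possible similarity choice, not one mandated by the disclosure:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two non-zero feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def relocate(current_feature, templates):
    """templates: list of (map_position, feature_vector) pairs from the
    local area map. Return the map position whose template environment
    information is most similar to the current environment information."""
    best_pos, best_sim = None, -2.0
    for pos, feat in templates:
        sim = cosine_similarity(current_feature, feat)
        if sim > best_sim:
            best_pos, best_sim = pos, sim
    return best_pos
```

Because only the templates of the local area map are scanned, the search cost scales with the local map rather than with the entire working area, which is the efficiency gain the method claims.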
  • In the relocation method for the self-moving device, in response to an instruction to relocate the self-moving device within the working area, the current environment information collected by the self-moving device at its current location is acquired; the current environment information is identified to obtain the corresponding area identification information; the local area map indicated by the area identification information is determined in the area map of the working area; template environment information corresponding to at least one map position in the local area map is acquired; and the current environment information is matched with the template environment information to determine the location of the self-moving device in the local area map.
  • the problem of low relocation efficiency caused by cumbersome relocation methods of the self-mobile device can be solved.
  • by identifying the area identification information when the self-moving device needs to be relocated, the local area map indicated by the area identification information is used to relocate the current location of the self-moving device.
  • the self-mobile device does not need to search for a certain position in the entire working area, but can realize relocation by using a local area map, which can improve the efficiency of relocation.
  • the self-moving device does not need to return to its original starting position; it only needs to match the current environment information of its current location with the template environment information of the local area map, which can save the resources of the self-moving device and further improve the efficiency of relocation.
  • the neural network model is jointly trained using the classification label and the area label of the first obstacle to obtain the area recognition model; the trained area recognition model is more accurate, which improves the accuracy of identifying the area identification information.
  • the area recognition model can compare the current environment information with the first feature information to determine whether a first obstacle is present. Since the first obstacle can indicate the area identification information, this reduces the computational difficulty of the network model and saves the computing resources of the self-moving device.
  • Fig. 3 is a block diagram of an apparatus for relocating an autonomous mobile device provided by an embodiment of the present disclosure. This embodiment is described by taking the application of the apparatus in the autonomous mobile device shown in Fig. 1 as an example.
  • the device at least includes the following modules: a first acquisition module 310 , an information identification module 320 , a map determination module 330 , a second acquisition module 340 and a relocation module 350 .
  • the first acquiring module 310 is configured to acquire, in response to an instruction to relocate the self-moving device within the working area, the current environment information collected by the self-moving device at its current location.
  • the information identification module 320 is configured to identify the current environment information, and obtain area identification information corresponding to the current environment information.
  • the map determination module 330 is configured to determine the local area map indicated by the area identification information in the area map of the working area.
  • the second acquiring module 340 is configured to acquire template environment information corresponding to at least one map position in the local area map.
  • the relocation module 350 is configured to match the current environment information with the template environment information, so as to determine the location of the self-moving device in the local area map.
  • When the relocation apparatus for the self-moving device provided in the above embodiments performs relocation, the division of the above functional modules is merely used as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the relocation apparatus may be divided into different functional modules to complete all or part of the functions described above.
  • The relocation apparatus for the self-moving device provided by the above embodiments belongs to the same concept as the embodiments of the relocation method for the self-moving device; its specific implementation process is detailed in the method embodiments and will not be repeated here.
  • the electronic device may be the self-moving device in FIG. 1 .
  • the electronic device includes at least a processor 401 and a memory 402 .
  • the processor 401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • The processor 401 may be implemented in at least one of the following hardware forms: DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
  • The processor 401 may also include a main processor and a coprocessor; the main processor is a processor for processing data in the wake-up state, also known as a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state.
  • the processor 401 may be integrated with a GPU (Graphics Processing Unit), which is used for rendering and drawing the content that needs to be displayed on the display screen.
  • the processor 401 may also include an AI (Artificial Intelligence) processor, which is used to process computing operations related to machine learning.
  • Memory 402 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 402 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
  • the non-transitory computer-readable storage medium in the memory 402 is used to store at least one instruction, and the at least one instruction is executed by the processor 401 to implement the relocation method for the self-moving device provided by the method embodiments of the present disclosure.
  • the electronic device may optionally further include: a peripheral device interface and at least one peripheral device.
  • the processor 401, the memory 402, and the peripheral device interface may be connected through a bus or a signal line.
  • Each peripheral device can be connected with the peripheral device interface through a bus, a signal line or a circuit board.
  • peripheral devices include but are not limited to: radio frequency circuits, touch screens, audio circuits, and power supplies.
  • the electronic device may also include fewer or more components, which is not limited in this embodiment.
  • the present disclosure further provides a computer-readable storage medium, in which a program is stored, and the program is loaded and executed by a processor to implement the method for relocating from a mobile device in the foregoing method embodiments.

Abstract

A relocation method for a self-moving device, a device, and a storage medium. The relocation method includes: in response to an instruction to relocate a self-moving device within a working area, acquiring current environment information collected by the self-moving device based on its current position (S201); identifying the current environment information to obtain area identification information corresponding to the current environment information (S202); determining, in an area map, a local area map indicated by the area identification information (S203); acquiring template environment information corresponding to at least one map position in the local area map (S204); and matching the current environment information with the template environment information to determine the position of the self-moving device in the local area map (S205). The area identification information is identified when the self-moving device needs to be relocated, and relocation is performed by means of the local area map indicated by the area identification information, so that relocation efficiency can be improved and the problem of the cumbersome relocation manner of the self-moving device is solved.
PCT/CN2022/129455 2021-12-13 2022-11-03 Procédé de relocalisation pour dispositif automoteur, dispositif et support de stockage WO2023109347A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111517584.6 2021-12-13
CN202111517584.6A CN116263598A (zh) 2021-12-13 2021-12-13 自移动设备的重定位方法、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2023109347A1 true WO2023109347A1 (fr) 2023-06-22

Family

ID=86721797

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/129455 WO2023109347A1 (fr) 2021-12-13 2022-11-03 Procédé de relocalisation pour dispositif automoteur, dispositif et support de stockage

Country Status (2)

Country Link
CN (1) CN116263598A (fr)
WO (1) WO2023109347A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012107892A2 (fr) * 2011-02-09 2012-08-16 Primesense Ltd. Détection de regard dans un environnement de mappage tridimensionnel (3d)
CN107037806A (zh) * 2016-02-04 2017-08-11 科沃斯机器人股份有限公司 自移动机器人重新定位方法及采用该方法的自移动机器人
CN111158374A (zh) * 2020-01-10 2020-05-15 惠州拓邦电气技术有限公司 重定位方法、系统、移动机器人及存储介质
CN111539400A (zh) * 2020-07-13 2020-08-14 追创科技(苏州)有限公司 自移动设备的控制方法、装置、存储介质及自移动设备
CN111539398A (zh) * 2020-07-13 2020-08-14 追创科技(苏州)有限公司 自移动设备的控制方法、装置及存储介质

Also Published As

Publication number Publication date
CN116263598A (zh) 2023-06-16

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22906099

Country of ref document: EP

Kind code of ref document: A1