CN116263598A - Relocation method and equipment for self-mobile equipment and storage medium - Google Patents
- Publication number
- CN116263598A (application number CN202111517584.6A)
- Authority
- CN
- China
- Prior art keywords
- information
- area
- self
- environment information
- identification information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0214—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0221—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
Abstract
The application belongs to the technical field of artificial intelligence, and particularly relates to a repositioning method, device, and storage medium for a self-mobile device. The method comprises the following steps: in response to an instruction to reposition the self-mobile device within a working area, acquiring current environment information collected by the self-mobile device at its current position; recognizing the current environment information to obtain area identification information corresponding to it; determining, in the area map, the local area map indicated by the area identification information; acquiring template environment information corresponding to at least one map position in the local area map; and matching the current environment information against the template environment information to determine the position of the self-mobile device in the local area map. This addresses the problem that conventional repositioning of a self-mobile device is cumbersome. By recognizing the area identification information when the self-mobile device needs to be repositioned, and repositioning within the local area map indicated by that information, repositioning efficiency can be improved.
Description
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to a repositioning method, equipment and storage medium of self-mobile equipment.
Background
Currently, a self-mobile device can achieve autonomous positioning and navigation by means of simultaneous localization and mapping (SLAM) techniques. However, during the SLAM process the device may be hijacked, for example: moved, suspended, or dragged over a large range. At that point, uncontrolled drift errors occur in the positioning of the self-mobile device, and repositioning is required.
A conventional relocation method comprises: the self-mobile device searches for the original departure position from the hijacked position, thereby completing its relocation.
However, this conventional relocation method is too cumbersome, resulting in low relocation efficiency.
Disclosure of Invention
The application provides a relocation method, device, and storage medium for a self-mobile device, which can solve the problem of low relocation efficiency caused by the cumbersome relocation mode of the self-mobile device. The application provides the following technical solutions:
in a first aspect, there is provided a relocation method of a self-mobile device, the method comprising: responding to an instruction for repositioning the self-mobile equipment in a working area, and acquiring current environment information acquired by the self-mobile equipment based on the current position;
Identifying the current environment information to obtain region identification information corresponding to the current environment information;
determining a local area map indicated by the area identification information in an area map of the working area;
acquiring template environment information corresponding to at least one map position in the local area map;
and matching the current environment information with the template environment information to determine the position of the self-mobile device in the local area map.
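Taken together, the recited steps form a simple pipeline. The sketch below is illustrative only; every name in it (`relocate`, `identify_area`, `templates_for`, `match`) is a hypothetical stand-in, since the claims do not prescribe any particular API:

```python
# Illustrative sketch of the claimed relocation flow.
# All names below are hypothetical; the patent does not prescribe an API.

def relocate(current_env, area_map, identify_area, templates_for, match):
    """Return the device's position in the local area map, or None."""
    area_id = identify_area(current_env)       # step 2: recognize area identification info
    local_map = area_map[area_id]              # step 3: local area map indicated by the id
    templates = templates_for(local_map)       # step 4: template env info per map position
    for position, template_env in templates:   # step 5: match current vs template info
        if match(current_env, template_env):
            return position
    return None
```

A caller would supply the recognition, template-lookup, and matching functions described in the optional claims below; the pipeline itself is independent of how each step is realized.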
Optionally, the identifying the current environmental information to obtain the area identification information corresponding to the current environmental information includes:
inputting the current environment information into a pre-trained region identification model to obtain the region identification information; the region identification model is obtained by training a preset neural network model by using training data; the training data comprises sample environment information and area labels corresponding to the sample environment information.
Optionally, the training data further includes a classification tag of a first obstacle corresponding to the sample environmental information, where the classification tag of the first obstacle is used to combine with the region tag to perform joint training on the neural network model, so as to obtain the region identification model; correspondingly, the step of inputting the current environment information into a pre-trained area identification model to obtain the area identification information comprises the following steps: inputting the current environment information into the area identification model to obtain a classification result of a first obstacle corresponding to the current environment information and the area identification information;
Or,
the training data further includes first characteristic information of the first obstacle; correspondingly, the step of inputting the current environment information into a pre-trained area identification model to obtain the area identification information comprises: inputting the current environment information and the first characteristic information into the area identification model to obtain the area identification information;
wherein the first obstacle is used for indicating area identification information.
Optionally, the identifying the current environmental information to obtain the area identification information corresponding to the current environmental information further includes:
acquiring second characteristic information of a first obstacle, wherein the first obstacle is used for indicating area identification information;
matching the current environment information with the second characteristic information;
and determining that the area identification information corresponding to the current environment information is the area identification information indicated by the first obstacle when the current environment information comprises information matched with the second characteristic information.
Optionally, the second characteristic information includes contour information of the first obstacle; the second characteristic information further comprises size information and/or distance information of the first obstacle.
Optionally, the area identification information is a position coordinate of the first obstacle in the area map; the determining the local area map indicated by the area identification information in the area map of the working area comprises the following steps:
determining a local area map with a preset shape and a preset size based on the area identification information in the area map;
or,
in the area map that has undergone area division, determining the local area map to which the area identification information belongs, the area map having been divided into a plurality of local area maps in advance.
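The two alternatives above can be illustrated concretely. In this hedged sketch, the grid representation, function names, and region bounding boxes are all assumptions made for illustration; the patent does not specify how the area map is stored:

```python
# Two illustrative ways to pick the local area map (names are hypothetical).

def window_local_map(grid, cx, cy, half):
    """Variant 1: crop a preset-size square window centred on the
    first obstacle's position coordinate (cx, cy) in the area map."""
    rows, cols = len(grid), len(grid[0])
    r0, r1 = max(0, cy - half), min(rows, cy + half + 1)
    c0, c1 = max(0, cx - half), min(cols, cx + half + 1)
    return [row[c0:c1] for row in grid[r0:r1]]

def divided_local_map(regions, cx, cy):
    """Variant 2: the area map was divided into local maps in advance;
    return the region whose bounding box contains (cx, cy)."""
    for name, (x0, y0, x1, y1) in regions.items():
        if x0 <= cx < x1 and y0 <= cy < y1:
            return name
    return None
```

Variant 1 matches the "preset shape and preset size" branch; variant 2 matches the "area map divided in advance" branch.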
Optionally, determining a local area map indicated by the area identification information in an area map of the working area includes:
and determining the local area map corresponding to the area identification information based on the corresponding relation between the area identification information and the local area map.
Optionally, matching the current environmental information with the template environmental information to determine a location of the self-mobile device includes:
inputting the current environment information and the template environment information into a pre-trained repositioning neural network to obtain the position; the repositioning neural network is used for determining whether the current environment information and the template environment information are matched or not, and determining the position corresponding to the matched template environment information as the position.
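The repositioning neural network itself is left unspecified in the claims. Purely to illustrate the matching logic, the sketch below substitutes a plain cosine-similarity score for the trained network; the threshold and the feature-vector representation are invented:

```python
import math

# Illustrative stand-in for the repositioning step: the patent uses a
# trained repositioning neural network; a cosine-similarity score is
# substituted here purely to show the match-then-return-position logic.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def best_position(current_feat, templates, threshold=0.9):
    """templates: list of (map_position, feature_vector) pairs.
    Return the position whose template best matches, or None."""
    best_pos, best_score = None, threshold
    for position, feat in templates:
        score = cosine(current_feat, feat)
        if score >= best_score:
            best_pos, best_score = position, score
    return best_pos
```

As in the claim, the position corresponding to the matched template environment information is returned as the device's position.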
Optionally, the template environmental information includes characteristic information of the second obstacle.
In a second aspect, an electronic device is provided, the device comprising a processor and a memory; the memory stores a program that is loaded and executed by the processor to implement the relocation method of the self-mobile device according to the first aspect.
In a third aspect, there is provided a computer readable storage medium having stored therein a program for implementing the self-mobile device relocation method provided in the first aspect when executed by a processor.
The beneficial effects of this application lie in: in response to an instruction to reposition the self-mobile device within a working area, current environment information collected by the self-mobile device at its current position is acquired; the current environment information is recognized to obtain corresponding area identification information; the local area map indicated by the area identification information is determined within the area map of the working area; template environment information corresponding to at least one map position in the local area map is acquired; and the current environment information is matched against the template environment information to determine the position of the self-mobile device in the local area map. This solves the problem of low repositioning efficiency caused by cumbersome repositioning modes. When the self-mobile device needs to be repositioned, the area identification information is recognized and the local area map indicated by it is used to reposition the device's current position. The self-mobile device can thus be repositioned without searching for a particular position across the entire working area, and using a local area map improves repositioning efficiency. Moreover, the self-mobile device does not need to move back to the original departure position; it only matches the current environment information of its current position against the template environment information of the local area map, which saves device resources and further improves repositioning efficiency.
In addition, the neural network model is jointly trained by using the classification tag and the region tag of the first obstacle to obtain a region identification model, the region identification model obtained through training can be more accurate, and the accuracy of region identification information identification can be improved.
In addition, when the current environment information is identified, the area identification model can compare the current environment information with the first characteristic information to determine whether the first obstacle exists, and the first obstacle can indicate the area identification information, so that the calculation difficulty of the network model can be reduced, and the calculation resource of the self-mobile equipment can be saved.
Drawings
FIG. 1 is a schematic diagram of a self-mobile device according to one embodiment of the present application;
FIG. 2 is a flow chart of a relocation method for a self-mobile device provided in one embodiment of the present application;
FIG. 3 is a block diagram of a self-mobile device relocation apparatus provided in one embodiment of the present application;
fig. 4 is a block diagram of an electronic device provided in one embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application is made clearly and completely with reference to the accompanying drawings, in which some, but not all, embodiments of the application are shown. The present application will be described in detail hereinafter with reference to the accompanying drawings in conjunction with the embodiments. It should be noted that, where no conflict arises, the embodiments and the features in the embodiments may be combined with each other.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
In this application, unless otherwise indicated, orientation terms such as "upper, lower, top, bottom" are generally used with respect to the orientation shown in the drawings, or with respect to the component itself in the vertical or gravitational direction; likewise, for ease of understanding and description, "inner and outer" are relative to the contour of each component itself. These orientation terms are not intended to limit the present application.
Fig. 1 is a schematic structural diagram of a self-mobile device according to an embodiment of the present application, where the self-mobile device may be a self-mobile device such as a sweeping robot, a washing robot, or the like, and the embodiment does not limit the type of the self-mobile device. As can be seen from fig. 1, the self-mobile device at least includes: a drive assembly 110, a movement assembly 120, a controller 130, and a first sensor 140.
The driving component 110 is connected to the moving component 120, and is used for driving the moving component 120 to operate so as to drive the self-moving device to move.
The driving assembly 110 is connected to the controller 130, and is configured to respond to an instruction issued by the controller 130 to drive the moving assembly 120 to operate.
Alternatively, the driving assembly 110 may be implemented as a dc motor, a servo motor, a stepper motor, etc., and the embodiment is not limited to the implementation of the driving assembly 110.
The first sensor 140 is used to collect current environmental information. Alternatively, the first sensor 140 may be a camera, an infrared sensor, or a lidar sensor, etc. equipped with a color system (Red Green Blue, RGB) detection function, and the type of the first sensor 140 is not limited in this embodiment.
Alternatively, the first sensor 140 may be mounted on the housing of the self-mobile device and used to collect information about the environment in which the device is located. The acquisition range of the first sensor 140 includes, but is not limited to: the region directly in front of, obliquely above, and/or obliquely below the travel direction of the self-mobile device; and/or the region to the left of the travel direction; and/or the region to the right of the travel direction; and/or the region behind the travel direction, etc. This embodiment does not limit the acquisition range of the first sensor 140.
In addition, the number of the first sensors 140 may be one or at least two, and in the case where the number of the first sensors 140 is at least two, the types of the different first sensors 140 are the same or different, and the number and implementation of the first sensors 140 are not limited in this embodiment.
The first sensor 140 is connected to the controller 130 to transmit the collected current environmental information to the controller 130.
The controller 130 is used to relocate the self-mobile device. Alternatively, the controller 130 may be implemented as a single-chip microcomputer, or a processor, and the implementation of the controller 130 is not limited in this embodiment.
In this embodiment, the controller 130 is configured to: responding to an instruction for repositioning the self-mobile device in a working area, and acquiring current environment information acquired by the self-mobile device based on the current position; identifying the current environmental information to obtain region identification information corresponding to the current environmental information; determining a local area map indicated by the area identification information in an area map of the working area; acquiring template environment information corresponding to at least one map position in a local area map; the current environmental information is matched with the template environmental information to determine the location of the self-mobile device in the local area map.
Optionally, the instruction to reposition the self-mobile device within the working area is generated by the self-mobile device based on sensing data from a second sensor. In this case, a second sensor 150 is further disposed on the self-mobile device; the second sensor 150 is connected to the controller 130 and transmits sensing data to it. Accordingly, after receiving the sensing data, the controller 130 determines whether the self-mobile device is hijacked based on the sensing data. If so, repositioning is triggered when the sensing data indicates that the self-mobile device has escaped hijacking, and the repositioning instruction is generated. If not, the step of determining whether the self-mobile device is hijacked is executed again, until the self-mobile device finishes its work and stops.
Hijacking means that abnormal movement occurs to the self-mobile device: movement that the device does not initiate autonomously and that the device itself cannot sense. Therefore, when hijacked, the self-mobile device cannot locate, or cannot accurately locate, its own position. For example: the self-mobile device being lifted, being suspended during movement, or being dragged over a large range are all cases of the self-mobile device being hijacked.
Accordingly, escaping hijacking means that the abnormal movement of the self-mobile device has ended, for example: the self-mobile device being placed back on the ground after being moved, or no longer being dragged after having been dragged, are cases of the self-mobile device escaping hijacking.
Illustratively, the second sensor 150 includes, but is not limited to, a gyroscope, a displacement sensor, or an image collector; this embodiment does not limit the implementation of the second sensor 150. In this case, determining whether the self-mobile device is hijacked based on the sensing data includes: comparing the change in the sensing data with the template change pattern of the hijacked state; if they match, determining that the self-mobile device is hijacked; if not, determining that it is not hijacked. Alternatively, determining whether the self-mobile device is hijacked based on the sensing data includes: determining whether the change value of the sensing data lies within the change range of the hijacked state; if so, determining that the self-mobile device is hijacked; if not, determining that it is not hijacked.
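As a minimal illustration of the change-range check described above: the sketch below assumes accelerometer readings and an invented threshold range; neither the field names nor the numbers come from the patent:

```python
# Illustrative check of whether sensed motion data indicates hijacking.
# The threshold range and use of acceleration are assumptions, not
# values from the patent.

HIJACK_ACCEL_RANGE = (3.0, 50.0)  # m/s^2 change assumed typical of being lifted

def is_hijacked(accel_samples, hijack_range=HIJACK_ACCEL_RANGE):
    """Return True if the change in acceleration over the sample window
    falls inside the change range associated with the hijacked state."""
    change = max(accel_samples) - min(accel_samples)
    low, high = hijack_range
    return low <= change <= high
```

The template-matching variant would instead compare the whole change curve against stored hijacked-state templates rather than a single scalar range.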
Illustratively, the sensed data includes, but is not limited to: contact data, etc., and accordingly, the second sensor 150 includes, but is not limited to, a pressure sensor, a contact sensor, etc., the implementation of the second sensor 150 is not limited in this embodiment. At this time, determining whether the self-mobile device is hijacked based on the sensed data includes: determining whether the sensed data indicates that an object is present in proximity to the self-moving device; if yes, determining that the self-mobile equipment is hijacked; if not, it is determined that the self-mobile device is not hijacked.
Alternatively, the instruction to relocate the self-mobile device within the operating area is sent by a control device communicatively coupled to the self-mobile device. The control device may be a mobile phone, a remote controller, or a wearable device, and the type of the control device is not limited in this embodiment.
Alternatively, the instruction to reposition the self-mobile device within the working area is generated when the self-mobile device receives a trigger operation acting on a repositioning control. In this case, the self-mobile device is further provided with a repositioning control, which may be a physical button or a virtual control displayed on a touch screen; this embodiment does not limit the implementation of the repositioning control.
The above-mentioned manner of obtaining the instruction for relocation is merely illustrative, and in actual implementation, the manner of obtaining the instruction for relocation from the mobile device may be other manners, which are not listed here.
It should be noted that, in actual implementation, the self-mobile device may further include other components, such as: power supply components, shock absorbing components, etc., are not specifically mentioned herein.
In the conventional relocation method, the self-mobile device searches for the original departure position from the hijacked position, usually as follows: the self-mobile device moves randomly in the working area and listens, through an infrared receiving apparatus, for the infrared signal emitted at the original departure position (such as a charging dock). When the infrared signal is received, the device moves toward the signal source until it reaches the original departure position, then reads the map position of the original departure position stored in the area map of the working area to complete repositioning. However, this conventional method cannot reposition the self-mobile device at the moment it escapes hijacking; the original departure position must be searched for across the entire working area, so repositioning efficiency is low. In this embodiment, the device's current position is repositioned by recognizing the area identification information when repositioning is needed and using the local area map indicated by that information. The self-mobile device therefore does not need to search for a particular position across the entire working area, and using a local area map improves repositioning efficiency. Moreover, the self-mobile device does not need to move back to the original departure position; it only matches the current environment information of its current position against the template environment information of the local area map, which saves device resources and further improves repositioning efficiency.
The relocation method of the self-mobile device provided by the application is described in detail below.
The relocation method of the self-mobile device is shown in fig. 2. This embodiment will be described by taking the controller 130 shown in fig. 1 as an example. The method at least comprises the following steps:
in step 201, in response to an instruction for repositioning the self-mobile device in the working area, current environmental information acquired by the self-mobile device based on the current location is acquired.
Instructions to reposition the self-mobile device are generated by the self-mobile device based on the sensed data of the second sensor; or, is sent by a control device communicatively connected to the self-mobile device; or, the method is generated when the mobile device receives the triggering operation acting on the repositioning control, and the method for acquiring the instruction for repositioning the mobile device is not limited in the embodiment.
In one example, the current context information is collected from the context in which the mobile device is currently located. The current environmental information may be image data and/or point cloud data, and the current environmental information may be three-dimensional data or two-dimensional data, which is not limited by the implementation manner of the current environmental information in this embodiment.
Alternatively, the current environment information may be collected after the self-mobile device obtains the instruction for repositioning, with the controller controlling the first sensor to collect it; or the first sensor may collect continuously after being powered on. This embodiment does not limit the collection timing of the current environment information.
In step 202, the current environment information is identified to obtain area identification information corresponding to the current environment information.

The area identification information is used to uniquely indicate a certain local area in the working area.
Optionally, the area identification information is a position coordinate of the first obstacle in the area map. Wherein the first obstacle is used for indicating the area identification information. Specifically, the first obstacle refers to an obstacle capable of indicating the attribute of the local area. Such as: the first obstacle is a dining table, and the attribute of the local area indicated by the dining table is a dining room; and, for example: the first obstacle is a toilet, and the attribute of the local area indicated by the toilet is a toilet; for another example: the first obstacle is a bed, and the local area indicated by the bed is a bedroom, and the implementation manner of the first obstacle is not listed here.
Alternatively, the area identification information is an area identifier of the local area; the area identifier may be an attribute or a label of the local area. For example, where the area identifier is an attribute, the area identification information is "restaurant", "bathroom", "bedroom", or the like. Where the area identifier is a label, labels such as 1, 2, 3 are set in advance for the respective local area maps in the area map.
In this embodiment, the attribute of the local area including a restaurant, a bathroom and a bedroom is taken as an example for explanation, and in actual implementation, the attribute dividing manner of the local area may be other manners, for example: the attributes of the local area are divided into: office areas, tea rest areas, etc., the present embodiment does not limit the attribute division manner of the local area.
The manner of identifying the current environmental information to obtain the region identification information corresponding to the current environmental information includes, but is not limited to, at least one of the following:
in the first recognition mode, the current environment information is input into a pre-trained area identification model to obtain the area identification information. The area identification model is obtained by training a preset neural network model with training data.
Optionally, implementation cases of the training data include, but are not limited to, the following:
in the first case, the training data includes only the sample environment information and the region tag corresponding to the sample environment information.
Accordingly, training the region identification model includes: inputting the sample environment information into a preset first neural network model to obtain a first training result; inputting the first training result and the region label corresponding to the sample environment information into a first loss function to obtain a first loss result; and training the first neural network model based on the first loss result so as to reduce the difference between the first training result and the corresponding region label, until the first neural network model converges, which yields the region identification model.
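As a concrete illustration of this first training case, the sketch below trains a toy region classifier on labeled sample environment vectors until the loss between predictions and region labels shrinks. A single softmax layer stands in for the first neural network model, and all features, labels, and hyperparameters are invented for the example; this is not the patent's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sample environment information": 4-D feature vectors for 3 regions
# (0 = dining room, 1 = bathroom, 2 = bedroom) -- assumed data.
X = np.vstack([rng.normal(loc=c, scale=0.1, size=(20, 4))
               for c in ([1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0])])
y = np.repeat([0, 1, 2], 20)  # region labels for the sample environment info

W = np.zeros((4, 3))  # a single softmax layer stands in for the network

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def loss(W):
    # "first loss function": cross-entropy between prediction and region label
    p = softmax(X @ W)
    return -np.log(p[np.arange(len(y)), y]).mean()

initial_loss = loss(W)
for _ in range(200):  # train until the label/prediction gap shrinks
    p = softmax(X @ W)
    p[np.arange(len(y)), y] -= 1.0   # gradient of cross-entropy wrt logits
    W -= 0.5 * X.T @ p / len(y)      # gradient-descent step

final_loss = loss(W)
pred = softmax(X @ W).argmax(axis=1)  # predicted region per sample
```

On this separable toy data the loss drops well below its initial value and the predicted region labels match the training labels.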
Wherein, in the case that the area tag is a position coordinate tag of the first obstacle, the area identification information is a position coordinate of the first obstacle; in the case where the area tag is an area attribute of the first obstacle, the area identification information is an area attribute of the first obstacle.
Inputting the current environment information into the pre-trained region identification model then yields the area identification information corresponding to the current environment information.
The first neural network model may be a convolutional neural network (Convolutional Neural Network, CNN), a recurrent neural network (Recurrent Neural Network, RNN), or a feedforward neural network (Feedforward Neural Network, FNN); this embodiment does not limit the implementation of the first neural network model.
In the second case, the training data includes not only the sample environment information and the region labels corresponding to it, but also classification labels of the first obstacles corresponding to the sample environment information. The classification labels of the first obstacles are used, in combination with the region labels, to jointly train the neural network model and obtain the region identification model.
The classification labels of the first obstacles are attribute labels of the first obstacles.
Accordingly, training the region identification model includes: inputting the sample environment information into a preset second neural network model to obtain a second training result, where the second training result includes a region prediction result and a classification prediction result; inputting the second training result, the region labels, and the classification labels into a second loss function to obtain a second loss result; and training the second neural network model based on the second loss result so as to reduce the differences between the second training result and the corresponding region and classification labels, until the second neural network model converges, which yields the region identification model.
Wherein the second neural network model includes two network branches, one for computing the region prediction result and the other for computing the classification result.
Because training of the second neural network model finishes only when the outputs of both branches are close to their true values, the trained region identification model is more accurate, which improves the accuracy of recognizing the area identification information.
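A minimal sketch of this joint training, with one shared input feeding two single-layer "branches" (region prediction and first-obstacle classification) whose losses are summed into the second loss result. The data, labels, and layer shapes are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 4))            # toy sample environment information
y_region = (X[:, 0] > 0).astype(int)    # hypothetical region label (2 regions)
y_obstacle = (X[:, 1] > 0).astype(int)  # hypothetical first-obstacle class

W_region = np.zeros((4, 2))   # branch 1: region prediction result
W_class = np.zeros((4, 2))    # branch 2: obstacle classification result

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ce(W, y):
    # cross-entropy of one branch against its labels
    p = softmax(X @ W)
    return -np.log(p[np.arange(len(y)), y]).mean()

def grad_step(W, y, lr=0.5):
    p = softmax(X @ W)
    p[np.arange(len(y)), y] -= 1.0
    return W - lr * X.T @ p / len(y)

# "second loss result": the sum of both branch losses
initial_joint = ce(W_region, y_region) + ce(W_class, y_obstacle)
for _ in range(200):
    # one joint step lowers both terms of the summed loss
    W_region = grad_step(W_region, y_region)
    W_class = grad_step(W_class, y_obstacle)
final_joint = ce(W_region, y_region) + ce(W_class, y_obstacle)
```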
Correspondingly, the current environment information is input into a pre-trained area identification model to obtain area identification information, and the method comprises the following steps: and inputting the current environmental information into the area identification model to obtain a classification result and area identification information of the first obstacle corresponding to the current environmental information.
In a third case, the training data includes not only the sample environment information and the region tag corresponding to the sample environment information, but also first feature information of the first obstacle corresponding to the sample environment information.
Optionally, the first feature information may be profile information of the first obstacle or a feature vector of the first obstacle; this embodiment does not limit the implementation of the first feature information.
Accordingly, training the region identification model includes: inputting the sample environment information and the first feature information into a preset third neural network model to obtain a third training result; inputting the third training result and the region label into a third loss function to obtain a third loss result; and training the third neural network model based on the third loss result so as to reduce the difference between the third training result and the corresponding region label, until the third neural network model converges, which yields the region identification model.
The third neural network model compares the sample environment information with the first feature information to determine whether they match, and thus whether a first obstacle is present in the sample environment information. Because the first obstacle indicates the area identification information, this reduces the computational difficulty of the network model and saves computing resources on the self-mobile device.
Correspondingly, inputting the current environment information into a pre-trained region identification model to obtain region identification information, wherein the method comprises the following steps: and inputting the current environment information and the first characteristic information of the first obstacle into the region identification model to obtain region identification information.
Wherein the first characteristic information of the first obstacle is pre-stored in the self-mobile device.
In the second recognition mode: second feature information of the first obstacle is acquired; the acquired current environment information is matched against the second feature information; and if the current environment information includes information matching the second feature information, the area identification information corresponding to the current environment information is determined to be the area identification information indicated by the first obstacle.
Optionally, the second feature information may be the same as or different from the first feature information. It may be profile information of the first obstacle, size information and/or distance information of the first obstacle, or any other information that describes features of the first obstacle; this embodiment does not limit the second feature information.
For example: the first obstacle is a dining table, and the second feature information is the shape and size of the dining table. When the current environment information includes information matching the shape and size of the dining table, the area identification information corresponding to the current environment information is determined to be the area identification information indicated by the dining table, such as a dining room.
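The second recognition mode can be sketched as plain feature matching, with no neural network involved. The stored "second feature information" here is a hypothetical dining-table footprint; the tolerance and detection format are likewise assumptions.

```python
# Hypothetical stored second feature information for the first obstacle:
# a dining table of an assumed size, indicating the "dining room" area.
stored = {"label": "dining room", "size": (1.6, 0.9)}  # width, depth in metres

def identify_region(detections, feature, tol=0.1):
    """Return the area indicated by the first obstacle if any detection in the
    current environment information matches the stored size within `tol`
    metres per dimension; otherwise return None."""
    for det in detections:
        dw = abs(det["size"][0] - feature["size"][0])
        dh = abs(det["size"][1] - feature["size"][1])
        if dw <= tol and dh <= tol:
            return feature["label"]
    return None

# Toy "current environment information": sizes of detected obstacles.
detections = [{"size": (0.5, 0.5)}, {"size": (1.62, 0.88)}]
region = identify_region(detections, stored)  # matches the table -> dining room
```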
In this embodiment, the local area map may change as the area identification information changes.
In one example, the area identification information is a location coordinate of the first obstacle in the area map. Accordingly, the manner of determining the local area map indicated by the area identification information in the area region of the work area includes, but is not limited to, at least one of:
first kind: in the area map, a local area map of a preset shape and a preset size is determined based on the area identification information.
The preset shape and preset size are pre-stored in the self-mobile device. The preset shape may be circular, rectangular, or irregular; this embodiment does not limit the implementation of the preset shape.
Illustratively, determining a local area map of a preset shape and a preset size based on the area identification information includes: and generating a local area map with a preset shape and a preset size by taking the position coordinates of the first obstacle as the centroid of the local area map.
For example: the first obstacle is a charging dock, and the area identification information is the position coordinate of the charging dock. The preset shape is a circle and the preset size is a radius of 2 meters. The local area map is then the circular region of the area map with a radius of 2 meters centered on the position coordinate of the charging dock.
In other embodiments, the position coordinates of the first obstacle may be located at an edge or other positions of the local area map, and the present embodiment does not limit the manner of determining the local area map with the preset shape and the preset size based on the area identification information.
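The charging-dock example above can be sketched on an occupancy grid: carve out the circular local area map of preset radius around the dock's position coordinate. Grid resolution, map size, and the dock position are assumed values.

```python
import numpy as np

res = 0.05  # metres per grid cell (assumption)
area_map = np.zeros((200, 200), dtype=bool)  # 10 m x 10 m area map
dock = (5.0, 5.0)   # position coordinate of the charging dock (metres)
radius = 2.0        # preset size: circle of radius 2 m

# Distance of each cell centre from the dock; cells within the radius
# form the circular local area map.
ys, xs = np.mgrid[0:area_map.shape[0], 0:area_map.shape[1]]
dist = np.hypot(xs * res - dock[0], ys * res - dock[1])
local_mask = dist <= radius

# Area covered by the local map, in square metres (approx. pi * r^2).
local_area = local_mask.sum() * res * res
```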
Second kind: in an area map that has been divided into regions, the local area map to which the area identification information belongs is determined; here the area map has been divided into a plurality of local area maps in advance.
Ways of dividing the area map into a plurality of local area maps include, but are not limited to: dividing the area map by attribute, or dividing it by a preset division size. Other division methods are possible in actual implementations and are not enumerated here.
For example: the area map is divided into a bedroom area map and a bathroom area map. When the area identification information is the position coordinate of the charging dock and the dock is located in the bedroom area, the local area map to which the area identification information belongs is the bedroom area map.
In another example, a correspondence between the area identification information and the local area map is stored in the self-mobile device. At this time, determining a local area map indicated by the area identification information in the area map of the work area includes: and determining the local area map corresponding to the area identification information based on the corresponding relation between the area identification information and the local area map.
For example: the area identification information is the label of a local area map. In this case, after the self-mobile device obtains the label, it looks up the local area map corresponding to that label in the stored correspondence.
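This label-based lookup reduces to a stored correspondence table; the labels and map names below are placeholders, not values from the patent.

```python
# Hypothetical correspondence between area labels and local area maps,
# pre-stored on the self-mobile device.
correspondence = {1: "bedroom_map", 2: "bathroom_map", 3: "dining_room_map"}

def local_map_for(label):
    # Return the local area map for a label, or None if the label is unknown.
    return correspondence.get(label)
```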
In this embodiment, the local area map includes template environment information corresponding to a plurality of map positions, each piece of template environment information carrying the position coordinate of its map position. The controller obtains the template environment information corresponding to at least one map position.
Wherein the template environment information includes characteristic information of the second obstacle.
The manner in which the current environmental information is matched to the template environmental information includes, but is not limited to, at least one of the following:
first, current environment information and template environment information are input into a pre-trained repositioning neural network to obtain the position of the self-mobile device in a local area map. The repositioning neural network is used for determining whether the current environment information and the template environment information are matched or not, and determining the position corresponding to the matched template environment information as the position of the self-mobile device in the local area map.
Optionally, the template environment information includes characteristic information of the second obstacle. The second obstacle may be any obstacle within the work area, such as: tables, chairs, carpets, walls, etc., the present embodiment does not limit the type of second obstacle.
The feature information of the second obstacle may be a shape, a size, or a feature vector of the second obstacle, and the implementation manner of the feature information of the second obstacle is not limited in this embodiment.
Accordingly, training the repositioning neural network includes: inputting the sample environment information and each piece of template environment information into a preset neural network model to obtain a similarity result; comparing the similarity result with the true similarity; and training the neural network model based on the comparison result to obtain the repositioning neural network.
Matching the current environment information with the template environment information to determine the position of the self-mobile device in the local area map then includes: inputting the current environment information and the template environment information into the pre-trained repositioning neural network to obtain a plurality of similarity results, and determining the map position corresponding to the template environment information whose similarity ranks first as the position of the self-mobile device in the local area map.
Second, the similarity between the current environment information and each piece of template environment information is computed, and the map position corresponding to the template environment information with the greatest similarity is determined as the position of the self-mobile device in the local area map.
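The second matching mode above can be sketched directly: compute a similarity (cosine similarity here, as one plausible choice) between the current environment information and each template, and return the map position with the greatest similarity. The feature vectors and positions are illustrative assumptions.

```python
import numpy as np

# Hypothetical template environment information: map position -> feature vector.
templates = {
    (1.0, 2.0): np.array([0.9, 0.1, 0.0]),
    (3.5, 0.5): np.array([0.1, 0.8, 0.1]),
    (2.0, 4.0): np.array([0.0, 0.2, 0.9]),
}

def relocate(current, templates):
    # Pick the map position whose template is most similar to the current
    # environment information (cosine similarity as the similarity measure).
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(templates, key=lambda pos: cos(current, templates[pos]))

current = np.array([0.05, 0.75, 0.2])  # current environment feature (assumed)
position = relocate(current, templates)
```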
In summary, in the relocation method for a self-mobile device provided by this embodiment, in response to an instruction to relocate the self-mobile device within a working area, the current environment information collected by the device at its current position is acquired; the current environment information is recognized to obtain the corresponding area identification information; the local area map indicated by that area identification information is determined within the area map of the working area; the template environment information corresponding to at least one map position in the local area map is acquired; and the current environment information is matched with the template environment information to determine the position of the self-mobile device in the local area map. This solves the problem of low relocation efficiency caused by cumbersome relocation procedures. When the self-mobile device needs to be relocated, it recognizes the area identification information and relocates using only the local area map that this information indicates. The device therefore does not need to search for its position across the entire working area, and using a local area map improves relocation efficiency. Moreover, the self-mobile device does not need to return to its original departure position; it only matches the current environment information of its current position against the template environment information of the local area map, which saves the device's resources and further improves relocation efficiency.
In addition, the neural network model is jointly trained by using the classification tag and the region tag of the first obstacle to obtain a region identification model, the region identification model obtained through training can be more accurate, and the accuracy of region identification information identification can be improved.
In addition, when recognizing the current environment information, the region identification model can compare the current environment information with the first feature information to determine whether a first obstacle is present; because the first obstacle indicates the area identification information, this reduces the computational difficulty of the network model and saves computing resources on the self-mobile device.
Fig. 3 is a block diagram of a self-mobile device relocation apparatus according to an embodiment of the present application, and this embodiment is described by taking the application of the apparatus to the self-mobile device shown in fig. 1 as an example. The device at least comprises the following modules: a first acquisition module 310, an information identification module 320, a map determination module 330, a second acquisition module 340, and a relocation module 350.
A first obtaining module 310, configured to obtain, in response to an instruction for repositioning the self-mobile device in the working area, current environmental information collected by the self-mobile device based on a current location.
The information identifying module 320 is configured to identify the current environmental information, and obtain the area identification information corresponding to the current environmental information.
The map determining module 330 is configured to determine a local area map indicated by the area identification information in the area map of the working area.
The second obtaining module 340 is configured to obtain template environment information corresponding to at least one map position in the local area map.
A relocation module 350 for matching the current environment information with the template environment information to determine the location of the self-mobile device in the local area map.
For relevant details reference is made to the above embodiments.
It should be noted that the division into the functional modules above is merely illustrative. In practical applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the self-mobile device relocation apparatus may be divided into different functional modules to perform all or part of the functions described above. In addition, the self-mobile device relocation apparatus and the self-mobile device relocation method provided by the above embodiments belong to the same concept; their specific implementation is detailed in the method embodiments and is not repeated here.
The present embodiment provides an electronic device, as shown in fig. 4, which may be the self-mobile device in fig. 1. The electronic device comprises at least a processor 401 and a memory 402.
Processor 401 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 401 may be implemented in at least one of the following hardware forms: DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 401 may also include a main processor and a coprocessor: the main processor, also called the CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 401 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 401 may also include an AI (Artificial Intelligence) processor for handling machine-learning computations.
Memory 402 may include one or more computer-readable storage media, which may be non-transitory. Memory 402 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 402 stores at least one instruction to be executed by processor 401 to implement the relocation method for a self-mobile device provided by the method embodiments herein.
In some embodiments, the electronic device may further optionally include: a peripheral interface and at least one peripheral. The processor 401, memory 402, and peripheral interfaces may be connected by buses or signal lines. The individual peripheral devices may be connected to the peripheral device interface via buses, signal lines or circuit boards. Illustratively, peripheral devices include, but are not limited to: radio frequency circuitry, touch display screens, audio circuitry, and power supplies, among others.
Of course, the electronic device may also include fewer or more components, as the present embodiment is not limited in this regard.
Optionally, the present application further provides a computer readable storage medium, in which a program is stored, the program being loaded and executed by a processor to implement the self-mobile device relocation method of the above-mentioned method embodiment.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations are described; however, as long as a combination of these technical features involves no contradiction, it should be considered to fall within the scope of this description.
The above examples represent only a few embodiments of the present application, and although they are described in some detail, they are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be determined by the appended claims.
Claims (11)
1. A method of relocation from a mobile device, the method comprising:
responding to an instruction for repositioning the self-mobile equipment in a working area, and acquiring current environment information acquired by the self-mobile equipment based on the current position;
identifying the current environment information to obtain region identification information corresponding to the current environment information;
Determining a local area map indicated by the area identification information in an area map of the working area;
acquiring template environment information corresponding to at least one map position in the local area map;
and matching the current environment information with the template environment information to determine the position of the self-mobile device in the local area map.
2. The method of claim 1, wherein the identifying the current environmental information to obtain the area identification information corresponding to the current environmental information includes:
inputting the current environment information into a pre-trained region identification model to obtain the region identification information; the region identification model is obtained by training a preset neural network model by using training data; the training data comprises sample environment information and area labels corresponding to the sample environment information.
3. The method of claim 2, wherein,
the training data further comprises a classification label of a first obstacle corresponding to the sample environment information, wherein the classification label of the first obstacle is used for carrying out combined training on the neural network model by combining the regional label to obtain the regional identification model; correspondingly, the step of inputting the current environment information into a pre-trained area identification model to obtain the area identification information comprises the following steps: inputting the current environment information into the area identification model to obtain a classification result of a first obstacle corresponding to the current environment information and the area identification information;
Or,
the training data further includes first characteristic information of the first obstacle; correspondingly, the step of inputting the current environment information into a pre-trained area identification model to obtain the area identification information comprises the following steps: inputting the current environment information and the first characteristic information into the region identification model to obtain the region identification information;
wherein the first obstacle is used for indicating area identification information.
4. The method of claim 1, wherein the identifying the current environmental information to obtain the area identification information corresponding to the current environmental information further comprises:
acquiring second characteristic information of a first obstacle, wherein the first obstacle is used for indicating area identification information;
matching the current environment information with the second characteristic information;
and determining that the area identification information corresponding to the current environment information is the area identification information indicated by the first obstacle when the current environment information comprises information matched with the second characteristic information.
5. The method according to claim 3 or 4, wherein the second characteristic information comprises profile information of the first obstacle; the second characteristic information further comprises size information and/or distance information of the first obstacle.
6. The method of claim 1, wherein the area identification information is a location coordinate of a first obstacle in the area map; the determining the local area map indicated by the area identification information in the area map of the working area comprises the following steps:
determining a local area map with a preset shape and a preset size based on the area identification information in the area map;
or,
in the area map subjected to the area division, a local area map to which the area identification information belongs is determined, and the area map is divided into a plurality of local area maps in advance.
7. The method of claim 1, wherein determining a local area map indicated by the area identification information in an area map of the work area comprises:
and determining the local area map corresponding to the area identification information based on the corresponding relation between the area identification information and the local area map.
8. The method of claim 1, wherein matching the current environmental information with the template environmental information to determine the location of the self-mobile device comprises:
inputting the current environment information and the template environment information into a pre-trained repositioning neural network to obtain the position; the repositioning neural network is used for determining whether the current environment information and the template environment information are matched or not, and determining the position corresponding to the matched template environment information as the position.
9. The method of claim 8, wherein the template environmental information includes characteristic information of a second obstacle.
10. An electronic device comprising a processor and a memory; stored in the memory is a program that is loaded and executed by the processor to implement the relocation method of a self-mobile device according to any one of claims 1 to 9.
11. A computer readable storage medium, characterized in that the storage medium has stored therein a program which, when executed by a processor, is adapted to carry out a relocation method of a self-mobile device according to any of claims 1 to 9.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111517584.6A CN116263598A (en) | 2021-12-13 | 2021-12-13 | Relocation method and equipment for self-mobile equipment and storage medium |
PCT/CN2022/129455 WO2023109347A1 (en) | 2021-12-13 | 2022-11-03 | Relocalization method for self-moving device, device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111517584.6A CN116263598A (en) | 2021-12-13 | 2021-12-13 | Relocation method and equipment for self-mobile equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116263598A true CN116263598A (en) | 2023-06-16 |
Family
ID=86721797
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111517584.6A Pending CN116263598A (en) | 2021-12-13 | 2021-12-13 | Relocation method and equipment for self-mobile equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN116263598A (en) |
WO (1) | WO2023109347A1 (en) |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9285874B2 (en) * | 2011-02-09 | 2016-03-15 | Apple Inc. | Gaze detection in a 3D mapping environment |
CN107037806B (en) * | 2016-02-04 | 2020-11-27 | 科沃斯机器人股份有限公司 | Self-moving robot repositioning method and self-moving robot adopting same |
CN111158374A (en) * | 2020-01-10 | 2020-05-15 | 惠州拓邦电气技术有限公司 | Repositioning method, repositioning system, mobile robot and storage medium |
CN111539400A (en) * | 2020-07-13 | 2020-08-14 | 追创科技(苏州)有限公司 | Control method and device of self-moving equipment, storage medium and self-moving equipment |
CN113920451A (en) * | 2020-07-13 | 2022-01-11 | 追觅创新科技(苏州)有限公司 | Control method and device of self-moving equipment and storage medium |
- 2021-12-13: CN CN202111517584.6A patent/CN116263598A/en active Pending
- 2022-11-03: WO PCT/CN2022/129455 patent/WO2023109347A1/en unknown
Also Published As
Publication number | Publication date |
---|---|
WO2023109347A1 (en) | 2023-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102235270B1 (en) | Moving Robot and controlling method | |
US11227434B2 (en) | Map constructing apparatus and map constructing method | |
EP3672762B1 (en) | Self-propelled robot path planning method, self-propelled robot and storage medium | |
EP3739417A1 (en) | Navigation method, navigation system, mobile control system, and mobile robot | |
CN111989537A (en) | System and method for detecting human gaze and gestures in an unconstrained environment | |
KR20240063820A (en) | Cleaning robot and Method of performing task thereof | |
US11703334B2 (en) | Mobile robots to generate reference maps for localization | |
US10078333B1 (en) | Efficient mapping of robot environment | |
CN111814752B (en) | Indoor positioning realization method, server, intelligent mobile device and storage medium | |
WO2021143543A1 (en) | Robot and method for controlling same | |
Jebari et al. | Multi-sensor semantic mapping and exploration of indoor environments | |
WO2019001237A1 (en) | Mobile electronic device, and method in mobile electronic device | |
US20210348927A1 (en) | Information processing apparatus, information processing method, and recording medium | |
EP3475872A1 (en) | System for taking inventory and estimating the position of objects | |
WO2018191818A1 (en) | Stand-alone self-driving material-transport vehicle | |
CN115164906B (en) | Positioning method, robot, and computer-readable storage medium | |
WO2022017341A1 (en) | Automatic recharging method and apparatus, storage medium, charging base, and system | |
KR20220120908A (en) | Moving robot and control method of moving robot | |
US20230297120A1 (en) | Method, apparatus, and device for creating map for self-moving device with improved map generation efficiency | |
CN111487980A (en) | Control method of intelligent device, storage medium and electronic device | |
CN116412824A (en) | Relocation method and equipment for self-mobile equipment and storage medium | |
US20230215092A1 (en) | Method and system for providing user interface for map target creation | |
CN116263598A (en) | Relocation method and equipment for self-mobile equipment and storage medium | |
CN112182122A (en) | Method and device for acquiring navigation map of working environment of mobile robot | |
CN112733617B (en) | Target positioning method and system based on multi-mode data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||