CN117311333A - Positioning method and device of mobile equipment, storage medium and electronic device - Google Patents


Info

Publication number
CN117311333A
Authority
CN
China
Prior art keywords
target
matched
mobile device
sensor
target mobile
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210712718.8A
Other languages
Chinese (zh)
Inventor
王永涛
杨盛
曹蒙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dreame Innovation Technology Suzhou Co Ltd
Original Assignee
Dreame Innovation Technology Suzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Dreame Innovation Technology Suzhou Co Ltd filed Critical Dreame Innovation Technology Suzhou Co Ltd
Priority to CN202210712718.8A priority Critical patent/CN117311333A/en
Publication of CN117311333A publication Critical patent/CN117311333A/en
Pending legal-status Critical Current

Abstract

The application provides a positioning method and apparatus for a mobile device, a storage medium, and an electronic apparatus. The method includes: acquiring three-dimensional space data of a space to be measured through a first sensor; performing feature recognition on the three-dimensional space data to obtain object features of an object to be matched; determining the relative position between the object to be matched and a target mobile device when the object features of the object to be matched match the object features of a preset target object; and positioning the target mobile device according to the relative position between the object to be matched and the target mobile device and the preset object position of the target object, where the preset object position is the position of the target object in a target area map. This technical solution solves the problem in the related art that poor stability of sensor data leads to low accuracy in positioning a mobile device.

Description

Positioning method and device of mobile equipment, storage medium and electronic device
[Technical Field]
The present application relates to the field of smart home and, in particular, to a positioning method and apparatus for a mobile device, a storage medium, and an electronic apparatus.
[Background Art]
Currently, a mobile device (e.g., a cleaning robot) may be provided with a perception sensor (e.g., a laser sensor). Information about surrounding obstacles can be perceived from the sensor data detected by this sensor, and the mobile device can be positioned by matching the perceived two-dimensional obstacle information against a preset area map and positioning based on the matching result.
When the mobile device is positioned using the two-dimensional information acquired by the perception sensor, bumping or tilting of the device makes the sensor data unstable; the information about surrounding obstacles can then no longer be perceived accurately, and the mobile device cannot be positioned.
As can be seen, the positioning methods for mobile devices in the related art suffer from low device-positioning accuracy caused by poor stability of sensor data.
[Summary of the Invention]
The present application provides a positioning method and apparatus for a mobile device, a storage medium, and an electronic apparatus, so as to at least solve the problem in the related art that poor stability of sensor data leads to low accuracy in positioning a mobile device.
The object of the application is achieved through the following technical solutions:
according to an aspect of an embodiment of the present application, there is provided a positioning method of a mobile device, including: acquiring three-dimensional space data of a space to be measured through a first sensor; performing feature recognition on the three-dimensional space data to obtain object features of an object to be matched; under the condition that the object characteristics of the object to be matched are matched with the object characteristics of a preset target object, determining the relative position between the object to be matched and the target mobile device according to the two-dimensional space data; and positioning the target mobile device according to the relative position between the object to be matched and the target mobile device and the preset object position of the target object, wherein the preset object position is the position of the target object in a target area map.
In an exemplary embodiment, before the three-dimensional space data of the space to be measured is acquired through the first sensor, the method further includes: controlling the first sensor to switch from an off state to an on state when the target mobile device is determined to be in an abnormal posture state.
In an exemplary embodiment, the method further comprises: detecting, by a pose detection component, a pose reference parameter of the target mobile device, wherein the pose reference parameter comprises at least one of: a position change parameter for representing a change in position of the target mobile device along a target axis, and a tilt angle parameter for representing a tilt angle of the target mobile device; and determining that the target mobile device is in the abnormal posture state under the condition that the target mobile device is determined to be in a jolt state according to the position change parameter or the target mobile device is determined to be in a tilt state according to the tilt angle parameter.
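The abnormal-posture check above can be sketched as follows. The numeric thresholds are illustrative assumptions; the patent does not specify values, only that a bump state (position change along a target axis) or a tilt state (tilt angle) counts as abnormal.

```python
from dataclasses import dataclass

# Hypothetical thresholds; the patent gives no numeric values.
BUMP_Z_THRESHOLD_M = 0.02       # allowed position change along the target (z) axis
TILT_ANGLE_THRESHOLD_DEG = 5.0  # allowed tilt angle

@dataclass
class PoseReference:
    z_change_m: float      # position change parameter along the target axis
    tilt_angle_deg: float  # tilt angle parameter

def is_abnormal_posture(pose: PoseReference) -> bool:
    """True when the device is in a bump state OR a tilt state."""
    bumping = abs(pose.z_change_m) > BUMP_Z_THRESHOLD_M
    tilted = abs(pose.tilt_angle_deg) > TILT_ANGLE_THRESHOLD_DEG
    return bumping or tilted
```

When this check returns True, the device would switch the first sensor from its off state to its on state before acquiring three-dimensional data.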
In an exemplary embodiment, after the feature recognition is performed on the three-dimensional space data to obtain the object feature of the object to be matched, the method further includes: determining a reference object with object characteristics matched with the object characteristics of the object to be matched in a plurality of preset reference objects, wherein the object characteristics of the object to be matched comprise at least one of the following: the object type of the object to be matched and the object size of the object to be matched.
In an exemplary embodiment, after the feature recognition is performed on the three-dimensional space data to obtain the object feature of the object to be matched, the method further includes: in the case that the object to be matched contains a plurality of candidate objects, matching the plurality of candidate objects with the plurality of reference objects according to the object characteristics of each candidate object in the plurality of candidate objects, the position relationship among the plurality of candidate objects, the preset object characteristics of the plurality of reference objects and the position relationship among the plurality of reference objects; in a case where object features of the plurality of candidate objects match object features of a plurality of target objects among the plurality of reference objects and a positional relationship between the plurality of candidate objects matches a positional relationship between the plurality of target objects, the plurality of target objects are determined as matching objects that match the plurality of candidate objects.
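One way to realize the joint matching of object features and positional relationships described above is to compare per-object features together with the pairwise distances between objects, since pairwise distances are invariant under the unknown map-to-device transform. The feature keys and tolerances below are assumptions for illustration, not taken from the patent:

```python
import itertools
import math

def features_match(a: dict, b: dict, size_tol: float = 0.1) -> bool:
    # Assumed feature keys: "type" (e.g. "door") and "size" in metres.
    return a["type"] == b["type"] and abs(a["size"] - b["size"]) <= size_tol

def pairwise_distances(points):
    return [math.dist(p, q) for p, q in itertools.combinations(points, 2)]

def match_candidates(candidates, references, dist_tol: float = 0.2):
    """candidates/references: lists of (features, (x, y)) pairs. Candidate
    positions are in the device frame and reference positions in the map
    frame; only pairwise distances are compared, so no frame alignment is
    needed. Returns the matched reference objects, or None."""
    for combo in itertools.permutations(references, len(candidates)):
        if not all(features_match(c[0], r[0]) for c, r in zip(candidates, combo)):
            continue
        dc = pairwise_distances([c[1] for c in candidates])
        dr = pairwise_distances([r[1] for r in combo])
        if all(abs(a - b) <= dist_tol for a, b in zip(dc, dr)):
            return list(combo)
    return None
```

Both conditions of the embodiment are enforced: each candidate's features must match a target object's features, and the positional relationship among the candidates must match that among the target objects.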
In an exemplary embodiment, the determining the relative position between the object to be matched and the target mobile device includes: and determining the relative position between the object to be matched and the target mobile equipment according to the two-dimensional space data acquired by the second sensor.
In an exemplary embodiment, the acquiring, by the first sensor, three-dimensional space data of the space to be measured includes: acquiring three-dimensional point cloud data of the space to be detected through a flight time sensor, wherein the first sensor is the flight time sensor, and the three-dimensional space data is the three-dimensional point cloud data; the method further comprises the steps of: and acquiring laser point cloud data of the space to be detected through a laser sensor, wherein the second sensor is the laser sensor, and the two-dimensional space data is the laser point cloud data.
In an exemplary embodiment, the method further comprises: positioning the target mobile equipment through the second sensor to obtain a target position of the target mobile equipment in the target area map; acquiring object features of the target object in a region range containing the target position through the first sensor; determining the preset object position according to the target position and the relative position between the target mobile equipment and the target object; and storing the preset object position and the object characteristics of the target object with corresponding relations.
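The map-building embodiment above (locate the device through the second sensor, observe the object through the first sensor, derive the preset object position, and store it with the object features) can be sketched as follows; the table layout and the feature key are illustrative assumptions:

```python
# Assumed in-memory table: feature key -> preset object position in the map.
object_map: dict[str, tuple[float, float]] = {}

def register_object(device_map_pos, rel_device_to_object, feature_key: str):
    """Store a preset object position: the device's map position (from the
    second sensor) plus the device-to-object relative vector (from the first
    sensor's feature observation)."""
    pos = (device_map_pos[0] + rel_device_to_object[0],
           device_map_pos[1] + rel_device_to_object[1])
    object_map[feature_key] = pos
    return pos
```

Later positioning then looks the stored position up by the matched object's features.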
According to another aspect of the embodiments of the present application, there is also provided a positioning apparatus of a mobile device, including: the first acquisition unit is used for acquiring three-dimensional space data of the space to be measured through the first sensor; the identification unit is used for carrying out feature identification on the three-dimensional space data to obtain object features of the object to be matched; the first determining unit is used for determining the relative position between the object to be matched and the target mobile device according to the two-dimensional space data under the condition that the object characteristics of the object to be matched are matched with the object characteristics of the preset target object; the first positioning unit is used for positioning the target mobile device according to the relative position between the object to be matched and the target mobile device and the preset object position of the target object, wherein the preset object position is the position of the target object in the target area map.
In an exemplary embodiment, the apparatus further comprises: the control unit is used for controlling the first sensor to be adjusted from a closed state to an open state under the condition that the target mobile equipment is determined to be in an abnormal posture state before the three-dimensional space data of the space to be detected is acquired through the first sensor.
In an exemplary embodiment, the apparatus further comprises: a detection unit, configured to detect a pose reference parameter of the target mobile device through a pose detection component, where the pose reference parameter includes at least one of the following: a position change parameter representing a change in position of the target mobile device along a target axis, and a tilt angle parameter representing a tilt angle of the target mobile device; and a second determining unit, configured to determine that the target mobile device is in the abnormal posture state when the target mobile device is determined to be in a bump state according to the position change parameter, or to be in a tilt state according to the tilt angle parameter.
In an exemplary embodiment, the apparatus further comprises: the third determining unit is configured to determine, after the feature recognition is performed on the three-dimensional space data to obtain object features of an object to be matched, a reference object with object features matched with the object features of the object to be matched from a plurality of preset reference objects, where the object features of the object to be matched include at least one of the following: the object type of the object to be matched and the object size of the object to be matched.
In an exemplary embodiment, the apparatus further comprises: the matching unit is used for matching the plurality of candidate objects with the plurality of reference objects according to the object characteristics of each candidate object in the plurality of candidate objects, the position relationship among the plurality of candidate objects, the preset object characteristics of the plurality of reference objects and the position relationship among the plurality of reference objects after the three-dimensional space data are subjected to characteristic recognition to obtain the object characteristics of the object to be matched; a fourth determination unit configured to determine the plurality of target objects as matching objects that match the plurality of candidate objects, in a case where object features of the plurality of candidate objects match object features of a plurality of target objects among the plurality of reference objects, and a positional relationship between the plurality of candidate objects matches a positional relationship between the plurality of target objects.
In an exemplary embodiment, the first determining unit includes: and the determining module is used for determining the relative position between the object to be matched and the target mobile equipment according to the two-dimensional space data acquired by the second sensor.
In an exemplary embodiment, the first obtaining unit includes an obtaining module, and the apparatus further includes a second obtaining unit, where the obtaining module is configured to obtain three-dimensional point cloud data of the space to be measured through a time-of-flight sensor, where the first sensor is the time-of-flight sensor, and the three-dimensional space data is the three-dimensional point cloud data; the second acquisition unit is configured to acquire laser point cloud data of the space to be measured through a laser sensor, where the second sensor is the laser sensor, and the two-dimensional space data is the laser point cloud data.
In an exemplary embodiment, the apparatus further comprises: the second positioning unit is used for positioning the target mobile equipment through the second sensor to obtain a target position of the target mobile equipment in the target area map; a third acquisition unit configured to acquire, by the first sensor, an object characteristic of the target object within an area range including the target position; a fifth determining unit, configured to determine the preset object position according to the target position and a relative position between the target mobile device and the target object; and the storage unit is used for storing the preset object position with the corresponding relation and the object characteristics of the target object.
According to yet another aspect of the embodiments of the present application, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to perform the positioning method of the mobile device described above when run.
According to still another aspect of the embodiments of the present application, there is further provided an electronic device including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the positioning method of the mobile device described above through the computer program.
In the embodiments of the present application, loop-closure detection is performed on the device based on three-dimensional space data, and the device is positioned based on the loop-closure result. Three-dimensional space data of the space to be measured is acquired through a first sensor; feature recognition is performed on the three-dimensional space data to obtain object features of an object to be matched; when the object features of the object to be matched match the object features of a preset target object, the relative position between the object to be matched and the target mobile device is determined according to two-dimensional space data; and the target mobile device is positioned according to that relative position and the preset object position of the target object, where the preset object position is the position of the target object in a target area map. In other words, the object features in the space to be measured are identified from three-dimensional space data, the preset object matching the object to be matched is determined from those features, and the mobile device is then positioned from the preset object position of the matched object together with the detected relative position. Compared with positioning based on two-dimensional information alone, this solves the problem in the related art that poor stability of sensor data leads to low accuracy in positioning a mobile device.
[Description of the Drawings]
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; obviously, a person skilled in the art can derive other drawings from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a hardware environment of an alternative mobile device positioning method according to an embodiment of the present application;
FIG. 2 is a flow chart of an alternative method of locating a mobile device according to an embodiment of the present application;
FIG. 3 is a flow chart of another alternative method of locating a mobile device according to an embodiment of the present application;
FIG. 4 is a block diagram of an alternative mobile device positioning apparatus according to an embodiment of the present application;
fig. 5 is a block diagram of an alternative electronic device according to an embodiment of the present application.
[Detailed Description]
The present application will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
According to one aspect of the embodiments of the present application, a positioning method of a mobile device is provided. Alternatively, in the present embodiment, the above-described positioning method of the mobile device may be applied to a hardware environment constituted by the terminal device 102, the mobile device 104, and the server 106 as shown in fig. 1. As shown in fig. 1, the terminal device 102 may be connected to the mobile device 104 and/or the server 106 (e.g., an internet of things platform or cloud server) through a network to control the mobile device 104, e.g., bind with the mobile device 104, configure device functions of the mobile device 104.
The network may include, but is not limited to, at least one of the following: a wired network or a wireless network. The wired network may include, but is not limited to, at least one of: a wide area network, a metropolitan area network, or a local area network; the wireless network may include, but is not limited to, at least one of: WiFi (Wireless Fidelity), Bluetooth, or infrared. The network used by the terminal device 102 to communicate with the mobile device 104 and/or the server 106 may be the same as or different from the network used by the mobile device 104 to communicate with the server 106. The terminal device 102 may be, but is not limited to, a PC, a mobile phone, a tablet computer, or the like, and the mobile device 104 may be a mobile robot, for example, a cleaning device (e.g., a cleaning robot such as a sweeping robot), a dispensing device (e.g., a meal delivery robot), or another type of mobile device.
The positioning method of the mobile device in the embodiment of the present application may be performed by the terminal device 102, the mobile device 104, or the server 106 alone, or may be performed by at least two of the terminal device 102, the mobile device 104, and the server 106 together. The positioning method of the mobile device performed by the terminal device 102 or the mobile device 104 according to the embodiments of the present application may also be performed by a client installed thereon.
Taking the mobile device 104 as an example to perform the positioning method of the mobile device in this embodiment, fig. 2 is a schematic flow chart of an alternative positioning method of the mobile device according to an embodiment of the present application, as shown in fig. 2, the flow of the method may include the following steps:
step S202, three-dimensional space data of a space to be measured are obtained through a first sensor.
The positioning method of the mobile device in this embodiment can be applied to scenarios where the mobile device is positioned while it operates, for example while a cleaning device performs area cleaning or other tasks. The mobile device may be a cleaning device, a dispensing device, or another type of mobile device. When the mobile device has been picked up and positioning has failed for a preset duration, or during normal movement, the mobile device can be positioned through the sensor data of the perception sensors configured on it, so that its current position is determined.
The current location of the mobile device may be its location in a preset area map. A target mobile device may be configured with one or more sensors, for example a first sensor for perceiving three-dimensional space information. The first sensor can detect the space to be measured around the target mobile device, and the target mobile device is positioned based on the detected sensor data, i.e., its position in the target area map is determined.
The first sensor may be any type of sensor for sensing three-dimensional spatial information, for example, a TOF (Time of Flight) sensor, an infrared sensor or other types of sensors, and the sensor type of the first sensor is not limited in this embodiment.
In addition, the mobile device may be further configured with a second sensor for sensing two-dimensional spatial information, that is, the second sensor may be a sensor for sensing two-dimensional spatial information, for example, a laser sensor, which may be an LDS (Laser Distance Sensor, laser ranging sensor) sensor, or may be another sensor, where the sensor type of the second sensor is not limited in this embodiment.
In the related art, the sensor used for positioning a mobile device is usually a two-dimensional perception sensor, for example an LDS sensor. An LDS sensor can only perceive the distance between an obstacle and the sensor, losing three-dimensional information; when the mobile device is positioned, it cannot cope with abnormal states such as bumping or tilting, and the positioning accuracy and stability are therefore reduced.
In this embodiment, in order to at least partially solve the above technical problem, three-dimensional space data (for example, three-dimensional point cloud data) and two-dimensional space data (for example, laser point cloud data) may be combined, and the mobile device is positioned using both the three-dimensional and the two-dimensional information of the space to be measured, so as to improve positioning accuracy. For the target mobile device, the first sensor configured on it may acquire the three-dimensional space data of the space to be measured.
And step S204, performing feature recognition on the three-dimensional space data to obtain object features of the object to be matched.
For the three-dimensional space data, the target mobile device may perform feature recognition to identify objects of preset types in the space to be measured, for example furniture, doors, and the like, so as to obtain the object features of the object to be matched. These features describe the object to be matched, for example its object type, object size, or object color; the object features are not limited in this embodiment.
The object to be matched may comprise one or more objects; all objects may be identified, and the object features of those that meet the requirement (i.e., belong to a predetermined object type) are retained.
In step S206, in the case that the object features of the object to be matched are matched with the object features of the preset target object, the relative position between the object to be matched and the target mobile device is determined according to the two-dimensional space data.
After obtaining the object features of the object to be matched, the target mobile device may match them against the object features of a plurality of preset reference objects. When a reference object whose object features are consistent with those of the object to be matched is found, the feature is considered to have been observed, i.e., the object features of the object to be matched match the object features of a preset target object (the matched object). Here, consistency means that the object features are the same (e.g., the same object type) or that their deviation lies within an allowed deviation range.
If a matched object exists, the relative position between the object to be matched and the target mobile device can be determined according to the two-dimensional space data, wherein the relative position can be the relative position of the object to be matched and the target mobile device in a two-dimensional plane.
Optionally, when the two-dimensional spatial data is acquired by the second sensor, the relative position between the object to be matched and the target mobile device may be determined according to the relative position between the second sensor and the centroid position of the target mobile device. Furthermore, the relative position between the object to be matched and the target mobile device may also be represented by the relative position between the object to be matched and the second sensor.
Of course, the two-dimensional space data may be obtained by any other means, and is not limited to the means obtained by the second sensor. For example, the three-dimensional space information obtained by the first sensor may be further analyzed, and the like, which is not limited herein.
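The centroid correction mentioned above can be sketched as a fixed offset between the second sensor and the device centroid. The offset value and the assumption that the sensor frame is axis-aligned with the device frame are illustrative, not taken from the patent:

```python
# Illustrative mounting offset of the second sensor in the device's
# centroid frame (metres); the real value depends on the hardware.
SENSOR_OFFSET = (0.10, 0.0)

def object_relative_to_centroid(obj_rel_sensor):
    """Shift an object position measured relative to the second sensor so it
    is expressed relative to the device centroid. Assumes the sensor frame is
    axis-aligned with the device frame (rotation handled elsewhere)."""
    return (obj_rel_sensor[0] + SENSOR_OFFSET[0],
            obj_rel_sensor[1] + SENSOR_OFFSET[1])
```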
And step S208, positioning the target mobile equipment according to the relative position between the object to be matched and the target mobile equipment and the preset object position of the target object, wherein the preset object position is the position of the target object in the target area map.
The target object is a preset reference object, which may have a corresponding object position, i.e., a preset object position. This position may be specified in advance by a user of the target mobile device, or determined automatically during ongoing area-map construction or in other scenarios. According to the relative position between the object to be matched and the target mobile device and the preset object position, the target mobile device can be positioned, so that its current position in the target area map is determined.
Optionally, the preset object position may be a position coordinate of a preset object point of the target object in the target area map, and the relative position between the object to be matched and the target mobile device is a relative position between an object point of the same position on the object to be matched and the target mobile device, which may be in a vector form, and the position coordinate of the target mobile device may be determined by superimposing a relative position vector on the position coordinate of the preset object point of the target object.
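The superposition described above amounts to one vector addition in map coordinates; a minimal sketch with illustrative coordinates:

```python
def locate_device(preset_object_pos, rel_object_to_device):
    """Device position in the target area map: the preset object position
    plus the vector from the object point to the device."""
    return (preset_object_pos[0] + rel_object_to_device[0],
            preset_object_pos[1] + rel_object_to_device[1])
```

For example, if a matched door is stored at (3.0, 4.0) in the map and the device is observed 1 m behind it, the device is located at (2.0, 4.0).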
Through steps S202 to S208, three-dimensional space data of the space to be measured is acquired through the first sensor; feature recognition is performed on the three-dimensional space data to obtain object features of the object to be matched; when the object features of the object to be matched match the object features of the preset target object, the relative position between the object to be matched and the target mobile device is determined according to the two-dimensional space data; and the target mobile device is positioned according to the relative position between the object to be matched and the target mobile device and the preset object position of the target object, where the preset object position is the position of the target object in the target area map. This solves the problem in the related art that poor stability of sensor data leads to low accuracy in positioning a mobile device, and improves the accuracy of device positioning.
In one exemplary embodiment, determining the relative position between the object to be matched and the target mobile device includes:
and S11, determining the relative position between the object to be matched and the target mobile equipment according to the two-dimensional space data acquired by the second sensor.
In this embodiment, the target mobile device may be further configured with a second sensor, and the type of the second sensor is similar to that in the foregoing embodiment, which is not described herein. The sensor data acquired by the second sensor is two-dimensional space data. The two-dimensional space data and the three-dimensional space data may be space data of a space to be measured detected at the same time, or space data of a space to be measured detected at two times at a detection interval smaller than a preset time interval.
By combining the three-dimensional space data and the two-dimensional space data, the target mobile device can be jointly positioned. Optionally, the two-dimensional space data may be used to determine the relative position between the object to be matched and the target mobile device, i.e., the relative position may be determined according to the two-dimensional space data acquired by the second sensor. The two-dimensional space data may include distance data between the second sensor and the object to be matched; in this case the second sensor may be a ranging sensor, the two-dimensional space data may be two-dimensional ranging data, and the relative position between the object to be matched and the target mobile device may be determined based on the ranging result.
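When the two-dimensional ranging data reports the matched object as a range and bearing, the relative position vector can be recovered by a polar-to-Cartesian conversion. This measurement model is an assumption for illustration; the patent does not spell out the data format:

```python
import math

def relative_position_from_ranging(distance_m: float, bearing_rad: float):
    """Convert one range/bearing measurement of the matched object into a
    relative position vector in the device's two-dimensional plane."""
    return (distance_m * math.cos(bearing_rad),
            distance_m * math.sin(bearing_rad))
```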
According to this embodiment, determining the relative position between the object and the mobile device from the two-dimensional space data can improve the accuracy and convenience of position detection while remaining compatible with the existing positioning function of the mobile device.
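As an illustrative sketch of the above (all function and parameter names are hypothetical, not from this application), the relative position can be derived from a range-and-bearing measurement of a ranging sensor, and the device position then recovered from the preset object position:

```python
import math

def relative_position(distance_m: float, bearing_rad: float):
    """Convert a 2D ranging measurement (distance and bearing in the
    device frame) into a relative (x, y) offset of the object."""
    return (distance_m * math.cos(bearing_rad),
            distance_m * math.sin(bearing_rad))

def locate_device(object_map_pos, rel):
    """Given the preset object position in the target area map and the
    object's offset relative to the device, recover the device position
    (assuming the device frame is aligned with the map frame)."""
    return (object_map_pos[0] - rel[0], object_map_pos[1] - rel[1])
```

For example, an object measured 2 m directly ahead, whose preset map position is (5, 3), places the device at (3, 3) under these assumptions.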
In one exemplary embodiment, acquiring three-dimensional space data of a space to be measured by a first sensor includes:
S21, acquiring three-dimensional point cloud data of the space to be measured through a time-of-flight sensor, wherein the first sensor is the time-of-flight sensor and the three-dimensional space data is the three-dimensional point cloud data.
For a scenario of joint positioning based on two-dimensional space data and three-dimensional space data, the method further includes:
S22, acquiring laser point cloud data of the space to be measured through a laser sensor, wherein the second sensor is the laser sensor and the two-dimensional space data is the laser point cloud data.
In this embodiment, the first sensor may be a TOF (time-of-flight) sensor and the second sensor may be a laser sensor, for example an LDS sensor. Through the time-of-flight sensor, the target mobile device may acquire three-dimensional point cloud data of the space to be measured (in this case the three-dimensional space data may include the three-dimensional point cloud data); through the laser sensor, it may acquire laser point cloud data of the space to be measured (in this case the two-dimensional space data may include the laser point cloud data).
Here, the TOF sensor can observe point cloud data of the three-dimensional space and can conveniently identify features such as furniture, flowerpots, doors, and windows in a home, making up for the laser sensor, which can only observe two-dimensional information, carries limited information, and cannot distinguish indoor features. Conversely, the TOF sensor has a limited viewing angle and a limited sensing distance, and its positioning easily fails under conditions such as occlusion, while the laser sensor can sense data within 350 degrees and has a larger measuring distance than the TOF sensor, so its positioning result is relatively more stable. Combining the TOF sensor with the laser sensor can therefore improve the success rate and accuracy of loop closure.
According to this embodiment, performing loop detection on the mobile device by combining the TOF sensor with the laser sensor can improve the success rate and accuracy of loop detection.
In an exemplary embodiment, the above method further comprises:
S31, positioning the target mobile device through the second sensor to obtain a target position of the target mobile device in the target area map;
S32, acquiring, through the first sensor, object features of a target object within an area range containing the target position;
S33, determining the preset object position according to the target position and the relative position between the target mobile device and the target object;
S34, storing the preset object position with the corresponding relation and the object characteristics of the target object.
When the target area map is built (for example, when the target mobile device moves in the target area for the first time), the target mobile device can be positioned through the second sensor, object features within a certain range of the current position can be detected through the first sensor, and a feature-based area map can be constructed, thereby obtaining the target area map.
In this embodiment, the target mobile device may position itself through the second sensor, so as to obtain the target position of the target mobile device in the target area map. The positioning may be SLAM (Simultaneous Localization and Mapping) positioning, where SLAM may be understood as follows: the mobile device starts to move from an unknown position in an unknown environment, performs self-positioning according to its position estimate and the map during movement, and simultaneously builds an incremental map on the basis of self-positioning, thereby realizing autonomous positioning and navigation of the mobile device.
Meanwhile, the target mobile device may acquire object features of the target object within the area including the target position, for example, features of furniture, flowerpots, doors and windows in a home, and determine a preset object position according to the target position and a relative position between the target mobile device and the target object, where a manner of determining the preset object position is similar to a manner of determining the device position of the target mobile device described above, and will not be described herein.
After the preset object position is obtained, the target mobile device can store the preset object position and the object features of the target object together with their correspondence. By repeatedly executing this process of determining object features and object positions, the object features and object positions of a plurality of reference objects can be obtained, which facilitates loop detection on the target mobile device based on the object features and object positions of the plurality of reference objects, so as to position the target mobile device.
Note that, storing the preset object position and the object feature of the target object having the correspondence relationship may be storing the preset object position and the object feature of the target object having the correspondence relationship on the target feature map (which may be the same map as the target area map or a different map from the target area map). The matching the object features of the object to be matched with the object features of the preset plurality of reference objects may include: and matching the object characteristics of the object to be matched with the object characteristics in the target characteristic map.
For example, the TOF sensor can sense three-dimensional environment information, identify features such as furniture and doors, and put them in one-to-one correspondence with the SLAM positioning results of the LDS sensor at the same moment, thereby constructing a feature-based map. During travel, the mobile robot (one realization of the target mobile device) searches the feature map for whether a feature has been observed before; if a feature is repeatedly observed, a closed-loop constraint can be established.
In this way, the correspondence between object features and object positions is stored when the area map is constructed, which facilitates loop detection on the mobile device and improves the positioning accuracy of the mobile device.
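A minimal sketch of storing the correspondence between preset object positions and object features while the area map is built might look as follows; all names and the 2D position arithmetic are illustrative assumptions, not from this application:

```python
from dataclasses import dataclass, field

@dataclass
class ObjectFeature:
    obj_type: str   # e.g. "door", "flowerpot" (type labels are assumed)
    size: float     # characteristic object size, e.g. in metres

@dataclass
class FeatureMap:
    """Stores preset object positions together with object features,
    keeping their correspondence while the area map is being built."""
    entries: list = field(default_factory=list)

    def record(self, device_pos, rel_pos, feature):
        # Preset object position = device target position + relative position
        obj_pos = (device_pos[0] + rel_pos[0], device_pos[1] + rel_pos[1])
        self.entries.append((obj_pos, feature))
        return obj_pos
```

Matching an observed object against the map then amounts to comparing its features with the stored entries, as described in the following embodiments.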
In an exemplary embodiment, before the three-dimensional space data of the space to be measured is acquired by the first sensor, the method further includes:
S41, controlling the first sensor to be adjusted from a closed state to an open state when it is determined that the target mobile device is in an abnormal posture state.
The first sensor may instead be on all the time, i.e., the mobile device is positioned throughout its operation in a manner similar to the previous embodiments. However, since joint positioning is then always required, the positioning complexity is high and considerable computing resources are consumed, so the time delay of device positioning is long and its resource consumption is large.
In this embodiment, when the target mobile device is in a normal posture state, for example with small jolts and a small inclination angle, the target mobile device may be positioned only by a two-dimensional sensor (for example, the second sensor), and the first sensor is in the closed state. When the target mobile device is in an abnormal posture state, the first sensor is controlled to switch from the closed state to the open state, and the target mobile device is positioned through the first sensor alone or jointly through the first sensor and the two-dimensional sensor.
According to this embodiment, starting the three-dimensional sensing sensor to position the mobile device only when the mobile device is in an abnormal posture state can shorten the time delay of device positioning and reduce the resource consumption of device positioning.
In an exemplary embodiment, the above method further comprises:
S51, detecting pose reference parameters of the target mobile device through a pose detection component, wherein the pose reference parameters comprise at least one of the following: a position change parameter for indicating a position change of the target mobile device along the target axis, and a tilt angle parameter for indicating a tilt angle of the target mobile device;
and S52, determining that the target mobile device is in an abnormal posture state when the target mobile device is determined to be in a jolt state according to the position change parameter or in a tilt state according to the tilt angle parameter.
In this embodiment, the abnormal posture state may take various forms, which may include but are not limited to one of the following: a bump state and a tilt state. Correspondingly, the manner of determining that the target mobile device is in the abnormal posture state may also vary, and may include but is not limited to one of the following:
mode one: detecting a position change parameter of the target mobile device by a pose detection component, wherein the position change parameter is used for representing the position change of the target mobile device along a target axis; and under the condition that the target mobile device is determined to be in a jolt state according to the position change parameters, determining that the target mobile device is in an abnormal posture state.
Here, the target axis may be a coordinate axis perpendicular to a preset plane, for example the Z axis, and the position change parameter represents the position change of the target mobile device along the target axis, that is, its height change. If the height change is gentle, the target mobile device may be considered to be in a smooth moving state; otherwise, it is determined to be in a bump state. The bump state is a state in which the target mobile device repeatedly switches between rising and falling within a short time (e.g., within a preset period). In this case, the accuracy of the two-dimensional space data detected by the sensing sensor is low and it is difficult to position the target mobile device with the two-dimensional space data alone, so the first sensor can be started to perform joint sensor positioning.
Mode two: detecting an inclination angle parameter of the target mobile device by a pose detection component, wherein the inclination angle parameter is used for representing the inclination angle of the target mobile device; and under the condition that the target mobile device is determined to be in the inclined state according to the inclined angle parameter, determining that the target mobile device is in the abnormal posture state.
Here, the tilt angle may be an angle with respect to a preset plane, such as a pitch angle; the tilt angle parameter represents the tilt angle of the target mobile device, i.e., its pitch angle. If the tilt angle is too large, the information content of the two-dimensional space data detected by the two-dimensional sensing sensor is small and it is difficult to position the target mobile device with the two-dimensional space data, so the first sensor can be started to perform joint sensor positioning.
According to this embodiment, when it is detected that the mobile device bumps or its tilt angle is too large, the three-dimensional sensing sensor is started to perform sensor positioning, which can improve the rationality of sensor control and the accuracy of mobile device positioning.
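The bump/tilt decision described above can be sketched as follows; the thresholds and the flip-counting heuristic are illustrative assumptions, not values from this application:

```python
def is_abnormal_posture(z_history, tilt_deg,
                        z_jitter_thresh=0.02, tilt_thresh=10.0,
                        min_flips=3):
    """Return True if the device should be treated as being in an
    abnormal posture state (bump state or tilt state).

    z_history: recent positions along the target (Z) axis; repeated
    switching between rising and falling indicates a bump state.
    tilt_deg: current tilt angle against the preset plane, in degrees.
    """
    if abs(tilt_deg) > tilt_thresh:
        return True  # tilt state: 2D data carries too little information
    deltas = [b - a for a, b in zip(z_history, z_history[1:])]
    # Count sign changes between sufficiently large height changes (bumps).
    flips = sum(1 for d1, d2 in zip(deltas, deltas[1:])
                if d1 * d2 < 0 and abs(d1) > z_jitter_thresh)
    return flips >= min_flips
```

The controller would open the first sensor whenever this check returns True and fall back to two-dimensional positioning otherwise.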
In an exemplary embodiment, after performing feature recognition on the three-dimensional space data to obtain the object feature of the object to be matched, the method further includes:
S61, determining, among a plurality of preset reference objects, a reference object whose object features match the object features of the object to be matched, wherein the object features of the object to be matched comprise at least one of the following: the object type of the object to be matched and the object size of the object to be matched.
In this embodiment, the object features may include a variety of features, which may include but are not limited to at least one of: object type and object size. Correspondingly, the object features of the object to be matched include at least one of: the object type of the object to be matched and the object size of the object to be matched. The object types here may be a plurality of preset object types, each corresponding to one class of objects, such as, but not limited to, furniture, flowerpots, doors, and windows in a home.
Object features of a plurality of reference objects can be preset on the target area map or the target feature map. After the object features of the object to be matched are obtained, the object features of the object to be matched and the object features of each reference object can be matched to obtain the matching degree of the object to be matched and each reference object, and then the target object is determined based on the matching degree of the object to be matched and each reference object. The target object may be: and in the preset multiple reference objects, the object characteristics are matched with the object characteristics of the object to be matched.
Here, the matching object of the object to be matched may be a reference object whose matching degree is greater than a preset matching-degree threshold, or the reference object with the maximum matching degree, or the matching object may be determined in another manner, which is not limited in this embodiment.
According to this embodiment, performing object feature matching based on the object type and object size can improve the convenience and accuracy of object matching.
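A hypothetical matching-degree rule over object type and object size, with a preset threshold, might be sketched as follows (the scoring formula is an assumption for illustration):

```python
def matching_degree(cand_type, cand_size, ref_type, ref_size):
    """Score in [0, 1]: 0 if the object types differ; otherwise closer
    object sizes score higher."""
    if cand_type != ref_type:
        return 0.0
    return max(0.0, 1.0 - abs(cand_size - ref_size) / max(cand_size, ref_size))

def best_reference(cand, references, threshold=0.8):
    """Pick the reference object with the highest matching degree,
    provided it exceeds the preset matching-degree threshold;
    otherwise report no match (None)."""
    best = max(references,
               key=lambda r: matching_degree(cand[0], cand[1], r[0], r[1]))
    score = matching_degree(cand[0], cand[1], best[0], best[1])
    return best if score >= threshold else None
```

Either the threshold rule or the maximum-degree rule mentioned above can be expressed by adjusting `threshold`.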
In an exemplary embodiment, after performing feature recognition on the three-dimensional space data to obtain the object feature of the object to be matched, the method further includes:
S71, in the case that the object to be matched comprises a plurality of candidate objects, matching the plurality of candidate objects with the plurality of reference objects according to the object characteristics of each candidate object in the plurality of candidate objects, the position relations among the plurality of candidate objects, the preset object characteristics of the plurality of reference objects and the position relations among the plurality of reference objects;
S72, in a case where the object features of the plurality of candidate objects match the object features of the plurality of target objects among the plurality of reference objects, and the positional relationship between the plurality of candidate objects matches the positional relationship between the plurality of target objects, the plurality of target objects are determined as matching objects that match the plurality of candidate objects.
If there is only one object to be matched, object feature matching may be performed in a manner similar to the foregoing embodiment to determine the reference object corresponding to the object to be matched; this has already been described and is not repeated here.
If the object to be matched contains a plurality of candidate objects, one candidate object can be selected as the object to be matched, object feature matching can be performed in a manner similar to the previous embodiment, and the reference object corresponding to the object to be matched can be determined. The selection may be to choose a candidate object of a target type as the object to be matched, or random selection, or another manner.
In this embodiment, in order to improve the accuracy of object feature matching, the plurality of candidate objects and the plurality of reference objects may be matched according to the object features of each candidate object, the positional relationships among the candidate objects, the preset object features of the reference objects, and the positional relationships among the reference objects. That is, in addition to determining whether the object features match, it is determined whether the positional relationships between adjacent features are consistent, for example, whether a plurality of doors is repeatedly observed at the current position.
If the object characteristics of the plurality of candidate objects match the object characteristics of the plurality of target objects among the plurality of reference objects and the positional relationship between the plurality of candidate objects matches the positional relationship between the plurality of target objects, the plurality of target objects may be determined as matching objects that match the plurality of candidate objects.
When positioning the target mobile device, one target object can be selected from the plurality of target objects, and the target mobile device can be positioned according to the object position of the selected target object and the relative position between the target mobile device and the selected target object, improving the convenience of positioning the mobile device.
According to this embodiment, positioning the mobile device based on both object features and the positional relationships of adjacent features can improve the positioning accuracy of the mobile device.
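One way to formalise the joint feature-and-positional-relationship check is to compare pairwise distances between objects; this concrete formalisation is an assumption for illustration, not the application's stated method:

```python
import math

def pairwise_dists(positions):
    """Distances between every pair of object positions, in a fixed order."""
    return [math.dist(a, b)
            for i, a in enumerate(positions)
            for b in positions[i + 1:]]

def candidates_match_targets(cand_feats, cand_pos, tgt_feats, tgt_pos,
                             dist_tol=0.2):
    """Candidates match the target objects only if both the object
    features and the positional relationships (here: pairwise
    distances, within a tolerance) agree."""
    if cand_feats != tgt_feats:
        return False
    return all(abs(dc - dt) <= dist_tol
               for dc, dt in zip(pairwise_dists(cand_pos),
                                 pairwise_dists(tgt_pos)))
```

This rejects, for example, two observed doors whose spacing differs markedly from the spacing of the stored doors, even though the features themselves match.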
The positioning method of the mobile device in this embodiment is explained below in conjunction with an optional example. In this optional example, the target mobile device is a mobile robot, the first sensor is a TOF sensor, and the second sensor is an LDS sensor.
This optional example provides a SLAM positioning scheme in which loop closure through the ToF sensor assists LDS point cloud positioning. As shown in fig. 3, the flow of the positioning method of the mobile device in this optional example may include the following steps:
Step S302, constructing a regional characteristic map through a TOF sensor and an LDS sensor.
Three-dimensional environment information is sensed by the TOF sensor, features such as furniture and doors are identified, and these features are put in one-to-one correspondence with the LDS SLAM positioning results at the same moment, thereby constructing a feature-based map, i.e., the regional feature map.
And step S304, positioning the mobile robot by using the constructed regional characteristic map.
A kdtree (k-dimensional tree) is constructed from the regional feature map; based on the LDS SLAM positioning result, object features within a certain range of the current position are searched on the kdtree to judge whether an object feature is repeatedly observed. If the requirements are met, proving that the robot has passed this position, a closed-loop constraint can be established and the mobile robot can be positioned.
Here, the manner of determining whether the object feature is repeatedly observed may include:
1) Whether they are the same type of feature, for example, whether both are doors or both are sofas;
2) Whether the feature sizes are consistent; for example, door openings in a home vary in width, so it is determined whether the door sizes are consistent.
In this way, the stability of LDS sensor positioning when the robot bumps or tilts can be improved, and robustness and accuracy when the robot passes over a slope can be improved.
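The loop-detection lookup of steps S302 to S304 can be sketched as follows; a linear scan stands in for the k-d tree for brevity, and the radius and tolerance values are assumptions:

```python
import math

def loop_closure_candidates(feature_map, current_pos, radius,
                            observed_type, observed_size, size_tol=0.1):
    """Search the regional feature map for previously stored features
    that lie within `radius` of the current LDS SLAM position, have the
    same type, and have a consistent size. A non-empty result supports
    establishing a closed-loop constraint."""
    hits = []
    for pos, obj_type, size in feature_map:
        if (math.dist(pos, current_pos) <= radius
                and obj_type == observed_type
                and abs(size - observed_size) <= size_tol):
            hits.append(pos)
    return hits
```

In practice the linear scan would be replaced by a k-d tree range query so the lookup stays fast as the feature map grows.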
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, or of course by means of hardware, but in many cases the former is the preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially, or in the part contributing to the prior art, in the form of a software product stored in a storage medium (such as ROM (Read-Only Memory)/RAM (Random Access Memory), magnetic disk, or optical disc), including instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method described in the embodiments of the present application.
According to still another aspect of the embodiments of the present application, there is also provided a positioning apparatus for a mobile device, where the positioning method for a mobile device is implemented. Fig. 4 is a block diagram of an alternative positioning apparatus for a mobile device according to an embodiment of the present application, as shown in fig. 4, the apparatus may include:
a first acquiring unit 402, configured to acquire three-dimensional space data of a space to be measured by using a first sensor;
the identifying unit 404 is connected to the first obtaining unit 402, and is configured to perform feature identification on the three-dimensional space data to obtain an object feature of the object to be matched;
a first determining unit 406, connected to the identifying unit 404, configured to determine, according to the two-dimensional spatial data, a relative position between the object to be matched and the target mobile device when the object feature of the object to be matched matches the object feature of the preset target object;
the first positioning unit 408 is connected to the first determining unit 406, and is configured to position the target mobile device according to a relative position between the object to be matched and the target mobile device and a preset object position of the target object, where the preset object position is a position of the target object in the target area map.
It should be noted that, the first acquiring unit 402 in this embodiment may be used to perform the above-mentioned step S202, the identifying unit 404 in this embodiment may be used to perform the above-mentioned step S204, the first determining unit 406 in this embodiment may be used to perform the above-mentioned step S206, and the first positioning unit 408 in this embodiment may be used to perform the above-mentioned step S208.
Through the above modules, three-dimensional space data of the space to be measured is acquired through the first sensor; feature recognition is performed on the three-dimensional space data to obtain object features of an object to be matched; in the case that the object features of the object to be matched match the object features of a preset target object, the relative position between the object to be matched and the target mobile device is determined; and the target mobile device is positioned according to the relative position between the object to be matched and the target mobile device and the preset object position of the target object, where the preset object position is the position of the target object in the target area map. This solves the problem in the related art that the accuracy of device positioning is low due to poor stability of sensor data, and improves the accuracy of device positioning.
In an exemplary embodiment, the above apparatus further includes:
and the control unit is used for controlling the first sensor to be adjusted from the closed state to the open state under the condition that the target mobile equipment is determined to be in the abnormal posture state before the three-dimensional space data of the space to be detected is acquired through the first sensor.
In an exemplary embodiment, the above apparatus further includes:
a detection unit for detecting pose reference parameters of the target mobile device by the pose detection component, wherein the pose reference parameters comprise at least one of the following: a position change parameter for indicating a position change of the target mobile device along the target axis, and a tilt angle parameter for indicating a tilt angle of the target mobile device;
and the second determining unit is used for determining that the target mobile device is in an abnormal posture state under the condition that the target mobile device is determined to be in a bumpy state according to the position change parameters or is determined to be in a tilting state according to the tilting angle parameters.
In an exemplary embodiment, the above apparatus further includes:
the third determining unit is configured to determine, after performing feature recognition on the three-dimensional space data to obtain object features of an object to be matched, a reference object with object features matched with the object features of the object to be matched from a plurality of preset reference objects, where the object features of the object to be matched include at least one of: object type of object to be matched, object size of object to be matched.
In an exemplary embodiment, the above apparatus further includes:
a matching unit, configured to, after feature recognition is performed on the three-dimensional space data to obtain the object features of the object to be matched and in the case that the object to be matched includes a plurality of candidate objects, match the plurality of candidate objects with the plurality of reference objects according to the object features of each candidate object, the positional relationships among the candidate objects, the preset object features of the reference objects, and the positional relationships among the reference objects;
and a fourth determination unit configured to determine the plurality of target objects as matching objects that match the plurality of candidate objects, in a case where object features of the plurality of candidate objects match object features of the plurality of target objects among the plurality of reference objects, and a positional relationship between the plurality of candidate objects matches a positional relationship between the plurality of target objects.
In one exemplary embodiment, the first determining unit includes:
and the determining module is used for determining the relative position between the object to be matched and the target mobile device according to the two-dimensional space data acquired by the second sensor.
In an exemplary embodiment, the first acquisition unit comprises an acquisition module, the apparatus further comprises a second acquisition unit, wherein,
The acquisition module is used for acquiring three-dimensional point cloud data of a space to be detected through the flight time sensor, wherein the first sensor is the flight time sensor, and the three-dimensional space data is the three-dimensional point cloud data;
the second acquisition unit is used for acquiring laser point cloud data of the space to be detected through the laser sensor, wherein the second sensor is the laser sensor, and the two-dimensional space data is the laser point cloud data.
In an exemplary embodiment, the above apparatus further includes:
the second positioning unit is used for positioning the target mobile equipment through the second sensor to obtain the target position of the target mobile equipment in the target area map;
a third acquisition unit configured to acquire, by the first sensor, an object feature of a target object within a range of a region including the target position;
a fifth determining unit, configured to determine a preset object position according to the target position and a relative position between the target mobile device and the target object;
and a storage unit, configured to store the preset object position and the object features of the target object with their correspondence.

It should be noted that the above modules correspond to the same examples and application scenarios as the corresponding steps, but are not limited to what is disclosed in the above embodiments. It should also be noted that the above modules may be implemented in software or in hardware as part of the apparatus shown in fig. 1, where the hardware environment includes a network environment.
According to yet another aspect of embodiments of the present application, there is also provided a storage medium. Alternatively, in this embodiment, the storage medium may be used to execute the program code of the positioning method of any one of the mobile devices described in the embodiments of the present application.
Alternatively, in this embodiment, the storage medium may be located on at least one network device of the plurality of network devices in the network shown in the above embodiment.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of:
S1, acquiring three-dimensional space data of a space to be measured through a first sensor;
S2, performing feature recognition on the three-dimensional space data to obtain object features of an object to be matched;
S3, determining the relative position between the object to be matched and the target mobile device according to the two-dimensional space data in the case that the object features of the object to be matched match the object features of the preset target object;
and S4, positioning the target mobile device according to the relative position between the object to be matched and the target mobile device and the preset object position of the target object, wherein the preset object position is the position of the target object in the target area map.
Alternatively, specific examples in the present embodiment may refer to examples described in the above embodiments, which are not described in detail in the present embodiment.
Alternatively, in the present embodiment, the storage medium may include, but is not limited to: various media capable of storing program codes, such as a U disk, ROM, RAM, a mobile hard disk, a magnetic disk or an optical disk.
According to still another aspect of the embodiments of the present application, there is further provided an electronic device for implementing the positioning method of the mobile device, where the electronic device may be a server, a terminal, or a combination thereof.
Fig. 5 is a block diagram of an optional electronic device according to an embodiment of the present application. As shown in Fig. 5, the electronic device includes a processor 502, a communication interface 504, a memory 506, and a communication bus 508, where the processor 502, the communication interface 504, and the memory 506 communicate with each other via the communication bus 508, and where:
the memory 506 is configured to store a computer program;
the processor 502 is configured to execute the computer program stored in the memory 506 to implement the following steps:
S1, acquiring three-dimensional space data of a space to be measured through a first sensor;
S2, performing feature recognition on the three-dimensional space data to obtain object features of an object to be matched;
S3, in the case where the object features of the object to be matched match the object features of a preset target object, determining the relative position between the object to be matched and the target mobile device according to two-dimensional space data;
and S4, positioning the target mobile device according to the relative position between the object to be matched and the target mobile device and a preset object position of the target object, where the preset object position is the position of the target object in a target area map.
Optionally, in this embodiment, the communication bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in Fig. 5, but this does not mean that there is only one bus or one type of bus. The communication interface is used for communication between the electronic device and other equipment.
The memory may include a RAM, or may include non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
As an example, the memory 506 may include, but is not limited to, the first acquisition unit 402, the identification unit 404, the first determination unit 406, and the first positioning unit 408 in the positioning apparatus of the mobile device described above. In addition, the memory may include, but is not limited to, other module units in the positioning apparatus, which are not described in detail in this example.
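As an illustration only (the numbering follows the reference numerals 402 to 408 above; every implementation detail is a hypothetical stand-in, since the application defines the units purely by function), the four units could be chained for one positioning cycle as follows:

```python
# Hypothetical chaining of the four units named above (402-408); the data
# shapes and the matching rule are illustrative stand-ins.

def first_acquisition_unit(first_sensor):            # 402: read 3-D space data
    return first_sensor()

def identification_unit(space_data):                 # 404: extract object features
    return space_data["features"]

def first_determination_unit(features, preset_features, two_d_data):  # 406
    # The relative position is only computed when the recognized features
    # match the preset target object's features.
    return two_d_data["offset"] if features == preset_features else None

def first_positioning_unit(relative, object_map_pos):  # 408: device position in map
    return (object_map_pos[0] - relative[0], object_map_pos[1] - relative[1])

# Wiring the units together for one positioning cycle:
data = first_acquisition_unit(lambda: {"features": "cabinet"})
rel = first_determination_unit(identification_unit(data), "cabinet", {"offset": (2.0, 1.0)})
if rel is not None:
    print(first_positioning_unit(rel, (5.0, 3.0)))  # (3.0, 2.0)
```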
The processor may be a general-purpose processor, which may include, but is not limited to, a CPU (Central Processing Unit), an NP (Network Processor), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments, which are not repeated here.
Those skilled in the art will understand that the structure shown in Fig. 5 is only illustrative, and the device implementing the positioning method of the mobile device may be a terminal device, which may be a smart phone (such as an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile internet device (Mobile Internet Devices, MID), a PAD, or the like. Fig. 5 does not limit the structure of the electronic device. For example, the electronic device may include more or fewer components (such as a network interface or a display device) than shown in Fig. 5, or have a configuration different from that shown in Fig. 5.
Those of ordinary skill in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with a terminal device; the program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a ROM, a RAM, a magnetic disk, an optical disk, or the like.
The above embodiment numbers of the present application are for description only, and do not represent the advantages or disadvantages of the embodiments.
The integrated units in the above embodiments, if implemented in the form of software functional units and sold or used as independent products, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
In the foregoing embodiments of the present application, the description of each embodiment has its own emphasis; for a part not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be another division manner in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units, or modules, and may be in electrical or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated units may be implemented in the form of hardware or in the form of software functional units.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those of ordinary skill in the art may make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also be regarded as falling within the protection scope of the present application.

Claims (11)

1. A method for locating a mobile device, comprising:
acquiring three-dimensional space data of a space to be measured through a first sensor;
performing feature recognition on the three-dimensional space data to obtain object features of an object to be matched;
in the case where the object features of the object to be matched match the object features of a preset target object, determining the relative position between the object to be matched and the target mobile device according to two-dimensional space data;
and positioning the target mobile device according to the relative position between the object to be matched and the target mobile device and the preset object position of the target object, wherein the preset object position is the position of the target object in a target area map.
2. The method of claim 1, wherein prior to the acquiring three-dimensional space data of the space under test by the first sensor, the method further comprises:
controlling the first sensor to be adjusted from a closed state to an open state in the case where it is determined that the target mobile device is in an abnormal posture state.
3. The method of claim 2, wherein the method further comprises:
detecting, by a pose detection component, a pose reference parameter of the target mobile device, wherein the pose reference parameter comprises at least one of: a position change parameter for representing a change in position of the target mobile device along a target axis, and a tilt angle parameter for representing a tilt angle of the target mobile device;
and determining that the target mobile device is in the abnormal posture state in the case where the target mobile device is determined to be in a jolting state according to the position change parameter, or the target mobile device is determined to be in a tilted state according to the tilt angle parameter.
4. The method of claim 1, wherein after said feature recognition of the three-dimensional spatial data to obtain object features of the object to be matched, the method further comprises:
determining, among a plurality of preset reference objects, a reference object whose object features match the object features of the object to be matched, wherein the object features of the object to be matched comprise at least one of the following: an object type of the object to be matched and an object size of the object to be matched.
5. The method of claim 1, wherein after said feature recognition of the three-dimensional spatial data to obtain object features of the object to be matched, the method further comprises:
in the case where the object to be matched comprises a plurality of candidate objects, matching the plurality of candidate objects with a plurality of reference objects according to the object features of each of the plurality of candidate objects, the positional relationship among the plurality of candidate objects, preset object features of the plurality of reference objects, and the positional relationship among the plurality of reference objects;
and in the case where the object features of the plurality of candidate objects match the object features of a plurality of target objects among the plurality of reference objects, and the positional relationship between the plurality of candidate objects matches the positional relationship between the plurality of target objects, determining the plurality of target objects as matching objects that match the plurality of candidate objects.
6. The method according to any one of claims 1 to 5, wherein said determining the relative position between the object to be matched and the target mobile device from two-dimensional spatial data comprises:
determining the relative position between the object to be matched and the target mobile device according to the two-dimensional space data acquired by a second sensor.
7. The method of claim 6, wherein,
the acquiring, by the first sensor, three-dimensional space data of a space to be measured comprises: acquiring three-dimensional point cloud data of the space to be measured through a time-of-flight sensor, wherein the first sensor is the time-of-flight sensor, and the three-dimensional space data is the three-dimensional point cloud data;
and the method further comprises: acquiring laser point cloud data of the space to be measured through a laser sensor, wherein the second sensor is the laser sensor, and the two-dimensional space data is the laser point cloud data.
8. The method of claim 6, wherein the method further comprises:
positioning the target mobile device through the second sensor to obtain a target position of the target mobile device in the target area map;
acquiring the object features of the target object within a region range containing the target position through the first sensor;
determining the preset object position according to the target position and the relative position between the target mobile device and the target object;
and storing the preset object position and the object features of the target object in a corresponding relationship.
9. A positioning apparatus for a mobile device, comprising:
a first acquisition unit, configured to acquire three-dimensional space data of a space to be measured through a first sensor;
an identification unit, configured to perform feature recognition on the three-dimensional space data to obtain object features of an object to be matched;
a first determination unit, configured to determine, in the case where the object features of the object to be matched match the object features of a preset target object, the relative position between the object to be matched and the target mobile device according to two-dimensional space data;
and a first positioning unit, configured to locate the target mobile device according to the relative position between the object to be matched and the target mobile device and a preset object position of the target object, wherein the preset object position is the position of the target object in a target area map.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program when run performs the method of any one of claims 1 to 7.
11. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, the processor being arranged to perform the method of any of claims 1 to 8 by means of the computer program.
CN202210712718.8A 2022-06-22 2022-06-22 Positioning method and device of mobile equipment, storage medium and electronic device Pending CN117311333A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210712718.8A CN117311333A (en) 2022-06-22 2022-06-22 Positioning method and device of mobile equipment, storage medium and electronic device


Publications (1)

Publication Number Publication Date
CN117311333A true CN117311333A (en) 2023-12-29

Family

ID=89283606




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination