CN117412238A - Equipment positioning method and movable electronic equipment - Google Patents


Info

Publication number
CN117412238A
Authority
CN
China
Prior art keywords
intelligent, robot, information, equipment, electronic device
Legal status
Pending (the legal status is an assumption and is not a legal conclusion)
Application number
CN202210794647.0A
Other languages
Chinese (zh)
Inventor
杨栋梁
查永东
曾俊飞
朱维峰
胡海洋
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Priority to CN202210794647.0A
Publication of CN117412238A


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 Services making use of location information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B 17/00 Monitoring; Testing
    • H04B 17/30 Monitoring; Testing of propagation channels
    • H04B 17/309 Measuring or estimating channel quality parameters
    • H04B 17/318 Received signal strength


Abstract

The application discloses a device positioning method and a movable electronic device, relates to the technical field of artificial intelligence, and can automatically detect the position information of intelligent devices in a scene. The electronic device may move through the scene to perform environment detection. While detecting the environment, the electronic device can also search for wireless signals of surrounding intelligent devices and determine the signal strength of each searched wireless signal. Further, the electronic device can confirm the information of the intelligent devices in the scene according to the searched wireless signals, and can position the intelligent devices in the scene according to the pose information obtained during environment detection and the signal strength of the searched wireless signals.

Description

Equipment positioning method and movable electronic equipment
Technical Field
The embodiment of the application relates to the technical field of artificial intelligence, in particular to a device positioning method and movable electronic equipment.
Background
With the development of artificial intelligence technology, more and more mobile intelligent products are emerging and being applied in people's daily lives. A mobile intelligent product is an intelligent product with an artificial intelligence system and the ability to move, and can take any form, such as a mobile robot, an unmanned aerial vehicle, or an intelligent vehicle.
Generally, a mobile intelligent product needs to detect the scene in which it is located, so that it can complete the corresponding work while moving through the scene. Taking a sweeping robot applied in a smart home scene as an example, the robot generally needs to detect the home environment so that, according to the detection results, it can automatically clean the home along a reasonable route.
Moreover, as the concept of the interconnection of everything spreads, users increasingly expect their intelligent devices to be interconnected. However, current mobile intelligent products have limited capability and cannot accurately detect the intelligent devices in the environment, so the intelligent devices in a scene often need to be associated manually. The extra operations this requires detract from the intelligent experience users have of intelligent products.
Disclosure of Invention
The application provides a device positioning method and movable electronic equipment, which can automatically realize accurate detection of the position of intelligent equipment in a scene.
In order to achieve the above purpose, the present application adopts the following technical scheme:
in a first aspect, the present application provides a device positioning method applied to a mobile electronic device. The equipment positioning method comprises the following steps: acquiring record information of a plurality of moments in the moving process of the electronic equipment; wherein the recording information for each of the plurality of time instants includes: pose information of the electronic equipment at the moment and signal strength of wireless signals of the intelligent equipment searched by the electronic equipment at the moment; determining target pose information of the electronic equipment corresponding to the target moment of which the signal strength meets the first preset condition according to the recorded information of the plurality of moments; and determining the position information of the intelligent equipment according to the target pose information.
According to the scheme provided by the first aspect, the electronic device can acquire its pose information recorded at a plurality of moments during movement, together with the signal strength of the wireless signal of the intelligent device searched at each of those moments. If the signal strength recorded at one of these moments meets the first preset condition, the electronic device can treat that signal as very strong, infer that it was very close to the intelligent device at that moment, and therefore locate the intelligent device according to its own pose information at that moment. In this way, the electronic device automatically completes the positioning of the intelligent device in the environment based on its own pose at the moment the searched signal strength meets the first preset condition, achieving accurate detection of the intelligent device's position information without requiring the user to add it manually, which improves the user's intelligent experience of intelligent products.
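As a concrete illustration (not part of the claims), the core of the first aspect can be sketched in Python. The `Record` structure and its field names are hypothetical, and the first preset condition is taken here to be "strongest recorded signal strength":

```python
from dataclasses import dataclass

@dataclass
class Record:
    x: float        # pose: x position of the electronic device (m) -- hypothetical field
    y: float        # pose: y position of the electronic device (m)
    heading: float  # pose: orientation (radians)
    rssi: float     # signal strength of the smart device's wireless signal (dBm)

def locate_smart_device(records: list[Record]) -> tuple[float, float]:
    """Pick the record whose signal strength meets the first preset
    condition (here: the strongest RSSI among all recorded moments)
    and use its pose as the estimated smart-device position."""
    target = max(records, key=lambda r: r.rssi)
    return (target.x, target.y)
```

Because a strong received signal generally implies a short distance to the transmitter, the pose at the strongest-signal moment serves as the position estimate without any explicit range model.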
In one possible implementation manner, determining the location information of the intelligent device according to the target pose information may include: acquiring region information in the moving process of the electronic equipment; and determining the area where the intelligent equipment is located according to the target pose information and the area information. Therefore, the electronic equipment can correspond to the region information in the moving process of the electronic equipment when the signal intensity of the searched intelligent equipment meets the first preset condition, so that the region of the intelligent equipment in the environment can be automatically positioned, the electronic equipment can automatically accurately detect the region information of the intelligent equipment in the environment, the specific position of the intelligent equipment is not required to be accurately positioned, and the user does not need to manually divide the region of the intelligent equipment.
In one possible implementation manner, the acquiring the area information during the moving process of the electronic device may include: an environment map including area information is acquired. Based on this, determining the area where the intelligent device is located according to the target pose information and the area information may include: and acquiring a target area corresponding to the target pose information in the environment map as an area where the intelligent equipment is located. Therefore, the electronic equipment can correspond the pose information of the electronic equipment when the signal intensity of the searched intelligent equipment meets the first preset condition to the region information in the environment map, so that the region of the intelligent equipment in the environment map can be automatically positioned, and the accurate detection of the region of the intelligent equipment on the environment map by the electronic equipment is realized.
In a possible implementation manner, the area information may include room area information, and the acquiring, as an area where the intelligent device is located, a target area corresponding to the target pose information in the environment map may include: and acquiring a target room corresponding to the target pose information in the environment map as a room in which the intelligent equipment is located. Therefore, the electronic equipment can correspond the pose information of the electronic equipment when the signal intensity of the searched intelligent equipment meets the first preset condition to the room area in the environment map, so that the room of the intelligent equipment can be automatically positioned, and the accurate detection of the room of the intelligent equipment by the electronic equipment is realized.
In one possible implementation, the environment map may be generated according to environment information detected by the electronic device during the movement process. Therefore, the electronic equipment can search wireless signals of surrounding intelligent equipment while performing environment detection to construct and generate an environment map, and determine the signal strength of the searched wireless signals, so that the positioning of the intelligent equipment in the environment can be automatically completed according to the generated environment map and the pose information of the electronic equipment when the signal strength of the searched intelligent equipment meets the first preset condition.
In one possible implementation, the environment map may be preconfigured in the electronic device. Therefore, when the environment of the electronic equipment is detected, only the wireless signal of the intelligent equipment can be searched, the workload of the electronic equipment is reduced, and the detection speed of the position information of the intelligent equipment in the environment is accelerated.
In one possible implementation manner, the device positioning method provided by the application may further include: and generating a device distribution map according to the region where the intelligent device is located, wherein the device distribution map comprises intelligent devices distributed in different regions. Therefore, the electronic equipment can autonomously divide the intelligent equipment belonging to each area without manual division of users.
It can be understood that when the electronic device completes the recording of the signal intensity of the wireless signals of all the intelligent devices in the whole scene in the moving process, the electronic device can determine the area information of each intelligent device in the whole scene based on the possible implementation mode, so that the electronic device can automatically construct the distribution diagram of the intelligent devices in the whole scene, and the intelligent experience of the user on the intelligent products is improved.
In one possible implementation, the device positioning method may further include: when the interactive information of the user is obtained, determining the area where the user is located; determining all intelligent devices distributed in the area where the user is located based on the device distribution map; and responding to the interaction information according to all the intelligent devices, and executing preset operation corresponding to the interaction information. Therefore, the electronic equipment can realize accurate management and control of the intelligent equipment in the area where the user is located on the premise that the user does not specify specific equipment.
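The two implementations above (building the device distribution map, then answering an interaction scoped to the user's area) can be sketched as follows; the function names and dictionary shapes are illustrative assumptions, not the patent's:

```python
from collections import defaultdict

def build_device_distribution_map(device_rooms: dict[str, str]) -> dict[str, list[str]]:
    """Invert {device_id: room} into a device distribution map
    {room: [device_ids]} once each smart device has been positioned."""
    distribution: dict[str, list[str]] = defaultdict(list)
    for device_id, room in device_rooms.items():
        distribution[room].append(device_id)
    return dict(distribution)

def devices_in_user_area(distribution_map: dict[str, list[str]],
                         user_room: str) -> list[str]:
    """All smart devices distributed in the area where the user is,
    used to respond to interaction information without the user
    naming a specific device."""
    return distribution_map.get(user_room, [])
```

With such a map, an instruction like "turn off the lights" can be resolved against only the devices in the room where the user currently is.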
In one possible implementation, the first preset condition may include: the signal strength is the strongest among the record information of the plurality of moments. That is, when the signal strength of the intelligent device's wireless signal recorded at the target moment is the strongest of the plurality of moments, the electronic device can consider the distance between itself at the target moment and the intelligent device to be the smallest, and can therefore accurately position the intelligent device based on its own pose information at the moment the searched signal strength is strongest.
In one possible implementation, the device positioning method may further include: when the wireless signal of the intelligent device is detected to meet a second preset condition in the moving process of the electronic device, wireless connection with the intelligent device is established; and when the wireless connection is successfully established, the intelligent equipment is associated with the electronic equipment. Therefore, when the electronic equipment searches for the wireless signal of the intelligent equipment in the scene, the electronic equipment can also actively send the wireless connection signal to establish wireless connection with the intelligent equipment so as to complete automatic association of the intelligent equipment in the scene after the connection is successful, and a user does not need to manually associate the intelligent equipment in the scene.
It can be understood that, when the electronic device searches for the wireless signal of the intelligent device not associated with the electronic device in the moving process, the electronic device can complete the association of the intelligent devices by establishing wireless connection. Therefore, after the detection of the whole scene is finished, the electronic equipment can automatically complete the serial connection of all intelligent equipment in the whole scene.
In one possible implementation manner, the second preset condition may include: the electronic device searches for the wireless signal of the intelligent device for the first time. Thus, the electronic device can realize automatic association with the intelligent device when searching the wireless signal of the intelligent device for the first time.
In one possible implementation manner, when the signal strength of the wireless signal of the smart device is weak, the probability that the electronic device successfully establishes the wireless connection with the smart device is low, so the second preset condition may also include: the signal strength of the wireless signal of the intelligent device reaches a preset strength threshold. Therefore, when the electronic equipment searches that the signal intensity of the wireless signal of the intelligent equipment is strong, the automatic association with the intelligent equipment can be realized, and the association success rate of the electronic equipment is improved.
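The second preset condition could be checked as in the sketch below, which combines both variants described above (first-time discovery and a minimum signal strength); the -70 dBm threshold is an assumed example value, not one stated by the application:

```python
def should_attempt_association(device_id: str,
                               rssi: float,
                               seen_devices: set[str],
                               min_rssi: float = -70.0) -> bool:
    """Decide whether to try to establish a wireless connection with a
    searched smart device: its signal is seen for the first time AND
    its strength reaches the preset threshold (both variants of the
    second preset condition, combined)."""
    if device_id in seen_devices:
        return False  # already searched before: not a first-time discovery
    if rssi < min_rssi:
        return False  # signal too weak: the connection would likely fail
    seen_devices.add(device_id)
    return True
```

Gating on signal strength matters because association is an active step (a connection attempt), and retrying failed connections against weak, distant devices would slow the sweep of the scene.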
In one possible implementation manner, associating the intelligent device with the electronic device when the wireless connection is established successfully may include: when the wireless connection is established successfully, acquiring equipment information of the intelligent equipment; based on the device information, the intelligent device is associated with the electronic device. Therefore, when the wireless connection between the electronic device and the searched intelligent device is successfully established, the electronic device can actively acquire and record the device information of the intelligent device, so that the electronic device can automatically complete the registration and information confirmation of the intelligent device in the scene.
In one possible implementation, after associating the smart device with the electronic device, the device location method may further include: and generating an environment map containing intelligent device information according to the intelligent devices associated with the electronic devices. Thus, after the electronic device completes registration and information confirmation of the intelligent devices in the scene, the electronic device can mark the information of the intelligent devices in the scene in the environment map.
Optionally, the electronic device may determine, according to the possible implementation manner, information of an area where the intelligent device associated with the electronic device is located, so that the electronic device may label information of each intelligent device in an area where the intelligent device is located in the environment map, and complete construction of an intelligent device distribution diagram.
In a possible implementation manner, before detecting that the wireless signal of the intelligent device meets the second preset condition in the moving process of the electronic device, the device positioning method may further include: and searching for the wireless signal of the intelligent equipment when the interaction information of the user on the intelligent equipment is acquired. In this way, the electronic device may also search for the wireless signal of the smart device when the user uses the smart device, so as to implement automatic association with the smart device by establishing a wireless connection.
For example, when a user needs to use an intelligent projection device in a bedroom, the electronic device may receive interaction information from the user such as "turn on the projector in the bedroom". From this interaction information the electronic device can determine that the intelligent projection device is located in the bedroom area, and can therefore move to the bedroom area to search for the wireless signal of the intelligent projection device. When it finds that signal, the electronic device can automatically complete the association by establishing a wireless connection with the intelligent projection device. After the association succeeds, the electronic device can mark the information of the intelligent projection device in the bedroom area of the device distribution map of the whole scene, thereby updating the map.
In one possible implementation, the device positioning method may further include: and acquiring the signal intensity of the wireless signal of the intelligent device which is currently searched and the pose information of the current electronic device at preset intervals in the moving process of the electronic device. Therefore, in the moving process of the electronic equipment, the signal intensity of the wireless signals of the intelligent equipment searched around and the pose information of the electronic equipment can be continuously recorded according to the preset interval. It can be appreciated that the smaller the preset interval, the more information is recorded, and the more accurate the location information of the intelligent device is determined by the electronic device.
In one possible implementation, the preset interval is a preset distance interval, and acquiring, at every preset interval, the signal strength of the currently searched intelligent device's wireless signal and the current pose information of the electronic device may include: acquiring the signal strength of the intelligent device's wireless signal searched at the i-th moment and the pose information of the electronic device at the i-th moment; and, when the displacement of the electronic device between the current moment and the i-th moment reaches the preset distance interval, acquiring, as the record for the (i+1)-th moment, the signal strength of the intelligent device's wireless signal searched at the current moment and the pose information of the electronic device at the current moment. In this way, during its movement, the electronic device records the signal strength of the surrounding intelligent devices' wireless signals and its own pose information once per fixed movement distance.
In one possible implementation, the preset interval is a preset time interval, and acquiring, at every preset interval, the signal strength of the currently searched intelligent device's wireless signal and the current pose information of the electronic device includes: acquiring the signal strength of the intelligent device's wireless signal searched at the i-th moment and the pose information of the electronic device at the i-th moment; and, when the time elapsed between the current moment and the i-th moment reaches the preset time interval, acquiring, as the record for the (i+1)-th moment, the signal strength of the intelligent device's wireless signal searched at the current moment and the pose information of the electronic device at the current moment. In this way, during its movement, the electronic device records the signal strength of the surrounding intelligent devices' wireless signals and its own pose information at regular times.
In one possible implementation, the device positioning method may further include: responding to a first instruction, wherein the electronic equipment moves in a target area, the first instruction is used for indicating that newly added intelligent equipment exists in the environment, and the target area is an area where the electronic equipment is located in a preset environment map when the electronic equipment searches wireless signals of the newly added intelligent equipment. Based on this, the acquiring the record information of a plurality of moments in the moving process of the electronic device includes: acquiring record information of the electronic equipment at a plurality of moments in the moving process, wherein the record information of each moment in the plurality of moments comprises: the pose information of the electronic equipment at the moment and the signal intensity of the wireless signal of the newly added intelligent equipment searched by the electronic equipment at the moment.
Thus, when the user indicates to the electronic device that the newly added intelligent device exists in the environment, the electronic device can search the wireless signal of the newly added intelligent device according to the area distribution information in the existing environment map. When the electronic equipment searches the wireless signal of the newly added intelligent equipment in the target area, the electronic equipment can move in the target area and continuously record the pose information of the electronic equipment and the signal intensity of the wireless signal of the newly added intelligent equipment searched by the electronic equipment in the moving process. Therefore, the electronic equipment can determine target pose information of the electronic equipment corresponding to the target moment of which the signal strength meets the first preset condition according to the recorded information of the electronic equipment at a plurality of moments in the moving process, so that the position information of the intelligent equipment is determined according to the target pose information.
Alternatively, when the electronic device does not search for the wireless signal of the newly added smart device in a certain area, the electronic device may move directly to the next area without moving in the area to search for the wireless signal of the newly added smart device.
Alternatively, the user may indicate the number of newly added smart devices when the user indicates to the electronic device that there are newly added smart devices in the environment. Thus, when the electronic device searches the wireless signals of the indicated number of the newly added intelligent devices, the electronic device can stop searching the wireless signals of other newly added intelligent devices and determine the position information of the indicated number of the newly added intelligent devices.
Optionally, the user may not indicate the number of newly added smart devices when the newly added smart devices exist in the environment to the electronic device. At this time, the electronic device may search for wireless signals of all newly added intelligent devices in the full scene, and determine location information of all newly added intelligent devices.
In one possible implementation, the path during movement of the electronic device may be the shortest movement path. Therefore, the electronic equipment can finish the positioning of the intelligent equipment in the shortest time, the detection efficiency of the electronic equipment is improved, and the intelligent experience of a user on intelligent products is improved.
In one possible implementation, the electronic device may be a robot or a smart car. Thus, the scheme can be suitable for different application scenes. For example, the robot may be a sweeping robot applied to a home scene.
Alternatively, the electronic device may be another mobile electronic device such as a drone.
In a second aspect, the present application provides a removable electronic device. The electronic device may include: an information acquisition unit, an information processing unit, and an information determination unit. The information obtaining unit may be configured to obtain recording information of multiple times in a moving process of the electronic device, where the recording information of each time in the multiple times includes: the pose information of the electronic equipment at the moment and the signal intensity of the wireless signal of the intelligent equipment searched by the electronic equipment at the moment. The information processing unit can be used for determining target pose information of the electronic equipment corresponding to the target moment with the signal strength meeting the first preset condition according to the recorded information of the plurality of moments. The information determining unit can be used for determining the position information of the intelligent equipment according to the target pose information.
According to the scheme provided by the second aspect, the movable electronic equipment can acquire pose information of the electronic equipment recorded at a plurality of moments in the moving process and the signal strength of wireless signals of the intelligent equipment searched by the electronic equipment. In the information recorded at the moments, if the signal intensity of the wireless signal of the intelligent device searched by the electronic device meets a first preset condition at one moment, the electronic device can consider that the signal intensity of the intelligent device searched at the moment is very strong, so that the electronic device at the moment can be considered to be very close to the intelligent device, and the electronic device can locate the position information of the intelligent device according to the pose information of the electronic device at the moment. Thus, the electronic equipment can automatically complete the positioning of the intelligent equipment in the environment based on the pose information of the electronic equipment when the signal intensity of the intelligent equipment searched meets the first preset condition, the accurate detection of the electronic equipment on the position information of the intelligent equipment in the environment is realized, the position information of the intelligent equipment is not required to be manually added by a user, and the intelligent experience of the user on intelligent products is improved.
In one possible implementation manner, the information determining unit may be configured to: and acquiring region information in the moving process of the electronic equipment, and then determining the region where the intelligent equipment is located according to the target pose information and the region information. Therefore, the electronic equipment can correspond to the region information of the electronic equipment in the moving process when the signal intensity of the searched intelligent equipment meets the first preset condition, so that the region of the intelligent equipment in the environment can be automatically positioned, the accurate detection of the region information of the intelligent equipment in the environment by the electronic equipment is realized, the specific position of the intelligent equipment is not required to be accurately positioned, and the user does not need to manually divide the region of the intelligent equipment.
In a possible implementation manner, the information determining unit may also be configured to: and acquiring an environment map, wherein the environment map comprises area information, and acquiring a target area corresponding to the target pose information in the environment map as an area where the intelligent equipment is located. Therefore, the electronic equipment can correspond the pose information of the electronic equipment when the signal intensity of the searched intelligent equipment meets the first preset condition to the region information in the environment map, so that the region of the intelligent equipment in the environment map can be automatically positioned, and the accurate detection of the region of the intelligent equipment on the environment map by the electronic equipment is realized.
In a possible implementation manner, the area information in the environment map may include room area information, and the information determining unit may be further configured to: and acquiring a target room corresponding to the target pose information in the environment map as a room in which the intelligent equipment is located. Therefore, the electronic equipment can correspond the pose information of the electronic equipment when the signal intensity of the searched intelligent equipment meets the first preset condition to the room area in the environment map, so that the room of the intelligent equipment can be automatically positioned, and the accurate detection of the room of the intelligent equipment by the electronic equipment is realized.
In one possible implementation, the environment map may be generated according to environment information detected by the electronic device during movement. Optionally, the electronic device may further include a map construction unit for constructing the environment map according to that environment information. In this way, the electronic device can search for wireless signals of surrounding intelligent devices, and determine their signal strength, while detecting the environment to build the map, so that positioning of the intelligent devices can be completed automatically from the generated environment map together with the pose the electronic device had when the searched signal strength satisfied the first preset condition.
In one possible implementation, the environment map may be preconfigured in the electronic device. In that case the electronic device only needs to search for wireless signals of intelligent devices while traversing the environment, which reduces its workload and speeds up detection of the position information of the intelligent devices.
In one possible implementation, the electronic device may further include: the first map generation unit is used for generating a device distribution map according to the area where the intelligent device is located, wherein the device distribution map comprises intelligent devices distributed in different areas. Therefore, the electronic equipment can autonomously divide the intelligent equipment belonging to each area without manual division of users.
It can be understood that once the electronic device has finished recording the signal strength of the wireless signals of all intelligent devices in the whole scene during movement, it can determine the region information of each intelligent device based on the foregoing implementations. The electronic device can thus automatically construct a distribution map of the intelligent devices in the whole scene, improving the user's intelligent experience of the intelligent products.
In one possible implementation, the electronic device may further include: a user positioning unit, a device identification unit and an information response unit. The user positioning unit may be configured to determine the region where the user is located when interaction information from the user is acquired. The device identification unit may be configured to determine, based on the device distribution map, all intelligent devices distributed in the region where the user is located. The information response unit may be configured to respond to the interaction information by performing, with the intelligent devices in the user's region, the preset operation corresponding to that interaction information. In this way, the electronic device can accurately control the intelligent devices in the region where the user is located even though the user does not name a specific device.
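A minimal sketch of that interaction flow, combining the device identification and information response units: every device in the user's region receives the preset operation, without the user naming a device. The command string and distribution map contents are illustrative assumptions.

```python
def respond_to_interaction(user_region, command, distribution_map):
    """Route a user command to all smart devices in the user's region."""
    targets = distribution_map.get(user_region, [])
    # Map each device in the region to the preset operation for the command.
    return {device: command for device in targets}

distribution_map = {"living_room": ["lamp-01", "speaker-02"], "bedroom": ["ac-03"]}
actions = respond_to_interaction("living_room", "turn_off", distribution_map)
# actions -> {"lamp-01": "turn_off", "speaker-02": "turn_off"}
```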
In one possible implementation manner, the first preset condition may include: the signal strength is the strongest among the information recorded at the plurality of moments. That is, when the signal strength of the intelligent device's wireless signal recorded at a target moment is the strongest of the plurality of moments, the electronic device can consider its distance to the intelligent device at that moment to be the smallest, and can therefore accurately position the intelligent device based on its own pose at that moment.
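The strongest-signal rule reduces to an argmax over the recorded (pose, signal strength) pairs. The RSSI values (in dBm, larger meaning stronger) below are hypothetical samples of the kind the information acquisition unit would record.

```python
# Each record pairs the robot's pose at a moment with the RSSI (dBm) of the
# smart device's wireless signal searched at that moment.
records = [
    ((0.0, 0.0), -70),
    ((2.0, 1.0), -55),
    ((4.0, 1.5), -42),   # strongest signal -> closest approach to the device
    ((6.0, 2.0), -60),
]

def target_pose_strongest(records):
    """First preset condition: return the pose recorded at the moment
    with the strongest signal strength (largest RSSI)."""
    pose, _rssi = max(records, key=lambda r: r[1])
    return pose
```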
In one possible implementation, the electronic device may further include: a wireless connection unit and a device association unit. The wireless connection unit may be configured to establish a wireless connection with an intelligent device when, during movement, the wireless signal of the intelligent device is detected to satisfy the second preset condition. The device association unit may be configured to associate the intelligent device with the electronic device when the wireless connection is established successfully. In this way, the electronic device can actively initiate a wireless connection upon finding the wireless signal of an intelligent device, and complete the association automatically once the connection succeeds, so the user does not need to manually associate the intelligent devices in the scene.
It can be understood that, whenever the electronic device finds the wireless signal of an intelligent device not yet associated with it during movement, it can complete the association by establishing a wireless connection. Therefore, after detection of the whole scene is finished, the electronic device will have automatically completed the association of all intelligent devices in the whole scene.
In one possible implementation manner, the second preset condition may include: the electronic device searches out the wireless signal of the intelligent device for the first time. Thus, the electronic device can associate with an intelligent device automatically the first time it finds that device's wireless signal.
In one possible implementation manner, when the signal strength of the intelligent device's wireless signal is weak, the probability that the electronic device successfully establishes a wireless connection with it is low, so the second preset condition may also include: the signal strength of the wireless signal of the intelligent device reaches a preset strength threshold. In this way, the electronic device attempts automatic association only when the searched signal is strong, which improves the association success rate.
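A sketch of checking the second preset condition, combining the two variants named above (first time the device's signal is found, and signal strength reaching a preset threshold); either test could also be used alone, as the text describes. The -65 dBm threshold is an illustrative assumption, not a value from the application.

```python
def meets_second_condition(device_id, rssi_dbm, seen_devices, threshold_dbm=-65):
    """Return True when a wireless connection attempt should be made:
    the device is being seen for the first time AND its signal strength
    reaches the preset threshold."""
    first_seen = device_id not in seen_devices
    strong_enough = rssi_dbm >= threshold_dbm
    return first_seen and strong_enough

seen = set()
ok = meets_second_condition("lamp-01", -50, seen)     # first sighting, strong signal
seen.add("lamp-01")
again = meets_second_condition("lamp-01", -50, seen)  # already associated -> skip
```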
In one possible implementation manner, the device association unit may be configured to: when the wireless connection is established successfully, acquiring equipment information of the intelligent equipment; based on the device information, the intelligent device is associated with the electronic device. Therefore, when the wireless connection between the electronic device and the searched intelligent device is successfully established, the electronic device can actively acquire and record the device information of the intelligent device, so that the electronic device can automatically complete the registration and information confirmation of the intelligent device in the scene.
In one possible implementation, the electronic device may further include: and the second map generation unit is used for generating an environment map containing intelligent device information according to the intelligent devices associated with the electronic devices. Thus, after the electronic device completes registration and information confirmation of the intelligent devices in the scene, the electronic device can mark the information of the intelligent devices in the scene in the environment map.
Optionally, the electronic device may determine, according to the foregoing implementations, the region where each intelligent device associated with it is located, so that it can label each intelligent device's information in its region in the environment map and complete the construction of the intelligent device distribution map.
In one possible implementation, the electronic device may further include: and the association triggering unit is used for searching the wireless signal of the intelligent equipment when the interaction information of the user on the intelligent equipment is acquired. In this way, the electronic device may also search for the wireless signal of the smart device when the user uses the smart device, so as to implement automatic association with the smart device by establishing a wireless connection.
In one possible implementation, the electronic device may further include: the information recording unit is used for acquiring the signal intensity of the wireless signal of the currently searched intelligent device and the pose information of the current electronic device at preset intervals in the moving process of the electronic device. Therefore, in the moving process of the electronic equipment, the signal intensity of the wireless signals of the intelligent equipment searched around and the pose information of the electronic equipment can be continuously recorded according to the preset interval. It can be appreciated that the smaller the preset interval, the more information is recorded, and the more accurate the location information of the intelligent device is determined by the electronic device.
In one possible implementation, the preset interval may be a preset distance interval, and the information recording unit may be configured to: acquire the signal strength of the intelligent device's wireless signal searched at the i-th moment and the pose information of the electronic device at the i-th moment; and, when the displacement of the electronic device since the i-th moment reaches the preset distance interval, take the current moment as the (i+1)-th moment and acquire the signal strength of the intelligent device's wireless signal searched at the (i+1)-th moment and the pose information of the electronic device at the (i+1)-th moment. In this way, during movement the electronic device records the signal strength of surrounding intelligent devices' wireless signals and its own pose once per fixed moving distance.
In one possible implementation, the preset interval may be a preset time interval, and the information recording unit may be configured to: acquire the signal strength of the intelligent device's wireless signal searched at the i-th moment and the pose information of the electronic device at the i-th moment; and, when the time elapsed since the i-th moment reaches the preset time interval, take the current moment as the (i+1)-th moment and acquire the signal strength of the intelligent device's wireless signal searched at the (i+1)-th moment and the pose information of the electronic device at the (i+1)-th moment. In this way, during movement the electronic device records the signal strength of surrounding intelligent devices' wireless signals and its own pose at fixed times.
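The distance-interval variant of the information recording unit can be sketched as follows (the time-interval variant is identical with timestamps in place of positions). A sample is kept only when the robot has moved at least the preset distance since the last kept sample; the sample values are hypothetical.

```python
import math

def record_by_distance(samples, min_dist=0.5):
    """Keep a ((x, y), rssi) sample only when the robot has moved at
    least `min_dist` since the last kept sample."""
    kept = []
    for pose, rssi in samples:
        if not kept or math.dist(pose, kept[-1][0]) >= min_dist:
            kept.append((pose, rssi))
    return kept

samples = [((0.0, 0.0), -60), ((0.1, 0.0), -59), ((0.6, 0.0), -55),
           ((0.7, 0.0), -54), ((1.2, 0.0), -50)]
kept = record_by_distance(samples)  # every ~0.5 m of travel
```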
In one possible implementation, the electronic device may further include: the electronic equipment is used for searching the wireless signal of the newly added intelligent equipment in the environment map, and the target area is the area where the electronic equipment is located in the preset environment map. Based on this, the above information acquisition unit may be configured to: acquiring record information of the electronic equipment at a plurality of moments in the moving process, wherein the record information of each moment in the plurality of moments comprises: the pose information of the electronic equipment at the moment and the signal intensity of the wireless signal of the newly added intelligent equipment searched by the electronic equipment at the moment.
Thus, when the user indicates to the electronic device that a newly added intelligent device exists in the environment, the electronic device can search for the wireless signal of the newly added intelligent device according to the area distribution information in the existing environment map. When it finds that signal in the target area, the electronic device can move within the target area while continuously recording its own pose information and the signal strength of the newly added intelligent device's wireless signal. From the information recorded at the plurality of moments during movement, the electronic device can then determine the target pose information corresponding to the target moment at which the signal strength satisfies the first preset condition, and determine the position information of the intelligent device accordingly.
Optionally, the instruction response unit may be configured to: when the electronic device does not find the wireless signal of the newly added intelligent device in a certain area, move directly to the next area without traversing that area, and continue searching for the wireless signal there.
Optionally, when indicating to the electronic device that newly added intelligent devices exist in the environment, the user may also indicate how many there are. In that case, once the electronic device has found the wireless signals of the indicated number of newly added intelligent devices, it can stop searching for further signals and determine the position information of those devices.
Optionally, the user may also omit the number of newly added intelligent devices. In that case, the electronic device may search the full scene for the wireless signals of all newly added intelligent devices and determine the position information of all of them.
In one possible implementation, the path during movement of the electronic device is the shortest movement path. Therefore, the electronic equipment can finish the positioning of the intelligent equipment in the shortest time, the detection efficiency of the electronic equipment is improved, and the intelligent experience of a user on intelligent products is improved.
Optionally, the electronic device may further include: and the path generation unit is used for generating the shortest moving path for determining the position information of the intelligent equipment so that the electronic equipment can move according to the shortest moving path.
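The application does not detail how the path generation unit computes the shortest moving path. As an illustrative stand-in, a greedy nearest-neighbour ordering of the region centres (hypothetical coordinates) gives a short, though not guaranteed shortest, visiting path; an exact shortest path would require solving a travelling-salesman-style problem over the region graph.

```python
import math

def plan_visit_order(start, waypoints):
    """Greedy nearest-neighbour ordering of the regions to visit:
    repeatedly move to the closest unvisited waypoint."""
    order, current, remaining = [], start, list(waypoints)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        order.append(nxt)
        current = nxt
    return order

# Region centres to visit, starting from the robot's dock at the origin:
route = plan_visit_order((0, 0), [(5, 0), (1, 0), (2, 0)])
```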
In one possible implementation, the electronic device may be a robot or a smart car. Thus, the scheme can be suitable for different application scenes. For example, the robot may be a sweeping robot applied to a home scene.
Alternatively, the electronic device may be other removable electronic devices such as an unmanned aerial vehicle.
In a third aspect, the present application provides a removable electronic device comprising one or more processors and one or more memories. The one or more memories are coupled to the one or more processors, the one or more memories being operable to store computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the device-locating method in any of the possible implementations of the first aspect described above.
In a fourth aspect, the present application provides a device positioning apparatus, which is included in a mobile electronic device and has the function of implementing the behavior of the electronic device in the first aspect and any of its possible implementations. These functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the functions described above.
In a fifth aspect, the present application provides a chip system for use in a removable electronic device. The system-on-chip includes one or more interface circuits and one or more processors. The interface circuit and the processor are interconnected by a wire. The interface circuit is for receiving a signal from a memory of the electronic device and transmitting the signal to the processor, the signal including computer instructions stored in the memory. When the processor executes the computer instructions, the electronic device performs the device positioning method in any of the possible implementations of the first aspect.
In a sixth aspect, the present application provides a computer storage medium comprising computer instructions which, when run on a removable electronic device, cause the electronic device to perform the device location method in any one of the possible implementations of the first aspect.
In a seventh aspect, the present application provides a computer program product for, when run on a computer, causing the computer to perform the device positioning method in any one of the possible implementations of the first aspect.
It will be appreciated that the advantages achieved by the electronic device of the second aspect and any possible implementation thereof, the electronic device of the third aspect, the apparatus of the fourth aspect, the chip system of the fifth aspect, the computer storage medium of the sixth aspect, and the computer program product of the seventh aspect provided above may refer to the advantages in the first aspect and any possible implementation thereof, and are not described herein.
Drawings
Fig. 1 is a schematic hardware structure of a robot according to an embodiment of the present application.
Fig. 2 is a schematic product diagram of a robot according to an embodiment of the present application.
Fig. 3 is a first flowchart of a device positioning method according to an embodiment of the present application.
Fig. 4 is a second flowchart of a device positioning method according to an embodiment of the present application.
Fig. 5 is a third flowchart of a device positioning method according to an embodiment of the present application.
Fig. 6 is an example schematic diagram of an application scenario provided in an embodiment of the present application.
Fig. 7 is a system flowchart of a device positioning method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include one or more such features. It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A exists, only B exists, or both A and B exist, where A and B may each be singular or plural.
With the advent of fifth generation mobile communication technology (5th generation mobile communication technology, 5G), the intelligent experience of the interconnection of everything has become ever more attainable. At the same time, mobile intelligent products in various forms have been launched and are developing vigorously. Current mobile intelligent products mainly take the form of robots, intelligent vehicles and the like. Among the more common robot products are the sweeping robots found in home scenarios and the service robots found in the companion-interaction field.
When applied to a home scenario, a mobile intelligent product is generally equipped with the capability of autonomously acquiring scene information, so as to promote the realization of whole-house intelligence. Therefore, when a robot is used for the first time in a typical home scenario, it usually detects the home environment and autonomously constructs a map of the home scene.
However, current mobile intelligent products have limited capability and cannot accurately detect the intelligent devices in the environment, while realizing the interconnection of everything requires associating the intelligent devices in the scene with one another. As a result, after the robot finishes its detection, users often still need to manually add the device information and position information of the intelligent devices in the home scene. This operation is cumbersome, and it is particularly unfriendly to groups such as the elderly and children who have difficulty using intelligent technology: the learning cost is high, and the user's intelligent experience of the intelligent products suffers.
In order to solve the above-mentioned problems, the embodiments of the present application provide a device positioning method and a movable electronic device. The electronic device can search for the wireless signals of surrounding intelligent devices while performing environment detection and determine the signal strength of the searched wireless signals. Further, the electronic device can complete information confirmation of the intelligent devices in the environment according to the searched wireless signals, and complete positioning of those devices according to the pose information obtained during environment detection and the searched signal strengths. In this way, the electronic device can automatically position the intelligent devices in the environment and automatically add their device information and position information, without user participation, which lowers the difficulty of use for groups such as the elderly and children and improves the user's intelligent experience of the intelligent products.
The movable electronic device may be a movable robot, a movable intelligent vehicle or other electronic devices with movement capability.
The technical solution provided in the embodiments of the present application will be specifically described below by taking an electronic device as an example of a movable robot (hereinafter referred to as a robot).
By way of example, fig. 1 shows a schematic diagram of a robot 100. The robot 100 may be a robot capable of moving in an indoor environment (e.g., a home, a shopping mall, a factory shop, etc.) and performing certain tasks, such as the sweeping robot shown in fig. 2 that performs cleaning operations.
As shown in fig. 1, the robot 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a microphone 170B, a sensor module 180, keys 190, a motor 191, an indicator 192, a driving module 193, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc.
It will be appreciated that the configuration shown in fig. 1 is merely illustrative and does not limit the configuration of the robot in the embodiments of the present application. In other embodiments of the present application, the robot 100 may include more or fewer components than illustrated, may combine or split certain components, or may have a different arrangement of components than that illustrated in fig. 1. The illustrated components may be implemented in hardware, software, or a combination of software and hardware. For example, the robot may also include other components such as a display screen, a robotic arm, and the like.
The driving module 193 is used to drive the robot 100 to move. Optionally, the driving module 193 may include a tracked or wheeled movement mechanism. For example, the driving module 193 may include wheels or tracks together with the associated drive components that provide the movement function.
The sensor module 180 includes different sensors. The sensor module 180 may include one or more sensors for detecting information about the robot's surroundings, such as the positions of obstacles in the environment. These sensors may include a camera for capturing or scanning information about the robot's surroundings. Optionally, the camera may be a high-definition camera with a wide-angle lens, such as a high-definition wide-angle camera with a fisheye lens. Alternatively, the camera may be a depth camera or a binocular camera capable of providing object distance information.
In some embodiments, the sensors for detecting information about the environment may also include other optical and/or acoustic sensors capable of measuring objects in the robot's surroundings (e.g., walls, obstacles, etc.); these may measure distance by triangulation or by time-of-flight measurement of the transmitted signals, for example triangulation sensors, lidar, ultrasonic sensors, and the like. The embodiments of the present application do not limit the sensors used for detecting information about the environment; for example, the sensors may further include a collision sensor for detecting obstacles around the robot.
The sensor module 180 may also include one or more sensors for detecting state information of the robot itself. Optionally, these may comprise one or more of a speed sensor, an acceleration sensor, a gyroscope sensor and a magnetometer sensor. The speed sensor can detect the moving speed of the robot, the acceleration sensor its acceleration, the gyroscope sensor its angular velocity of rotation, and the magnetometer sensor its orientation relative to the cardinal directions.
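To illustrate how speed and gyroscope readings of the kind just described combine into the pose information used by the positioning method, here is a minimal dead-reckoning update under a simplified unicycle model; a real robot would fuse several such sensors, e.g. with a Kalman filter.

```python
import math

def dead_reckon(pose, v, omega, dt):
    """One odometry update: integrate linear speed v (m/s) and angular
    rate omega (rad/s) over dt seconds to advance the robot's
    (x, y, heading) pose."""
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return (x, y, theta)

# Driving straight along the x-axis at 1 m/s for 2 s:
pose = dead_reckon((0.0, 0.0, 0.0), v=1.0, omega=0.0, dt=2.0)  # (2.0, 0.0, 0.0)
```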
In some embodiments, the sensors for detecting the state information of the robot itself may also include inertial measurement unit (inertial measurement unit, IMU) sensors for detecting the acceleration and angular velocity of the robot, which may include one or more of acceleration sensors, gyroscopic sensors, and magnetometer sensors.
In some embodiments, the sensor for detecting the state information of the robot itself may also include other motion sensors capable of measuring motion information of the robot, such as a wheel speed sensor, an optical flow sensor, and the like. The embodiment of the application is not limited to a sensor for detecting the state information of the robot.
The sensor module 180 may also include navigation sensors. Navigation sensors may be used to calculate the position of the robot 100 within a space and to generate a work map for the robot. For example, the navigation sensor may be a dead reckoning sensor, an obstacle detection and obstacle avoidance (ODOA) sensor, a simultaneous localization and mapping (SLAM) sensor, or the like.
It should be noted that, in addition to the above-mentioned sensors, the sensor module 180 may further include other types of sensors, such as a pressure sensor, a barometric sensor, a magnetic sensor, a distance sensor, a proximity sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, a gravity sensor, etc., and the types and the number of sensors included in the sensor module 180 are not limited in the embodiments of the present application.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
Wherein the controller may be a neural hub and command center of the robot 100. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to reuse the instructions or data, it can call them directly from the memory, avoiding repeated accesses and reducing the waiting time of the processor 110, thereby improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to the sensor module 180, the charger, the flash, etc., respectively, through different I2C bus interfaces. For example, the processor 110 may be coupled to the camera through an I2C interface, so that the processor 110 and the camera communicate through the I2C bus interface to implement the image capturing function of the robot 100.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial and parallel forms. In some embodiments, a UART interface is typically used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function.
The MIPI interface may be used to connect the processor 110 with peripheral devices such as a display screen, camera, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 110 and the camera communicate through a CSI interface to implement the image acquisition functionality of robot 100. The processor 110 and the display screen communicate through the DSI interface to implement the display function of the robot 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with a camera, a display screen, a wireless communication module 160, an audio module 170, a sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the robot 100, or may be used to transfer data between the robot 100 and peripheral devices. The interface may also be used to connect other robots 100, etc.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and is not meant to limit the structure of the robot 100. In other embodiments of the present application, the robot 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the robot 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer-executable program code that includes instructions. The processor 110 executes various functional applications of the robot 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an operating system, an application program required for at least one function, and the like. The storage data area may store data created during use of the robot 100, etc. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The wireless communication function of the robot 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like. Alternatively, the robot 100 may enable wireless communication with other devices, such as various smart home devices, servers.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in robot 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the robot 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing. The low-frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device such as speaker 170A, or displays images or video through a display screen. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be independent of the processor 110 and disposed in the same device as the mobile communication module 150 or another functional module.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., applied on the robot 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, the antenna 1 and the mobile communication module 150 of the robot 100 are coupled, and the antenna 2 and the wireless communication module 160 are coupled, so that the robot 100 can communicate with a network and other devices through wireless communication technology. Wireless communication techniques may include global system for mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
In this embodiment of the present application, after the robot 100 establishes a wireless communication connection with another device, the area where the originating device corresponding to the signal is located may be located based on the signal strength of the signal received by the wireless communication connection.
Illustratively, in a smart home scenario, when the robot 100 establishes bluetooth communication with a surrounding smart device through a bluetooth low energy (bluetooth low energy, BLE) module in the wireless communication module 160, the robot 100 may acquire the received signal strength (received signal strength indicator, RSSI) of the BLE signal from the smart device in real time, so as to confirm the area of the smart device in the environment according to the pose of the robot 100 in the environment when the received signal strength of the BLE signal is strongest.
The robot 100 may implement an image capturing function through an ISP, a camera in the sensor module 180, a video codec, a GPU, an application processor, and the like.
The ISP is used to process data fed back by the camera. For example, when photographing, the shutter is opened, light is transmitted to the camera's photosensitive element through the lens, the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, so that the electrical signal is converted into an image visible to the naked eye. The ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera.
The camera is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a complementary metal-oxide-semiconductor (complementary metal-oxide-semiconductor, CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or similar format. In some embodiments, the robot 100 may include 1 or N cameras, N being a positive integer greater than 1.
The robot 100 may implement audio collection and audio output functions through an audio module 170, a speaker 170A, a microphone 170B, an application processor, and the like.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The robot 100 may output a sound signal through the speaker 170A.
Microphone 170B, also referred to as a "mic" or "sound pickup", is used to convert sound signals into electrical signals. Microphone 170B may collect sound signals emitted by the user. The robot 100 may be provided with at least one microphone 170B. In other embodiments, the robot 100 may be provided with two microphones 170B, and may implement a noise reduction function in addition to collecting sound signals. In other embodiments, the robot 100 may also be provided with three, four, or more microphones 170B to collect sound signals, reduce noise, identify sound sources, implement directional recording functions, etc.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the robot 100. The charging management module 140 may also supply power to the robot through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display, the camera, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The methods in the following embodiments may be implemented in a robot having the above-described hardware configuration.
Further, as another implementation, the processor of the robot has only simple processing capability, and further processing of the data can be completed by a server. For example, a communication module (mobile communication module and/or wireless communication module) provided on the robot communicates with a server, an intelligent terminal (such as a mobile phone), and the like through an antenna. The communication module can send the environment map, and the signal strength information between the robot and the smart devices in the environment collected during the detection process, to the server, and receive from the server a device partition map indicating the room in which each smart device is located. The processor then generates control instructions according to the device partition map, so as to control the functional modules to complete corresponding work (such as cleaning the home environment). The memory of the robot may hold the device partition map.
The following describes a device positioning method provided in an embodiment of the present application with reference to the accompanying drawings. As shown in fig. 3, the device positioning method may include:
S310, the robot searches for Bluetooth broadcast signals of the smart devices while detecting surrounding environment information.
The robot can be placed at any position in the working scene of the robot and can move randomly in the working scene. The user can start the Bluetooth modules of all intelligent devices in the working scene before the robot detects the environment, so that the intelligent devices in the working scene can be searched and found by the robot with the Bluetooth modules. Alternatively, the bluetooth module may comprise a classical bluetooth BT module and/or a bluetooth low energy BLE module. Optionally, the bluetooth module of the intelligent device can also be automatically started without manual starting.
The working scene of the robot may be a scene such as a home, a market, a factory workshop and the like, and the intelligent device may be any device with a bluetooth function in the scene, and the specific working scene and the intelligent device are not limited in the embodiment of the present application. The intelligent device can be any intelligent household device with Bluetooth function, such as an intelligent air conditioner, an intelligent sound box, an intelligent lamp, an intelligent television, an intelligent camera, an intelligent fan, an electric cooker or a body fat scale when the working scene of the robot is a household scene.
When the robot starts to detect the environment, the robot can automatically start the Bluetooth function, or the user manually starts the Bluetooth function of the robot. So that the robot can search for bluetooth devices around within a bluetooth transmission distance range using the bluetooth module of the robot. The surrounding Bluetooth devices comprise intelligent devices in a working scene.
It will be appreciated that the robot is free to access the various rooms (or areas) of the current work scene during the environmental detection, so that the robot can perform environmental detection and smart device searching for all rooms (or areas).
In this embodiment of the application, the smart device in the environment may send a bluetooth broadcast signal to the surrounding after the bluetooth module of the smart device is turned on. For example, the smart device may send a bluetooth low energy BLE broadcast signal to the surroundings after the self bluetooth low energy BLE module is turned on.
Alternatively, the bluetooth broadcast signal of the smart device may be transmitted at a certain broadcast interval. The intelligent device opens a radio frequency receiving window for a period of time after each bluetooth broadcast signal transmission to receive a connection request that the robot may transmit.
In this embodiment of the present application, the robot may search for bluetooth broadcast signals of surrounding smart devices while detecting surrounding environmental information. Alternatively, the robot may search for surrounding bluetooth broadcast signals at a preset period or a preset frequency. The preset period and the preset frequency can be set according to actual needs, and the embodiment of the application is not limited. For example, the preset period may be 5 milliseconds (ms), or the preset frequency may be 200 hertz (Hz).
After searching the bluetooth broadcast signal of the intelligent device, the robot may actively send a connection request message to the intelligent device to request to establish a bluetooth connection with the intelligent device. After the Bluetooth connection between the robot and the intelligent device is established successfully, the robot can acquire and record the device information of the intelligent device, and the association between the robot and the intelligent device is completed. Thus, when detecting a certain area (such as a certain room), the robot can complete automatic association with the intelligent devices in the area. When the robot completes environment detection of the whole working scene, automatic association with the intelligent equipment searched in the whole scene can be completed.
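The association flow above (search a broadcast, connect, then record the device information) can be sketched as a minimal registry. This is an illustrative sketch only; the class, field names, and the simulated connection step are assumptions, not part of the patent.

```python
# Sketch of the S310 association flow: on first search of a smart device's
# Bluetooth broadcast, the robot "connects" and records its device info.
# All names and fields here are illustrative, not from the patent.
class Robot:
    def __init__(self):
        self.registry = {}  # device id -> recorded device information

    def on_broadcast(self, device_id, device_type, vendor):
        """Called when a Bluetooth broadcast signal is searched."""
        if device_id in self.registry:
            return False  # already associated with this device
        # A real robot would send a connection request here and wait for
        # the Bluetooth connection to be established before recording.
        self.registry[device_id] = {"type": device_type, "vendor": vendor}
        return True

robot = Robot()
robot.on_broadcast("SN-001", "smart air conditioner", "vendorA")
robot.on_broadcast("SN-002", "smart light", "vendorB")
robot.on_broadcast("SN-001", "smart air conditioner", "vendorA")  # duplicate, ignored
print(len(robot.registry))  # 2 associated devices
```

Once the robot has traversed the whole working scene, the registry holds one entry per smart device searched in the scene, mirroring the automatic full-scene association described above.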
Alternatively, the device information may include a device type, a device identification, a manufacturer and/or product model of the device, and the like. The device type may be used to indicate a function of the smart device, such as a smart air conditioner, a smart light, a router, etc., that may determine the function of the smart device. The device identification may be used to uniquely identify a device, such as a product serial number of a smart device.
Optionally, after the bluetooth connection between the robot and the smart device is successfully established, in addition to the robot acquiring and recording the device information of the smart device to complete registration of the smart device on the robot, the smart device may also acquire and record the device information of the robot to complete registration of the robot on the smart device, thereby realizing interconnection between the robot and the smart device.
In some embodiments, the robot may establish a bluetooth connection with the smart device when the robot detects that the bluetooth broadcast signal of the smart device satisfies a second preset condition. Alternatively, the second preset condition may be that the robot first searches for a bluetooth broadcast signal of the smart device. Therefore, when the robot searches the Bluetooth broadcast signal of the intelligent device for the first time, the robot can automatically associate with the intelligent device.
When the signal intensity of the Bluetooth broadcast signal of the intelligent device is weak, the probability of the robot successfully establishing Bluetooth connection with the intelligent device is low. Thus, alternatively, the second preset condition may be that the signal strength of the bluetooth broadcast signal of the smart device reaches the preset strength threshold. The preset strength threshold may be a minimum signal strength allowed when the bluetooth connection is successfully established. The specific preset intensity threshold may be set reasonably according to practical situations, which is not limited in the embodiment of the present application. Therefore, when the robot searches that the Bluetooth broadcast signal of the intelligent equipment has stronger signal intensity, the robot can automatically associate with the intelligent equipment, and the association success rate of the robot is improved.
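The two variants of the second preset condition described above (first-time search, or RSSI reaching a preset strength threshold) can be expressed as a small predicate. The threshold value and all names below are illustrative assumptions; the patent does not fix a concrete value.

```python
# Sketch of the "second preset condition" for triggering association.
# -75 dBm is an illustrative preset strength threshold, not from the patent.
PRESET_STRENGTH_THRESHOLD = -75  # dBm; minimum RSSI for a reliable connection

def meets_second_condition(rssi_dbm, first_time_seen, use_threshold=True):
    """Return True if the robot should attempt a Bluetooth connection."""
    if use_threshold:
        # Variant 2: associate only when the broadcast is strong enough.
        return rssi_dbm >= PRESET_STRENGTH_THRESHOLD
    # Variant 1: associate on the first search of this device's broadcast.
    return first_time_seen

print(meets_second_condition(-60, first_time_seen=False))  # True: strong signal
print(meets_second_condition(-90, first_time_seen=False))  # False: too weak
```

Gating association on signal strength avoids repeated failed connection attempts to far-away devices, which is the rationale the paragraph above gives for this variant.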
Alternatively, the robot may also implement an automatic association with the smart device when the user uses the smart device. Illustratively, the robot may search for a bluetooth broadcast signal of the smart device when the robot acquires the user's interaction information with the smart device. Then when the Bluetooth broadcast signal of the intelligent device meets a second preset condition, the robot can establish Bluetooth connection with the intelligent device. The robot may then acquire and record device information for the smart device based on the bluetooth connection to complete information registration of the smart device on the robot.
For example, when a user needs to use an intelligent projection device in a bedroom, the robot may receive a voice command from the user to turn on the projector in the bedroom. The robot may then determine from the voice command that the intelligent projection device is located in the bedroom region, and move to the bedroom region to search for the bluetooth broadcast signal of the intelligent projection device. When the robot searches the bluetooth broadcast signal of the intelligent projection device, the robot can automatically complete the association with it by establishing a bluetooth connection. It can be understood that the robot in this application can confirm the information of, and automatically associate with, the smart devices in a scene through environment detection and bluetooth broadcast signal searching. This provides the user with a way to autonomously link up the smart devices of the full scene, replacing the manual addition of smart devices, reducing the difficulty of use and improving the user's whole-house smart experience.
In some embodiments, when the robot detects surrounding environment information, a scene map of the current working scene can be constructed according to the detection result. The robot can plan its motion trajectory in the working scene according to a path planning rule, so that the robot can traverse the whole working scene and construct a scene map of the whole working scene. Alternatively, the robot may construct the scene map using simultaneous localization and mapping (SLAM) technology.
SLAM technology allows a robot to determine its spatial position in an unknown environment through sensor information and to build a model of that environment. The robot can start from an unknown place in an unknown environment, localize its own position and posture through repeatedly observed environmental features (such as corners, furniture, columns, and the like) during its motion, and then incrementally build a scene map according to its own position, thereby achieving simultaneous localization and mapping. Thus, through SLAM technology, the robot can build a scene map by carrying its sensors through one full traversal of the environment.
The sensors carried by the robot may include sensors for sensing information of the surrounding environment of the robot, such as cameras, laser sensors, collision sensors, etc. The sensors carried by the robot can also comprise sensors for sensing the pose information of the robot, such as an IMU sensor, a wheel speed sensor and the like.
Optionally, when the robot detects surrounding environment information, the robot senses the surrounding environment where the robot is located and collects environment data of the surrounding environment to obtain an environment image. The environmental image may include an image acquired during movement of the robot.
Wherein the environmental image is perceived by a sensor device provided on the robot. For example, the environmental image may be a three-dimensional point cloud image captured by a visual sensor provided on the robot (such as a monocular camera, a binocular camera, or a depth camera (RGBD)), or a two-dimensional point cloud image captured by a laser sensor provided on the robot, which is not particularly limited in this embodiment.
Optionally, the robot may acquire surrounding environment images at a fixed period or a fixed frequency during the movement process, then extract image feature information, and continuously accumulate such information to implement incremental scene map construction. The fixed period and the fixed frequency can be set according to actual needs, and the embodiment of the application is not limited. For example, the fixed period may be 2 milliseconds, or the fixed frequency may be 500 hertz (Hz).
S320, the robot acquires the signal strength of the Bluetooth broadcast signal of the intelligent device searched at each moment and pose information corresponding to the robot at each moment.
It can be understood that when the electromagnetic wave signal (bluetooth broadcast signal) emitted by the smart device is transmitted to the robot through the space, the robot can receive the electromagnetic wave signal of the smart device. Since the signal strength of the electromagnetic wave signal decays with the propagation distance, and part of the electromagnetic wave signal may finally reach the robot via different transmission paths after being reflected and refracted by the shielding object (such as a wall, a ground, etc.), that is, the signal strength received by the robot is the superposition of the strengths of the signals arriving via different transmission path combinations. Therefore, when the robot moves to different positions, the transmission distances of the received electromagnetic wave signals are different, the combination of the transmission paths experienced is different, and the corresponding path loss is different, so that the signal intensities of the superimposed signals received by the robot at different positions may be different. For example, the signal strength obtained by the robot at a location close to the smart device is higher than the signal strength obtained by the robot at a location far from the smart device, and is also higher than the signal strength obtained by the robot at a location close to the smart device by the partition wall.
That is, the stronger the signal strength of the smart device sampled by the robot, the shorter the propagation distance of the signal, and the smaller the transmission path loss, the closer the distance between the smart device and the robot, or the less the obstruction therebetween. Therefore, the robot can determine the distance between the intelligent device and the robot or the shielding object between the intelligent device and the robot through the strength of the signal intensity. And the robot can position the intelligent equipment according to the pose information and the signal strength of the robot.
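The attenuation of signal strength with propagation distance described above is often illustrated with the standard log-distance path-loss model. Note this is a textbook model used here only for intuition; the embodiment compares relative signal strengths and does not prescribe any particular propagation model, and the parameter values below are illustrative.

```python
import math

# Log-distance path-loss model (standard textbook model, assumed here for
# illustration; not prescribed by this embodiment):
#   RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0)
# d0 is a reference distance and n is the path-loss exponent
# (about 2 in free space, larger indoors where walls add loss).
def expected_rssi(d, rssi_d0=-40.0, d0=1.0, n=2.0):
    return rssi_d0 - 10.0 * n * math.log10(d / d0)

print(expected_rssi(1.0))   # -40.0 dBm at the reference distance
print(expected_rssi(10.0))  # -60.0 dBm: 20 dB weaker at 10x the distance (n = 2)
```

The monotonic decrease with distance (and the larger exponent behind walls) is exactly why the strongest sampled RSSI indicates the nearest, least-obstructed robot position.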
In the embodiment of the application, the robot can continuously sample the signal strength of the Bluetooth broadcast signal of the surrounding intelligent equipment while detecting the environment information to construct the scene map, and record the pose information of the robot in the constructed scene map at the sampling moment. The signal strength may be a received signal strength indication (received signal strength indication, RSSI) indicating the signal strength at a location within the wireless signal coverage. The pose information may include a position P of the robot in the constructed scene map. In some embodiments, the pose information may also include the pose of the robot, such as the current orientation of the robot, the rotation angle, etc.
In some embodiments, the robot may sample the signal strength of bluetooth broadcast signals of all searchable intelligent devices at preset intervals, and record pose information of the robot in the map at the sampling time. Optionally, each time the robot samples, the signal intensity of the bluetooth broadcast signals of all the searchable intelligent devices may be stored in a set manner, where each element in the set is the signal intensity of each intelligent device currently sampled.
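The per-sample record described above (a set of signal strengths plus the robot's pose at the sampling time) can be sketched as a simple data structure. The field names and values below are illustrative assumptions.

```python
# Sketch of one S320 sampling record: at each sampling time the robot
# stores the set S_i of RSSIs of all searchable smart devices together
# with its pose P_i in the scene map. Field names are illustrative.
samples = []  # one record per sampling time

def record_sample(t, pose, rssi_by_device):
    samples.append({
        "time": t,                     # sampling time T_i
        "pose": pose,                  # position P_i in the scene map
        "rssi": dict(rssi_by_device),  # S_i: device id -> sampled RSSI (dBm)
    })

record_sample(0.0, (0.0, 0.0), {"speaker": -70, "light": -55})
record_sample(1.0, (0.5, 0.0), {"speaker": -65, "light": -60})
print(len(samples))                   # 2 sampling records
print(samples[1]["rssi"]["speaker"])  # -65
```

Accumulating these records over the whole traversal gives the robot, for each smart device, an RSSI-versus-pose history from which the strongest sample can later be selected.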
It can be appreciated that, when the signal strength of the bluetooth broadcast signal of the searched smart device is very weak, the smart device cannot normally establish a connection with the robot successfully, and at this time, the smart device can be considered to be far from the robot, and the sampled signal strength has little effect on the subsequent positioning of the smart device. Thus, alternatively, the robot may sample the signal strength of the bluetooth broadcast signal for all successfully connected smart devices.
For example, at the i-th sampling (current sampling time T_i), the robot may store the signal strengths of all connectable Bluetooth broadcast signals as a set S_i, where each element of S_i is the RSSI of one smart device sampled by the robot. At the same time, the robot may store the position P_i of the robot corresponding to sampling time T_i.
Alternatively, the preset interval may be a preset distance interval D_threshold. That is, after the i-th sampling, the robot can calculate in real time its displacement length D since the i-th sampling; when D is greater than or equal to the preset distance interval D_threshold, the robot can perform the (i+1)-th sampling. The displacement length D may be the distance between the current position P of the robot in the map and its position P_i in the map at the i-th sampling, i.e., D = |P − P_i|.
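The distance-triggered sampling rule above can be sketched as a small predicate on 2D map positions. The threshold value of 0.5 m is an illustrative assumption.

```python
import math

# Sketch of distance-triggered sampling: after the i-th sample at P_i,
# the (i+1)-th sample is taken once the displacement D = |P - P_i|
# reaches the preset distance interval D_threshold (value illustrative).
D_THRESHOLD = 0.5  # meters

def should_sample(current_pos, last_sample_pos):
    dx = current_pos[0] - last_sample_pos[0]
    dy = current_pos[1] - last_sample_pos[1]
    return math.hypot(dx, dy) >= D_THRESHOLD  # D = |P - P_i|

print(should_sample((0.3, 0.0), (0.0, 0.0)))  # False: moved only 0.3 m
print(should_sample((0.4, 0.4), (0.0, 0.0)))  # True: moved about 0.57 m
```

Triggering on displacement rather than on time ensures samples are spread evenly over the map, so a robot that pauses in one spot does not flood the record with duplicate poses.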
In some embodiments, after each sampling by the robot, the displacement length of the robot from the last sampling pose can be calculated at a preset period or a preset frequency. The preset period and the preset frequency can be set according to actual needs, and the embodiment of the application is not limited. For example, the preset period may be 10 milliseconds, or the preset frequency may be 100 hertz (Hz).
Alternatively, the preset interval may be a preset time interval T_threshold. That is, after the i-th sampling of the robot, the time elapsed since the i-th sampling may be calculated in real time, and when this time interval is greater than or equal to the preset time interval T_threshold, the robot may perform the (i+1)-th sampling. The time interval may be the length of time between the current time T and the sampling time T_i of the i-th sampling, i.e. T - T_i.
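As an illustrative sketch only (not part of the claimed method), the two optional sampling triggers described above, a distance interval D_threshold and a time interval T_threshold, could be checked as follows; the threshold values and function name are assumptions for illustration:

```python
import math

D_THRESHOLD = 0.5   # assumed distance interval D_threshold, metres
T_THRESHOLD = 2.0   # assumed time interval T_threshold, seconds

def should_sample(current_pos, current_time, last_pos, last_time,
                  mode="distance"):
    """Decide whether the (i+1)-th sample is due, given the i-th sample."""
    if mode == "distance":
        # D = |P - P_i|: straight-line displacement since the i-th sample
        d = math.dist(current_pos, last_pos)
        return d >= D_THRESHOLD
    else:
        # T - T_i: elapsed time since the i-th sample
        return (current_time - last_time) >= T_THRESHOLD
```

In practice the check would run at the preset period or frequency mentioned above (e.g. every 10 ms).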
S330, the robot positions the intelligent equipment according to target pose information of the robot corresponding to the target moment of the signal strength meeting the first preset condition.
The first preset condition may be used to indicate that the signal strength of the Bluetooth broadcast signal of the intelligent device searched by the robot is very strong. Among the signal strengths recorded at a plurality of times during the movement of the robot, if the signal strength of the Bluetooth broadcast signal of the intelligent device satisfies the first preset condition at some time, the robot may consider the signal strength searched at that time to be very strong, and thus the distance between the robot and the intelligent device at that time to be very close. The robot can then locate the position information of the intelligent device according to the pose information of the robot at that time.
Alternatively, the first preset condition may be that the signal strength is strongest among the recorded signal strengths at a plurality of times.
It can be understood that when the signal strength of the intelligent device sampled by the robot is strongest at a certain sampling time, the position of the intelligent device can be considered closest to the position of the robot at that sampling time, with the fewest (or no) wall-type obstructions between them, so the position of the robot at that sampling time can be approximately taken as the position of the intelligent device. In this way, the robot can automatically locate the positions of the intelligent devices in the full scene while carrying out full-scene detection.
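The strongest-signal rule above can be sketched in a few lines; the (pose, RSSI) record format is an assumption for illustration:

```python
def locate_device(samples):
    """samples: list of (pose, rssi) pairs recorded for one smart device
    along the robot's path. Returns the pose at which the strongest signal
    strength was sampled, taken as the device's approximate position."""
    best_pose, _ = max(samples, key=lambda s: s[1])
    return best_pose

# RSSI is in dBm, so "strongest" means closest to zero (e.g. -42 > -75)
samples = [((1.0, 2.0), -75), ((3.5, 2.0), -42), ((5.0, 4.0), -60)]
# locate_device(samples) → (3.5, 2.0)
```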
It should be understood that the setting of the first preset condition is merely exemplary, and the specific first preset condition is not limited in this embodiment. For example, the first preset condition may be that the signal strength is greater than a preset threshold.
The preset threshold value can be a larger value, and is used for indicating that the signal strength of the Bluetooth broadcast signal of the intelligent device searched by the robot is very strong and is not interfered by wall shielding objects. The preset threshold value can be reasonably set according to practical application conditions, and the embodiment of the application is not limited to this.
It can be understood that, among the signal strengths recorded at a plurality of times during the movement of the robot, if the signal strength of the Bluetooth broadcast signal of the intelligent device reaches the preset threshold at some time, the robot may consider the signal strength searched at that time to be very strong and not attenuated by wall-type obstructions, so that the robot and the intelligent device at that time are very close and located in the same space. The robot can then locate the position information of the intelligent device according to the pose information of the robot at that time. In this way, the robot can directly position the intelligent device according to the pose information of the robot corresponding to the target time at which the signal strength exceeds the preset threshold, reducing the workload of the robot.
Optionally, the robot may also use the first preset condition that the signal strength is greater than the preset threshold to implement preliminary positioning of the intelligent device. When there are multiple target times at which the signal strength is greater than the preset threshold, the robot may judge whether the pose information of the robot corresponding to these target times falls within the same area. When they are not in the same area, the robot may correct the positioning of the intelligent device according to the pose information of the robot corresponding to the target time with the strongest signal strength. Optionally, when the pose information corresponding to the multiple target times falls within the same area, the robot may keep the preliminary positioning of the intelligent device unchanged.
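A hedged sketch of this preliminary positioning with a same-area consistency check; the RSSI threshold, the record format, and the region_of lookup are all assumptions for illustration:

```python
RSSI_THRESHOLD = -50  # assumed "very strong" threshold, dBm

def locate_with_threshold(samples, region_of):
    """samples: (pose, rssi) pairs; region_of: maps a pose to a region id.
    Keep only the samples above the threshold; if they all fall in one
    region, the preliminary fix stands, otherwise correct it to the
    pose of the strongest sample."""
    strong = [(p, r) for p, r in samples if r > RSSI_THRESHOLD]
    if not strong:
        return None
    regions = {region_of(p) for p, _ in strong}
    if len(regions) == 1:
        return strong[0][0]            # preliminary positioning kept
    best_pose, _ = max(strong, key=lambda s: s[1])
    return best_pose                   # corrected to strongest sample
```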
Optionally, when the robot locates the intelligent device according to the target pose information of the robot corresponding to the target moment when the signal strength meets the first preset condition, the robot may also acquire the region information of the environment detection, so as to determine the region where the intelligent device is located according to the target pose information and the region information, thereby implementing the location of the region where the intelligent device is located. Optionally, the robot may map the target pose information to the region information to determine a region where the target pose information is located, so that the robot may determine the region where the target pose information is located as the region where the intelligent device is located.
Alternatively, the robot may acquire an environment map, and the environment map may include therein area information of the robot environment probe. Therefore, the robot can map the target pose information to the environment map so as to determine a target area corresponding to the target pose information in the environment map. And the robot can further take the target area as the area where the intelligent device is located. The environment map may be a scene map of the entire working scene of the robot.
Optionally, when the robot completes detection of the whole working scene, the robot can construct a scene map of the whole working scene according to the detected environment information. Then the robot can acquire the signal intensity information of all the intelligent devices sampled during the detection period, and determine the strongest signal intensity sampled by each intelligent device and the pose information of the robot corresponding to the strongest signal intensity sampled. Therefore, the robot can position each intelligent device according to the pose information of the robot corresponding to the strongest signal intensity of each intelligent device.
Optionally, the robot may first screen out, for each intelligent device, the multiple signal strengths sampled by the robot and the pose information of the robot corresponding to each sampled signal strength. Then, for the multiple screened signal strengths of a single intelligent device, the robot may sort them from strongest to weakest, thereby obtaining the strongest signal strength sampled for that intelligent device and the pose information of the robot corresponding to it.
Therefore, the robot can obtain the strongest signal intensity sampled by each intelligent device and the pose information of the robot corresponding to the strongest signal intensity sampled by each intelligent device by sequencing the signal intensities sampled by each intelligent device. Therefore, the robot can position each intelligent device according to the pose information of the robot corresponding to the strongest signal intensity of each intelligent device.
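The screening-and-sorting step for all devices can be sketched as follows, assuming (for illustration only) a flat list of (device_id, pose, RSSI) records accumulated during detection:

```python
from collections import defaultdict

def strongest_per_device(records):
    """records: (device_id, pose, rssi) tuples accumulated over the whole
    detection run. Group the records by device, sort each group from
    strongest to weakest, and return {device_id: (pose, rssi)} for the
    strongest sample of each device."""
    by_device = defaultdict(list)
    for dev, pose, rssi in records:
        by_device[dev].append((pose, rssi))
    return {dev: sorted(s, key=lambda x: x[1], reverse=True)[0]
            for dev, s in by_device.items()}
```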
Optionally, because the pose information includes the position P of the robot in the constructed scene map, the robot may determine the position P of the robot in the map at the time the strongest signal strength of the intelligent device was sampled as the position of the intelligent device in the map.
Optionally, when pose information includes current pose information such as a current direction and a rotation angle of the robot, the robot may also determine a position of the intelligent device in the map according to the direction and the rotation angle of the robot corresponding to the strongest signal strength of the intelligent device.
Optionally, in some possible implementations, the robot may also determine pose information of the smart device according to one or more received signals of the smart device, where the pose information of the smart device includes, but is not limited to, a position, an orientation, angle information of the smart device with respect to a known coordinate system, and the like.
Optionally, the robot may also detect the entire working scene multiple times to respectively implement the construction of the scene map and the positioning of the intelligent device in the scene. The robot can detect the environmental information of the whole working scene at one time to construct a scene map of the whole working scene. And then the robot can detect the whole working scene once again to search the Bluetooth broadcasting signals and the signal intensity of the Bluetooth broadcasting signals of the whole scene, so that the autonomous serial connection and/or autonomous positioning of the robot on the intelligent equipment of the whole scene are realized.
Optionally, if the robot has previously constructed a scene map of the entire working scene, the scene map may be preconfigured in the robot, and at this time, the environment detection in the present application may be only used to search for bluetooth broadcast signals of the full scene, so as to implement autonomous serial connection and/or autonomous positioning of the robot to the intelligent device of the full scene.
Optionally, the robot may search only for the bluetooth broadcast signal of the specified device when detecting the environment, so as to realize the positioning of the robot to the specified device. Therefore, when a user forgets the storage position of some equipment such as the intelligent watch, the robot can also search the Bluetooth broadcast signal of the intelligent watch and continuously sample the signal intensity of the Bluetooth broadcast signal so as to determine the position of the intelligent watch according to the position of the robot when the strongest signal intensity of the Bluetooth broadcast signal is sampled, thereby reducing the searching range of the user.
Optionally, the robot may also perform the device positioning method provided in the present application during the environment detection process. That is, the robot may acquire the signal strength of the bluetooth broadcast signal of the intelligent device searched at each time before the current time and pose information corresponding to each time of the robot in the moving process, and then position the intelligent device according to the target pose information of the robot corresponding to the target time when the signal strength meets the first preset condition. Therefore, the robot can move and locate the area where the intelligent device is likely to be located in the environment detection process. And along with the movement of the robot, the robot can be continuously optimized according to the acquired information, so that the area where the intelligent equipment is located is gradually and accurately positioned.
In summary, the embodiment of the application provides a device positioning method, a robot can detect environmental information of an entire working scene, meanwhile, during detection of the entire working scene, the robot can also search intelligent devices of the entire working scene by searching surrounding bluetooth broadcast signals, automatic addition of the intelligent devices of the entire scene is completed, and positioning of the intelligent devices of the entire scene is realized by continuously sampling signal intensity of the intelligent devices and pose information of the robot during detection.
It should be noted that, in the embodiment of the present application, the wireless connection manner between the robot and the intelligent device in the scene is illustrated by using bluetooth connection, and the wireless connection manner between the robot and the intelligent device in the scene is not limited. When other wireless connection modes are adopted between the robot and the intelligent equipment in the scene to replace the Bluetooth connection mode, all the method embodiments are also applicable.
For example, when Wi-Fi connection is adopted between the robot and the intelligent devices in the scene, during the detection period of the robot on the whole working scene, the robot can search the intelligent devices in the whole working scene by searching surrounding Wi-Fi signals, automatic addition of the intelligent devices in the whole scene is completed, and positioning of the intelligent devices in the whole scene is realized by continuously sampling the signal intensity of the Wi-Fi signals of the intelligent devices and the pose information of the robot during the detection period.
In some embodiments, the robot may also have autonomous zoning capability to zone the scene map after constructing the scene map of the entire work scene. And the robot can determine the respective area of the intelligent devices in the scene.
Referring to fig. 4, fig. 4 is a flow chart illustrating a device positioning method according to an embodiment of the present application. As shown in fig. 4, the device positioning method may include:
S410, the robot starts environment detection.
In some embodiments, after the robot starts the environment detection, the robot may perform the method steps of S421 and S430 to implement automatic generation of the room partition map. Wherein:
S421, the robot constructs a full scene map according to the environment detection result.
Alternatively, the robot may determine an origin and a coordinate system of the scene map to be created in advance. The selection rule of the origin may be set before shipment. For example, the origin may be a location where the charging stake of the robot is located. For another example, the origin may be an initial position at the time of starting the environment detection by the robot.
After the origin and the coordinate system are determined, the robot can acquire the pose information based on the sensor during the movement of the robot in the working scene. The pose information may include coordinates of a current location of the robot. The robot can obtain the coordinates of the environmental image features in the map coordinate system by acquiring the positions of the image features of the surrounding environment information relative to the robot and the coordinates of the robot in the map coordinate system. Therefore, the robot can construct a scene map of the working scene in the coordinates of the map coordinate system by moving in the working scene and continuously acquiring the coordinates of the surrounding environment feature points.
In some embodiments, the scene map constructed by the robot may take the form of a grid map. A grid map divides the environment into a series of grids, where each grid may hold one piece of map information. Alternatively, the map information for the environment may be divided into three states: known occupancy (the robot has detected the cell and there is an obstacle; this may refer to any area the robot cannot pass), known non-occupancy (the robot has detected the cell and there is no obstacle impeding the robot's movement), and unknown (the robot has not yet detected the cell).
In some embodiments, the robot may read the scene map that has been constructed at the current moment in real time, and navigate the subsequent motion path of the robot through the boundary points in the scene map, so as to continuously perform environment detection of the unknown area until environment detection of the whole scene is achieved.
Alternatively, the robot may define a map point at the interface between an unknown grid and a known non-occupied grid as a boundary point. That is, if the current grid is known non-occupied and at least 1 of its 8 surrounding grids is unknown, the robot may regard the current grid as a boundary point.
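The 8-neighbour boundary-point test described above can be sketched as follows; encoding the three grid states as integers (0 unknown, 1 known non-occupied, 2 known occupied) is an assumption for illustration:

```python
def is_boundary(grid, row, col):
    """A known non-occupied cell is a boundary point if at least one of
    its 8 neighbours is unknown (grid values: 0 unknown, 1 free, 2 occupied)."""
    if grid[row][col] != 1:
        return False
    rows, cols = len(grid), len(grid[0])
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            r, c = row + dr, col + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                return True
    return False
```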
The robot can continuously acquire boundary points in the scene map which is already constructed at the current moment during detection so as to select one boundary point from the boundary points for navigation. As an embodiment, the robot may randomly select a boundary point for navigation.
Alternatively, the robot may perform the environment detection with the shortest moving path. Therefore, the robot can finish environment detection in the shortest time, the detection efficiency of the robot is improved, and the intelligent experience of a user on intelligent products is improved.
In some embodiments, the robot may also evaluate the navigation cost of each boundary point, and then select one boundary point with the lowest navigation cost for navigation. Based on this, the robot can realize the environment detection with the shortest moving path.
Optionally, for any selected boundary point, the navigation cost of the boundary point may be obtained as follows: the robot first calculates the distance D from its current position to the boundary point and the path curvature P of the path from the current position to the boundary point; the robot then calculates the navigation cost C of the boundary point from the distance D and the path curvature P as C = D + P × D.
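A minimal sketch of the navigation cost C = D + P × D and the lowest-cost selection; the candidate tuple layout is an assumption for illustration:

```python
def navigation_cost(distance, path_curvature):
    """C = D + P * D: boundary points that are far away or require a
    twisty path cost more; the robot navigates to the cheapest one."""
    return distance + path_curvature * distance

def pick_boundary_point(candidates):
    """candidates: list of (point, distance D, path curvature P) tuples."""
    return min(candidates, key=lambda c: navigation_cost(c[1], c[2]))[0]
```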
Alternatively, the robot may first find all boundary points in the map based on the scene map that has been constructed at the current time. The robot may then perform reachability analysis on all boundary points based on the current scene map to reject non-reachable boundary points. Therefore, the robot can select one boundary point with the lowest navigation cost from all the reachable boundary points to navigate.
Among them, reachability may refer to the ease with which the robot moves from the current position to the boundary point. If a path exists so that the robot can reach the boundary point, the boundary point is reachable. Conversely, if there is no path so that the robot can reach the boundary point, the boundary point is not reachable.
Optionally, the boundary point searching may adopt a global traversal searching mode, a local searching mode, or a combination of global traversal searching and local searching mode.
Optionally, the local search may use a surrounding boundary point traversal search method based on the current robot pose, that is, the current robot is used as a center, and the surrounding preset radius R is searched to meet the defined boundary point. Alternatively, local search may also be performed by local fast random search. Alternatively, the local search may also be a combination of a local fast random search and a surrounding boundary point traversal search based on the current robot pose.
It will be appreciated that the specific manner of searching for the boundary points is not limited in the embodiments of the present application.
S430, the robot divides the room of the full scene map.
In the embodiment of the application, after the robot constructs the scene map of the whole working scene, the scene map can be divided into areas according to the room partitioning technology to obtain the room partitioning map. Thus, the robot can obtain a room partition map consistent with the real house type of the working scene.
It can be appreciated that the specific room partitioning technique is not limited in the embodiments of the present application, and the robot may partition a reasonable room area according to the actual environment. For example, the robot may divide the scene map into a plurality of room areas according to whether or not the acquired environment image includes image features such as a door and a window.
In other embodiments, after the robot starts the environment detection, the robot may also perform the method steps of S422 and S423 to enable information confirmation and association of the smart devices in the environment during the environment detection. Wherein:
S422, the robot searches for Bluetooth broadcast signals.
S423, the robot acquires the equipment information of the intelligent equipment when searching the Bluetooth broadcast signal of the intelligent equipment.
Optionally, when the robot searches bluetooth broadcast signals of a plurality of intelligent devices, the robot can respectively establish bluetooth connection with each intelligent device, and after the connection is successful, the robot can actively acquire and record information of each intelligent device, so as to complete automatic association with each intelligent device in the environment. Therefore, after the whole environment detection is finished, the robot can realize autonomous serial connection of intelligent devices in the whole scene.
In still other embodiments, after the robot starts the environment detection, the robot may further perform the method steps of S422 and S424 to enable signal strength sampling of the smart devices in the scene during the environment detection. Wherein:
S422, the robot searches for Bluetooth broadcast signals.
S424, the robot acquires the signal strength of the searched Bluetooth broadcast signal and the pose information of the robot, that is, the signal strength of the Bluetooth broadcast signal of the intelligent device searched at each time and the corresponding pose information of the robot at each time.
It can be understood that after the robot starts the environment detection, the robot can synchronously construct a full scene map, confirm information of intelligent equipment in the environment and sample signal intensity. That is, the robot may perform the method steps of S421 and S430, the method steps of S422 and S423, and the method steps of S422 and S424 simultaneously after starting the environment detection.
Optionally, the robot may also perform any one or both of the method steps of S421 and S430, the method steps of S422 and S423, and the method steps of S422 and S424 after starting the environmental detection, which is not limited in the embodiment of the present application.
In some embodiments, after performing the method steps of S421 and S430, the method steps of S422 and S423, and the method steps of S422 and S424, the robot may further perform the method steps of S440, S450, S460, S470 to achieve positioning of the smart device in the environment. Wherein:
S440, the robot sorts and screens the signal strengths to determine the strongest signal strength of each intelligent device and the corresponding pose information of the robot.
S450, the robot performs pose mapping to determine the room corresponding to the pose information of the robot.
In the embodiment of the application, after the robot obtains the room partition map of the whole working scene, the robot can determine the room where the robot is currently located according to the position of the robot in the map.
Alternatively, because the robot has acquired and recorded the signal strength of the bluetooth broadcast signal of the intelligent device searched at each moment and the pose information corresponding to each moment, the robot can map the pose information corresponding to each moment of the robot to the partitioned room partition map according to the recorded pose information corresponding to each moment of the robot, so that the robot can correspondingly determine the room in which the robot corresponds at each moment.
S460, the robot confirms the room where the intelligent device is located.
It can be understood that when the signal intensity of the intelligent device acquired by the robot at a certain sampling moment is strongest, the position of the robot at the sampling moment can be approximately positioned as the position of the intelligent device. Therefore, in the embodiment of the application, the robot can determine the room area to which the intelligent device belongs according to the pose information of the robot corresponding to the strongest signal intensity of the intelligent device.
Optionally, the robot may perform pose mapping on pose information of the robot corresponding to the sampled strongest signal strength of the intelligent device, so as to determine a target room corresponding to the pose information of the robot corresponding to the sampled strongest signal strength of the intelligent device. The target room can be used as the room where the intelligent device belongs. Therefore, the robot can automatically divide the area of the intelligent equipment by taking the room as a unit, replaces a mode of manually dividing the area of the equipment, realizes more accurate equipment positioning, reduces the use difficulty of crowds such as old people, children and the like, and provides more intelligent experience for users.
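The pose-to-room mapping can be sketched as a point-in-region lookup; representing rooms as axis-aligned rectangles is a simplification of the real partition map, used here purely for illustration:

```python
def room_of_device(strongest_pose, rooms):
    """rooms: {room_name: (xmin, ymin, xmax, ymax)} rectangles standing in
    for the partitioned room map. The device is assigned to the room
    containing the pose at which its strongest signal was sampled."""
    x, y = strongest_pose
    for name, (xmin, ymin, xmax, ymax) in rooms.items():
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return name
    return None
```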
S470, the robot generates an intelligent device distribution map of the whole scene according to the room in which each intelligent device is located.
Optionally, after determining the room to which each intelligent device in the scene belongs, the robot may mark information of the intelligent devices in the corresponding room area in the room partition map, so as to complete construction of the intelligent device distribution map of the whole scene.
It can be appreciated that, for other intelligent devices without wireless communication capability, the robot may also support manual addition by the user, which is not specifically described in the embodiments of the present application.
Further, in some embodiments, the robot can also construct a full-scene intelligent device distribution map, so as to realize accurate control of intelligent devices and provide better intelligent service for users.
In some embodiments, when the robot detects that the intelligent device is newly added in the environment, the robot can also realize positioning of the intelligent device based on the existing environment map.
Alternatively, the robot may receive a first instruction sent by the user, where the first instruction indicates that a newly added intelligent device exists in the environment. The robot may then move within a target area in response to the first instruction, where the target area is the area of the environment map in which the robot is located when it searches out the Bluetooth broadcast signal of the newly added intelligent device. The robot may then acquire the recorded pose information of the robot at each time during its movement and the signal strength of the Bluetooth broadcast signal of the newly added intelligent device searched at that time. In this way, the robot can position the newly added intelligent device based on the method described above.
Thus, when a user indicates to the robot that the newly added intelligent device exists in the environment, the robot can search the Bluetooth broadcast signal of the newly added intelligent device according to the area distribution information in the existing environment map. When the robot searches the Bluetooth broadcast signal of the newly added intelligent device in the target area, the robot can move in the target area and continuously record the pose information of the robot and the signal intensity of the Bluetooth broadcast signal of the newly added intelligent device searched by the robot in the moving process. Therefore, the robot can determine the target pose information of the robot corresponding to the target moment of which the signal strength meets the first preset condition according to the recorded information of the robot at a plurality of moments in the moving process, so as to determine the position information of the newly-added intelligent equipment according to the target pose information.
Alternatively, when the robot does not search for the bluetooth broadcast signal of the newly added smart device in a certain area, the robot may directly move to the next area without moving in the area to search for the bluetooth broadcast signal of the newly added smart device.
Alternatively, the user may indicate the number of newly added smart devices when the user indicates to the robot that there are newly added smart devices in the environment. Thus, when the robot searches the Bluetooth broadcast signals of the indicated number of the newly added intelligent devices, the robot can stop searching the Bluetooth broadcast signals of other newly added intelligent devices and determine the position information of the indicated number of the newly added intelligent devices.
Optionally, the user may not indicate the number of newly added smart devices when the newly added smart devices exist in the environment to the robot. At this time, the robot can search the bluetooth broadcast signals of all the newly added intelligent devices in the whole scene and determine the position information of all the newly added intelligent devices.
Referring to fig. 5, fig. 5 is a schematic flow chart of a device positioning method according to an embodiment of the present application. As shown in fig. 5, the device positioning method may include:
S510, the robot acquires a distribution map of the full-scene intelligent devices.
In the embodiment of the application, the robot can acquire the distribution map of the full-scene intelligent devices on the premise that the room partition map and the intelligent device distribution map of the whole working scene have been constructed. The distribution map of the full-scene intelligent devices includes the room distribution of the full scene and the distribution of intelligent devices in each room. Thus, when the user interacts with the robot, the robot can provide accurate intelligent service for the user based on the intelligent devices in the room where the user is located. For example, when the robot detects that the user says "turn off the light", the robot accurately turns off only the light of the room in which the user is located.
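A toy sketch of resolving such an ambiguous command against the full-scene device distribution map; the map contents and the helper name are assumptions for illustration:

```python
device_map = {  # hypothetical full-scene smart-device distribution map
    "living room": ["ceiling light", "smart tv"],
    "bedroom": ["ceiling light", "smart speaker"],
}

def resolve_target(device_type, user_room):
    """Pick the device of the requested type located in the user's room,
    so only that room's device is controlled."""
    for dev in device_map.get(user_room, []):
        if device_type in dev:
            return (user_room, dev)
    return None
```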
S520, the robot receives interaction information sent by a user.
When a user interacts with the robot, the robot can receive interaction information sent by the user. The interaction information may be voice information or gesture information. The present application is not limited in this regard.
Alternatively, the interaction information may contain control instructions for the designated smart device. After receiving the interaction information sent by the user, the robot can perform data processing on the interaction information to extract key information. The key information may be control instructions for a specific smart device. Alternatively, the specified smart device contained in the interaction information may be one or more in the current scenario.
S530, the robot performs user identification to confirm the identity of the user.
In the embodiment of the application, when the robot receives the interaction information sent by the user, the robot can identify the user who sends the interaction information currently, so as to confirm the identity of the user.
Optionally, the robot may pre-store feature information of one or more authority users, such as face features and voiceprint features of the authority users, so that the robot may perform identity recognition through recognition modes such as face recognition and voiceprint recognition, so as to confirm whether the user who sends the interaction information currently is the authority user. The authority user can be a user with the use authority of the robot. The specific manner of user identification is not limited in the embodiments of the present application.
Optionally, after receiving the interaction information, if only one instance of the designated smart device is detected in the current scene, the robot can, after confirming the user's identity, directly control that device to execute the operation corresponding to the user's control instruction. For example, taking a home environment as the current scene: when the user issues a voice instruction to turn on the television and there is only one smart television in the home, the robot can, after confirming the user's identity, control that smart television to power on in response to the instruction.
Optionally, after receiving the interaction information, if multiple instances of the designated smart device are detected in the current scene, the robot can, after confirming the user's identity, identify the specific scene the user is in (such as the room where the user is located). Based on the distribution of smart devices in that scene, the robot can accurately determine the one designated smart device the user intends to interact with, and then control it to execute the corresponding operation. In this way, the robot can accurately control the devices in the user's room even when the user does not specify a particular device.
S540, the robot performs scene recognition.
In this embodiment of the application, after the robot confirms the identity of the user who sent the interaction information, it can identify the scene the user is in through scene recognition, thereby determining the room where the user currently is. By combining scene recognition with user recognition, the robot accurately controls the smart devices in that scene and provides a more intelligent service.
Alternatively, the robot may identify the user's scene through techniques such as relocalization and room confirmation. In one approach, upon receiving the interaction information, the robot locates the user's current position and maps it onto the distribution map of the full-scene smart devices, thereby determining the user's current room and the smart devices distributed in it.
Optionally, if the robot is far from the user when it receives the interaction information (i.e., not in the same room), the robot can first determine its own current position, capture a scene image containing the user, and estimate the distance between the user and itself from that image. From this distance and the robot's own position, it determines the user's current position and maps it onto the distribution map of the full-scene smart devices, thereby determining the user's current room and the smart devices distributed in it.
Typically, however, the robot is near the user (i.e., in the same room) when it receives the interaction information. In that case, the robot may alternatively determine the user's room directly from its own current position. That is, S540 may include S540a and S540b:
S540a, the robot performs relocalization to determine its current position.
Relocalization refers to determining the robot's corresponding position on a previously constructed environment map by sensing information about the surrounding scene.
Optionally, the robot may determine its current position from the scene features around it. Specifically, the robot collects data about the surrounding scene, extracts scene features, and performs similarity matching against the scene map generated during full-scene detection, thereby confirming its own current position.
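As an illustration of the similarity-matching idea (not the application's actual algorithm), the sketch below matches the current scene's feature vector against feature vectors stored with known map poses using cosine similarity; the keyframe representation and the 0.8 threshold are assumptions of this example.

```python
import math

# Illustrative relocalization sketch: each map keyframe is a
# (pose, feature_vector) pair; the pose of the best-matching
# keyframe is taken as the robot's position. Feature extraction
# (e.g., visual descriptors) is assumed to happen elsewhere.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def relocalize(current_features, keyframes, min_score=0.8):
    """Return the pose of the keyframe best matching the current scene, or None."""
    best_pose, best_score = None, min_score
    for pose, features in keyframes:
        score = cosine_similarity(current_features, features)
        if score > best_score:
            best_pose, best_score = pose, score
    return best_pose
```

Returning `None` when no keyframe clears the threshold lets the caller fall back to other localization strategies.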
S540b, the robot confirms the room to determine the room in which the user is located.
Optionally, after the robot confirms its own current position, it can map that position into the distribution map of the full-scene smart devices, thereby obtaining the room it is currently in and the smart devices distributed in that room. That room and its smart devices can then be taken as the user's current room and the smart devices available in the user's current room.
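The mapping from a position to a room and its devices can be sketched as a simple point-in-region lookup. This is a minimal illustration, assuming axis-aligned rectangular rooms and a hypothetical map layout; the application itself does not prescribe any particular map representation.

```python
# Minimal sketch: map an (x, y) position onto the device distribution
# map to find the room and the smart devices in it. The room layout
# and device lists below are illustrative assumptions.

ROOMS = {
    "bedroom": {"bounds": (0, 0, 4, 3), "devices": ["smart_light", "smart_speaker"]},
    "living_room": {"bounds": (4, 0, 10, 6), "devices": ["smart_tv", "smart_light"]},
}

def locate_room(x, y, rooms=ROOMS):
    """Return (room_name, devices_in_room) for a position, or (None, [])."""
    for name, info in rooms.items():
        x0, y0, x1, y1 = info["bounds"]
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name, info["devices"]
    return None, []

print(locate_room(1.5, 2.0))  # ('bedroom', ['smart_light', 'smart_speaker'])
```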
Alternatively, the robot may skip the method step of S530 and directly execute S540 (S540a and S540b); that is, after receiving the interaction information, the robot may identify the user's scene without first identifying the user's identity.
Optionally, the robot may also execute S530 and S540 (S540a and S540b) in parallel; that is, after receiving the interaction information, the robot recognizes the user's scene while recognizing the user's identity, and then performs the subsequent steps according to both recognition results.
It will be appreciated that the embodiments of the present application do not limit the order in which S530 and S540 (S540a and S540b) are executed. For example, the robot may execute S540 first and S530 afterwards: after receiving the interaction information, it first identifies the user's scene, then identifies the user's identity, and performs the subsequent steps according to both recognition results.
S550, the robot executes preset operation corresponding to the interaction information according to the intelligent equipment in the room where the user is located.
In this embodiment of the application, since the smart devices in the user's current room are the ones the user is most likely to control and use, the robot executes the preset operation corresponding to the interaction information using the smart devices in that room.
Optionally, after receiving the interaction information, if multiple instances of the designated smart device are detected in the current scene, the robot can, after confirming the user's identity, take the designated smart device in the user's room as the one the user intends to interact with, and then control it to execute the corresponding operation. In this way, the robot accurately controls the devices in the user's room even when the user does not specify a particular device.
For example, referring to fig. 6, when the user 601 tells the robot 600 to "turn off the light" without designating a room, the robot can determine that the user 601 is in the room 7 shown in fig. 6, determine that room 7 contains a smart light, and autonomously decide to turn off the smart light in room 7.
Optionally, when the user tells the robot to "turn off the light" and explicitly refers to a light outside the closed room, the robot can analyze the instruction, extract the key information indicating the smart light outside the room, and thereby accurately control that light as well.
For another example, when the user is in a bedroom and wishes to listen to music or watch a movie, the user interacts with the robot so that the robot receives interaction information about playing music or a movie. By analyzing this information, the robot determines that the user's control instruction is to have a media playback device play music or a movie; after confirming the user's identity, the robot turns on only the corresponding media playback device in the bedroom.
Optionally, after determining the designated smart device the user intends to interact with, the robot may also make intelligent recommendations based on the user's preferences. For example, when the robot turns on the corresponding media playback device in the bedroom, it may recommend audio and video works matching the preferences associated with the user's identity.
In summary, with this device positioning method, once the robot has acquired the distribution map of the full-scene smart devices, it can combine the device distribution of the user's scene with user information such as identity and preferences, thereby accurately managing the smart devices in that scene and providing a more intelligent whole-house experience.
Referring to fig. 7, fig. 7 shows a system architecture diagram of a device positioning method according to an embodiment of the present application. In this system, when the robot performs a full-scene detection task, it acquires scene information through sensors such as a camera, an IMU, a wheel-speed sensor, an optical flow sensor, and a collision sensor, and autonomously constructs a full-scene map through environment detection, path navigation, and SLAM techniques. Meanwhile, during full-scene detection, the robot searches for the smart devices in the full scene via Bluetooth Low Energy (BLE) signals, continuously sampling the signal strength of each smart device's BLE signal together with the robot's pose at each sampling time.
After the robot completes construction of the full-scene map, it can partition the map into regions using a room-partitioning technique to obtain a room partition map. The robot then selects, for each smart device, the strongest sampled signal strength and the robot pose at the corresponding sampling time, and maps that pose onto the room partition map. In this way the robot determines the room of each smart device and completes construction of the full-scene smart-device distribution map.
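As a hedged sketch of this device-localization step: for each smart device, take the sample with the strongest BLE RSSI, use the robot's pose at that sample as the device's approximate location, and look up the room for that pose. The `pose_to_room` callback stands in for the room-partition lookup and the data shapes are assumptions of this example.

```python
# Illustrative sketch of assigning each smart device to a room from the
# BLE samples collected during full-scene detection. Each sample is a
# (rssi_dbm, pose) pair; pose_to_room maps a pose to a room name and is
# assumed to be provided by the room-partition map.

def assign_device_rooms(samples_by_device, pose_to_room):
    """samples_by_device: {device_id: [(rssi_dbm, (x, y)), ...]}"""
    device_rooms = {}
    for device_id, samples in samples_by_device.items():
        # RSSI is in dBm (negative values), so the strongest
        # signal is simply the maximum value.
        best_rssi, best_pose = max(samples, key=lambda s: s[0])
        device_rooms[device_id] = pose_to_room(best_pose)
    return device_rooms
```

This matches the first preset condition of claim 8 (strongest signal among the recorded moments); other conditions, such as a strength threshold, would only change the selection rule.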
After the robot has acquired the full-scene smart-device distribution map, when a user interacts with it, the robot can perform user recognition to confirm the user's identity and scene recognition to find the user's current room. The robot can then control the smart devices in that scene by combining the scene information with the user's identity, providing a more intelligent service.
Still further embodiments of the present application provide a movable electronic device, which may include a memory and one or more processors. The memory is coupled to the processor and stores computer program code comprising computer instructions. When the processor executes the computer instructions, the electronic device performs the functions or steps performed by the robot in the method embodiments described above. For the structure of the electronic device, reference may be made to the structure of the robot 100 shown in fig. 1.
Still further embodiments of the present application provide a device positioning apparatus, which may be applied to the foregoing movable electronic device. The apparatus is configured to execute the functions or steps performed by the robot in the method embodiments above.
Alternatively, the device positioning apparatus may specifically be a chip, component, or module comprising a processor and a memory connected to it. The memory stores computer-executable instructions, and when the apparatus runs, the processor executes those instructions so that the chip, component, or module performs the device positioning method executed by the robot in the above method embodiments.
Embodiments of the present application also provide a chip system including at least one processor and at least one interface circuit. The processors and interface circuits may be interconnected by wires. For example, the interface circuit may be used to receive signals from other devices (e.g., a memory of an electronic apparatus). For another example, the interface circuit may be used to send signals to other devices (e.g., processors). The interface circuit may, for example, read instructions stored in the memory and send the instructions to the processor. The instructions, when executed by the processor, may cause the electronic device to perform the various steps performed by the robot in the embodiments described above. Of course, the chip system may also include other discrete devices, which are not specifically limited in this embodiment of the present application.
Embodiments of the present application also provide a computer storage medium having stored therein computer instructions that, when executed on an electronic device, cause the electronic device to perform the above-described related method steps to implement the device positioning method performed by the robot in the above-described embodiments.
Embodiments of the present application also provide a computer program product which, when run on a computer, causes the computer to perform the above-mentioned related steps to implement the device positioning method performed by the robot in the above-mentioned embodiments.
The electronic device, the computer storage medium, the computer program product, or the chip provided in this embodiment are used to execute the corresponding methods provided above, so that the beneficial effects thereof can be referred to the beneficial effects in the corresponding methods provided above, and will not be described herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the division into the functional modules described above is illustrated. In practical applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such an understanding, the technical solution of the embodiments of the present application — in essence, or the part contributing to the prior art, or all or part of the technical solution — may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is merely a specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (23)

1. A device positioning method, applied to a movable electronic device, the method comprising:
acquiring record information of a plurality of moments in the moving process of the electronic equipment; wherein the recording information for each of the plurality of time instants includes: the pose information of the electronic equipment at the moment and the signal intensity of the wireless signal of the intelligent equipment searched by the electronic equipment at the moment;
determining target pose information of the electronic equipment corresponding to target time with signal strength meeting a first preset condition according to the recorded information of the multiple times;
and determining the position information of the intelligent equipment according to the target pose information.
2. The method of claim 1, wherein determining location information of the smart device from the target pose information comprises:
acquiring region information in the moving process of the electronic equipment;
and determining the area where the intelligent equipment is located according to the target pose information and the area information.
3. The method of claim 2, wherein the acquiring the region information during the movement of the electronic device comprises:
acquiring an environment map, wherein the environment map comprises area information;
the determining the area where the intelligent device is located according to the target pose information and the area information includes:
and acquiring a target area corresponding to the target pose information in the environment map as an area where the intelligent equipment is located.
4. The method according to claim 3, wherein the area information includes room area information, and the obtaining the target area corresponding to the target pose information in the environment map as the area where the intelligent device is located includes:
and acquiring a target room corresponding to the target pose information in the environment map as a room where the intelligent equipment is located.
5. The method of claim 3, wherein the environment map is pre-configured in the electronic device;
or,
the environment map is generated according to environment information detected by the electronic equipment in the moving process.
6. The method according to any one of claims 2-5, further comprising:
and generating an equipment distribution map according to the area where the intelligent equipment is located, wherein the equipment distribution map comprises intelligent equipment distributed in different areas.
7. The method of claim 6, wherein the method further comprises:
when the interactive information of the user is obtained, determining the area where the user is located;
determining all intelligent devices distributed in the area where the user is located based on the device distribution map;
and responding to the interaction information according to all the intelligent devices, and executing preset operation corresponding to the interaction information.
8. The method according to any one of claims 1-7, wherein the first preset condition comprises: the signal intensity is strongest in the recorded information of the plurality of moments.
9. The method according to any one of claims 1-8, further comprising:
when the wireless signal of the intelligent equipment is detected to meet a second preset condition in the moving process of the electronic equipment, establishing wireless connection with the intelligent equipment;
and when the wireless connection is successfully established, the intelligent equipment is associated with the electronic equipment.
10. The method of claim 9, wherein the second preset condition comprises: the electronic device searches the wireless signal of the intelligent device for the first time, or the signal strength of the wireless signal of the intelligent device reaches a preset strength threshold.
11. The method of claim 9, wherein associating the smart device with the electronic device when the wireless connection establishment is successful comprises:
when the wireless connection is successfully established, acquiring equipment information of the intelligent equipment;
and associating the intelligent device with the electronic device based on the device information.
12. The method of any of claims 9-11, wherein after associating the smart device with the electronic device, the method further comprises:
and generating an environment map containing intelligent device information according to the intelligent devices associated with the electronic devices.
13. The method according to any of the claims 9-12, wherein before detecting that the wireless signal of the smart device satisfies the second preset condition during the movement of the electronic device, the method further comprises:
and searching for wireless signals of the intelligent equipment when the interactive information of the user to the intelligent equipment is acquired.
14. The method according to any one of claims 1-13, further comprising:
and acquiring the signal intensity of the wireless signal of the intelligent device which is currently searched and the pose information of the electronic device at intervals preset in the moving process of the electronic device.
15. The method according to claim 14, wherein the preset interval is a preset distance interval, and the obtaining, at preset intervals, a signal strength of a wireless signal of the currently searched smart device and pose information of the current electronic device includes:
acquiring the signal intensity of a wireless signal of the intelligent equipment searched at the ith moment and pose information of the electronic equipment at the ith moment;
when the displacement length between the current moment and the i-th moment reaches the preset distance interval, acquiring the signal intensity of the wireless signal of the intelligent device searched at the current moment, the pose information of the electronic device at the current moment, the signal intensity of the wireless signal of the intelligent device searched at the i+1-th moment and the pose information of the electronic device at the i+1-th moment.
16. The method according to claim 14, wherein the preset interval is a preset time interval, and the obtaining the signal strength of the wireless signal of the currently searched smart device and the pose information of the current electronic device at intervals of preset intervals includes:
acquiring the signal intensity of a wireless signal of the intelligent equipment searched at the ith moment and pose information of the electronic equipment at the ith moment;
when the time length between the current moment and the ith moment reaches the preset time interval, acquiring the signal strength of the wireless signal of the intelligent device searched at the current moment, the pose information of the electronic device at the current moment, the signal strength of the wireless signal of the intelligent device searched at the (i+1)th moment and the pose information of the electronic device at the (i+1)th moment.
17. The method according to claim 1 or 2, characterized in that the method further comprises:
responding to a first instruction, wherein the electronic equipment moves in a target area, the first instruction is used for indicating that newly added intelligent equipment exists in the environment, and the target area is an area where the electronic equipment is located in a preset environment map when the electronic equipment searches wireless signals of the newly added intelligent equipment;
the obtaining the record information of a plurality of moments in the moving process of the electronic equipment comprises the following steps:
acquiring record information of the electronic equipment at a plurality of moments in the moving process, wherein the record information of each moment in the plurality of moments comprises: the pose information of the electronic equipment at the moment and the signal strength of the wireless signal of the newly added intelligent equipment searched by the electronic equipment at the moment.
18. The method of any of claims 1-17, wherein the path during movement of the electronic device is a shortest movement path.
19. The method of any one of claims 1-18, wherein the electronic device is a robot or a smart car.
20. A movable electronic device, the electronic device comprising a memory and one or more processors; the memory is coupled to the processor; the memory is for storing computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of any of claims 1-19.
21. A chip system, wherein the chip system is applied to a movable electronic device; the system-on-chip includes one or more interface circuits and one or more processors; the interface circuit and the processor are interconnected through a circuit; the interface circuit is configured to receive a signal from a memory of the electronic device and to send the signal to the processor, the signal including computer instructions stored in the memory; the electronic device, when executing the computer instructions, performs the method of any of claims 1-19.
22. A computer storage medium comprising computer instructions which, when run on a movable electronic device, cause the electronic device to perform the method of any of claims 1-19.
23. A computer program product, characterized in that the computer program product, when run on a computer, causes the computer to perform the method according to any of claims 1-19.
CN202210794647.0A 2022-07-07 2022-07-07 Equipment positioning method and movable electronic equipment Pending CN117412238A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210794647.0A CN117412238A (en) 2022-07-07 2022-07-07 Equipment positioning method and movable electronic equipment

Publications (1)

Publication Number Publication Date
CN117412238A 2024-01-16

Family

ID=89496677

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210794647.0A Pending CN117412238A (en) 2022-07-07 2022-07-07 Equipment positioning method and movable electronic equipment

Country Status (1)

Country Link
CN (1) CN117412238A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118102445A (en) * 2024-04-28 2024-05-28 中孚安全技术有限公司 Intelligent positioning system, method and medium for wireless signals in places


Similar Documents

Publication Publication Date Title
AU2022201137B2 (en) System and method for monitoring a property using drone beacons
CN106444786B (en) The control method and device and electronic equipment of sweeping robot
KR20180039437A (en) Cleaning robot for airport and method thereof
CN104887155A (en) Intelligent sweeper
KR102600269B1 (en) Cleaning robot for airport and method thereof
CN110238838B (en) Autonomous moving apparatus, autonomous moving method, and storage medium
CN117412238A (en) Equipment positioning method and movable electronic equipment
CN109388238A (en) The control method and device of a kind of electronic equipment
US20210125369A1 (en) Drone-assisted sensor mapping
WO2022183936A1 (en) Smart home device selection method and terminal
CN114945133A (en) Method, device and system for determining equipment position
TWI768724B (en) Method for positioning in a three-dimensional space and positioning system
US11527926B2 (en) Receiver, reception method, transmitter and transmission method
JP7306546B2 (en) AUTONOMOUS MOBILE DEVICE, AUTONOMOUS MOVEMENT METHOD AND PROGRAM
CN114071003B (en) Shooting method and system based on optical communication device
WO2023013131A1 (en) Information processing system, and information processing program
EP4207100A1 (en) Method and system for providing user interface for map target creation
TWI788217B (en) Method for positioning in a three-dimensional space and positioning system
CN117257170A (en) Cleaning method, cleaning display method, cleaning apparatus, and storage medium
CN117268382A (en) Robot navigation method and device
CN118233822A (en) Sound field calibration method and electronic equipment
CN112581530A (en) Indoor positioning method, storage medium, equipment and system
CN118233821A (en) Sound field calibration method and electronic equipment
CN118057353A (en) Display method, storage medium and terminal device
CN116938618A (en) Method for determining controlled device, method for determining target user and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination