CN116419159A - Indoor positioning method and electronic equipment - Google Patents

Indoor positioning method and electronic equipment

Info

Publication number
CN116419159A
CN116419159A (application CN202111667028.7A)
Authority
CN
China
Prior art keywords
fingerprint
location
user
electronic device
room
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111667028.7A
Other languages
Chinese (zh)
Inventor
刘俊材
许振强
高翔宇
赵安
谢波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202111667028.7A
Publication of CN116419159A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02: Services making use of location information
    • H04W 4/029: Location-based management or tracking services
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 12/00: Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W 12/60: Context-dependent security
    • H04W 12/69: Identity-dependent
    • H04W 12/79: Radio fingerprint
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30: Services specially adapted for particular environments, situations or purposes
    • H04W 4/33: Services specially adapted for particular environments, situations or purposes for indoor environments, e.g. buildings

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Telephone Function (AREA)

Abstract

The application discloses an indoor positioning method and an electronic device. The method collects location fingerprints based on user behaviors, determines the location corresponding to each fingerprint from the behavior, and then clusters the collected fingerprints with a clustering algorithm to obtain a clustering result. When a fingerprint to be positioned is later collected, its category is determined from the clustering result, and hence the location it corresponds to. Because fingerprints are collected based on behaviors, the location corresponding to each fingerprint can be determined automatically, sparing the user the trouble of manually calibrating locations, simplifying operation, and improving the efficiency of indoor positioning and indoor environment recognition.

Description

Indoor positioning method and electronic equipment
Technical Field
The application relates to the technical field of terminals and communication, in particular to an indoor positioning method and electronic equipment.
Background
With the popularity of the global positioning system (GPS), outdoor positioning technology has matured, and recommendations and services based on outdoor positioning have become increasingly abundant. Indoors, however, signal attenuation, multipath effects, and other factors make positioning and environment recognition far less reliable, which limits the targeted services that can be provided to users and calls for improvement.
Disclosure of Invention
The application provides an indoor positioning method and an electronic device. The method can automatically calibrate the user's location according to the user's behavior, simplifying the user's operation and improving indoor positioning efficiency.
In a first aspect, an embodiment of the present application provides an indoor positioning method, including: a first device acquires a first location fingerprint; the first device determines, according to a second location fingerprint, a first location where the first device is located, the second location fingerprint being a location fingerprint collected when a first behavior occurs, the first behavior being a behavior whose probability of occurring at the first location is greater than a first value, and the second location fingerprint including a characteristic of the first location.
By implementing the method provided in the first aspect, the electronic device can collect a location fingerprint based on the user's behavior (the second location fingerprint) and determine, from that behavior, where the device was when the fingerprint was collected. When it later collects a fingerprint to be positioned (the first location fingerprint), it can determine its indoor location from the relationship between the first and second location fingerprints, achieving indoor positioning. Because the collection location is calibrated from behavior rather than by hand, the user is spared manual calibration, positioning is faster and more effective, and the user experience improves.
With reference to the first aspect, in one implementation, the first location fingerprint or the second location fingerprint includes one or more of the following: the signal identifier, signal strength, signal round-trip time, or signal delay of one or more communication signals from a wireless network, a base station, Bluetooth, or ZigBee; information acquired by a sensor; base station information; and access point information.
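The claims do not prescribe a data representation, but fingerprints of this kind are commonly encoded as fixed-order vectors of signal strengths. A minimal sketch, with hypothetical signal identifiers and dBm values (none of which appear in the patent):

```python
# Hypothetical sketch: representing a location fingerprint as an RSSI vector.
# Signal identifiers and dBm values below are invented for illustration.

def fingerprint_to_vector(fingerprint, signal_ids, missing_rssi=-100.0):
    """Turn a {signal_id: rssi_dBm} mapping into a fixed-order feature vector.

    Signals not observed in this fingerprint get a floor value, so that
    fingerprints collected at different times remain comparable.
    """
    return [fingerprint.get(sid, missing_rssi) for sid in signal_ids]

# One fingerprint, imagined as collected when the user turned on the smart TV.
fp = {"ap_livingroom": -42.0, "ap_kitchen": -71.5, "bt_tv": -55.0}
all_ids = ["ap_livingroom", "ap_kitchen", "bt_tv", "zigbee_lock"]
vec = fingerprint_to_vector(fp, all_ids)
print(vec)  # [-42.0, -71.5, -55.0, -100.0]
```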
With reference to the first aspect, in an embodiment, the method further includes: the first device acquires a third location fingerprint; the first device determines, according to a fourth location fingerprint, a second location where the first device is located, the fourth location fingerprint being a location fingerprint collected when a second behavior occurs, the second behavior being a behavior whose probability of occurring at the second location is greater than a second value, and the fourth location fingerprint including a characteristic of the second location.
That is, when the electronic device has multiple fingerprints to be positioned, it can determine the location corresponding to each of them from fingerprints collected under different behaviors, thereby positioning different indoor locations.
With reference to the first aspect, in an implementation manner, after the first device determines the first location, the method further includes: the first device performs a first operation; after the first device determines the second location, the method further comprises: the first device performs a second operation; wherein the first operation and the second operation are different operations performed when the first device is in different indoor positions.
That is, after the electronic device achieves indoor positioning, the electronic device may perform a corresponding operation based on the determined position.
With reference to the first aspect, in one implementation, the first device determining the first location according to the second location fingerprint specifically includes: the first device determines the first location according to the second location fingerprint and a fourth location fingerprint, where the fourth location fingerprint is a location fingerprint collected when a second behavior occurs, the second behavior being a behavior whose probability of occurring at the second location is greater than a second value, and the fourth location fingerprint including a characteristic of the second location; the first location is the location corresponding to the largest number of fingerprints in a subset of the second and fourth location fingerprints, the distance between fingerprints in that subset being less than a third value.
Before positioning a fingerprint to be positioned, the electronic device can divide the behavior-based fingerprints into categories, where the location corresponding to a category is the location of the majority of the fingerprints in it. For example, if a category contains three fingerprints, two corresponding to location 1 and one to location 2, the category corresponds to location 1. The device then determines which category the fingerprint to be positioned belongs to from its distance to the fingerprints in each category, and takes that category's location as the fingerprint's location.
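The majority-vote labelling and nearest-category classification described above can be sketched as follows. Cluster membership is assumed to come from some clustering algorithm (the patent leaves the choice open), and all vectors and room names are invented for illustration:

```python
# Hypothetical sketch: label each cluster by the majority location of its
# fingerprints, then classify a new fingerprint by its nearest cluster centre.
from collections import Counter
import math

def cluster_center(vectors):
    """Centre point of a cluster: the mean of its fingerprint vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cluster_label(locations):
    """A cluster's location is that of the majority of its fingerprints."""
    return Counter(locations).most_common(1)[0][0]

def classify(vector, clusters):
    """Assign a fingerprint-to-be-positioned to the nearest cluster's location."""
    best = min(clusters, key=lambda c: math.dist(vector, cluster_center(c["vectors"])))
    return cluster_label(best["locations"])

clusters = [
    {"vectors": [[-40, -80], [-42, -78], [-45, -75]],
     "locations": ["living room", "living room", "kitchen"]},  # majority: living room
    {"vectors": [[-85, -35], [-80, -38]],
     "locations": ["kitchen", "kitchen"]},
]
print(classify([-41, -79], clusters))  # living room
```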
With reference to the first aspect, in one embodiment, the subset of fingerprints consists of the fingerprints in a first cluster among the N clusters obtained by clustering the second and fourth location fingerprints, where the first location fingerprint is closest to the center point of the first cluster, or closest to a fingerprint in the first cluster.
That is, in the embodiment of the present application, the electronic device may use a clustering algorithm to cluster the location fingerprints, so as to divide the location fingerprints into different categories (clusters), and the electronic device may determine the category to which the location fingerprint to be located belongs according to the distance between the location fingerprint to be located and the center point of the category, or the distance between the location fingerprint to be located and the location fingerprint in each category.
With reference to the first aspect, in one embodiment, a center point of the first cluster is a mean of the location fingerprints in the first cluster.
With reference to the first aspect, in one implementation, the number of location fingerprints included in the first cluster is greater than a fourth value.
That is, a category obtained by clustering may be required to contain more than a threshold number of fingerprints, and a category below the threshold can be deleted. This prevents mistakenly collected fingerprints from forming a category of their own, reduces clustering error, and improves the accuracy of indoor positioning.
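As a small illustration of this pruning step (the threshold, the "fourth value", is chosen arbitrarily here):

```python
# Hypothetical sketch: drop clusters whose fingerprint count does not exceed
# the threshold, so stray fingerprints cannot form a category of their own.
def prune_clusters(clusters, threshold):
    return [c for c in clusters if len(c) > threshold]

clusters = [["fp1", "fp2", "fp3", "fp4"], ["fp5"]]  # invented fingerprints
print(prune_clusters(clusters, 2))  # [['fp1', 'fp2', 'fp3', 'fp4']]
```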
With reference to the first aspect, in one implementation manner, a fifth location fingerprint is further included in the first cluster, where the fifth location fingerprint includes a location fingerprint acquired when the first behavior or the second behavior does not occur; the first device determines a first position according to the second position fingerprint and the fourth position fingerprint, and specifically comprises: the first device determines the first position according to the second position fingerprint, the fourth position fingerprint and the fifth position fingerprint.
The second and fourth location fingerprints, collected based on behaviors, may also be called special fingerprints; the fifth location fingerprint, collected when no specific behavior occurs, may be called an ordinary fingerprint. When clustering the special fingerprints, the electronic device can additionally incorporate the ordinary fingerprints, i.e., cluster twice, which improves the precision of the clustering result and hence of indoor positioning.
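One plausible reading of this two-pass idea, sketched under the assumption that the special fingerprints seed the cluster centres and the ordinary fingerprints then refine them (the patent does not fix the algorithm; all vectors are invented):

```python
# Hypothetical sketch: refine behaviour-seeded cluster centres with ordinary
# (behaviour-free) fingerprints assigned to their nearest centre.
import math

def refine_with_ordinary(centers, ordinary):
    members = {i: [] for i in range(len(centers))}
    for fp in ordinary:  # assign each ordinary fingerprint to its nearest centre
        i = min(range(len(centers)), key=lambda k: math.dist(fp, centers[k]))
        members[i].append(fp)
    refined = []
    for i, c in enumerate(centers):
        pts = [c] + members[i]  # keep the special-fingerprint centre itself
        refined.append([sum(p[j] for p in pts) / len(pts) for j in range(len(c))])
    return refined

centers = [[-40.0, -80.0], [-80.0, -40.0]]   # from clustering special fingerprints
ordinary = [[-44.0, -76.0], [-78.0, -42.0]]  # behaviour-free fingerprints
print(refine_with_ordinary(centers, ordinary))  # [[-42.0, -78.0], [-79.0, -41.0]]
```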
With reference to the first aspect, in an embodiment, the distance between the first location fingerprint and any one of the location fingerprints in the first cluster is less than a fifth value.
After obtaining a positioning result, i.e., the location corresponding to the fingerprint to be positioned, the electronic device can further assess the accuracy of that result from the distance relationship between the fingerprint to be positioned and each fingerprint in the category.
With reference to the first aspect, in one implementation, the distance is determined by the first device according to a silhouette coefficient.
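Assuming the distance criterion here is the silhouette coefficient familiar from cluster analysis (the translated text is terse on this point), a per-sample sketch with invented vectors: s = (b - a) / max(a, b), where a is the mean distance to the sample's own cluster and b the mean distance to the nearest other cluster.

```python
# Hypothetical sketch of a per-sample silhouette coefficient; values near 1
# suggest the fingerprint fits its assigned cluster well.
import math

def silhouette(sample, own_cluster, other_clusters):
    a = sum(math.dist(sample, p) for p in own_cluster) / len(own_cluster)
    b = min(sum(math.dist(sample, p) for p in c) / len(c) for c in other_clusters)
    return (b - a) / max(a, b)

own = [[0.0, 0.0], [0.0, 2.0]]            # fingerprints in the sample's cluster
others = [[[10.0, 10.0], [10.0, 12.0]]]   # fingerprints in the other cluster
print(round(silhouette([0.0, 1.0], own, others), 3))  # 0.929
```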
With reference to the first aspect, in one implementation, after the first device determines the first location, the method includes: the first device acquires a sixth location fingerprint; the first device updates the N clusters according to the sixth location fingerprint.
That is, the electronic device may update the clustering result after acquiring the new position fingerprint, so as to improve the accuracy of indoor positioning.
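A minimal sketch of one way such an update could work, assuming the cluster centre is maintained as a running mean (the patent does not specify the update rule; the vectors are invented):

```python
# Hypothetical sketch: fold a newly collected fingerprint into a cluster by
# updating the centre as a running mean.
def update_center(center, count, new_fp):
    """Return (new_center, new_count) after adding one fingerprint."""
    new_count = count + 1
    new_center = [(c * count + x) / new_count for c, x in zip(center, new_fp)]
    return new_center, new_count

center, n = update_center([-40.0, -80.0], 3, [-44.0, -76.0])
print(center, n)  # [-41.0, -79.0] 4
```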
With reference to the first aspect, in one implementation, the first behavior includes: the user using the first device or the second device, the user triggering the first device to perform a first operation, or the first device automatically performing a second operation; the second device is a device that has established a communication connection with the first device.
The behavior on which fingerprint collection is based may be a behavior of the user acting on a device, such as turning it on or off, or a behavior performed by a device automatically or under the user's control, such as a rice cooker cooking or an air conditioner cooling.
With reference to the first aspect, in one implementation, the distance between the first device and the second device is less than a sixth value.
In this embodiment of the present application, the second device may be a thin device, and a fingerprint collected by the electronic device counts as a special fingerprint only when the electronic device is close to the thin device the user is controlling. This avoids, when the user is far from the controlled device, wrongly taking the location of the currently collected fingerprint to be the controlled device's location, reducing positioning error.
With reference to the first aspect, in one implementation: the first behavior includes controlling a smart television, and the designated location includes a living room; the first behavior includes sleep detection or a morning alarm, and the designated location includes a bedroom; the first behavior includes controlling a cooking appliance, and the designated location includes a kitchen; the first behavior includes washing or controlling a smart toilet, and the designated location includes a bathroom; the first behavior includes controlling a smart lock or putting on shoes or slippers, and the designated location includes an entrance hall.
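The behavior-to-location pairs listed above amount to a lookup table. A sketch with hypothetical behavior identifiers (the patent names the behaviors only descriptively):

```python
# Hypothetical sketch of the behaviour-to-location table implied above.
BEHAVIOR_LOCATIONS = {
    "control_smart_tv": "living room",
    "sleep_detection": "bedroom",
    "morning_alarm": "bedroom",
    "control_cooking_appliance": "kitchen",
    "control_smart_toilet": "bathroom",
    "control_smart_lock": "entrance hall",
}

def label_fingerprint(behavior):
    """Location to attach to a fingerprint collected when `behavior` occurred."""
    return BEHAVIOR_LOCATIONS.get(behavior, "unknown")

print(label_fingerprint("morning_alarm"))  # bedroom
```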
With reference to the first aspect, in an implementation manner, before the first device acquires the first location fingerprint, the method further includes: the first device receives an instruction triggering a third operation of the third device; after the first device determines the first location based on the second location fingerprint, the method further comprises: the first device controls a third device located at or near the first location to perform a third operation.
That is, the indoor positioning method is particularly applicable when there are multiple controllable devices indoors and the electronic device must determine which one is actually meant, according to the user's location. For example, when a home has several air conditioners and a user in the living room triggers turning on the air conditioner, the electronic device can use the user's current location to turn on the one in the living room. The method can thus combine the user's location to provide more targeted service and improve the user's home experience.
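The air-conditioner example can be sketched as a simple room-aware selection; the device names, types, and rooms below are all invented:

```python
# Hypothetical sketch: among several same-type devices, pick the one in the
# room the user was just positioned in.
def pick_device(devices, device_type, user_room):
    """Prefer a device of `device_type` located in `user_room`."""
    candidates = [d for d in devices if d["type"] == device_type]
    for d in candidates:
        if d["room"] == user_room:
            return d
    return candidates[0] if candidates else None  # fallback: any such device

devices = [
    {"name": "ac-bedroom", "type": "air_conditioner", "room": "bedroom"},
    {"name": "ac-living", "type": "air_conditioner", "room": "living room"},
]
print(pick_device(devices, "air_conditioner", "living room")["name"])  # ac-living
```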
With reference to the first aspect, in an implementation manner, after the first device determines the first location according to the second location fingerprint, the method further includes: the first device displays a user interface containing one or more of links, pictures, icons, video, audio, or text information related to the first location.
That is, after determining the location, the electronic device can push information related to it, providing a more targeted application service; the user can view information relevant to the current location without extra searching, which simplifies operation and improves the experience.
In a second aspect, embodiments of the present application further provide an indoor positioning method, including: the first device acquires a first location fingerprint and performs a first operation based on it; the first device acquires a third location fingerprint and performs a second operation based on it. A location fingerprint includes characteristics of an indoor location, and the first and second operations are different operations performed when the first device is in different indoor locations.
By implementing this method, the electronic device can locate itself according to a location fingerprint and perform a corresponding location-related operation. That is, by collecting location fingerprints the device achieves indoor positioning, and after determining the location it provides a targeted service based on it, giving the user a better service experience.
With reference to the second aspect, in one embodiment, the first location fingerprint or the third location fingerprint includes one or more of the following: the signal identifier, signal strength, signal round-trip time, or signal delay of one or more communication signals from a wireless network, a base station, Bluetooth, or ZigBee; information acquired by a sensor; base station information; and access point information.
With reference to the second aspect, in one implementation manner, after the first device acquires the first location fingerprint, the method further includes:
the first device determines, according to a second location fingerprint, a first location where the first device is located, the second location fingerprint being a location fingerprint collected when a first behavior occurs, the first behavior being a behavior whose probability of occurring at the first location is greater than a first value, and the second location fingerprint including a characteristic of the first location.
That is, the electronic device can collect a location fingerprint based on the user's behavior (the second location fingerprint) and determine, from that behavior, where the device was when the fingerprint was collected. When it later collects a fingerprint to be positioned (the first location fingerprint), it can determine its indoor location from the relationship between the first and second location fingerprints, achieving indoor positioning. Because the collection location is calibrated from behavior rather than by hand, the user is spared manual calibration, positioning is faster and more effective, and the user experience improves.
With reference to the second aspect, in one implementation, the first device determining the first location according to the second location fingerprint specifically includes: the first device determines the first location according to the second location fingerprint and a fourth location fingerprint, where the fourth location fingerprint is a location fingerprint collected when a second behavior occurs, the second behavior being a behavior whose probability of occurring at the second location is greater than a second value, and the fourth location fingerprint including a characteristic of the second location; the first location is the location corresponding to the largest number of fingerprints in a subset of the second and fourth location fingerprints, the distance between fingerprints in that subset being less than a third value.
Before positioning a fingerprint to be positioned, the electronic device can divide the behavior-based fingerprints into categories, where the location corresponding to a category is the location of the majority of the fingerprints in it. For example, if a category contains three fingerprints, two corresponding to location 1 and one to location 2, the category corresponds to location 1. The device then determines which category the fingerprint to be positioned belongs to from its distance to the fingerprints in each category, and takes that category's location as the fingerprint's location.
With reference to the second aspect, in one embodiment, the subset of fingerprints consists of the fingerprints in a first cluster among the N clusters obtained by clustering the second and fourth location fingerprints, where the first location fingerprint is closest to the center point of the first cluster, or closest to a fingerprint in the first cluster.
That is, in the embodiment of the present application, the electronic device may use a clustering algorithm to cluster the location fingerprints, so as to divide the location fingerprints into different categories (clusters), and the electronic device may determine the category to which the location fingerprint to be located belongs according to the distance between the location fingerprint to be located and the center point of the category, or the distance between the location fingerprint to be located and the location fingerprint in each category.
With reference to the second aspect, in one embodiment, the first cluster further includes a fifth location fingerprint, where the fifth location fingerprint is a location fingerprint collected when the first behavior or the second behavior does not occur, and the first device determines, according to the second location fingerprint and the fourth location fingerprint, a first location where the first device is located, specifically including: the first device determines the first position according to the second position fingerprint, the fourth position fingerprint and the fifth position fingerprint.
The second and fourth location fingerprints, collected based on behaviors, may also be called special fingerprints; the fifth location fingerprint, collected when no specific behavior occurs, may be called an ordinary fingerprint. When clustering the special fingerprints, the electronic device can additionally incorporate the ordinary fingerprints, i.e., cluster twice, which improves the precision of the clustering result and hence of indoor positioning.
With reference to the second aspect, in one embodiment, the first behavior includes: the user using the first device or the second device, the user triggering the first device to perform the third operation, or the first device automatically performing the fourth operation; the second device is a device that has established a communication connection with the first device.
The behavior on which fingerprint collection is based may be a behavior of the user acting on a device, such as turning it on or off, or a behavior performed by a device automatically or under the user's control, such as a rice cooker cooking or an air conditioner cooling.
With reference to the second aspect, in one embodiment: the first behavior includes controlling a smart television, and the designated location includes a living room; the first behavior includes sleep detection or a morning alarm, and the designated location includes a bedroom; the first behavior includes controlling a cooking appliance, and the designated location includes a kitchen; the first behavior includes washing or controlling a smart toilet, and the designated location includes a bathroom; the first behavior includes controlling a smart lock or putting on shoes or slippers, and the designated location includes an entrance hall.
With reference to the second aspect, in one implementation, before the first device acquires the first location fingerprint, the method further includes: the first device receives an instruction triggering a third operation of the third device; after the first device determines the first location according to the second location fingerprint, the method further includes: the first device controls a third device located at or near the first location to perform a third operation.
That is, the indoor positioning method is particularly applicable when there are multiple controllable devices indoors and the electronic device must determine which one is actually meant, according to the user's location. For example, when a home has several air conditioners and a user in the living room triggers turning on the air conditioner, the electronic device can use the user's current location to turn on the one in the living room. The method can thus combine the user's location to provide more targeted service and improve the user's home experience.
With reference to the second aspect, in one embodiment, the first operation includes: a user interface is displayed that contains one or more of links, pictures, icons, video, audio, or text information related to a location corresponding to the first location fingerprint.
That is, after determining the location, the electronic device can push information related to it, providing a more targeted application service; the user can view information relevant to the current location without extra searching, which simplifies operation and improves the experience.
In a third aspect, an embodiment of the present application provides a map generation method, including: a first device obtains environment information of a first area, the environment information including images collected by a second device while moving within the first area and/or the second device's movement route; the first device determines, from the environment information, the size of the first area, the number of rooms it contains, and the sizes, types, and positions of those rooms within the first area; the first device obtains the positions of M devices contained in the first area; and the first device generates a map indicating the rooms in which the M devices are located.
By implementing this method, the first device (i.e., the control device) can obtain environment information collected automatically by the second device (i.e., a mobile device), sparing the user the trouble of collecting it manually, and can classify the devices in the first area (i.e., the detection area) by room type. Users can then manage room devices in a targeted way, the devices can provide targeted services, and the user receives an immersive, personalized whole-scene intelligent experience.
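One step of this map generation, assigning each device to a room given its position, can be sketched as point-in-rectangle lookup. The room shapes, names, and coordinates below are invented; real room footprints recovered from a survey would rarely be axis-aligned rectangles:

```python
# Hypothetical sketch: assign devices to rooms given axis-aligned room
# rectangles, e.g. recovered from a mobile device's survey of the area.
def room_of(point, rooms):
    """rooms: {name: (x0, y0, x1, y1)}; return the room containing `point`."""
    x, y = point
    for name, (x0, y0, x1, y1) in rooms.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None  # outside every known room

rooms = {"kitchen": (0, 0, 3, 4), "living room": (3, 0, 9, 6)}
device_positions = {"rice_cooker": (1.0, 2.0), "tv": (6.0, 3.0)}
device_map = {d: room_of(p, rooms) for d, p in device_positions.items()}
print(device_map)  # {'rice_cooker': 'kitchen', 'tv': 'living room'}
```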
With reference to the third aspect, in one embodiment, the environment information further includes one or more of the following information: obstacle information, temperature, humidity, brightness, and audio, wherein the obstacle information is obtained from a moving route.
With reference to the third aspect, in one implementation manner, after the first device generates the map, the method further includes: the first device displays a first user interface comprising a map.
That is, after the electronic device obtains the mapping between room devices and room types, it can display a map containing that mapping, making it easy for the user to view each device's position and room type.
With reference to the third aspect, in one implementation manner, after the first device generates the map, the method further includes: the first device displays a second user interface provided by the first application, wherein the second user interface comprises one or more device options, and the device options indicate a room in which one device of the M devices is located.
In this embodiment, the first application may be a home application, for example a smart-life application, used to manage one or more room devices in a home. When the user adds a room device through the home application, the application can automatically assign the device a room type, so the user need not add it manually, which simplifies the user's operation and makes managing and controlling room devices more convenient.
With reference to the third aspect, in one implementation manner, the one or more device options include a first device option, where the first device option corresponds to a fourth device of the M devices, and after the first device displays the second user interface provided by the first application, the method further includes: the first device detects an operation of a user acting on the first device option, and in response to the operation, the first device controls the fourth device to perform the first operation, and controls devices belonging to the same room as the fourth device to perform the second operation.
When a user controls one room device through the home application, the application can automatically control, in linkage, another device that belongs to the same room, so that multiple devices cooperate to serve the user and improve the user's home experience.
With reference to the third aspect, in one implementation manner, after the first device generates the map, the method further includes: the first device sends the map to one or more of the M devices.
That is, after the electronic device obtains the mapping relationship between the room device and the room type, the electronic device may send the mapping relationship to other devices, so that other devices may also provide a targeted service for the user according to the mapping relationship.
With reference to the third aspect, in one implementation manner, after the first device generates the map, the method further includes: the first device detects that a user triggers the device contained in the first area to execute a third operation; the first device controls a device located in the first room among the M devices to perform a third operation.
When there are multiple controllable devices in the first area, the electronic device may determine the device that is ultimately controlled from the map. For example, in a home scenario, when a user wakes up an air conditioner with a wake-up word so that it starts cooling, the map may be consulted to determine which air conditioner should actually respond.
With reference to the third aspect, in one embodiment, the first room is a room in which the user is located or a room closest to the user.
That is, the electronic device may determine the room device that is ultimately controlled by combining the map with the user's location. For example, when the user wakes up the air conditioner with the wake-up word, the air conditioner closest to the user among the plurality of air conditioners, or the air conditioner in the room where the user is located, may be controlled to start cooling.
With reference to the third aspect, in one implementation manner, after the first device generates the map, the method further includes: the first device detects that a user triggers a fifth device in the M devices to execute a fourth operation; the first device controls a sixth device belonging to one room together with the fifth device to perform a fifth operation.
That is, after the electronic device detects an operation in which the user controls one device, it can control, in linkage, another device in the same room to perform a corresponding operation. For example, when the user turns on a television in the living room, the electronic device can also turn on a smart speaker in the living room in linkage, providing a more comfortable viewing experience and a more intelligent home experience.
With reference to the third aspect, in one embodiment, the first device uses a room classification model to determine, based on the environment information, the area of the first area, the number of rooms contained in the first area, and the area, type, and location within the first area of each room.
With reference to the third aspect, in one embodiment, the room classification model is obtained by the first device training an initial room classification model with sample information labelled with room types, where the sample information includes a plurality of images and room areas, the images and room areas being collected from rooms of the same type.
With reference to the third aspect, in one embodiment, the second device is a sweeping robot.
That is, the mobility of the sweeping robot can be exploited: the robot automatically collects the environment information of the detection area while it moves, so the user does not need to add or modify the environment information manually, which simplifies the user's operations.
In a fourth aspect, embodiments of the present application provide an electronic device comprising a memory, one or more processors, and one or more programs; the one or more processors, when executing the one or more programs, cause the electronic device to implement any one of the possible implementations of the first aspect, or any one of the possible implementations of the second aspect, or any one of the possible implementations of the third aspect.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium comprising instructions that, when run on an electronic device, cause the electronic device to implement any one of the possible implementations of the first aspect, or any one of the possible implementations of the second aspect, or any one of the possible implementations of the third aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, which when run on a computer, causes the computer to implement any one of the possible implementations as in the first aspect, or any one of the possible implementations as in the second aspect, or any one of the possible implementations as in the third aspect.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a software structure of an indoor positioning device according to an embodiment of the present application;
Fig. 3 to Fig. 7 are specific flow diagrams of an indoor positioning method according to an embodiment of the present application;
Fig. 8 is a schematic diagram of an indoor positioning process according to an embodiment of the present application;
Fig. 9 and Fig. 10 are schematic diagrams of application scenarios provided in embodiments of the present application;
Fig. 11 is a schematic flowchart of an indoor positioning method according to an embodiment of the present application;
Fig. 12 is a schematic structural diagram of a communication system according to an embodiment of the present application;
Fig. 13 is a schematic diagram of a software structure of a map generation system according to an embodiment of the present application;
Fig. 14 is a specific flowchart of a map generation method according to an embodiment of the present application;
Fig. 15 is a schematic diagram of Bluetooth positioning according to an embodiment of the present application;
Fig. 16 is a process schematic diagram of a map generation method according to an embodiment of the present application;
Fig. 17 to Fig. 20 are schematic diagrams of some application scenarios provided in embodiments of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and thoroughly below with reference to the accompanying drawings. In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. The term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate three cases: A exists alone, A and B exist together, and B exists alone. In addition, in the description of the embodiments of the present application, "plural" means two or more.
The terms "first", "second", and the like are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features concerned. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of the present application, unless otherwise indicated, "a plurality of" means two or more.
The term "user interface (UI)" in the following embodiments of the present application refers to a medium for interaction and information exchange between an application program or an operating system and a user; it converts between an internal representation of information and a form acceptable to the user. A user interface is defined by source code written in a specific computer language such as Java or extensible markup language (XML); the interface source code is parsed and rendered on the electronic device and finally presented as content the user can recognize. A commonly used presentation form of the user interface is the graphical user interface (GUI), which refers to a user interface, related to computer operations, that is displayed graphically. It may include visual interface elements such as text, icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets displayed on the display of the electronic device.
In order to facilitate understanding of the present solution, related terms are first conceptually explained:
1) Position fingerprint
A location fingerprint is used to indicate a location in the actual environment: it associates a location in the actual environment with some kind of "fingerprint". A location fingerprint may be single-dimensional or multi-dimensional. For example, if the device to be located is receiving or transmitting information, the location fingerprint may be one or more characteristics of that information or signal (the most common characteristics being signal strength and channel).
Location fingerprints may be of various types; any "location-unique" feature may serve as a location fingerprint, such as the multipath structure of a communication signal at a location, whether an access point or base station can be detected at a location, the received signal strength of a base station signal detected at a location, or the round-trip time or delay of a signal when communicating at a location. Such features may also be combined into a location fingerprint.
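For illustration, a multi-dimensional location fingerprint such as the one described above can be represented as a fixed-order feature vector. The minimal sketch below assumes the fingerprint is a set of received signal strength (RSSI, in dBm) readings keyed by hypothetical access-point identifiers; it is not part of the claimed method.

```python
from dataclasses import dataclass, field

@dataclass
class LocationFingerprint:
    # RSSI readings (dBm) keyed by access-point identifier (BSSID or name)
    rssi: dict = field(default_factory=dict)

    def as_vector(self, ap_order, missing=-100.0):
        """Flatten to a fixed-order vector; APs not heard get a floor value."""
        return [self.rssi.get(ap, missing) for ap in ap_order]

# Hypothetical APs: "ap_kitchen" was heard strongly, "ap_bedroom" not at all.
fp = LocationFingerprint(rssi={"ap_kitchen": -42.0, "ap_hall": -71.0})
vec = fp.as_vector(["ap_kitchen", "ap_hall", "ap_bedroom"])
print(vec)  # → [-42.0, -71.0, -100.0]
```

Flattening to a fixed AP order is what makes fingerprints from different moments comparable by a distance metric.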
2) Thin equipment
A thin device is a device that contains no sensors or only a few sensors, has weak computing power and limited memory or running space, and can connect to other electronic devices such as mobile phones, tablets, and smart televisions through Bluetooth, the Internet, ZigBee, and the like. Thin devices include, but are not limited to: smart home devices (e.g., smart refrigerators, smart speakers, televisions) and wearable devices (earphones, bracelets, watches, smart glasses, etc.).
3) Power line communication (power line communication, PLC)
PLC is a carrier communication method that transmits data and information using a power line (low-voltage, medium-voltage, or direct-current) as the medium. PLC enables high-speed, reliable, real-time, long-distance data transmission over power lines; its outstanding feature is that a device joins the network simply by being powered, with no additional dedicated communication wiring. A whole-house intelligent system is an application of PLC technology in the home environment: it can communicate with all intelligent devices in the home through PLC, and an intelligent device using PLC technology may be called a PLC device. A user can control each PLC device independently or simultaneously through the whole-house intelligent system, and each PLC device can be displayed on a visual interface so that the user can view or control it remotely or nearby. In addition, the whole-house intelligent system can obtain the position of a PLC device connected to a power socket from the position of the socket installed indoors.
To realize indoor positioning or indoor environment recognition accurately, one possible implementation is as follows: a location fingerprint database covering each indoor location is built manually in advance, and the database determines the correspondence between different location fingerprints and indoor locations. When a fingerprint to be located appears, the specific position of the point to be located can be obtained indirectly from the similarity between that fingerprint and the entries in the database, together with the correspondence between fingerprints and indoor locations recorded in the database.
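The matching step this implementation describes, comparing a fingerprint to be located against the pre-built database by similarity, can be sketched as a 1-nearest-neighbour lookup over RSSI vectors. The distance metric and the example database below are illustrative assumptions, not details from the source.

```python
import math

def euclidean(a, b):
    # Straight-line distance between two fingerprint vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def locate(query, database):
    """database maps a location label to its calibrated RSSI vector;
    return the label whose vector is closest to the query fingerprint."""
    return min(database, key=lambda loc: euclidean(query, database[loc]))

db = {
    "kitchen": [-40.0, -75.0, -90.0],   # calibrated reference fingerprints
    "bedroom": [-85.0, -60.0, -45.0],
}
print(locate([-43.0, -72.0, -88.0], db))  # → kitchen
```

The drawback noted in the text applies directly: if the radio environment drifts, the reference vectors in `db` go stale and the nearest neighbour may be wrong until the database is re-surveyed.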
Although this method can realize indoor positioning accurately, building the location fingerprint database requires the user to manually calibrate, in advance, the indoor location corresponding to each collected fingerprint, so that the correspondence between fingerprints and indoor locations can be determined from fingerprints of known locations. When that correspondence changes, positioning errors occur, or the user has to manually update the fingerprint database of each indoor location frequently, which is inconvenient. Therefore, how to perform indoor positioning or indoor environment recognition conveniently, effectively, and efficiently, and to provide targeted services for users, is a problem to be solved.
The embodiments of the present application provide an indoor positioning method that can be divided into three stages: data collection, fingerprint clustering, and user positioning. In a specific implementation, a location fingerprint corresponding to a user behavior is first collected on the basis of that behavior, and the location corresponding to the fingerprint is determined from the behavior. The collected fingerprints are then clustered using a clustering algorithm, yielding a clustering result that divides the fingerprints into a plurality of categories, each category corresponding to one actual location. Finally, when a fingerprint to be located is collected, the category to which it belongs can be determined from the clustering result, and thus the location corresponding to that fingerprint.
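The fingerprint-clustering stage can be sketched as follows, assuming fingerprints are RSSI vectors and that fingerprints collected at the same physical spot lie within a small distance `eps` of one another. The greedy grouping below is a simplified stand-in for the clustering algorithm, which the text does not specify (a practical system might use k-means or DBSCAN instead).

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cluster(fingerprints, eps=10.0):
    """Greedily group fingerprints: each joins the first cluster whose
    representative (its first member) is within eps, else starts a new one."""
    clusters = []
    for fp in fingerprints:
        for c in clusters:
            if dist(fp, c[0]) <= eps:
                c.append(fp)
                break
        else:
            clusters.append([fp])
    return clusters

# Five fingerprints collected at two physical spots (values illustrative)
samples = [[-40, -75], [-42, -73], [-80, -50], [-78, -52], [-41, -74]]
groups = cluster(samples)
print(len(groups))  # → 2  (one category per actual location)
```

Each resulting group is one "category" in the text's sense; the behavior-derived label attached to any member then names the whole group.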
With this indoor positioning method, the location fingerprint corresponding to a behavior can be collected on the basis of the user's behavior, and the location corresponding to the fingerprint can be determined from the behavior, so the user does not need to manually calibrate the indoor location of each collected fingerprint. This is possible because certain specific behaviors imply the user's actual location. Taking home positioning as an example, the behavior of turning on the electric rice cooker implies that the user is currently in the kitchen, and operating the switch of the smart door lock implies that the user is currently in the hall. Locations can therefore be calibrated for fingerprints automatically on the basis of user behavior, which keeps the user's operations simple and improves indoor positioning efficiency.
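The behavior-based calibration can be sketched with a hypothetical table mapping a detected device action to the room it implies; the fingerprint captured at that moment receives the label without any manual survey. The action names below are assumptions for illustration, not identifiers from the source.

```python
# Hypothetical action → room table, following the examples in the text
BEHAVIOR_TO_ROOM = {
    "rice_cooker.on": "kitchen",
    "smart_lock.unlock": "hall",
    "tv.on": "living_room",
}

def label_fingerprint(action, fingerprint):
    """Return (room, fingerprint) if the action implies a room, else None."""
    room = BEHAVIOR_TO_ROOM.get(action)
    return (room, fingerprint) if room else None

print(label_fingerprint("rice_cooker.on", [-42.0, -71.0]))
# → ('kitchen', [-42.0, -71.0])
```

Labelled pairs produced this way are exactly what replaces the manual site survey of the conventional fingerprint-database approach.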
The method can collect location fingerprints related to user behavior through an electronic device carried by the user, such as a mobile phone; fingerprints may be collected periodically, when a specific application is in a running state, or when the screen of the carried device is on, among other conditions. In the embodiments of the present application, the location fingerprint may include, but is not limited to, one or more of the following: the identification (ID), signal strength, signal round-trip time, or signal delay of communication signals of wireless networks, base stations, Bluetooth, ZigBee, and the like; location-related information collected by sensors (for example, gyroscope, acceleration, and geomagnetic sensors); base station information; and information of wireless access points (APs). For the description of location fingerprints, refer to the foregoing; details are not repeated here.
In general, the indoor positioning method provided by the embodiments of the present application requires no manual intervention by the user: fingerprints related to user behavior can be obtained through the electronic device, and the location corresponding to each fingerprint is calibrated automatically according to the behavior associated with it. This spares the user complicated positioning operations, provides faster and more effective indoor positioning and indoor environment recognition, and improves the user experience.
The embodiments of the present application further provide a map generation method involving a mobile device and a control device. The mobile device moves within a detection area, collects environment information during the movement, and sends it to the control device. The control device inputs the environment information into a trained classification model to obtain a plurality of areas and the room type of each area. In addition, the mobile device can obtain, during its movement, the position information of room devices present in the detection area and send it to the control device. From the position information of the room devices, the plurality of areas, and the room type of each area, the control device can determine the mapping relationship between room devices and room types, and thereby generate a map of the detection area containing that mapping relationship.
In this way, when the indoor positioning method is in use, if the electronic device detects a user behavior acting on a room device and collects a location fingerprint at that moment, the location corresponding to the fingerprint can be determined from the correspondence between room devices and rooms indicated in the map. This further increases the number of fingerprints collected on the basis of user behavior, improves the accuracy of indoor positioning and of indoor environment recognition, and enables targeted services for the user.
The mobile device is a device capable of moving within the detection area; it can collect the environment information of the detection area as it moves, so the user does not need to add or modify environment information manually, which simplifies the user's operations. For example, the mobile device may be a sweeping robot and the detection area the indoor area of a home. In other embodiments of the present application, the mobile device may be an intelligent service robot and the detection area the area of an exhibition hall. For application scenarios of the map generation method, refer to the following content; details are not given here.
The control device obtains the information collected by the mobile device, performs computation and processing on it, and generates a map of the area through which the mobile device has moved. The control device may be, for example, the user's mobile phone. Having the mobile device collect the data and the control device process it improves map generation efficiency and makes it easier for the user to manage both devices.
A room device is an intelligent device placed or fixed in the detection area; the mobile device can detect its position in the detection area through positioning methods such as Bluetooth positioning, Wi-Fi positioning, RFID positioning, and UWB positioning. In a home scenario, the intelligent device may be any of various smart home or electronic products placed in the home, such as a computer, television, smart speaker, smart desk lamp, smart refrigerator, smart air conditioner, smart toilet, smart door lock, router, camera, or body fat scale.
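As an illustration of Bluetooth-based ranging of a room device, the sketch below applies the standard log-distance path-loss model, RSSI = txPower − 10·n·log10(d), where txPower is the RSSI measured at 1 m and n is the environment exponent. The constant values are illustrative assumptions, not parameters from the source.

```python
def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Estimate distance in metres from a single RSSI reading (dBm),
    inverting the log-distance path-loss model."""
    return 10 ** ((tx_power - rssi) / (10 * n))

# A reading 20 dB below the 1 m reference implies roughly 10 m (free space)
d = rssi_to_distance(-79.0)
print(round(d, 1))  # → 10.0
```

Single-reading estimates like this are noisy indoors; a mobile device would typically combine readings taken at several points along its route to fix a device's position.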
The environment information is information related to the room type, and may include, but is not limited to, one or more of the following: image data, the moving route, and perception data. The image data are images of objects collected by a camera while the mobile device moves. The moving route is route data collected by the mobile device during its movement; from it, the control device can calculate the number of rooms contained in the detection area, the area of each room, the position of each room within the detection area, obstacle information, and the like. The perception data are audio, humidity, temperature, brightness, and similar data collected during movement by hardware such as sensors and a microphone.
The trained classification model may be obtained by training in advance, using a classification-model algorithm, on a large amount of known environment information, that is, information with known room types, such as home images of different room types and the areas of different room types. For the training of the model, refer to the following; details are not expanded here.
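The classification-model idea can be sketched with labelled samples of simple room features (here a hypothetical pair: floor area in m² and appliance count) and a nearest-centroid classifier standing in for whatever model the text actually trains; both the features and the classifier choice are assumptions for illustration.

```python
import math

def train(samples):
    """samples: list of (features, room_type); return per-type centroids."""
    sums, counts = {}, {}
    for feats, room in samples:
        if room not in sums:
            sums[room] = [0.0] * len(feats)
            counts[room] = 0
        sums[room] = [s + f for s, f in zip(sums[room], feats)]
        counts[room] += 1
    return {room: [s / counts[room] for s in sums[room]] for room in sums}

def classify(model, feats):
    """Assign the room type whose centroid is nearest to the features."""
    return min(model, key=lambda room: math.dist(feats, model[room]))

# Labelled training data: (area m^2, appliance count) → room type
model = train([([6.0, 4], "kitchen"), ([7.0, 5], "kitchen"),
               ([16.0, 1], "bedroom"), ([14.0, 2], "bedroom")])
print(classify(model, [6.5, 4]))  # → kitchen
```

A production system would feed richer inputs (images, areas, sensed data) into a learned model, but the train-then-classify shape is the same.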
The plurality of areas obtained by the control device are partitions of the detection area, each partition corresponding to one room type. The map generation method provided by the embodiments of the present application can classify the room devices present in the detection area according to these partitions, helping the user better control and manage them. In a home scenario, the room types may include: kitchen, bedroom, living room, dining room, study, bathroom, and so on.
In general, the map generation method provided by the embodiments of the present application can automatically collect data of the detection area through the mobile device, reducing the user's operations; it classifies the room devices in the detection area by room type, making them easier for the user to manage, and provides a targeted, immersive, personalized, full-scenario intelligent experience.
Fig. 1 shows a schematic hardware configuration of an electronic device 100.
The electronic device 100 may be a mobile phone, tablet, desktop computer, laptop computer, handheld computer, notebook computer, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook, cellular telephone, personal digital assistant (personal digital assistant, PDA), augmented reality (augmented reality, AR) device, virtual reality (virtual reality, VR) device, artificial intelligence (artificial intelligence, AI) device, wearable device, vehicle-mounted device, smart home device, and/or smart city device; the specific type of the electronic device is not particularly limited in the embodiments of the present application.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, a geomagnetic sensor 180N, and the like.
It should be understood that the illustrated structure of the embodiment of the present invention does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
In some embodiments, the processor 110 may be configured to cluster the collected location fingerprints using a clustering algorithm to obtain a clustering result and, when a fingerprint to be located is obtained, determine the location corresponding to that fingerprint according to the clustering result. For details of determining the clustering result and of positioning, refer to the following; they are not described here.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data.
The charge management module 140 is configured to receive a charge input from a charger.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, demodulates and filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
In some embodiments, the electronic device 100 may obtain the location fingerprint through the mobile communication module 150 or the wireless communication module 160. For example, when the electronic device 100 detects an operation of turning on the air conditioner by the user, a signal of the base station is acquired through the wireless communication module 160, and the acquired base station information and the strength of the signal are used as a location fingerprint corresponding to the location where the air conditioner is currently turned on by the user.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD). The display panel may also be manufactured using an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (quantum dot light emitting diodes, QLED), or the like.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP, which processes it and converts it into an image visible to the naked eye.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals.
Video codecs are used to compress or decompress digital video.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between human brain neurons, it can rapidly process input information and can also continuously self-learn.
The internal memory 121 may include one or more random access memories (random access memory, RAM) and one or more non-volatile memories (NVM).
In some embodiments, the internal memory 121 may contain a fingerprint database for storing the location fingerprints collected by the electronic device 100 and a clustering result database for storing the clustering results obtained by the electronic device 100 through a clustering algorithm. Descriptions of the fingerprint database and the clustering result database are given later and are not repeated here.
The external memory interface 120 may be used to connect external non-volatile memory to enable expansion of the memory capabilities of the electronic device 100.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals. The receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals. The microphone 170C, also referred to as a "mic" or "sound transmitter", is used to convert sound signals into electrical signals. The earphone interface 170D is used to connect a wired earphone.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal.
The gyro sensor 180B may be used to determine the motion posture of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates, according to the angle, the distance the lens module needs to compensate, and lets the lens counteract the shake of the electronic device 100 through reverse motion, thereby achieving anti-shake. The gyro sensor 180B may also be used in navigation and somatosensory gaming scenarios.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, the electronic device 100 calculates altitude from barometric pressure values measured by the air pressure sensor 180C to assist positioning and navigation.
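As a concrete illustration of this step, the standard barometric formula below converts a pressure reading into an approximate altitude. This is a minimal sketch of the principle only, not necessarily the method the device uses.

```python
# Sketch: estimating altitude from a barometric pressure reading using the
# international standard-atmosphere approximation (valid in the troposphere).
# The sea-level reference pressure is an illustrative assumption.

def pressure_to_altitude(pressure_hpa: float, sea_level_hpa: float = 1013.25) -> float:
    """Return approximate altitude in meters for a pressure reading in hPa."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

# At the reference sea-level pressure the estimated altitude is ~0 m.
print(round(pressure_to_altitude(1013.25), 1))  # -> 0.0
```

Lower pressure maps to higher altitude, which is what lets the device distinguish floors or assist vertical positioning.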
The magnetic sensor 180D includes a hall sensor. The electronic device 100 may detect the opening and closing of the flip cover using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. The acceleration sensor 180E may also be used to recognize the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and other applications.
In some embodiments, the electronic device 100 may acquire a location fingerprint through the gyro sensor 180B or the acceleration sensor 180E, which may be an orientation, a gravity magnitude, or a gravity direction of the electronic device 100 determined through the gyro sensor 180B or the acceleration sensor 180E, or the like.
The distance sensor 180F is used to measure a distance. The electronic device 100 may measure distance by infrared or laser. The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 100 emits infrared light outward through the light-emitting diode and detects infrared light reflected from nearby objects using the photodiode. The ambient light sensor 180L is used to sense the ambient light level. The fingerprint sensor 180H is used to collect a fingerprint. The temperature sensor 180J is used to detect temperature. The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 together form what is called a "touch screen". The touch sensor 180K is used to detect a touch operation acting on or near it. The bone conduction sensor 180M may acquire a vibration signal.
The geomagnetic sensor 180N may detect the magnitude of the geomagnetic field. In some embodiments, the geomagnetic sensor 180N may be used to determine the floor on which the electronic device 100 is located when the electronic device 100 is in an indoor environment of a high floor. Further, the electronic device 100 may use the floor determined by the geomagnetic sensor 180N as a location fingerprint required for clustering or positioning.
The keys 190 include a power key, volume keys, etc. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100. The motor 191 may generate a vibration cue. The indicator 192 may be an indicator light and may be used to indicate the charging state, a change in charge, a message, a missed call, a notification, etc. The SIM card interface 195 is used to connect a SIM card.
In the embodiment of the present application, when the electronic device 100 is a mobile device:
the processor 110 may be used to obtain environmental information and to determine the location of room devices in the detection area.
The mobile communication module 150 and the wireless communication module 160 may be used to transmit the environment information and the location information of the room device to the control device 1002.
The camera 193 may be used to collect environmental information, which may refer to image data in a detection area.
In the embodiment of the present application, when the electronic apparatus 100 is a control apparatus:
the processor 110 may be configured to input the environmental information into the trained classification model to obtain a plurality of areas and the room type of each area, determine the mapping relationship between the room devices and the room types according to the location information of the room devices, the plurality of areas, and the room type of each area, and obtain a map of the detection area that records the mapping relationship between the room devices and the room types.
The internal memory 121 may be used to store environmental information, location information of room devices, trained classification models, and maps of detection areas, etc.
The mobile communication module 150 and the wireless communication module 160 may be used to receive environmental information, location information of room devices, and trained classification models, among others.
The display screen 194 may be used to display the map generated by the control device or to display a user interface provided by the home-type application; for the content displayed by the display screen 194, reference may be made to the user interfaces shown in fig. 17 and fig. 18.
Fig. 2 shows a software structure schematic diagram of an indoor positioning device according to an embodiment of the present application.
As shown in fig. 2, the indoor positioning device may include: the device comprises a position fingerprint acquisition module 01, a special fingerprint detection module 02, a fingerprint database 03, a clustering module 04, a clustering result database 05 and a positioning module 06. The electronic device 100 provided in the embodiment of the present application may be used to implement all functions of the indoor positioning device.
The location fingerprint acquisition module 01 is used to acquire a location fingerprint of the location where the user is, and the location fingerprint may be used to indicate that location. The location fingerprint may include one or more of the following: the ID, signal strength, signal round-trip time, or signal delay time of a communication signal of a wireless network, a base station, Bluetooth, ZigBee, or the like; information collected by a sensor associated with the location (for example, a gyro sensor, an acceleration sensor, a geomagnetic sensor, or the like); base station information; information of a wireless access point; and so on. Through the location fingerprint acquisition module, the indoor positioning device may collect location fingerprints based on user behaviors, where a behavior represents something the user does in the area to be positioned, and the indoor positioning device may divide the collected location fingerprints by user behavior. For example, in a home positioning scenario, location fingerprint 1 is the location fingerprint collected when the user turns on the range hood, and location fingerprint 2 is the location fingerprint collected when the user turns off the alarm clock. In addition, the location fingerprint acquisition module 01 may send original location fingerprints to the special fingerprint detection module 02, where the original location fingerprints are used to obtain a clustering result, and may also send a to-be-positioned location fingerprint to the positioning module 06, where the to-be-positioned location fingerprint is a location fingerprint whose actual location needs to be confirmed.
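The record layout below is a minimal sketch of what a location fingerprint as described above might hold. The field names and types are illustrative assumptions, not the module's actual data format.

```python
# Sketch of one possible record layout for a location fingerprint.
# Every field name here is an assumption for illustration only.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class LocationFingerprint:
    behavior: str                    # user behavior at collection time, e.g. "turn_on_range_hood"
    wifi_rssi: Dict[str, int] = field(default_factory=dict)  # AP BSSID -> signal strength (dBm)
    cell_id: Optional[str] = None    # serving base-station identifier, if any
    label: Optional[str] = None      # actual location, set only for special fingerprints
    weight: float = 0.0              # training weight (see stage 1, step S103)

fp = LocationFingerprint(behavior="turn_on_range_hood",
                         wifi_rssi={"aa:bb:cc:dd:ee:ff": -48},
                         label="kitchen")
```

An ordinary fingerprint would leave `label` unset; a special fingerprint carries both a label and a weight.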
The special fingerprint detection module 02 is used to mark special fingerprints among the acquired original location fingerprints. A special fingerprint is a location fingerprint strongly related to a location, and the indoor positioning device can automatically mark the actual location corresponding to such a fingerprint. This is because, as a matter of common knowledge, certain user behaviors generally take place only in a fixed area, so a location fingerprint collected under such a behavior can directly determine its corresponding location. For example, in a home positioning scenario, the behavior of turning on a range hood typically occurs in the kitchen, so a location fingerprint collected based on this behavior can automatically be calibrated to the kitchen. Thereafter, the special fingerprint detection module 02 may send both the special fingerprints marked with actual locations and the fingerprints not marked with actual locations to the fingerprint database 03; these fingerprints may be collectively referred to as to-be-clustered location fingerprints.
The fingerprint database 03 is used for storing the position fingerprints to be clustered collected by the indoor positioning device, and the fingerprint database 03 can send the stored position fingerprints to be clustered to the clustering module so that the clustering module can obtain a clustering result according to the position fingerprints to be clustered. For example, the fingerprint database 03 may send the position fingerprints to be clustered to the clustering module when the number of special fingerprints reaches a threshold value among the position fingerprints to be clustered.
The clustering module 04 is used to acquire location fingerprints from the fingerprint database and cluster them using a clustering algorithm, where one category corresponds to one indoor area. For example, in a home positioning scenario, location fingerprints can be separated by a clustering algorithm into three categories: living room, dining room, and study. That is, the clustering module 04 may obtain a clustering result from the location fingerprints, where the clustering result divides the to-be-clustered location fingerprints into a plurality of categories and indicates the correspondence between the to-be-clustered location fingerprints and locations. In addition, the clustering module 04 can acquire location fingerprints multiple times and update the clustering result, improving the accuracy of indoor positioning.
The clustering result database 05 is used for storing the clustering results generated or updated by the indoor positioning device.
The positioning module 06 may be configured to obtain the clustering result stored in the clustering result database 05 and, according to the clustering result, determine the user location corresponding to a to-be-positioned location fingerprint. Specifically, the indoor positioning device may determine, according to the similarity or distance between the to-be-positioned location fingerprint and the to-be-clustered location fingerprints, the category to which the to-be-positioned location fingerprint belongs, and thereby the user location corresponding to it. For example, suppose a to-be-clustered location fingerprint is assigned to area 1 after clustering; when a to-be-positioned location fingerprint is the same as or similar to that fingerprint, the location corresponding to the to-be-positioned location fingerprint can be determined to be area 1.
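The distance-based matching performed by module 06 might be sketched as follows, assuming Wi-Fi RSSI dictionaries as the fingerprint features and illustrative area names; the actual features and distance measure are not fixed by this description.

```python
# Sketch: locating a to-be-positioned fingerprint by its distance to the
# clustered fingerprints. RSSI vectors and area names are illustrative.
import math

def rssi_distance(a: dict, b: dict) -> float:
    """Euclidean distance over the union of APs; an unseen AP defaults to -100 dBm."""
    aps = set(a) | set(b)
    return math.sqrt(sum((a.get(ap, -100) - b.get(ap, -100)) ** 2 for ap in aps))

def locate(query: dict, clustered: list) -> str:
    """clustered: list of (rssi_dict, area) pairs taken from the clustering result."""
    best = min(clustered, key=lambda item: rssi_distance(query, item[0]))
    return best[1]

clustered = [({"ap1": -40, "ap2": -80}, "area 1"),
             ({"ap1": -85, "ap2": -45}, "area 2")]
print(locate({"ap1": -42, "ap2": -78}, clustered))  # -> area 1 (nearest fingerprint)
```

The query is assigned to the area of its nearest clustered fingerprint, matching the similarity-based rule in the text.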
In general, the whole process by which the indoor positioning device completes indoor positioning can be divided into three stages: data acquisition, fingerprint clustering, and user positioning. Data acquisition refers to collecting to-be-clustered location fingerprints, fingerprint clustering refers to generating a clustering result from the to-be-clustered location fingerprints, and user positioning refers to determining, using the clustering result, the location corresponding to a to-be-positioned location fingerprint.
In the embodiment of the present application, the specific function of the location fingerprint acquisition module 01 may be implemented by the mobile communication module 150, the wireless communication module 160, the sensor module 180, or the like in the electronic device 100, where the sensor module 180 may specifically refer to the gyro sensor 180B, the acceleration sensor 180E, the geomagnetic sensor 180N, or the like. The specific functions of the special fingerprint detection module 02, the clustering module 04, and the positioning module 06 may be implemented by the processor 110 in the electronic device 100, and the specific functions of the fingerprint database 03 and the clustering result database 05 may be implemented by the internal memory 121 in the electronic device 100.
The following describes a specific flow of the indoor positioning method provided in the embodiment of the present application with reference to fig. 3 to fig. 7.
The indoor positioning method provided by the embodiment of the application can be divided into three stages:
Stage 1: data acquisition
Fig. 3 shows schematically a flow chart of the indoor positioning method involved in stage 1.
As shown in fig. 3, stage 1 may include the steps of:
s101, the electronic device 100 collects position fingerprints.
A location fingerprint is one or more features that represent a location in an actual environment. The electronic device 100 may trigger the acquisition of the location fingerprint in one or more of the following cases:
1) The electronic device 100 detects an operation of the user and collects a location fingerprint in response to the operation.
The operation may be a touch operation such as clicking, sliding, long pressing, etc. performed by the user on the display screen of the electronic device 100, or a voice command of the user, or a physical operation in which the user acts on the electronic device 100 to change the position or placement of the electronic device 100.
2) The electronic device 100 periodically captures a location fingerprint.
The electronic device 100 may acquire a location fingerprint once every interval, for example every 5 minutes. In this way, the electronic device 100 can be ensured to acquire location fingerprints uniformly, avoiding the situation in which the user moves to a certain area but no location fingerprint is acquired there.
3) The electronic device 100 triggers the acquisition of a location fingerprint when running a specific application.
The particular application may be an application predetermined by the electronic device 100, which may be an application requiring use of location information, such as a navigation-type application, a shopping-type application, and the like. Alternatively, the application may be an application that requires communication with a network and other devices using wireless communication technology, such as a conversational class application, a social class application, and the like.
4) The electronic device 100 triggers the acquisition of a location fingerprint when it is on a bright screen.
The electronic device 100 having a bright screen indicates that the user is using the electronic device 100; that is, the electronic device 100 may continuously trigger the collection of location fingerprints while in the working state, acquiring as many location fingerprints as possible for training the model.
It will be appreciated that the manner in which the electronic device 100 is triggered to collect a location fingerprint is not limited to the above.
In the embodiment of the present application, the electronic device 100 may collect location fingerprints based on the behavior of the user; that is, the location fingerprints collected by the electronic device 100 are divided according to user behavior, making it convenient for the electronic device 100 to distinguish special fingerprints from the large number of collected fingerprints. For example, when the electronic device 100 collects a location fingerprint upon detecting the user's operation of turning off the alarm clock, that location fingerprint may be recorded as the location fingerprint collected when the user turns off the alarm clock.
S102, the electronic device 100 judges whether the position fingerprint belongs to a special fingerprint.
A special fingerprint refers to a location fingerprint strongly related to a location, and the electronic device 100 may automatically determine the location corresponding to it. The electronic device 100 may judge whether a location fingerprint belongs to a special fingerprint according to whether the behavior corresponding to the location fingerprint is strongly related to a location, where a behavior strongly related to a location may refer to a behavior whose probability of occurring at that location is greater than a threshold. Behaviors strongly related to location may include, but are not limited to, the following three:
1) Interactive control class behavior
The interactive control class behavior may refer to the user's behavior of controlling other electronic devices in the indoor environment. In a home positioning scenario, the other electronic devices may be the various smart home devices placed in the home, such as a large screen, a television, a refrigerator, an air conditioner, a microwave oven, a range hood, a smart desk lamp, and so on. Controlling other electronic devices may refer to turning them on or off, or altering their configuration while they operate, and so on. For example, in a home positioning scenario, such behavior may be the user turning on the microwave oven, adjusting the brightness of the smart desk lamp, turning off the large screen, and so on. This is because other electronic devices in the indoor environment are typically fixed in placement; for example, in a home positioning scenario, large screens are typically located in living rooms and refrigerators in kitchens. Thus, when the electronic device 100 detects an interactive control class behavior of the user, the location corresponding to the location fingerprint can be automatically determined from that behavior.
Thus, when the electronic device 100 detects the user's behavior of controlling another electronic device, the currently acquired location fingerprint is a special fingerprint strongly related to location.
2) Application management class behavior
Application management class behavior may refer to specific behaviors of the user acting on the electronic device 100 in the indoor environment. For example, in a home positioning scenario, such a specific behavior may be turning off the wake-up alarm clock set on the electronic device 100, charging the electronic device 100 at night, and so on. This is because certain specific behaviors of the user on the electronic device 100 are generally initiated only in a specific area of the indoor environment; for example, in a home positioning scenario, turning off the alarm clock in bed and charging the device at night generally occur only in the bedroom. Therefore, when the electronic device 100 detects an application management class behavior, the location corresponding to the location fingerprint can be automatically determined from that class of behavior.
Thus, when the electronic device 100 detects an application management class behavior of the user, the currently collected location fingerprint is a special fingerprint strongly related to location.
3) Scene perception class behavior
Scene-aware class behavior may refer to behavior that a user does not directly act on the electronic device 100. The electronic device 100 may detect changes in or obtain other data after such actions are initiated by the user to determine where the user is currently located. For example, when a user moves with the electronic device 100, so that the electronic device 100 is connected to a specific wireless AP in an indoor environment, the electronic device 100 may automatically determine the location of the current user according to the specific wireless AP, where the specific wireless AP refers to a wireless AP placed in a designated area. Alternatively, when the user acts on the indoor PLC device, the electronic device 100 may determine the current user's location through the whole house intelligent system or a signal transmitted from the PLC device.
Thus, when the electronic device 100 detects a scene perception class behavior of the user, the currently acquired location fingerprint is a special fingerprint strongly related to location.
It should be noted that when triggering the above three kinds of behavior, the user may also act on other devices (e.g., thin devices) that have established a connection with the electronic device 100, but the device that actually collects the location fingerprint is the electronic device 100, and the electronic device 100 collects the current location fingerprint if and only if the other device the user acts on is close enough to the electronic device 100 (e.g., the distance is less than the sixth value). For example, when the electronic device 100 detects that the user turns on the air conditioner through the electronic device 100 itself, the electronic device 100 may collect the current location fingerprint; when the user turns off the early-morning alarm clock through the smart band, the electronic device 100 collects the current location fingerprint only when the distance between the smart band and the electronic device 100 is less than the threshold.
In a specific implementation, the electronic device 100 may preset a location rule and determine the location corresponding to a special fingerprint according to that rule. The location rule includes correspondences between user behaviors and locations, and may be obtained by developers from big data or from data research and analysis; the electronic device 100 may determine the actual location corresponding to a location fingerprint according to the user behavior corresponding to that fingerprint and the location rule. Table 1 exemplarily shows the correspondence between some user behaviors and locations in a home positioning scenario.
TABLE 1

| User behavior | Location |
| Controlling the smart TV | Living room |
| Sleep monitoring, turning off the early-morning alarm clock | Bedroom |
| Controlling cooking devices such as a microwave oven | Kitchen |
| Washing hands, controlling the smart toilet | Bathroom |
| Controlling the smart door lock, putting on or taking off shoes | Hallway |
It can be seen that, for the location fingerprint acquired when the user controls the smart TV through the electronic device 100, the corresponding location can be determined to be the living room. For the location fingerprint acquired when the electronic device 100 initiates sleep monitoring, the corresponding location can be determined to be the bedroom. When the electronic device 100 detects the user's behavior of turning off the early-morning alarm clock, the location corresponding to the collected location fingerprint can be determined to be the bedroom. When the electronic device 100 detects that the user controls a cooking device such as a microwave oven, the location corresponding to the collected location fingerprint can be determined to be the kitchen. When the electronic device 100 detects, through the band, the user's behavior of washing hands, the location corresponding to the collected location fingerprint can be determined to be the bathroom. When the electronic device 100 detects the user's operation of controlling the smart toilet, for example adjusting the temperature, the location corresponding to the collected location fingerprint can be determined to be the bathroom. When the electronic device 100 detects the user's behavior of controlling the door lock, the location corresponding to the collected location fingerprint can be determined to be the hallway. When the electronic device 100 detects, for example through a sensor on the band, that the user puts on or takes off shoes, the location corresponding to the collected location fingerprint can be determined to be the hallway.
It is to be understood that the correspondence between user behavior and location shown in table 1 is only an exemplary example, and is not to be construed as limiting the embodiments of the present application.
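A preset location rule such as Table 1 could be held as a simple lookup, as in the sketch below. The behavior keys are hypothetical identifiers, since the description does not fix an encoding for behaviors.

```python
# Sketch of the preset location rule from Table 1 as a lookup table.
# All behavior keys are illustrative assumptions.
LOCATION_RULES = {
    "control_smart_tv": "living room",
    "sleep_monitoring": "bedroom",
    "turn_off_morning_alarm": "bedroom",
    "control_microwave_oven": "kitchen",
    "wash_hands": "bathroom",
    "control_smart_toilet": "bathroom",
    "control_smart_door_lock": "hallway",
    "put_on_or_take_off_shoes": "hallway",
}

def label_special_fingerprint(behavior: str):
    """Return the location for a behavior strongly related to a location, else None."""
    return LOCATION_RULES.get(behavior)

print(label_special_fingerprint("control_microwave_oven"))  # -> kitchen
```

A behavior absent from the rule yields no label, i.e. the fingerprint stays an ordinary fingerprint.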
When the electronic device 100 determines that the location fingerprint belongs to a special fingerprint, the electronic device 100 performs step S103; otherwise, the location fingerprint is an ordinary fingerprint, and the electronic device 100 performs step S104.
It should be noted that, in this embodiment of the present application, in addition to behaviors in which the user acts on a device (for example, the user turning off the mobile phone's alarm clock) and behaviors of the user detected by a device (for example, a band detecting that the user is washing), a behavior may also refer to a behavior of a device itself. A device behavior may be one the device performs automatically, for example an electric cooker automatically starting to cook, or one the user triggers the device to perform, for example the air conditioner starting to cool after the user turns it on.
S103, the electronic device 100 increases the weight of the location fingerprint.
When the electronic device 100 determines that the location fingerprint is a special fingerprint, a weight may be added to the location fingerprint, where the weight is a parameter required when the electronic device 100 trains the clustering result. The specifics of training the clustering result are described below and are not expanded on here.
In some embodiments, different special fingerprints may be uniformly weighted; for example, all special fingerprints may be given a weight of 1. Alternatively, weights of different sizes may be set according to the corresponding locations; for example, in a home positioning scenario, the home environment can be divided into kitchen, living room, bedroom, and so on, and the weights corresponding to the kitchen, living room, and bedroom may respectively be 1, 2, and 3. Further, the magnitude of the weight corresponding to each location can be determined according to the number of special fingerprints likely to be collected there: the fewer special fingerprints a location is likely to yield, the larger the weight of its special fingerprints, and the more it is likely to yield, the smaller the weight. For example, consider the living room and the bathroom: since the user is active in the living room for far longer than in the bathroom, the number of special fingerprints collected in the living room may be greater than the number collected in the bathroom; therefore, when determining the weights, the weight of the special fingerprints corresponding to the living room may be smaller than that of the special fingerprints corresponding to the bathroom.
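The inverse relationship between a location's special-fingerprint count and its weight might be computed as in the following sketch; the counts and the exact normalization are illustrative assumptions, since the description only requires "fewer fingerprints, larger weight".

```python
# Sketch: weights inversely related to how many special fingerprints each
# location yields. The normalization keeps the count-weighted mean at 1.
def location_weights(counts: dict) -> dict:
    """Fewer special fingerprints at a location -> larger weight for its fingerprints."""
    total = sum(counts.values())
    return {loc: total / (len(counts) * n) for loc, n in counts.items()}

counts = {"living room": 60, "bathroom": 10, "kitchen": 30}  # illustrative counts
w = location_weights(counts)
# The bathroom, with the fewest special fingerprints, gets the largest weight.
assert w["bathroom"] > w["living room"]
```

This matches the living-room/bathroom example above: the rarely visited bathroom contributes fewer special fingerprints, so each one counts for more during training.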
S104, the electronic device 100 stores the location fingerprint in the fingerprint database.
When the electronic device 100 needs to cluster the position fingerprints, the position fingerprints can be obtained from the fingerprint database, and the clustering result can be obtained by using the position fingerprints.
Stage 2: fingerprint clustering
Fig. 4 illustrates a schematic flow diagram of the indoor positioning method involved in stage 2.
As shown in fig. 4, stage 2 includes:
S201, the electronic device 100 acquires a location fingerprint set.
The electronic device 100 may obtain a set of location fingerprints from a fingerprint database, the set of location fingerprints comprising a plurality of location fingerprints. In some embodiments, the electronic device 100 may begin clustering the location fingerprints when the number of special fingerprints in the fingerprint database reaches a threshold (e.g., 30). That is, the electronic device 100 may start to perform step S201 when the number of special fingerprints in the fingerprint database reaches a threshold.
S202, the electronic device 100 performs adaptive clustering on the location fingerprint set and obtains a clustering result.
In this embodiment, the adaptive clustering includes two clustering passes: the first pass clusters using the special fingerprints in the location fingerprint set, and the second pass clusters using the entire location fingerprint set. The advantage of clustering twice, with special fingerprints in the first pass and all location fingerprints in the second, is that it reduces the dependence of the clustering algorithm on the number of special fingerprints and yields a more accurate positioning effect.
Clustering the location fingerprints specifically refers to classifying them into one or more categories according to their similarity, where the similarity between location fingerprints within a category is high. In this embodiment of the application, a category may also be referred to as a cluster, a group, or the like, which is not limited by the embodiment of the application.
The detailed process of the first clustering may refer to the flowchart shown in fig. 5. As shown in fig. 5, the first clustering specifically includes:
S301, the electronic device 100 extracts a special fingerprint set D_S from the location fingerprint set D, and randomly selects from D_S an unvisited special fingerprint x_k as the center point.

Wherein D = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}, x_i ∈ R^d, i ∈ (1, 2, 3, ..., n), and D_S = {(x_1, y_1), (x_2, y_2), ..., (x_m, y_m)}, m < n.

x_i represents a location fingerprint, n represents the number of location fingerprints in the location fingerprint set, and m represents the number of special fingerprints in the location fingerprint set. x_i ∈ R^d indicates that the parameters of a location fingerprint are d-dimensional. y_i, i ∈ (1, 2, 3, ..., n), indicates whether the location fingerprint is a special fingerprint; illustratively y_i ∈ {0, 1}, where y_i = 0 may indicate that the location fingerprint is a common fingerprint and y_i = 1 may indicate that it is a special fingerprint.
S302, the electronic device 100 finds the special fingerprint set D_Sk within distance h of the center point x_k.

Specifically, the electronic device 100 may construct a d-dimensional sphere centered on x_k with radius h; all special fingerprints contained in the sphere form the special fingerprint set D_Sk. Here h is the bandwidth, and its size may be a parameter value preset by the electronic device 100. For example, the electronic device 100 may calculate the distance between every two special fingerprints among all special fingerprints, order these distances from small to large, and take the lower quartile (25%) as the value of h.
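The quartile-based bandwidth selection just described can be sketched as below; taking the sorted distance at index len//4 rather than an interpolated quartile is a simplifying assumption:

```python
import math
from itertools import combinations

def bandwidth(special_fps):
    """Return h as the lower quartile (25%) of the pairwise Euclidean
    distances between all special fingerprints (index-based quartile,
    without interpolation, as a simplification)."""
    dists = sorted(math.dist(a, b) for a, b in combinations(special_fps, 2))
    return dists[len(dists) // 4]

h = bandwidth([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)])
```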
S303, the electronic device 100 calculates the vectors from the center point x_k to each special fingerprint in the set D_Sk and sums them to obtain the offset vector M_h(x).

M_h(x) can be obtained by Equation 1:

M_h(x) = (1/S) · Σ_{x_i ∈ D_Sk} w_i · G((x_i − x_k)/h) · (x_i − x_k)   (Equation 1)

where x_k is the special fingerprint at the center point, x_i is a special fingerprint in the set D_Sk, S is the number of special fingerprints in D_Sk, h is the bandwidth, G(·) is the drift vector kernel function, and w_i is the weight of the special fingerprint x_i.
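Assuming a Gaussian kernel for G (the embodiment only names a "drift vector kernel function", so the kernel choice here is an assumption), the Equation 1 offset vector and the Equation 2 center-point update can be sketched as:

```python
import math

def offset_vector(x_k, neighbors, weights, h):
    """Offset vector M_h(x) of Equation 1: weighted, kernel-smoothed sum of
    the vectors from the center point x_k to each special fingerprint in
    D_Sk, averaged over S = |D_Sk|."""
    S = len(neighbors)
    m = [0.0] * len(x_k)
    for x_i, w_i in zip(neighbors, weights):
        g = math.exp(-((math.dist(x_i, x_k) / h) ** 2) / 2)  # Gaussian kernel (assumed)
        for j in range(len(x_k)):
            m[j] += w_i * g * (x_i[j] - x_k[j]) / S
    return m

# Equation 2: the new center point x_{k+1} = x_k + M_h(x)
x_k = (0.0, 0.0)
m = offset_vector(x_k, neighbors=[(1.0, 0.0), (0.0, 1.0)], weights=[1.0, 1.0], h=2.0)
x_next = tuple(c + d for c, d in zip(x_k, m))
```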
S304, the electronic device 100 determines a new center point x_{k+1} according to the offset vector M_h(x).

x_{k+1} can be obtained according to Equation 2:

x_{k+1} = x_k + M_h(x)   (Equation 2)

where x_k represents the special fingerprint at the original center point, x_{k+1} represents the redefined center point, and M_h(x) represents the offset vector.
S305, the electronic device 100 determines whether the center point x_{k+1} has converged.

If x_{k+1} has not converged, the electronic device 100 performs step S302 again to determine the special fingerprint set D_S(k+1) for the new center point x_{k+1}, thereby recalculating a new center point; if x_{k+1} has converged, the electronic device 100 performs step S306.
After the electronic device 100 determines convergence, the center point of the current cluster and the plurality of special fingerprints centered on it can be determined; these special fingerprints together belong to one category. In addition, after the electronic device 100 determines the center point of the current cluster, it may determine whether the distance between this center point and any previously determined center point is less than a threshold ε; if so, it merges the classes of the two center points whose distance is less than ε and re-determines the center point of the merged class. That is, if two classes are close enough together, they may be merged into one larger class.
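The ε-merging of nearby centers can be sketched as follows; replacing a merged pair by its midpoint is a simplification of re-determining the merged class's center from its fingerprints:

```python
import math

def merge_close_centers(centers, eps):
    """Merge any center that lies within eps of an already-kept center,
    taking the midpoint as the new center of the merged class."""
    merged = []
    for c in centers:
        for i, m in enumerate(merged):
            if math.dist(c, m) < eps:
                merged[i] = tuple((a + b) / 2 for a, b in zip(m, c))
                break
        else:
            merged.append(tuple(c))
    return merged

# The two nearby centers collapse into one; the distant one is kept.
centers = merge_close_centers([(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)], eps=0.5)
```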
S306, the electronic device 100 determines whether all special fingerprints in the special fingerprint set D_S have been visited.
When the electronic device 100 determines that all special fingerprints have been visited, it performs step S307; otherwise, it performs step S301 again to select a new center point among the unvisited special fingerprints, find the corresponding special fingerprint set according to that center point, and update the center point of the set.
S307, the electronic device 100 obtains L categories and the center point of each category.

In the course of repeatedly performing S301-S305, the electronic device 100 may determine a plurality of center points x_1, x_2, x_3, ..., x_L, where L represents the number of categories. The center point of each category is specifically the mean value of the location fingerprints contained in that category.
S308, the electronic device 100 deletes any category whose number of special fingerprints is smaller than a threshold, together with its center point, thereby obtaining L' categories.
When the electronic device 100 determines that the number of special fingerprints in a category is small (e.g., less than the fourth value), the category may be deleted. This avoids a few erroneous special fingerprints affecting the accuracy of indoor positioning. Illustratively, the threshold may be 5. After the electronic device 100 clears the categories that contain few special fingerprints, it may obtain L' categories, where L' ≤ L.
It is understood that when none of the L categories acquired by the electronic device 100 has a number of special fingerprints smaller than the threshold, the electronic device 100 may skip step S308.
S309, the electronic device 100 determines the label of each of the L' categories.
The label is the actual position corresponding to the category. Illustratively, in a home location scenario, the tag may be a kitchen, bedroom, study, restaurant, living room, restroom, lobby, and the like. Specifically, the electronic device 100 may determine, as the tag of the category, a location corresponding to a special fingerprint having the highest occurrence frequency in the category. For example, when the bedroom accounts for the highest proportion in the positions corresponding to all the special fingerprints in one category, the label of the category is the bedroom.
In this way, the electronic device 100 can divide all the collected special fingerprints into a plurality of categories, and each category corresponds to a tag, that is, the actual location corresponding to the category.
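The majority-vote labeling of S309 can be sketched as below; the location names are illustrative:

```python
from collections import Counter

def category_label(special_fp_locations):
    """Return the location that occurs most often among the special
    fingerprints assigned to a category (S309)."""
    return Counter(special_fp_locations).most_common(1)[0][0]

# The bedroom accounts for the highest proportion, so it becomes the label.
label = category_label(["bedroom", "bedroom", "living room"])
```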
As can be seen from steps S301 to S309, the electronic device 100 applies mean shift clustering to all the special fingerprints to complete the first clustering pass, obtaining L' categories and the center point of each category.
In some embodiments, the electronic device 100 may perform user positioning using only the L' categories obtained by the first clustering and the center point of each category. That is, the electronic device 100 may obtain a clustering result using only special fingerprints and complete positioning with that result; in this case, the accuracy of the clustering result can be improved by collecting a large number of special fingerprints.
The detailed process of the second clustering may refer to the flowchart shown in fig. 6. As shown in fig. 6, the second clustering specifically includes:

S401, the electronic device 100 takes the L' center points obtained by mean shift clustering as the initial cluster centers.
The electronic device 100 uses the center point in the L' categories obtained by the first clustering as the initial clustering center of the second clustering. The process of the first clustering may be referred to in fig. 5 and the related description, and will not be repeated here.
S402, the electronic device 100 calculates the distance from each location fingerprint in the location fingerprint set D to the L' cluster centers, and assigns each fingerprint to the category of the cluster center at the smallest distance.
Wherein the location fingerprint set D = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}. For any location fingerprint x_i in D, the electronic device 100 may calculate the distances d_1, d_2, d_3, ..., d_L' from x_i to the L' center points and assign x_i to the category of the center point at the smallest distance.
That is, the electronic device 100 may divide the location fingerprint into categories closest to the cluster center according to the principle of closest distance. Thus, all location fingerprints can be categorized into these L' categories.
S403, the electronic device 100 recalculates the clustering center of each category.
After the electronic device 100 re-partitions all the location fingerprints into L' categories, it may recalculate the cluster center of each category, since the location fingerprints contained in each category have changed. The cluster center is the centroid of all location fingerprints in the category; specifically, the centroid of each category can be calculated by taking the mean over each vector dimension.
S404, the electronic device 100 determines whether the stopping condition is reached.

The stopping condition may be a preset number of iterations, a minimum error change, etc., which is not limited in the embodiment of the present application.

When the electronic device 100 determines that the stopping condition is reached, it performs step S405; otherwise, it performs step S402 again to recalculate the distance from each location fingerprint to the cluster centers, re-partition all location fingerprints according to these distances, and recalculate the cluster centers.
S405, the electronic device 100 obtains L' categories and clustering centers of each category.
After the electronic device 100 completes the clustering, the L' categories and the cluster center of each category may be obtained. These are the parameters included in the clustering result obtained by the electronic device 100 after the two clustering passes: each category corresponds to one location, and the cluster center of each category reflects the correspondence between location fingerprints and locations.
It can be seen that in the second clustering pass, the electronic device 100 clusters all the location fingerprints using the k-means clustering algorithm, thereby obtaining L' categories and the cluster center of each category. The clustering result obtained by the electronic device 100 may be the plurality of categories and the cluster center corresponding to each category, or it may also be the plurality of to-be-clustered location fingerprints used in clustering together with the category corresponding to each of them.
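Steps S401-S405 amount to k-means seeded with the mean shift centers. A minimal sketch follows; using assignment stability (or an iteration cap) as the stopping condition is one of the options the embodiment leaves open:

```python
import math

def kmeans(points, centers, max_iter=100):
    """k-means seeded with the L' center points from the first pass."""
    centers = [tuple(c) for c in centers]
    for _ in range(max_iter):
        # S402: assign each fingerprint to the nearest cluster center
        labels = [min(range(len(centers)),
                      key=lambda j: math.dist(p, centers[j])) for p in points]
        # S403: recompute each center as the per-dimension mean (centroid)
        new_centers = []
        for j in range(len(centers)):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                new_centers.append(tuple(sum(dim) / len(members)
                                         for dim in zip(*members)))
            else:
                new_centers.append(centers[j])  # keep an empty cluster's center
        if new_centers == centers:  # S404: centers stable, stop
            break
        centers = new_centers
    return labels, centers

points = [(0.0, 0.0), (0.2, 0.0), (4.0, 4.0), (4.2, 4.0)]
labels, centers = kmeans(points, [(0.0, 0.0), (4.0, 4.0)])
```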
S203, the electronic device 100 checks and repartitions the clustering result.
Since the initial cluster centers and the L' categories used in the second clustering pass are determined by the special fingerprints used in the first pass, and special fingerprints are sparse, there may be actual areas that temporarily contain no special fingerprints but only common fingerprints; such areas would be erroneously merged into other areas. For example, in a home positioning scene, suppose the electronic device 100 actually collects special fingerprints of the bedroom and the living room, and common fingerprints of the bedroom, the living room and the kitchen. Since the first clustering can only obtain the two categories of bedroom and living room, the second clustering would mistakenly classify the kitchen fingerprints into the bedroom or living room category, resulting in an inaccurate clustering result.
Thus, the electronic device 100 may verify and re-partition the clustering result to avoid classifying location fingerprints into erroneous categories. Illustratively, the electronic device 100 may use the contour coefficient S(x_i) to judge the clustering effect of a common fingerprint; if the contour coefficient is smaller than a threshold (for example, 0.5), the common fingerprint is removed from its original category and re-labeled as a location fingerprint to be classified. Then, as special fingerprints accumulate and the clustering result is updated, the location fingerprints to be classified are divided into the correct categories, so as to update the cluster centers of the clustering result again.
The contour coefficient S(x_i) indicates whether the clustering effect is good or bad: the larger S(x_i), the better the clustering effect; the smaller S(x_i), the worse. S(x_i) can be obtained from Equation 3:

S(x_i) = (b(x_i) − a(x_i)) / max(a(x_i), b(x_i))   (Equation 3)

where x_i represents a location fingerprint, a(x_i) represents the average distance from x_i to the other location fingerprints of the category to which it belongs, and b(x_i) represents the minimum, over the categories to which x_i does not belong, of the average distance from x_i to the location fingerprints of that category.

It can be seen that the contour coefficient S(x_i) reflects how close the location fingerprint x_i is to the other location fingerprints of its category relative to those of other categories: the closer x_i is to its own category, the larger S(x_i); the farther, the smaller. The clustering effect of x_i is considered good if and only if S(x_i) is greater than a threshold (e.g., a fifth value).
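Equation 3 can be sketched directly; the cluster contents below are illustrative:

```python
import math

def contour_coefficient(x, own_category, other_categories):
    """Equation 3. a(x): mean distance from x to the other fingerprints of
    its own category; b(x): smallest mean distance from x to the
    fingerprints of any other category."""
    a = sum(math.dist(x, p) for p in own_category) / len(own_category)
    b = min(sum(math.dist(x, p) for p in cat) / len(cat)
            for cat in other_categories)
    return (b - a) / max(a, b)

# A fingerprint near its own category and far from the other one
# yields a coefficient close to 1 (good clustering effect).
s = contour_coefficient((0.0, 0.0),
                        own_category=[(0.1, 0.0), (0.0, 0.1)],
                        other_categories=[[(5.0, 5.0), (5.1, 5.0)]])
```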
It can be appreciated that the electronic device 100 may also determine whether the clustering effect is good or bad in other manners, which is not limited in the embodiment of the present application.
S204, the electronic device 100 stores the clustering result in a clustering result database.
After the electronic device 100 finally obtains the clustering result, the clustering result can be stored in a clustering result database, after the next time the electronic device 100 collects enough position fingerprints, the clustering can be performed again, the clustering result is determined again, and the clustering result stored in the clustering result database is updated.
Stage 3: user positioning
Fig. 7 shows a detailed flow of stage 3 in the indoor positioning method provided in the embodiment of the present application, as shown in fig. 7, stage 3 includes:
S501, the electronic device 100 obtains a location fingerprint to be located.
The electronic device 100 may trigger the acquisition of a location fingerprint and use the location fingerprint to perform positioning after receiving the operation of triggering positioning by the user. Alternatively, the electronic device 100 may automatically acquire a location fingerprint after training of the model is completed, and locate the current location of the user using the location fingerprint. The embodiment of the application does not limit the triggering time of the electronic device 100 to trigger the acquisition of the position fingerprint to be positioned. The description of the location fingerprint may be referred to in the foregoing, and will not be repeated here.
S502, the electronic device 100 determines the category to which the location fingerprint to be located belongs according to the clustering result.
The electronic device 100 may determine the category to which the location fingerprint to be located belongs according to the similarity between the location fingerprint to be located and the location fingerprint used when the fingerprints are clustered. Specifically, the electronic device 100 may determine the category to which the location fingerprint to be located belongs as the category to which the location fingerprint having the greatest similarity belongs. Alternatively, the electronic device 100 may determine the category to which the position fingerprint to be located belongs according to the distances between the position fingerprint to be located and all the cluster centers in the cluster result. Specifically, the electronic device 100 may use a category of the closest cluster center as a category to which the location fingerprint to be located belongs.
Optionally, the electronic device 100 may further determine a distance or similarity between the to-be-located position fingerprint and the to-be-clustered position fingerprint used in clustering, and the electronic device 100 may determine a category of the to-be-clustered position fingerprint with the closest distance or the highest similarity as a category of the to-be-located position fingerprint, so as to achieve more accurate location.
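The nearest-cluster-center variant of S502 can be sketched as below; the fingerprint values, centers, and labels are illustrative:

```python
import math

def locate(fingerprint, cluster_centers, labels):
    """Assign the fingerprint to be located to the category whose cluster
    center is nearest, and return that category's label (the actual
    location)."""
    nearest = min(range(len(cluster_centers)),
                  key=lambda j: math.dist(fingerprint, cluster_centers[j]))
    return labels[nearest]

# The fingerprint lies closest to the second center, so its location
# is determined to be the restaurant.
room = locate((3.9, 4.1),
              cluster_centers=[(0.1, 0.0), (4.1, 4.0)],
              labels=["living room", "restaurant"])
```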
After the electronic device 100 determines the category to which the location fingerprint to be located belongs, the location corresponding to that fingerprint can be determined. The electronic device 100 may then further check, in combination with the contour coefficient, whether the positioning result is accurate; for this checking process, see the following steps S503 to S505.
S503, the electronic device 100 determines the contour coefficient of the position fingerprint to be located.
Specifically, the electronic device 100 may calculate the contour coefficient of the location fingerprint to be located using Equation 3 above. For a description of the contour coefficient, reference may be made to the related content of Equation 3, which is not repeated here.
S504, the electronic device 100 judges whether the contour coefficient is larger than a threshold value.
When the electronic device 100 determines that the contour coefficient is greater than the threshold, it indicates that the positioning result is correct, otherwise, it indicates that the positioning result is incorrect, and the electronic device 100 may execute step S505.
S505, the electronic device 100 refuses to locate the location fingerprint to be located.
When the contour coefficient is smaller than the threshold, the classification of the location fingerprint to be located by the electronic device 100 is not accurate enough, possibly because the fingerprint does not belong to any category in the clustering result; in this case the electronic device 100 does not perform positioning, avoiding positioning errors as far as possible.
It can be appreciated that steps S503 to S505 are optional. In this embodiment of the application, the electronic device 100 may directly determine the location corresponding to the location fingerprint to be located using the clustering result; or, further, after determining that location, the electronic device 100 may evaluate the positioning effect and, if the effect is poor, refuse to locate the location fingerprint to be located.
It should be noted that the execution of stages 1-3 may be completed by different devices. For example, the collection of location fingerprints in stage 1 may be completed by a first device, which sends the collected location fingerprints to a second device; the second device may generate a clustering result from the collected location fingerprints and send it to a third device; and the third device may determine the location corresponding to the location fingerprint to be located according to the clustering result, thereby achieving positioning of the user. The first device, the second device, and the third device may all be the same device, i.e., the electronic device 100; or the first device and the second device may be the same device while the third device is the electronic device 100, in which case the electronic device 100 may obtain a clustering result determined by another device and complete user positioning accordingly. It can be appreciated that the embodiments of the present application do not limit the number of execution subjects of the indoor positioning method or the content executed by each.
In general, the indoor positioning method can automatically determine the location corresponding to a location fingerprint based on the user's behavior, avoiding manual construction of a location fingerprint database by the user, reducing user operations, and expanding the application scenarios of indoor positioning. In addition, the method adopts an adaptive clustering algorithm that combines the mean shift clustering algorithm and the k-means clustering algorithm to cluster location fingerprints, improving the accuracy of fingerprint clustering and enabling the user to obtain a more accurate positioning result.
The indoor positioning process described above is exemplarily described below with reference to fig. 8.
Fig. 8 (a) shows all location fingerprints collected by the electronic device 100 for clustering. After all location fingerprints are classified, the special fingerprints and common fingerprints among them are shown in fig. 8 (b). Then the electronic device 100 clusters all the special fingerprints using the mean shift clustering algorithm; fig. 8 (c) shows the three categories and three cluster centers obtained after clustering the special fingerprints. Next, the electronic device 100 clusters all the location fingerprints using the k-means clustering algorithm, taking the cluster centers determined by mean shift as the initial cluster centers of the k-means clustering, thereby obtaining the final cluster centers; fig. 8 (d) shows the three categories and three cluster centers determined by the electronic device 100 using the k-means clustering algorithm. The electronic device 100 then determines the location corresponding to each category according to the location of the special fingerprint with the highest occurrence frequency in that category; fig. 8 (e) shows the locations corresponding to the three categories determined by the electronic device 100: living room, restaurant, study. Finally, after obtaining a location fingerprint to be located, the electronic device 100 may determine the corresponding location according to the distance between that fingerprint and each cluster center. Fig. 8 (f) shows that the location fingerprint to be located is closest to the cluster center of "restaurant", and thus its location is determined as the restaurant.
The indoor positioning method provided by the embodiment of the application can have a plurality of application scenes.
The indoor positioning method can be applied to the home field. In a home, a plurality of electronic products, such as multiple smart home devices, are placed, and a user can control them so that they provide home services for the user. When the user controls an electronic product, the electronic device 100 can acquire the user's control behavior and collect a location fingerprint based on that behavior. For example, the user controls the rice cooker to start cooking through the electronic device 100; at this time, the electronic device 100 may acquire the current strength of the Wi-Fi signal and use it as the location fingerprint of the user's current location.
In this embodiment of the present application, the electronic device 100 may use a clustering algorithm to cluster collected position fingerprints, so as to obtain a clustering result capable of indicating a corresponding relationship between a position fingerprint and a position, and finally, when the electronic device 100 obtains a position fingerprint to be located, the position corresponding to the position fingerprint to be located may be determined according to the relationship between the position fingerprint to be located and the position fingerprint to be clustered collected before the clustering of the fingerprints, through the clustering result.
Fig. 9 and 10 exemplarily show two application scenarios of the indoor positioning method in the home field.
With the popularization of intelligent voice devices, many homes now have them; a user can remotely control devices by voice without touching them, which brings convenience to daily life. However, in a home with multiple intelligent voice devices, one voice command may cause several of them to respond at the same time, causing confusion in voice remote control and degrading the user's experience. Using the indoor positioning method provided by the embodiment of the application, the clustering result for the home's indoor environment can be determined; when a user issues a voice command to control an intelligent voice device, the user's current position can be determined from the user's current location fingerprint and the clustering result, so that only the intelligent devices in the same room or area as the user respond to the voice command and complete the corresponding operation. This helps the user better control the smart devices in the home and provides a more intelligent living experience.
As shown in fig. 9, fig. 9 illustrates a living room and restaurant environment in a home, where the living room and the restaurant are each equipped with a smart air conditioner. The user is currently in the living room and uses the voice command "Xiaoyi, Xiaoyi, turn on the air conditioner" to make the intelligent voice device 001 turn on an air conditioner in the home. The user's mobile phone, i.e. the electronic device 002, determines by collecting a location fingerprint that the user is currently in the living room, and thus controls the intelligent voice device 001 to turn on only the air conditioner in the living room. After turning it on, the intelligent voice device 001 outputs the voice response "OK, the living room air conditioner has been turned on". In this way, when there are multiple air conditioners in the home, the intelligent voice device does not need to ask the user again which air conditioner to turn on, providing a more intelligent living experience. In this embodiment of the application, the smart air conditioners in the living room and the restaurant may also be referred to as third devices, the turn-on operation of the air conditioner may also be referred to as a third operation, and the living room where the user is located may also be referred to as a first location.
In addition, when the user is at home, targeted home services can be provided in combination with the user's position; for example, the user's mobile phone can push services targeted at the area the user is in as the user moves, improving the home experience. With the method provided by the embodiment of the application, the clustering result of the home's indoor environment can be determined; after that, the user's position can be determined at any moment from the user's location fingerprint, and when the user enters a certain area, one or more items such as information, applications, schedule activities, or notices related to that area, in the form of links, pictures, icons, videos, audio, or text, can be pushed to the user.
As shown in fig. 10, fig. 10 illustrates a kitchen environment in a home. When a user enters the kitchen, it is generally to cook; therefore, when detecting that the user's position has switched to the kitchen, the user's mobile phone, i.e. the electronic device 003, can display information related to that position, for example recommending items such as "teach you to cook eggs" or "recommended recipes", which the user can tap to view in detail, or recommending cooking applications the user can open to look up recipes.
It can be understood that the indoor positioning method provided by the embodiment of the application is not limited to the home field; it can also be applied to the office field, the learning field, the tourism field, and so on, for example positioning within a company's indoor environment in the office field, within a teaching building in the learning field, or within museums and indoor exhibitions in the tourism field. The application field of the indoor positioning method is not limited.
Fig. 11 shows a flow chart of an indoor positioning method according to an embodiment of the present application.
As shown in fig. 11, the indoor positioning method includes:
s601, the electronic device 100 acquires a position fingerprint to be clustered.
A location fingerprint associates a location in the actual environment with some kind of "fingerprint" that the electronic device 100 can use to characterize that actual location; for example, the electronic device 100 may characterize an actual location using access point or base station information, signal strength, signal round-trip time or delay time, and so on. The description of the location fingerprint may be referred to in the foregoing, and is not repeated here.
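As a minimal sketch of this representation (the BSSID values, the RSSI floor value, and the fixed-order vector layout are illustrative assumptions, not part of the application), a Wi-Fi-based location fingerprint can be stored as a mapping from access-point identifiers to signal strengths and flattened into a vector so that fingerprints can later be compared and clustered:

```python
# Hypothetical fingerprint representation: BSSID -> RSSI (dBm).
from typing import Dict, List

def fingerprint_to_vector(scan: Dict[str, float], ap_order: List[str],
                          missing_rssi: float = -100.0) -> List[float]:
    """Flatten one Wi-Fi scan into a fixed-order vector; access points
    absent from the scan are filled with a floor RSSI value."""
    return [scan.get(bssid, missing_rssi) for bssid in ap_order]

ap_order = ["aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", "aa:bb:cc:00:00:03"]
kitchen_scan = {"aa:bb:cc:00:00:01": -45.0, "aa:bb:cc:00:00:03": -72.0}
print(fingerprint_to_vector(kitchen_scan, ap_order))
# [-45.0, -100.0, -72.0]
```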
The electronic device 100 may trigger to acquire a location fingerprint when receiving an operation of a user, or may periodically trigger to acquire a location fingerprint, or may trigger to acquire a location fingerprint when a specific application is running, or may trigger to acquire a location fingerprint when the electronic device 100 is in a bright screen state. It can be appreciated that the triggering time of the electronic device 100 to acquire the location fingerprint is not limited in the embodiments of the present application, and the description of the electronic device 100 to acquire the location fingerprint to be clustered can be referred to the foregoing, which is not repeated here.
In the embodiment of the present application, the electronic device 100 may also be referred to as a first device.
S602, the electronic device 100 marks special fingerprints from the position fingerprints to be clustered, and performs first clustering on the special fingerprints to obtain N categories and a center point of each category.
The special fingerprint refers to a position fingerprint strongly correlated with a position among the position fingerprints. The electronic device 100 may combine the behavior of the user when acquiring the location fingerprint to determine whether the location fingerprint is a special fingerprint.
Specifically, the electronic device 100 may determine whether a location fingerprint is a special fingerprint according to whether the behavior corresponding to that fingerprint is a behavior strongly related to location. Behaviors strongly related to location may include, but are not limited to, the following three classes: interaction control behaviors, application management behaviors, and scene perception behaviors. For a specific description of these three classes of behavior, reference may be made to the foregoing, and no further description is given here.
In other words, the electronic device 100 may determine whether the location fingerprint is a special fingerprint according to whether the behavior corresponding to the location fingerprint is a specific behavior preset by the electronic device 100. In addition, the electronic device 100 may preset an actual position corresponding to the specific behavior; when the electronic device 100 acquires a special fingerprint, i.e. a location fingerprint acquired while the user performs the specific behavior, the electronic device 100 determines the position corresponding to that special fingerprint as the actual position corresponding to the specific behavior.
The electronic device 100 may cluster the special fingerprints using a mean-shift clustering algorithm to obtain N categories and a center point for each category. The electronic device 100 may determine the label of each category as the location that occurs most frequently among the locations corresponding to the special fingerprints included in that category. For a specific description of the mean-shift clustering algorithm, refer to fig. 5 and its related content, not repeated here.
In the embodiments of the present application, the location fingerprints to be clustered may include one or more fingerprints, for example, a second location fingerprint, a fourth location fingerprint, a fifth location fingerprint, and so on. The second location fingerprint and the fourth location fingerprint may be special fingerprints acquired by the electronic device 100 when the first behavior and the second behavior occur, respectively. The fifth location fingerprint may be an ordinary fingerprint acquired by the electronic device 100 when no specified behavior, such as the first or second behavior, occurs.
S603, the electronic device 100 uses the center point obtained by the first clustering as an initial clustering center of the second clustering, clusters the position fingerprints to be clustered, and obtains N categories and clustering centers of each category.
The electronic device 100 may use a k-means clustering algorithm to cluster all the position fingerprints to be clustered. Here, the electronic device 100 may use the center points of the N categories obtained in the first clustering as the initial cluster centers of this second clustering, so as to obtain N categories and the cluster center of each category, where the labels of the N categories are the labels determined during the first clustering. The description of the k-means clustering algorithm can be found in fig. 6 and its related content, not repeated here.
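The seeded second clustering pass of steps S602 and S603 can be sketched as follows. This is a toy illustration with invented two-dimensional fingerprints and seed centers: the application's first pass uses mean-shift on the special fingerprints (omitted here), and only the k-means pass initialized with those centers is shown, so that the labels determined in the first pass carry over by center order.

```python
import numpy as np

def seeded_kmeans(X, centers, n_iter=20):
    """k-means over all fingerprints, seeded with the center points from
    the first (special-fingerprint) clustering pass; because center order
    is preserved, cluster k keeps the label of seed center k."""
    centers = np.asarray(centers, dtype=float).copy()
    assign = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assign each fingerprint to its nearest center (Euclidean distance)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # move each center to the mean of its assigned fingerprints
        for k in range(len(centers)):
            pts = X[assign == k]
            if len(pts):
                centers[k] = pts.mean(axis=0)
    return assign, centers

# Two seed centers from the first pass, labelled e.g. "kitchen", "bedroom".
seeds = [[-45.0, -80.0], [-85.0, -40.0]]
X = np.array([[-44, -79], [-46, -82], [-84, -41], [-86, -39]], dtype=float)
assign, centers = seeded_kmeans(X, seeds)
print(assign)  # fingerprints 0,1 fall in cluster 0; 2,3 in cluster 1
```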
After the two clustering passes, the electronic device 100 obtains a clustering result determined from the position fingerprints to be clustered; the clustering result divides the position fingerprints into N categories, each category corresponding to one label and having one cluster center. After the electronic device 100 obtains a position fingerprint to be located, the actual position corresponding to that fingerprint can be determined according to the category to which it belongs.
S604, the electronic device 100 acquires a position fingerprint to be positioned.
The electronic device 100 may trigger to obtain a position fingerprint to be located after receiving an operation that a user triggers to locate, or automatically obtain the position fingerprint to be located after the electronic device 100 obtains a clustering result. The triggering time of the electronic device 100 to trigger the acquisition of the position fingerprint to be located may refer to the description related to the acquisition of the position fingerprint to be clustered by the electronic device 100 in step S601, which is not repeated herein.
S605, the electronic device 100 determines the distance between the position fingerprint to be positioned and the clustering center of each category, and determines the category to which the clustering center with the minimum distance belongs as the actual position corresponding to the position fingerprint to be positioned.
The electronic device 100 may determine, according to the clustering result, the category to which the position fingerprint to be located belongs. Specifically, the electronic device 100 computes the distance between the position fingerprint to be located and each cluster center; the category corresponding to the cluster center closest to the fingerprint is the category to which the fingerprint belongs, and the electronic device 100 can then determine the actual position corresponding to the fingerprint from the label of that category.
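The nearest-center lookup of step S605 reduces to a one-liner. In this sketch the cluster centers, labels, and fingerprint values are invented for illustration:

```python
import math

def locate(fp, cluster_centers):
    """Return the label of the cluster center nearest to the fingerprint
    to be located (step S605): minimum Euclidean distance wins."""
    return min(cluster_centers,
               key=lambda label: math.dist(fp, cluster_centers[label]))

centers = {"kitchen": [-45.0, -80.0], "bedroom": [-85.0, -40.0]}
print(locate([-48.0, -77.0], centers))  # kitchen
```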
After the electronic device determines the position corresponding to a position fingerprint to be located, it may perform a corresponding operation based on that position to provide a targeted service for the user. For example, after the electronic device 100 determines that a first position fingerprint to be located corresponds to a first position, it may perform a first operation; after the electronic device 100 determines that a third position fingerprint to be located corresponds to a second position, it may perform a second operation. The first operation and the second operation may be different operations performed when the electronic device 100 is located at different indoor positions, i.e. the first position and the second position.
In some embodiments, the electronic device 100 may trigger to obtain the location fingerprint to be located after receiving the operation of controlling the plurality of intelligent devices by the user's voice, and control the intelligent devices in the same room or area as the user to respond to the voice command of the user after the electronic device 100 determines the actual location corresponding to the location fingerprint to be located, so as to provide services for the user. In this way, the electronic device 100 can help the user better control devices around the user in combination with the location of the user, and provide a more intelligent service experience for the user. A related scene description specifically regarding this embodiment can be seen in fig. 9 and its related content.
In some embodiments, the electronic device 100 may display information related to an actual location, such as articles, videos, information, pictures, links, applications, etc., after determining the actual location corresponding to the location fingerprint to be located. Thus, the electronic device 100 can provide more targeted application services for users, and users can view the related information of the current actual position without additional searching or searching operation, so that the operation of the users is simplified, and the experience of the users is improved. A related scene description specifically regarding this embodiment can be seen in fig. 10 and its related content.
In some embodiments, after the electronic device 100 completes indoor positioning, the electronic device 100 may further continuously obtain a position fingerprint (for example, a sixth position fingerprint) to be clustered, update the clustering result according to the position fingerprint, and improve the accuracy of the clustering result, thereby improving the accuracy of indoor positioning.
It should be noted that, what is not mentioned in the related description of fig. 11 may refer to the related contents of fig. 3 to fig. 7.
The embodiments of the application also provide a map generation method, which can generate a map containing the mapping relationship between room devices and rooms, so that when a user initiates a behavior on a room device, the electronic device 100 can determine from the map the position where the user initiated the behavior, i.e. the room in which the room device is located. Then, whenever the electronic device 100 detects a behavior acting on a room device, the designated position corresponding to the position fingerprint collected during that behavior can be determined. This further improves the accuracy of the clustering result obtained when clustering the position fingerprints in the indoor positioning method, thereby improving the accuracy of indoor positioning and indoor environment recognition and providing targeted services for the user.
Fig. 12 shows a schematic structural diagram of a communication system 1000 according to an embodiment of the present application.
As shown in fig. 12, the communication system 1000 may include: mobile device 1001, control device 1002. The number of mobile devices 1001 may be one or more, and the number of control devices 1002 may be one or more.
The mobile device 1001 is a mobile electronic device configured with a camera. The mobile device 1001 may move within the detection area, and during the movement, acquire environment information and position information of a room device existing within the detection area, and transmit the environment information and the position information of the room device to the control device 1002.
The control device 1002 is an electronic device with a high computing capability. The control device 1002 may be configured to obtain environmental information and location information of a room device sent by a mobile device, obtain a plurality of areas and a room type of each area using a neural network classification model according to the environmental information, determine a mapping relationship between the room device and the room type by combining the location information of the room device, and obtain a map of a detection area including the mapping relationship.
The electronic device in the embodiments of the present application may be a portable terminal device running iOS, Android, Microsoft Windows or another operating system, for example a mobile phone, a tablet computer or a wearable device, and may also be a non-portable terminal device such as a laptop with a touch-sensitive surface or touch panel, or a desktop computer with a touch-sensitive surface or touch panel. For example, the mobile device 1001 may be a floor-sweeping robot, an intelligent service robot, a mobile inspection robot, an intelligent shopping-guide robot, or the like, and the control device 1002 may be a mobile phone, a tablet, a computer, or the like. In the example shown in fig. 12, the mobile device 1001 is a floor-sweeping robot and the control device 1002 is a mobile phone.
In addition, the mobile device 1001 and the control device 1002 may establish a communication connection, enabling data transmission and reception between them. The communication connection may be a wired connection or a wireless connection.
In some embodiments, the wireless connection may be a wireless fidelity (Wi-Fi) connection, a Bluetooth connection, an infrared connection, an NFC connection, a ZigBee connection, or the like. The mobile device 1001 may send the environment information and the location information of the room devices directly to the control device 1002 through such a short-range connection. The control device 1002 may generate a map from the received environment information and the location information of the room devices.
In other embodiments, the wireless connection may also be a long-range connection, including but not limited to a mobile network supporting the 2G, 3G, 4G, 5G and subsequent standard protocols. Alternatively, the communication system 1000 may further include a server, and the mobile device 1001 and the control device 1002 may log in to the same user account (for example, a Huawei account) and then connect remotely through the server.
Optionally, the communication system 1000 shown in fig. 12 may further include a room device 1003 (not shown), the room device 1003 being an electronic device present within the movement area of the mobile device 1001. The mobile device 1001 may detect the position of the room device in the detection area by Bluetooth positioning, Wi-Fi positioning, RFID positioning, UWB positioning, or the like. The control device 1002 may also send the map of the detection area containing the mapping relationship between room devices and room types to the room device 1003, so that the room device 1003 can provide targeted services for the user according to the room type corresponding to itself.
Fig. 13 shows a software structure schematic diagram of a map generating system provided in an embodiment of the present application.
As shown in fig. 13, the map generation system includes: the system comprises an environment information acquisition module, a device information acquisition module, a room type operation module, a device type operation module and a map generation module.
Wherein the environmental information collection module is operable to collect environmental information, which refers to information related to the type of room, including, but not limited to, one or more of the following: image data, perception data, and a moving route. The description of the environment information may be referred to the foregoing, and will not be repeated here.
The device information acquisition module may be used to acquire location information of room devices present in the detection area. Specifically, the device information acquisition module can detect the position of the room device in the detection area in a positioning mode such as Bluetooth positioning, WIFI positioning, RFID positioning, UWB positioning and the like.
The room type operation module may obtain the type of each room contained in the detection area using the trained classification model.
Optionally, the map generation system may further comprise a model training module, operable to train a room classification model using a neural network classification algorithm (e.g., a convolutional neural network) on a large amount of sample information with known room types, obtain a trained room classification model, and send it to the room type operation module.
The device type operation module may be configured to determine a mapping relationship between the room device and the room type according to the room type of each room included in the detection area and the location information of the room device.
The map generation module may be configured to determine a map of the detection area based on a mapping relationship of the room device and the room type.
It should be noted that, in the embodiment of the present application, the environmental information collection module and the device information collection module may be software modules included in the mobile device, and the room type operation module, the device type operation module and the map generation module may be software modules included in the control device. In other embodiments of the present application, all of the software modules described above may be software modules included in only one electronic device, which is not limited in this embodiment of the present application.
Fig. 14 shows a flowchart of a map generating method according to an embodiment of the present application.
As shown in fig. 14, the method includes:
s701, the mobile device acquires environment information in the moving process and sends the environment information to the control device.
The mobile device may move within the detection area and acquire environmental information during the movement. The environmental information includes, but is not limited to, one or more of the following: image data, perception data, and the moving route. The image data are object images acquired by the mobile device through its camera; in a home scene, the objects may be furniture, walls, lamps, windows, electrical appliances and the like. For example, when an acquired image includes a sofa, the area where it was acquired is likely to be a living room. The perception data are data such as sound, humidity, temperature and brightness acquired by the mobile device through hardware such as sensors and a microphone. For example, when the collected humidity is high, the area where it was collected is likely to be a bathroom. The moving route is the route data acquired by the mobile device during movement; from it the control device can calculate data such as the number of rooms contained in the detection area, the area of each room, and the position of each room in the detection area. Different room types can also be inferred from the areas of different rooms: when a room's area is small, it is probably a bathroom, and when its area is large, it is probably a living room. The rules for determining these likelihoods can be derived by training a model on a large amount of sample information carrying room types; the training of the model is described in detail later and is not expanded here.
When the computing capability of the mobile device is weak, the mobile device may transmit the collected environment information to the control device, and the control device determines the room division of the detection area and the room types of the divided rooms according to the environment information.
In the embodiment of the present application, the mobile device may be a mobile device 1001 in a communication system 1000 as shown in fig. 12, and a hardware structure thereof may be referred to in the description related to the electronic device 100 shown in fig. 1.
In some embodiments, after obtaining the moving route, the mobile device may further perform processing on the moving route, calculate, according to the moving route, data such as the number of rooms included in the detection area, the area of each room, and the position of each room in the detection area, and then send the calculated data to the control device, so that the control device determines the type of each room according to the data.
In the embodiment of the application, the mobile device may also be referred to as a second device, and the control device may also be referred to as a first device.
S702, the control equipment inputs the environment information into a trained room classification model, and a plurality of rooms contained in the detection area and room types of each room are obtained.
That is, the control device may determine the area of the detection area, the number of rooms contained in the detection area, the area of each room, the type of room, and the position of each room in the detection area according to the environmental information transmitted from the mobile device.
Specifically, after the control device receives the environmental information, the control device may calculate each room included in the detection area and the room area of each room according to the movement route, and input the room area of each room, the obstacle information, and the image data and the perception data in the environmental information into the trained room classification model, thereby obtaining each room type included in the detection area.
The room type is a label corresponding to a plurality of areas divided by the detection area. Illustratively, in a home scenario, the room types may include: kitchen, bedroom, living room, dining room, study room, bathroom, etc.
The trained room classification model may be preset in the control device in advance; it may be a room classification model trained by developers on other devices using a large amount of sample information carrying room types, or a classification model trained by the control device itself using such sample information. The initial room classification model may be trained, for example, by a convolutional neural network algorithm to obtain the trained room classification model. The training process extracts decision rules from a large amount of sample information of known room types, so that when environmental information of unknown room types is obtained, the room type corresponding to each room can be obtained through these rules. For example, in a home scene, from a large number of images taken in living rooms that contain a sofa, it can be derived that an image containing a sofa indicates a living room; when an image containing a sofa is then acquired, the area where it was acquired can be determined to be a living room.
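The train-then-predict flow described above can be illustrated with a deliberately simplified stand-in. The application trains a neural-network classifier (e.g., a CNN) on images and perception data; here a trivial 1-nearest-neighbour rule over hand-picked numeric features (room area in m², humidity in %, a sofa-detected flag) replaces it, and both the feature choice and the sample values are invented purely for illustration:

```python
import math

# Invented labelled samples: (room area m^2, humidity %, sofa flag) -> type.
train = [
    ((25.0, 45.0, 1.0), "living room"),
    ((4.0, 80.0, 0.0), "bathroom"),
    ((12.0, 50.0, 0.0), "bedroom"),
]

def predict(features):
    """1-nearest-neighbour stand-in for the trained room classifier:
    return the room type of the closest labelled sample."""
    return min(train, key=lambda s: math.dist(s[0], features))[1]

# Small, humid room: the rule extracted from the samples says bathroom.
print(predict((5.0, 75.0, 0.0)))  # bathroom
```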
In the embodiment of the present application, the control device may be a control device 1002 in the communication system 1000 shown in fig. 12, and the hardware structure thereof may be referred to in the description related to the electronic device 100 shown in fig. 1.
In some embodiments, training and prediction of the model may be performed using only room area and image data. The perception data and the obstacle information can be optional data, and when training and predicting the model, the model can be trained and predicted more accurately by further adding the perception data and the obstacle information. Then, when the mobile device transmits the environment information to the control device, the environment information may include only the image acquired by the mobile device during the movement of the detection area, and the movement route. The control device may obtain the area of the detection area, the number of rooms contained in the detection area, the area of each room, and the position of each room in the detection area from the movement route.
S703, the mobile device acquires the position information of the room device in the moving process and sends the position information of the room device to the control device.
The room equipment refers to electronic equipment located in a detection area, and the mobile equipment can detect the position of the room equipment in the detection area through positioning modes such as Bluetooth positioning, WIFI positioning, RFID positioning and UWB positioning.
For example, M room devices may be included in the detection area, and the control device may obtain the positions of the M room devices in the detection area, which are transmitted by the mobile device.
The procedure by which the mobile device locates a room device is exemplarily described below using Bluetooth positioning. Fig. 15 illustrates a schematic diagram of a mobile device locating a room device by Bluetooth positioning. As shown in fig. 15, point B1 is the location of a room device. The mobile device may send broadcast messages to nearby room devices during movement, and a room device returns a response after receiving a broadcast message. The mobile device can receive the responses sent by the room device over Bluetooth at three points, for example points A1, A2 and A3 in fig. 15, calculate the distances from points A1, A2 and A3 to point B1 from the strength of the received signals, and then calculate the location of point B1 using a triangulation algorithm.
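The two steps described for fig. 15 can be sketched as follows (an illustrative sketch: the log-distance path-loss parameters `tx_power` and `n` are assumed values, and real RSSI-based ranging is far noisier than this; the trilateration itself linearizes the three circle equations and solves the resulting 2x2 system):

```python
def rssi_to_distance(rssi, tx_power=-40.0, n=2.0):
    """Log-distance path-loss model (assumed parameters): estimate the
    distance in metres from a received signal strength in dBm."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Given three measurement points A1..A3 and their distances to B1,
    subtract pairs of circle equations to get a linear 2x2 system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A1=(0,0), A2=(4,0), A3=(0,4); the true device position B1 is (1, 2).
print(trilaterate((0, 0), 5**0.5, (4, 0), 13**0.5, (0, 4), 5**0.5))
# (1.0, 2.0)
```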
It should be noted that S701 and S703 may be performed simultaneously, that is, the mobile device may collect the environmental information and determine the position of the room device simultaneously during the movement, and after the environmental information is collected, send the environmental information to the control device, and after the position information of the room device is obtained, send the position information to the control device.
S704, the control device determines the mapping relation between the room device and the room types according to the position information of the room device and the room types of the plurality of rooms contained in the detection area.
The control device can divide the detection area into a plurality of areas by inputting the environment information into the trained model, and determine the room types corresponding to different areas in the plurality of areas. And then, the control equipment can combine the position information of the room equipment to determine the room type of the room where the room equipment is located, and the mapping relation between the room equipment and the room type is obtained.
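Step S704 amounts to a point-in-region lookup. In this sketch the rooms are modelled as axis-aligned rectangles `(x_min, y_min, x_max, y_max)`, and all room names, coordinates and device positions are invented for illustration:

```python
def room_of(device_pos, rooms):
    """Return the name of the room whose region contains the device
    position, or None if the position lies in no known room."""
    x, y = device_pos
    for name, (x0, y0, x1, y1) in rooms.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

rooms = {"kitchen": (0, 0, 3, 4), "living room": (3, 0, 9, 6)}
devices = {"refrigerator": (1.5, 2.0), "smart speaker": (6.0, 3.0)}

# The mapping relationship between room devices and room types.
mapping = {d: room_of(p, rooms) for d, p in devices.items()}
print(mapping)  # {'refrigerator': 'kitchen', 'smart speaker': 'living room'}
```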
S705, the control device generates a map including a mapping relationship between the room device and the room type.
The control device may generate a map of the detection area according to a mapping relationship between the room device and the room type, and a plurality of rooms included in the detection area. The map indicates the area of the detection area, the number of rooms contained in the detection area, the area, type of each room, the location of each room in the detection area, and the room in which the plurality of room devices are located.
Fig. 16 exemplarily shows a map generation process in a home scene. Fig. 16 (a) shows the detection area that the mobile device can reach during movement. After the control device inputs the environmental information into the trained room classification model, it can acquire the room types of the rooms contained in the detection area, i.e. the room partitions shown in fig. 16 (a), which divide the detection area into a plurality of areas, each corresponding to one room type: bedroom 1, bathroom 1, bedroom 2, bathroom 2, bedroom 3, kitchen, living room. Fig. 16 (b) shows the map generated by the control device. After the control device obtains the location information of the room devices, it can place each room device at its corresponding position within these areas, i.e. determine the area where the room device is located, thereby obtaining the room type corresponding to the room device. As shown in fig. 16 (b), the room devices include: a refrigerator, a smart toilet, a router, a smart speaker, a computer, and an air conditioner. The refrigerator is located in the kitchen, the smart speaker and the router are located in the living room, the computer is located in bedroom 3, the smart toilet is located in a bathroom, and the air conditioner is located in bedroom 2.
After the control device generates the map, it may send the map to all devices in the detection area, including the mobile phones of individual family members, the room devices, and so on. In this way, family members can see more clearly from the map how the room devices are distributed in the home, which makes it convenient for users to manage and control them; and after a room device learns its own room type and those of other devices, it can cooperate better with other devices and thus better complete the services provided to the user.
In general, the map generation method provided by the embodiments of the application can exploit the mobility of the mobile device: the mobile device acquires the environment information related to the detection area while moving through it, which reduces the user's operations, spares the user the trouble of frequently updating the environment information when the positions of room devices change, increases the accuracy of map generation, and provides a richer intelligent experience for the user.
After the control device obtains a map of the detection area, there may be a variety of application scenarios. Fig. 17 to 20 exemplarily show four scene examples of the map generation method in a home scene, and at this time, the detection area is the area where the home is located.
Scene 1: the control device can visually display the map in the user interface after the map is generated, and a user can know the indoor house type and the distribution condition of each device in the house by looking up the map, so that the user can know the condition of the residence better.
Illustratively, the control device may display a first user interface comprising a map, wherein the first user interface may be the user interface 10 as shown in fig. 17.
Scene 2: after the control device obtains the map of the detection area, the map can be provided to a home application program (such as a smart life application program). A user can add and manage smart home devices in the home through the home application program, and view and control their working states, which brings convenience to the user's home life. When the user adds a smart home device using the home application program, the application can automatically add the room type for that device.
For example, the control device may display a second user interface provided by the first application, where the second user interface may include one or more device options indicating a room in which one of the devices included in the detection area is located.
The first application may be the smart-life application, and the second user interface may be the user interface 20 shown in fig. 18. One of the device options included in the second user interface may be the option 201, which may include a tag 201A; the room type in the tag 201A indicates that the room in which the smart door lock corresponding to the option 201 is located is the hall.
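As a minimal illustrative sketch of this auto-labelling step (all names, such as `room_map` and `build_device_option`, are hypothetical and not from the patent), the home application can pre-fill a device option's room tag from the generated map so the user never assigns rooms manually:

```python
# Illustrative sketch: the home application fills in the room tag of a
# newly added smart-home device from the generated map (cf. option 201 /
# tag 201A). The mapping and function names are hypothetical.
room_map = {
    "smart_door_lock": "hall",
    "smart_television": "living_room",
}

def build_device_option(device_id):
    """Build a device option with its room tag pre-filled from the map."""
    return {"device": device_id, "room": room_map.get(device_id, "unassigned")}

option = build_device_option("smart_door_lock")
```

A device absent from the map would fall back to an "unassigned" tag, which the user could still correct by hand.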
Further, after the control device displays the second user interface provided by the first application, the first device may receive a user operation on one of the device options, control the room device corresponding to that option to execute a first operation, and at the same time control the devices belonging to the same room as that room device to automatically execute a second operation. That is, when the user controls device A through the home application, the home application can automatically associate device B in the same room and control device B to work in linkage with device A.
In this way, the user does not need to manually add room types for the smart-home devices, which simplifies the user's operations; the user can conveniently classify the smart-home devices by room type and control them by category, making it easier to manage and control the smart-home devices in the home.
Scene 3: in a home scenario, the user may wake up a smart-home device with a wake-up word so that the device provides a home service; for example, the user may wake up an air conditioner by voice so that it starts cooling. As shown in fig. 19, when multiple devices exist in the home, namely device 1, device 2, …, device N, the map generated by the control device can be used to determine which device should finally be woken up, which may be one or more of devices 1-N.
For example, when the control device detects that the user triggers a device included in the detection area to perform a third operation, the control device may control the device located in a first room to perform the third operation. For instance, when multiple air conditioners exist in the home and the user wakes up the air conditioners to start cooling, the electronic device (for example, the control device) can, according to the generated map and the user's current position, wake up only the air conditioner in the same room as the user. In other words, the control device may determine the room of the device to be controlled in combination with where the user is located; for example, the first room may be the room where the user is located or the room closest to the user.
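A minimal sketch of this room-scoped wake-up, assuming a hypothetical `device_rooms` mapping produced by the map generation method (all identifiers here are illustrative, not from the patent):

```python
# Illustrative sketch of Scene 3: when several devices could answer the
# same wake-up word, use the generated map plus the user's current room
# to wake only the co-located device(s). Names are hypothetical.
device_rooms = {
    "air_conditioner_1": "bedroom",
    "air_conditioner_2": "living_room",
    "smart_television_1": "living_room",
}

def devices_to_wake(kind_prefix, user_room):
    """Return the devices of the requested kind in the user's room."""
    return [d for d, room in device_rooms.items()
            if d.startswith(kind_prefix) and room == user_room]

# User in the living room says "air conditioner, start cooling":
woken = devices_to_wake("air_conditioner", "living_room")
```

With the map available, the ambiguity of N candidate devices reduces to a simple filter on the room assignment.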
Scene 4: when the user turns on the television 002 in the living room through the device 001, the device 001 can, in combination with the map, also turn on the smart speaker 003 that belongs to the living room together with the television 002. This provides the user with a more comfortable viewing experience, spares the user the extra operation of turning on the smart speaker, and offers a smarter at-home experience.
For example, the control device may detect that the user triggers a fifth device in the detection area to perform a fourth operation, and the control device may then control a sixth device belonging to the same room as the fifth device to perform a fifth operation. The control device may be the device 001 shown in fig. 20, the fifth device may be the television 002, the sixth device may be the smart speaker 003, and the fourth operation and the fifth operation may both be operations of starting the device.
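The linkage rule above can be sketched as follows; the room mapping and function names are hypothetical, not the patent's actual implementation:

```python
# Illustrative sketch of Scene 4 linkage: turning on one device also
# turns on the devices the map places in the same room. Names are
# hypothetical.
device_rooms = {
    "television_002": "living_room",
    "smart_speaker_003": "living_room",
    "smart_speaker_004": "bedroom",
}
powered_on = set()

def turn_on(device):
    """Turn on the device and, by linkage, its same-room companions."""
    room = device_rooms[device]
    for d, r in device_rooms.items():
        if r == room:  # same room as the triggered device
            powered_on.add(d)

turn_on("television_002")
```

Devices in other rooms (here the bedroom speaker) stay untouched, so the linkage never surprises the user with activity elsewhere in the home.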
It can be understood that the map generation method provided in the embodiments of the present application is not limited to the above application scenarios. For example, the method may also be applied in an office scenario to obtain the mapping relationship between the office equipment and the areas of a company, and thereby obtain a map of the company.
The embodiments of the present application may be arbitrarily combined to achieve different technical effects.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), a semiconductor medium (e.g., a solid-state drive (SSD)), or the like.
Those of ordinary skill in the art will appreciate that all or part of the flows of the above method embodiments may be accomplished by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium, and when executed, it may include the flows of the above method embodiments. The aforementioned storage medium includes: a ROM, a random-access memory (RAM), a magnetic disk, an optical disc, or the like.
In summary, the foregoing is merely exemplary embodiments of the present application and is not intended to limit the protection scope of the present application. Any modification, equivalent replacement, improvement, and the like made according to the disclosure of the present application shall be included in the protection scope of the present application.
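As a minimal illustrative sketch (an assumption-laden toy, not the claimed method's actual implementation), the fingerprint-clustering positioning described in the claims below — cluster the behavior-tagged location fingerprints, take each cluster's center as the mean of its fingerprints, and locate a newly acquired fingerprint by the nearest cluster center — can be expressed as follows, with hypothetical RSSI-vector fingerprints and room names:

```python
# Illustrative sketch only: cluster behavior-tagged RSSI fingerprints and
# locate a new fingerprint by its nearest cluster center. The sample
# data, room names, and function names are hypothetical.

def mean(vectors):
    """Center point of a cluster = element-wise mean of its fingerprints."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def dist2(a, b):
    """Squared Euclidean distance between two fingerprint vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Fingerprints collected while bedroom/kitchen behaviors occurred,
# already grouped into N=2 clusters for brevity:
clusters = {
    "bedroom": [(-40.0, -70.0), (-42.0, -68.0), (-41.0, -71.0)],
    "kitchen": [(-75.0, -45.0), (-73.0, -44.0)],
}
centers = {room: mean(fps) for room, fps in clusters.items()}

def locate(fingerprint):
    """Location = room whose cluster center is closest to the fingerprint."""
    return min(centers, key=lambda room: dist2(centers[room], fingerprint))

room = locate((-41.0, -69.0))
```

In a full implementation the clusters would be formed by an unsupervised algorithm (e.g., k-means) over all collected fingerprints rather than listed by hand, and the clusters would be updated as new fingerprints arrive.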

Claims (40)

1. An indoor positioning method, comprising:
the first device acquires a first location fingerprint;
the first device determines, according to a second location fingerprint, a first location where the first device is located, wherein the second location fingerprint comprises a location fingerprint acquired when a first behavior occurs, the first behavior comprises a behavior whose probability of occurring at the first location is greater than a first value, and the second location fingerprint comprises a feature of the first location.
2. The method of claim 1, wherein the first location fingerprint or the second location fingerprint comprises one or more of: a signal identifier, a signal strength, a signal round-trip time, or a signal delay of one or more communication signals of a wireless network, a base station, Bluetooth, or ZigBee; information acquired by a sensor; base station information; and access point information.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
the first device acquires a third location fingerprint;
the first device determines a second location according to a fourth location fingerprint, wherein the fourth location fingerprint comprises a location fingerprint acquired when a second behavior occurs, the second behavior comprises a behavior whose probability of occurring at the second location is greater than a second value, and the fourth location fingerprint comprises a feature of the second location.
4. A method according to claim 3, wherein after the first device determines the first location, the method further comprises:
the first device performs a first operation;
after the first device determines the second location, the method further comprises:
The first device performs a second operation;
wherein the first operation and the second operation are different operations performed when the first device is in a different indoor location.
5. The method according to any one of claims 1-4, wherein the first device determines the first location from the second location fingerprint, in particular comprising:
the first device determines the first location according to the second location fingerprint and a fourth location fingerprint, wherein the fourth location fingerprint comprises a location fingerprint acquired when a second behavior occurs, the second behavior comprises a behavior whose probability of occurring at a second location is greater than a second value, and the fourth location fingerprint comprises a feature of the second location;
the first location is the location to which the largest number of partial location fingerprints in the second location fingerprint and the fourth location fingerprint correspond, and the distance between the partial location fingerprints is smaller than a third value.
6. The method of claim 5, wherein, after the second location fingerprint and the fourth location fingerprint are clustered into N clusters, the partial location fingerprints are the location fingerprints included in a first cluster of the N clusters, and the first location fingerprint is closest to a center point of the first cluster, or the first location fingerprint is closest to a location fingerprint in the first cluster.
7. The method of claim 6, wherein a center point of the first cluster is a mean of location fingerprints in the first cluster.
8. The method according to claim 6 or 7, wherein the number of location fingerprints comprised in the first cluster is greater than a fourth value.
9. The method of any of claims 6-8, wherein the first cluster further comprises a fifth location fingerprint, the fifth location fingerprint comprising a location fingerprint acquired when neither the first behavior nor the second behavior occurred;
the first device determines the first position according to the second position fingerprint and the fourth position fingerprint, and specifically includes:
the first device determines the first position according to the second position fingerprint, the fourth position fingerprint and the fifth position fingerprint.
10. The method according to any of claims 6-9, wherein the first location fingerprint is less than a fifth value from any of the location fingerprints in the first cluster.
11. The method according to any of claims 6-10, wherein after the first device determines the first location, the method comprises:
The first device acquires a sixth location fingerprint;
the first device updates the N clusters according to the sixth location fingerprint.
12. The method of any one of claims 1-11, wherein the first behavior comprises: a behavior of the user using the first device or a second device, a behavior of the user triggering the first device to execute a first operation, and a behavior of the first device automatically executing a second operation; the second device is a device that establishes a communication connection with the first device.
13. The method of claim 12, wherein the first device is a distance from the second device less than a sixth value.
14. The method of any of claims 1-13, wherein when the first behavior comprises controlling a smart television, the designated location comprises a living room; when the first behavior comprises sleep detection or a morning alarm, the designated location comprises a bedroom; when the first behavior comprises controlling a cooking appliance, the designated location comprises a kitchen; when the first behavior comprises washing or controlling a smart toilet, the designated location comprises a toilet; and when the first behavior comprises controlling a smart lock or putting on shoes or slippers, the designated location comprises a hall.
15. The method of any of claims 1-14, wherein prior to the first device acquiring the first location fingerprint, the method further comprises:
the first device receives an instruction for triggering a third operation of a third device;
after the first device determines the first location according to the second location fingerprint, the method further includes:
the first device controls the third device located at or near the first location to perform the third operation.
16. The method of any of claims 1-15, wherein after the first device determines the first location from the second location fingerprint, the method further comprises:
the first device displays a user interface containing one or more of links, pictures, icons, video, audio, or text related to the first location.
17. An indoor positioning method, comprising:
the first device obtains a first location fingerprint, and the first device performs a first operation based on the first location fingerprint;
the first device acquires a third location fingerprint, and the first device performs a second operation based on the third location fingerprint, wherein a location fingerprint comprises a feature of an indoor location, and the first operation and the second operation are different operations performed when the first device is at different indoor locations.
18. The method of claim 17, wherein the first location fingerprint or the third location fingerprint comprises one or more of: signal identification, signal strength, signal round trip time, signal delay time of one or more communication signals in a wireless network, a base station, bluetooth or ZigBee, information acquired by a sensor, base station information and access point information.
19. The method of claim 17 or 18, wherein after the first device obtains the first location fingerprint, the method further comprises:
the first device determines, according to a second location fingerprint, a first location where the first device is located, wherein the second location fingerprint comprises a location fingerprint acquired when a first behavior occurs, the first behavior comprises a behavior whose probability of occurring at the first location is greater than a first value, and the second location fingerprint comprises a feature of the first location.
20. The method according to claim 19, wherein the first device determines the first location from the second location fingerprint, in particular comprising:
the first device determines the first location according to the second location fingerprint and a fourth location fingerprint, wherein the fourth location fingerprint comprises a location fingerprint acquired when a second behavior occurs, the second behavior comprises a behavior whose probability of occurring at a second location is greater than a second value, and the fourth location fingerprint comprises a feature of the second location;
the first location is the location to which the largest number of partial location fingerprints in the second location fingerprint and the fourth location fingerprint correspond, and the distance between the partial location fingerprints is smaller than a third value.
21. The method of claim 20, wherein, after the second location fingerprint and the fourth location fingerprint are clustered into N clusters, the partial location fingerprints are the location fingerprints included in a first cluster of the N clusters, and the first location fingerprint is closest to a center point of the first cluster, or the first location fingerprint is closest to a location fingerprint in the first cluster.
22. The method of claim 20 or 21, wherein the first cluster further comprises a fifth location fingerprint, the fifth location fingerprint being a location fingerprint acquired when neither the first behavior nor the second behavior occurred;
the first device determines a first position according to the second position fingerprint and the fourth position fingerprint, and specifically includes:
the first device determines the first position according to the second position fingerprint, the fourth position fingerprint and the fifth position fingerprint.
23. The method of any one of claims 20-22, wherein the first behavior comprises: a behavior of the user using the first device or a second device, a behavior of the user triggering the first device to execute a third operation, and a behavior of the first device automatically executing a fourth operation; the second device is a device that establishes a communication connection with the first device.
24. The method of any of claims 20-23, wherein when the first behavior comprises controlling a smart television, the designated location comprises a living room; when the first behavior comprises sleep detection or a morning alarm, the designated location comprises a bedroom; when the first behavior comprises controlling a cooking appliance, the designated location comprises a kitchen; when the first behavior comprises washing or controlling a smart toilet, the designated location comprises a toilet; and when the first behavior comprises controlling a smart lock or putting on shoes or slippers, the designated location comprises a hall.
25. The method of any of claims 19-24, wherein prior to the first device acquiring the first location fingerprint, the method further comprises:
the first device receives an instruction for triggering a third operation of a third device;
After the first device determines the first location according to the second location fingerprint, the method further includes:
the first device controls the third device located at or near the first location to perform the third operation.
26. The method of any one of claims 17-25, wherein the first operation comprises: a user interface is displayed that includes one or more of a link, a picture, an icon, video, audio, or text information related to a location corresponding to the first location fingerprint.
27. A map generation method, the method comprising:
the first device obtains environment information of a first area, wherein the environment information comprises: an image and/or a movement route acquired by a second device in the process of moving through the first area;
the first device determines, according to the environment information, the area of the first area, the number of rooms contained in the first area, and the area, type, and position within the first area of each of a plurality of rooms;
the first device obtains the positions of M devices contained in the first area;
the first device generates a map indicating the rooms in which the M devices are located.
28. The method of claim 27, wherein the environmental information further comprises one or more of the following: obstacle information, temperature, humidity, brightness, and audio, wherein the obstacle information is obtained from the moving route.
29. The method of claim 27 or 28, wherein after the first device generates a map, the method further comprises:
the first device displays a first user interface including the map.
30. The method of any of claims 27-29, wherein after the first device generates a map, the method further comprises:
the first device displays a second user interface provided by the first application, wherein the second user interface comprises one or more device options, and the device options indicate a room in which one device of the M devices is located.
31. The method of claim 30, wherein the one or more device options include a first device option, the first device option corresponding to a fourth device of the M devices,
after the first device displays the second user interface provided by the first application, the method further includes:
The first device detects the operation of a user on the first device option, and in response to the operation, the first device controls the fourth device to execute a first operation and controls devices which belong to the same room as the fourth device to execute a second operation.
32. The method of any of claims 27-31, wherein after the first device generates a map, the method further comprises:
the first device sends the map to one or more of the M devices.
33. The method of any of claims 27-32, wherein after the first device generates a map, the method further comprises:
the first device detects that a user triggers the device contained in the first area to execute a third operation;
the first device controls a device located in a first room among the M devices to perform the third operation.
34. The method of claim 33, wherein the first room is the room where the user is located or the room closest to the user.
35. The method of any of claims 27-34, wherein after the first device generates a map, the method further comprises:
The first device detects that a user triggers a fifth device in the M devices to execute a fourth operation;
the first device controls a sixth device belonging to one room together with the fifth device to perform a fifth operation.
36. The method of any one of claims 27-35, wherein,
the first device determines, using a room classification model and according to the environment information, the area of the first area, the number of rooms contained in the first area, and the type of each room and its position within the first area.
37. The method of any one of claims 27-36, wherein the second device is a sweeping robot.
38. An electronic device comprising a memory, one or more processors, and one or more programs; the one or more processors, when executing the one or more programs, cause the electronic device to implement the method of any of claims 1-16, 17-26, 27-37.
39. A computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1 to 16, 17 to 26, 27 to 37.
40. A computer program product, characterized in that the computer program product, when run on a computer, causes the computer to perform the method of any one of claims 1 to 16, 17 to 26, 27 to 37.
CN202111667028.7A 2021-12-30 2021-12-30 Indoor positioning method and electronic equipment Pending CN116419159A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111667028.7A CN116419159A (en) 2021-12-30 2021-12-30 Indoor positioning method and electronic equipment

Publications (1)

Publication Number Publication Date
CN116419159A true CN116419159A (en) 2023-07-11

Family

ID=87049993



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination