CN117991185A - Positioning method and electronic equipment - Google Patents

Positioning method and electronic equipment

Info

Publication number: CN117991185A
Application number: CN202211376773.0A
Authority: CN (China)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Prior art keywords: FOV, positioning device, target object, positioning, user
Other languages: Chinese (zh)
Inventors: 龙星宇, 蓝元皓, 许耀仁, 洪伟评
Current assignee: Huawei Technologies Co Ltd (the listed assignees may be inaccurate)
Original assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Priority to CN202211376773.0A, published as CN117991185A

Landscapes

  • Telephone Function (AREA)

Abstract

The application provides a positioning method and an electronic device, applied to the field of terminal technologies, which can prompt a user with the position of a target object according to a dynamically adjusted FOV, thereby improving the positioning accuracy and positioning efficiency of the positioning device. The method comprises the following steps: determining a second FOV of the positioning device, the second FOV being the actual FOV of the positioning device; determining a first FOV of the positioning device according to the second FOV and a first parameter, the first parameter comprising a pose of the positioning device; and prompting the user with the position of the target object according to the first FOV.

Description

Positioning method and electronic equipment
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a positioning method and an electronic device.
Background
With the development of technology, the positioning functions of electronic devices such as mobile phones are becoming increasingly rich. In the related art, when a mobile phone locates a target device or article, the mobile phone screen typically displays a fixed field of view (FOV), also called a visual range or usable range. When the target device or article is located within the direction range indicated by the FOV, the user is prompted with its direction and guided to find it. However, in some cases the positioning result may be inaccurate, leading to erroneous or imprecise guidance and thus a poor user experience.
Disclosure of Invention
The application provides a positioning method and an electronic device, which can prompt a user with the position of a target object according to a dynamically adjusted FOV, improving the positioning accuracy and positioning efficiency of the positioning device.
To achieve the above purpose, the embodiments of the application adopt the following technical solutions.
In a first aspect, the present application provides a positioning method, applied to a positioning device, the method comprising: determining a second FOV of the positioning device, the second FOV being the actual FOV of the positioning device; determining a first FOV of the positioning device according to the second FOV and a first parameter, the first parameter comprising a pose of the positioning device; and prompting the user with the position of the target object according to the first FOV.
That is, the positioning device prompts the user with the position of the target object based on the first FOV, which is determined from the second FOV and the pose of the positioning device. The second FOV is the actual FOV of the positioning device. When the second FOV changes, the first FOV of the positioning device also changes dynamically, so the positioning device can prompt the user according to the dynamically adjusted first FOV. Compared with the prior art, which prompts the user according to a fixed first FOV, this improves the positioning accuracy and positioning efficiency of the positioning device.
For example, when the scene in which the positioning device is located changes, its second FOV may change. If the second FOV becomes smaller, the first FOV also becomes smaller, and positioning accuracy can be improved by using the smaller first FOV. If the second FOV becomes larger, the first FOV also becomes larger, and positioning efficiency can be improved by using the larger first FOV.
In one possible implementation, the value of the first FOV varies with the variation of the first parameter.
Optionally, the first parameter includes the pose of the positioning device, and the value of the first FOV changes dynamically when the pose of the positioning device changes. The user can adjust the pose of the positioning device to enlarge its first FOV; the position of the target object is then prompted according to the enlarged first FOV, improving the positioning efficiency of the positioning device.
In one possible implementation, determining the second FOV of the positioning device includes: determining the second FOV of the positioning device according to a second parameter, the second parameter comprising one or more of: hardware information of the positioning device, environment information of the positioning device, and a network frequency band of the positioning device.
Optionally, the hardware information of the positioning device includes antenna information of the positioning device. The environment information of the positioning device distinguishes a simple environment from a complex environment: a simple environment is one in which the channel complexity is less than a threshold, and a complex environment is one in which the channel complexity is greater than the threshold. The network frequency band of the positioning device includes information about the frequency band used by the network to which the positioning device is connected.
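To make the role of the second parameter concrete, the following is a minimal sketch of deriving the second FOV from the channel-complexity component of the second parameter. The threshold, the function name, and the 60-to-40-degree shrink for a complex environment (mirroring the example discussed later with fig. 1C) are illustrative assumptions, not values fixed by the application.

```python
def second_fov(hardware_fov_deg: float, channel_complexity: float,
               complexity_threshold: float) -> float:
    """Illustrative: derive the actual (second) FOV from a second parameter.

    Only the environment-information component is modeled; hardware
    information is folded into hardware_fov_deg, and the complex-environment
    scale factor (40/60) is an assumed example value."""
    if channel_complexity > complexity_threshold:
        # Complex environment: heavy channel interference, the FOV shrinks.
        return hardware_fov_deg * 40.0 / 60.0
    # Simple environment: the full hardware FOV is usable.
    return hardware_fov_deg

print(second_fov(60.0, channel_complexity=0.8, complexity_threshold=0.5))  # 40.0
print(second_fov(60.0, channel_complexity=0.2, complexity_threshold=0.5))  # 60.0
```

The same shape of mapping could incorporate the network-frequency-band component as a further scale factor.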
In one possible implementation, the value of the second FOV varies with the variation of the second parameter.
Optionally, when the second parameter of the positioning device changes, the second FOV of the positioning device also changes. For example, when the environment of the positioning device changes from a simple environment to a complex environment, the second FOV of the positioning device becomes smaller. See the examples section of the application for details.
In one possible implementation, the method is applied to a positioning device with a display screen, and prompting the user with the position of the target object according to the first FOV includes: the positioning device displays the first FOV on the display screen and prompts the user with the position of the target object through an on-screen prompt and/or a voice prompt.
Optionally, the user can determine the position of the target object based on the first FOV shown on the display screen of the positioning device. Prompting the user via the display screen improves the user experience.
For example, when the target object is located within the direction range indicated by the first FOV, as in (d) of fig. 6, the positioning device may indicate the direction of the target object with an arrow on the display screen, or change the color of the first FOV area on the display screen, prompting the user that a target object lies within the direction range indicated by the first FOV.
In one possible implementation, the method further includes: prompting the user to adjust the pose of the positioning device through at least one of an on-screen prompt, a voice prompt, and a signal-lamp prompt.
Optionally, the user adjusts the pose of the positioning device according to prompts given in multiple ways, which improves the user experience. After the pose of the positioning device is adjusted, the value of its first FOV can be increased, improving positioning efficiency.
In one possible implementation, the pose of the positioning device includes an upright pose, a tilted pose, and a horizontal pose.
Optionally, when the pose of the positioning device is an upright pose, the first FOV of the positioning device is larger; when the pose of the positioning device is a tilted pose or a horizontal pose, the first FOV is smaller. For example, the user can adjust the pose of the positioning device from the tilted pose to the upright pose, increasing the value of the first FOV and improving positioning efficiency.
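As a rough sketch of how the first FOV might follow the device pose, the snippet below scales the second FOV by a pose-dependent factor. The numeric factors are purely illustrative assumptions; the application only states that the upright pose yields a larger first FOV than the tilted or horizontal poses.

```python
# Assumed pose-to-scale mapping; only the ordering (upright largest) comes
# from the description, the numbers themselves are illustrative.
POSE_SCALE = {"upright": 1.0, "tilted": 0.75, "horizontal": 0.5}

def first_fov(second_fov_deg: float, pose: str) -> float:
    """Derive the displayed (first) FOV from the actual (second) FOV
    and the device pose."""
    return second_fov_deg * POSE_SCALE[pose]

print(first_fov(60.0, "upright"))     # 60.0
print(first_fov(60.0, "tilted"))      # 45.0
print(first_fov(60.0, "horizontal"))  # 30.0
```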
In a second aspect, the application provides a positioning method, the method comprising: determining a second FOV of the positioning device based on a second parameter, the second parameter including one or more of: hardware information of the positioning device, environment information of the positioning device, and a network frequency band of the positioning device; the second FOV being the actual FOV of the positioning device; and prompting the user with the position of the target object according to the second FOV.
Optionally, the hardware information of the positioning device includes antenna information of the positioning device. The environment information distinguishes a simple environment from a complex environment: a simple environment is one in which the channel complexity is less than a threshold, and a complex environment is one in which the channel complexity is greater than the threshold. The network frequency band includes information about the frequency band used by the network to which the positioning device is connected. The value of the second FOV changes as the second parameter changes.
According to the application, the position of the target object is located automatically according to the dynamically changing second FOV of the positioning device, so positioning accuracy and efficiency can be improved.
For example, in the related art described in scene 2, the user needs to manually set the location of each target device in the electronic map; especially when there are many target devices, this is inefficient and error-prone. According to the application, the position of the target object is located automatically according to the second FOV of the positioning device and the electronic map is generated, saving considerable manual effort and improving positioning efficiency.
In one possible implementation, the method is applied to a positioning device with a display screen, and prompting the user with the position of the target object according to the second FOV includes: displaying the second FOV on the positioning device through augmented reality (AR) technology, and prompting the user with the position of the target object through a visual cue and/or a voice cue. The picture displayed via AR is more realistic and intuitive, improving the user experience and allowing the user to determine the position of the target object more quickly.
For example, as shown in fig. 12B, an indoor scene and a virtual second FOV may be displayed in the cell phone interface to guide the user to determine the position of the target object.
In one possible implementation, the method is applied to a positioning device with a display screen, and prompting the user with the position of the target object according to the second FOV includes: the positioning device determines the position of at least one target object according to the second FOV, and generates and displays an electronic map according to the position of the at least one target object, so that the user can control a target object at a target position according to the electronic map; the electronic map is used to prompt the user with the positions of the target objects.
For example, in scene 2, if the positioning device is a mobile phone with a display screen, the mobile phone may display the second FOV, determine the positions of a plurality of target devices according to the second FOV, and generate an electronic map from those positions, where the electronic map is used to prompt the user with the positions of the target devices.
In one possible implementation, the method is applied to a positioning device without a display screen, and prompting the user with the position of the target object according to the second FOV includes: displaying the second FOV of the positioning device on another device with a display screen through augmented reality (AR) technology, so that the other device can determine the position of at least one target object according to the second FOV, and generate and display an electronic map according to that position; the electronic map prompts the user with the position of the target object and lets the user control a target object at a target position according to the map. By displaying the second FOV of a screenless positioning device on a display screen through AR, and locating the target object through a device that has a display screen, the user experience is improved and the user can determine the position of the target object quickly and intuitively.
For example, as shown in fig. 18, the AR interface of the mobile phone displays the current scene and a virtual rendering of the speaker's current second FOV. When the user moves the speaker, the direction of the speaker's second FOV changes, and the virtual second FOV in the mobile phone's AR interface changes accordingly. The mobile phone may be used to locate target positions within the range of the virtual second FOV.
In one possible implementation, the method is applied to a positioning device without a display screen, and prompting the user with the position of the target object according to the second FOV includes: the positioning device determines the position of at least one target object according to the second FOV; the positioning device sends the position of the at least one target object to another device with a display screen, so that the other device can generate and display an electronic map according to that position, allowing the user to control a target object at a target position according to the electronic map; the electronic map is used to prompt the user with the position of the target object.
For example, as shown in fig. 13B, the scene includes devices such as a speaker, a smart lamp, a smart television, and a smart door lock. The user may position the other devices through the second FOV of the remotely controlled speaker. The speaker can then generate an electronic map from the located positions of the target devices. Through the electronic map, the user can conveniently control the speaker, smart lamp, smart television, smart door lock, and other devices remotely.
In one possible implementation, the method further includes: adjusting the position, direction, or pose of the positioning device according to the second FOV and the first parameter, so as to determine the position of at least one target object according to the adjusted second FOV of the positioning device; the first parameter comprises the positional relationship between the positioning device and the target object; the number of target objects determined with the adjusted second FOV is greater than or equal to the number determined with the second FOV before adjustment.
Optionally, when the user adjusts the position, direction, or pose of the positioning device, the number of target objects within the direction range of the second FOV may increase. In this way, the positioning device can locate more target objects.
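The effect of such an adjustment can be illustrated with a small helper that counts how many target bearings fall inside the sector-shaped second FOV before and after rotating the device; the function name, bearings, and FOV width are all hypothetical.

```python
def count_in_fov(fov_center_deg: float, fov_deg: float, bearings_deg) -> int:
    """Count targets whose bearing lies inside a sector FOV of width
    fov_deg centered on fov_center_deg (angles in degrees)."""
    def inside(bearing: float) -> bool:
        # Signed angular difference wrapped to (-180, 180].
        diff = (bearing - fov_center_deg + 180.0) % 360.0 - 180.0
        return abs(diff) <= fov_deg / 2
    return sum(inside(b) for b in bearings_deg)

targets = [-20.0, 10.0, 40.0]             # bearings of three target objects
print(count_in_fov(0.0, 60.0, targets))   # before adjustment: 2
print(count_in_fov(10.0, 60.0, targets))  # after rotating 10 degrees: 3
```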
In one possible implementation, when there are a plurality of target objects, the positional relationship between the positioning device and the target objects includes: the positional relationship between the center position of the plurality of target objects and the position of the positioning device.
Optionally, the center position of the plurality of target devices may be obtained by computing the median or standard-deviation range of the target devices' positions, or by a clustering algorithm or the like.
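For instance, the per-axis median, one of the options mentioned above, could be computed as follows; the 2-D coordinates are hypothetical.

```python
from statistics import median

def center_position(positions):
    """Center of several target positions as the per-axis median,
    which is robust to a single outlying device."""
    xs, ys = zip(*positions)
    return (median(xs), median(ys))

# Three located target devices; the outlier at (4, 6) barely shifts the center.
print(center_position([(0, 0), (2, 0), (4, 6)]))  # (2, 0)
```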
In one possible implementation, the method further includes: prompting the user to adjust the position, direction, or pose of the positioning device through at least one of an on-screen prompt, a voice prompt, and a signal-lamp prompt, so as to adjust the positional relationship between the positioning device and the target object.
Optionally, the user is prompted in multiple ways to adjust the positional relationship between the positioning device and the target object, which improves the user experience and makes the interaction more engaging.
For example, as shown in fig. 15, the user may be prompted by means of a signal light.
In one possible implementation, the positional relationship between the positioning device and the target object includes: the positioning device being level with the target object, the target object being offset upward relative to the positioning device, and the target object being vertical relative to the positioning device.
In a third aspect, the present application provides an electronic device, comprising: a processor and a memory for storing computer program code comprising computer instructions which, when read from the memory by the processor, cause the electronic device to perform the method according to any one of the first aspects or the second aspect.
In a fourth aspect, the present application provides a computer storage medium comprising computer instructions which, when run on a computer, cause the computer to perform the method of any of the first aspects or to perform the method of any of the second aspects.
In a fifth aspect, the present application provides a chip system comprising at least one processor and at least one interface circuit, the at least one interface circuit being adapted to perform a transceiving function and to send instructions to the at least one processor, the at least one processor performing the method according to any of the first aspects or the method according to any of the second aspects when the at least one processor executes the instructions.
In a sixth aspect, the application also provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method according to any of the first aspects or to perform the method according to any of the second aspects.
For the technical effects corresponding to the third to sixth aspects and any implementation thereof, refer to the technical effects corresponding to the first aspect and any implementation of the first aspect; details are not repeated here.
Drawings
FIG. 1A is a schematic diagram of a positioning scenario provided in an embodiment of the present application;
FIG. 1B is a diagram of a mobile phone positioning interface according to an embodiment of the present application;
FIG. 1C is a schematic view of a FOV according to an embodiment of the present application;
FIG. 2 is a schematic diagram of another positioning scenario provided in an embodiment of the present application;
FIG. 3A is a schematic diagram of a system according to an embodiment of the present application;
FIG. 3B is a schematic diagram of another system according to an embodiment of the present application;
FIG. 4A is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 4B is a schematic structural diagram of another electronic device according to an embodiment of the present application;
FIG. 5 is a flowchart of a positioning method according to an embodiment of the present application;
FIG. 6 is a diagram of another mobile phone positioning interface according to an embodiment of the present application;
FIG. 7 is a diagram of another mobile phone positioning interface according to an embodiment of the present application;
FIG. 8 is a diagram of another mobile phone positioning interface according to an embodiment of the present application;
FIG. 9 is a schematic diagram of channel complexity provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of a mobile phone pose according to an embodiment of the present application;
FIG. 11 is a schematic diagram of another mobile phone pose according to an embodiment of the present application;
FIG. 12A is a schematic diagram of another mobile phone pose according to an embodiment of the present application;
FIG. 12B is a schematic diagram of another positioning scenario provided by an embodiment of the present application;
FIG. 13A is a flowchart of another positioning method according to an embodiment of the present application;
FIG. 13B is a schematic diagram of another positioning scenario provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of another positioning scenario provided by an embodiment of the present application;
FIG. 15 is a schematic diagram of a prompting method according to an embodiment of the present application;
FIG. 16 is a schematic diagram of another positioning scenario provided by an embodiment of the present application;
FIG. 17 is a schematic diagram of another mobile phone interface according to an embodiment of the present application;
FIG. 18 is a schematic diagram of another positioning scenario provided by an embodiment of the present application;
FIG. 19 is a schematic diagram of another positioning scenario provided by an embodiment of the present application;
FIG. 20 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 21 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
The following describes a positioning method and an electronic device provided by the embodiments of the present application in detail with reference to the accompanying drawings.
The terms "comprising" and "having" and any variations thereof, as used in the description of embodiments of the application, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described as "exemplary" or "such as" in an embodiment of the application should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of such words is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" means two or more. "and/or" herein is merely an association relationship describing an association object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone.
With the development of Internet technology, an electronic device with a positioning function can locate a target object through a positioning algorithm.
Taking the angle of arrival (AOA) algorithm as an example: the AOA algorithm is a positioning algorithm based on the angle of arrival of a signal. The electronic device can locate the target object through the AOA algorithm.
Specifically, an antenna array in the electronic device receives an incoming wave signal from the target object. Because the distances between the different antennas in the array and the target object differ, phase differences arise among the incoming wave signals received by the different antennas. The electronic device calculates the direction or angle of the target object from these phase differences.
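For a two-element array, the standard relationship between phase difference and arrival angle is θ = arcsin(Δφ·λ / (2π·d)), where d is the antenna spacing and λ the wavelength. A minimal sketch of this calculation follows; the concrete carrier frequency and spacing are assumptions, not values from the application.

```python
import math

def aoa_angle(phase_diff_rad: float, spacing_m: float, wavelength_m: float) -> float:
    """Angle of arrival (radians from broadside) for a two-antenna array,
    from the phase difference of the received incoming wave signal."""
    s = phase_diff_rad * wavelength_m / (2 * math.pi * spacing_m)
    s = max(-1.0, min(1.0, s))  # clamp against measurement noise before arcsin
    return math.asin(s)

wavelength = 0.0375          # ~8 GHz UWB carrier (assumed)
spacing = wavelength / 2     # half-wavelength spacing avoids angle ambiguity
print(math.degrees(aoa_angle(0.0, spacing, wavelength)))      # 0.0 (broadside)
print(math.degrees(aoa_angle(math.pi, spacing, wavelength)))  # 90.0 (end-fire)
```

Note how near end-fire (signals arriving from the sides of the array) a small phase error produces a large angle error, which is the motivation for restricting the trusted range to a FOV, as described next.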
The electronic device can receive incoming wave signals at all angles around itself and perform AOA positioning. However, when an incoming wave signal arrives from either of the two sides of the antenna array, the phase difference between the signals received by the antennas can cause the AOA detection error to be too large. Therefore, a smaller angular range with acceptable error is typically designated as the FOV. If the signal sent by the target object arrives within the FOV, the electronic device can accurately locate the target object according to the signal, and the positioning result is highly reliable; for this reason the FOV is also called the trusted range.
Some application scenarios of the embodiments of the present application are described below.
Scene 1: the electronic device provides a FOV that guides the user in finding a target object.
In this scenario, an electronic tag may be installed in the target object (a target device or target article), and the electronic tag may emit an incoming wave signal. The electronic device receives the incoming wave signal to perform positioning. When the incoming wave signal sent by the target object is within the direction range indicated by the FOV of the electronic device, the electronic device can accurately determine the position of the target object. Thus, the electronic device can guide the user to find the target object by prompting whether the target object lies within the FOV.
The electronic device with the positioning function may be referred to as a positioning device, and the target object is a sought article or a positioned device.
For example, as shown in fig. 1A, the scene includes objects and devices such as a mobile phone, a cup, and a smart watch. In this scenario, taking the case of finding the cup with the mobile phone, the mobile phone can be regarded as the positioning device and the cup as the target object. Or, taking the case of locating the smart watch with the mobile phone, the mobile phone can be regarded as the positioning device and the smart watch as the target object. As the positioning device, the mobile phone can display the FOV on its display screen, prompt whether the cup or the smart watch is within the FOV, and guide the user to find it.
Taking a mobile phone as the positioning device as an example, if the phone's antenna is located at the center of the back of the phone, the FOV of the phone may be a cone-shaped region extending directly forward from that center point. The phone performs AOA positioning according to the incoming wave signals received within the FOV to search for the target object.
Illustratively, as shown in FIG. 1B, the FOV may be displayed in the phone's interface, where it appears as a sector. The angular range of the sector is the angular range of the FOV. Target object 1 is located within the direction range indicated by the phone's FOV, so its direction can be displayed within the sector in the phone's interface, indicated by an arrow. Target object 2 is located outside the direction range indicated by the FOV, so its direction is not displayed within the sector.
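The displayed-sector behavior above amounts to a simple angular membership test. A hypothetical sketch, where the function name, bearings, and 60-degree width are assumptions:

```python
def in_fov(target_bearing_deg: float, fov_center_deg: float, fov_deg: float) -> bool:
    """True if the target's bearing falls inside the sector-shaped FOV,
    i.e. the phone should draw the direction arrow for this target."""
    # Signed angular difference wrapped to (-180, 180].
    diff = (target_bearing_deg - fov_center_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2

# 60-degree FOV pointing straight ahead (bearing 0)
print(in_fov(20.0, 0.0, 60.0))  # True  -> target object 1: arrow displayed
print(in_fov(50.0, 0.0, 60.0))  # False -> target object 2: not displayed
```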
In the related art, manufacturers set the FOV in the display interface of the mobile phone according to test results or experience. For example, the FOV is a preset fixed range.
In one case, when the scene in which the phone is located changes, the actual FOV angle of the phone becomes smaller than the fixed-angle FOV displayed in the phone's interface, and the phone can accurately locate only target objects within the actual FOV. The FOV displayed in the interface, however, remains at the preset fixed angle. Thus, for objects that are within the displayed FOV but outside the actual FOV, the phone may not be able to position them accurately. When the phone still guides the user through the displayed fixed-angle FOV, the guidance may be erroneous or inaccurate, resulting in a poor user experience.
For example, when the phone is in a scene with a simple channel environment, as shown in (a) of FIG. 1C, the angle of the phone's actual FOV is 60 degrees, and the angle of the FOV displayed in the interface is also 60 degrees. The phone guides the user to find the target object through the displayed 60-degree FOV. When the phone moves to a scene with a complex channel environment, the incoming wave signal suffers heavy interference from the complex channel, and the phone's actual FOV shrinks. As in (b) of FIG. 1C, the actual FOV angle becomes 40 degrees, and the phone can accurately determine only target objects within the 40-degree FOV, while the FOV displayed in the interface is still 60 degrees. When the phone still guides the user through the fixed 60-degree FOV, the guidance may be erroneous or inaccurate, resulting in a poor user experience.
In another case, when the scene in which the phone is located changes, the actual FOV of the phone becomes larger than the fixed-angle FOV displayed in the interface, and the phone can accurately determine target objects anywhere within the actual FOV. The FOV displayed in the interface, however, remains at the preset fixed angle, which is now smaller than the actual FOV. When the phone still guides the user through the displayed fixed-angle FOV, the guidance may be erroneous or inefficient, resulting in a poor user experience.
For example, when the phone is in a scene with a complex channel environment, as shown in (c) of FIG. 1C, the actual FOV angle of the phone is 60 degrees, and the FOV angle displayed in the interface is also 60 degrees. The phone guides the user to find the target object through the displayed 60-degree FOV. When the phone moves to a scene with a simple channel environment, the incoming wave signal suffers less channel interference, and the actual FOV of the phone grows. As in (d) of FIG. 1C, the actual FOV angle becomes 90 degrees, and the phone can accurately determine target objects within the 90-degree FOV, while the FOV displayed in the interface is still 60 degrees. When the phone still guides the user through the displayed fixed 60-degree FOV, the guidance may be erroneous or inefficient, resulting in a poor user experience.
Scenario 2: the electronic device generates an electronic map, so that a user can conveniently and remotely control a target object through the electronic map.
In the scene, the electronic device generates an electronic map, and the electronic map contains the positions of a plurality of objects or devices within the range of the electronic map. The user can determine a target object at a specific position according to the electronic map displayed in the electronic equipment, and further remotely control the target object.
For example, in an intelligent home scenario, an electronic device (such as a mobile phone, a gateway, etc.) may generate an electronic map according to the locations of a plurality of home devices, and a user may remotely control the home devices according to the electronic map. As shown in fig. 2, the electronic map generated in the electronic device can represent the home equipment in the house and the position where the home equipment is located, and taking a study as an example, a ceiling lamp is arranged in the study in the electronic map, and a desk lamp is also arranged on a desk of the study. When the user is not in the study room, such as the user is in a bedroom, the desk lamp in the electronic map can be remotely turned on by clicking the desk lamp.
In the related art, an electronic device acquires a plan view of a house, and a user manually inputs the positions of a plurality of home devices into the plan view displayed by the electronic device according to the specific positions of the home devices in the house. The electronic device then generates an electronic map of the house from the plan view and the home device positions manually input by the user. The user can remotely control the home devices according to the electronic map. For example, taking a study as an example, if there is a ceiling lamp in the center of the ceiling of the study and a desk lamp on the desk, the user needs to set a ceiling lamp in the center of the study in the house plan view, and set a desk lamp at the desk position. As shown in fig. 2, the ceiling lamp and the desk lamp may be graphically displayed in the house plan view. However, when there are hundreds of home devices in a house, the user needs to manually set the position of each home device in the electronic map, which is inefficient and prone to error.
In order to solve the above problems, an embodiment of the application provides a positioning method. In the method, an electronic device with a positioning function (described as a positioning device in the embodiments of the application) can prompt the user with the position of the target object according to a dynamically adjusted FOV, improving the positioning accuracy and positioning efficiency of the positioning device.
In one possible implementation, embodiments of the present application may be applied in a system of a positioning device and a plurality of articles or devices. The system may be as shown in fig. 3A, the system comprising a positioning device 100 and a target object 200.
Alternatively, the positioning device may be an electronic device having a positioning function. The positioning device may be a personal computer (personal computer, PC), a mobile phone (mobile phone), a tablet (Pad), a notebook computer, a desktop computer, a computer with transceiving functions, a virtual reality (virtual reality, VR) terminal device, an augmented reality (augmented reality, AR) terminal device, a wireless terminal in industrial control (industrial control), a wireless terminal in self-driving (self driving), a wireless terminal in remote medical (remote medical), a wireless terminal in smart grid (smart grid), a wireless terminal in transportation security (transportation safety), a wireless terminal in smart city (smart city), a gateway in smart home (smart home), an access point (access point) device, a home device, a wearable device, a vehicle-mounted device, or the like. The embodiment of the application does not limit the specific form of the electronic equipment.
Alternatively, the target object may be an article or a device. An electronic tag is arranged in the target object and can emit an incoming wave signal. Optionally, the incoming wave signal includes, but is not limited to, a Bluetooth signal, a wireless fidelity signal, a wireless carrier communication signal, a wireless communication signal, and the like. The target device may be a terminal device such as a personal computer, a mobile phone, a tablet computer, a notebook computer, a desktop computer, a computer with transceiving functions, a virtual reality terminal device, an augmented reality terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in telemedicine, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, a wearable device, or a vehicle-mounted device. The target article may be a cup, an artwork, a book, or the like.
The target object may be a water cup, a smart watch, etc. shown in fig. 1A, or may be an electronic device such as a desk lamp, a dome lamp, etc. shown in fig. 2.
The embodiment of the application does not limit the specific form of the target object.
Alternatively, the positioning device may use positioning technologies such as AOA to position the target object.
Optionally, the positioning device includes at least one antenna array for receiving incoming wave signals from the target object. Optionally, the incoming wave signals include, but are not limited to, Bluetooth signals, wireless fidelity signals, wireless carrier communication signals, wireless communication signals, and the like. The positioning device can determine the phase difference of the received incoming wave signals according to the incoming wave signals received by different antenna arrays, and calculates the direction or angle of the target object from the phase difference.
Alternatively, common phase difference extraction methods may include beamforming (beamforming), phase-difference-of-arrival (PDOA), multiple signal classification algorithms (multiple signal classification, MUSIC), and the like.
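As an illustration of the simplest of these methods, the following is a minimal sketch of phase-difference-of-arrival (PDOA) angle estimation with a two-element antenna array. The function name, element spacing, and wavelength values are assumptions for illustration only and are not taken from the specification:

```python
import math

def pdoa_angle(phase_diff_rad, spacing_m, wavelength_m):
    """Estimate the angle of arrival (relative to array broadside) from
    the phase difference between two antenna elements (PDOA).
    Uses: phase difference = 2*pi * d * sin(theta) / lambda."""
    s = phase_diff_rad * wavelength_m / (2 * math.pi * spacing_m)
    s = max(-1.0, min(1.0, s))  # clamp against measurement noise
    return math.degrees(math.asin(s))

# Example: 2.4 GHz signal (wavelength ~0.125 m), half-wavelength spacing.
# A measured phase difference of pi/2 corresponds to a 30-degree arrival angle.
angle = pdoa_angle(math.pi / 2, 0.0625, 0.125)
```

A phase difference of zero means the wave arrives from broadside (0 degrees); the clamp guards against noisy phase measurements pushing the sine argument outside [-1, 1].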
In one possible implementation, as shown in (B) of fig. 3B, the positioning device may include a Receive (RX) module for receiving an incoming wave signal; and the calculating (calculator) module is used for calculating the position of the target object by adopting a positioning algorithm such as AOA. Optionally, the positioning device may further include a User Interface (UI) module for displaying FOV and prompt information, etc.
In one possible implementation, as shown in fig. 3B (a), the target object may include a Transmit (TX) module for transmitting an incoming wave signal. Accordingly, the locating device may receive the incoming wave signal transmitted by the target object.
Fig. 4A is a schematic hardware structure of a positioning device according to an embodiment of the present application. The positioning device comprises at least one processor 101, communication lines 102, a memory 103 and at least one communication interface 104. Wherein the memory 103 may also be comprised in the processor 101.
The processor 101 may be a central processing unit (central processing unit, CPU), but may also be another general purpose processor, a digital signal processor (digital signal processor, DSP), an application specific integrated circuit (application specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Communication line 102 may include a pathway to transfer information between the aforementioned components.
A communication interface 104 for communicating with other devices. In the embodiment of the present application, the communication interface may be a module, a circuit, a bus, an interface, a transceiver, or other devices capable of implementing a communication function, for communicating with other devices. Alternatively, when the communication interface is a transceiver, the transceiver may be a separately provided transmitter that is operable to transmit information to other devices, or a separately provided receiver that is operable to receive information from other devices. The transceiver may also be a component that integrates the functions of transmitting and receiving information, and embodiments of the present application are not limited to the specific implementation of the transceiver.
The memory 103 may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (read-only memory, ROM), a programmable ROM (programmable ROM, PROM), an erasable programmable ROM (erasable PROM, EPROM), an electrically erasable programmable ROM (electrically EPROM, EEPROM), or a flash memory. The volatile memory may be random access memory (random access memory, RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static random access memory (static RAM, SRAM), dynamic random access memory (dynamic RAM, DRAM), synchronous dynamic random access memory (synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (synchlink DRAM, SLDRAM), and direct rambus random access memory (direct rambus RAM, DR RAM), or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory may be stand-alone and coupled to the processor 101 via the communication line 102. The memory 103 may also be integrated with the processor 101.
The memory 103 is used for storing computer-executable instructions for implementing the scheme of the present application, and their execution is controlled by the processor 101. The processor 101 is configured to execute the computer-executable instructions stored in the memory 103, thereby implementing the positioning method provided in the following embodiments of the present application.
Alternatively, the computer-executable instructions in the embodiments of the present application may be referred to as application code, instructions, computer programs, or other names, and the embodiments of the present application are not limited in detail.
In a particular implementation, as one embodiment, processor 101 may include one or more CPUs, such as CPU0 and CPU1 in FIG. 4A.
In a particular implementation, as one embodiment, the positioning device may include multiple processors, such as processor 101 and processor 105 in FIG. 4A. Each of these processors may be a single-core (single-CPU) processor or may be a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
The positioning device may be a general purpose device or a special purpose device, and the embodiment of the application is not limited to the type of electronic device.
In one possible implementation, a mobile phone is taken as an example of a positioning device. Fig. 4B is a schematic structural diagram of a mobile phone according to an embodiment of the present application. The handset may include a processor 210, an external memory interface 220, an internal memory 221, a universal serial bus (universal serial bus, USB) interface 230, a charge management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a wireless communication module 260, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, a headphone interface 270D, a camera 293, and a display 294, among others. Optionally, the handset may also include a mobile communication module 250 or the like.
It should be understood that the structure illustrated in this embodiment is not limited to a specific configuration of the mobile phone. In other embodiments, the handset may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 210 may include one or more processing units, such as, for example: processor 210 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-network processing unit (neural-network processing unit, NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The controller can be a neural center and a command center of the mobile phone. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache memory. The memory may hold instructions or data that the processor 210 has just used or recycled. If the processor 210 needs to reuse the instruction or data, it may be called directly from the memory. Repeated accesses are avoided and the latency of the processor 210 is reduced, thereby improving the efficiency of the system.
In some embodiments, processor 210 may include one or more interfaces. The interfaces may include an inter-integrated circuit (inter-integrated circuit, I2C) interface, an inter-integrated circuit sound (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver/transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (general-purpose input/output, GPIO) interface, a USB interface, etc.
The charge management module 240 is configured to receive a charge input from a charger. The charging management module 240 may also supply power to the mobile phone through the power management module 241 while charging the battery 242. The power management module 241 is used for connecting the battery 242, and the charge management module 240 and the processor 210. The power management module 241 may also receive input from the battery 242 to power the handset.
The wireless communication function of the mobile phone can be realized by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the handset may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
When the handset includes the mobile communication module 250, the mobile communication module 250 may provide a solution for wireless communication including 2G/3G/4G/5G, etc. applied to the handset. The mobile communication module 250 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), or the like. The mobile communication module 250 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 250 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be disposed in the processor 210. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be provided in the same device as at least some of the modules of the processor 210.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to speaker 270A, receiver 270B, etc.), or displays images or video through display screen 294. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 250 or other functional module, independent of the processor 210.
The wireless communication module 260 may provide solutions for wireless communication applied to the handset, including wireless local area network (wireless local area networks, WLAN) (e.g., Wi-Fi network), Bluetooth (bluetooth, BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), NFC, infrared (infrared, IR), etc. The wireless communication module 260 may be one or more devices that integrate at least one communication processing module. The wireless communication module 260 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and transmits the processed signals to the processor 210. The wireless communication module 260 may also receive a signal to be transmitted from the processor 210, frequency-modulate and amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, the mobile phone's antenna 1 is coupled to the mobile communication module 250 and the antenna 2 is coupled to the wireless communication module 260, so that the mobile phone can communicate with a network and other devices through wireless communication technology. The wireless communication technologies can include the global system for mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, among others. The GNSS may include a global positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou navigation satellite system (beidou navigation satellite system, BDS), a quasi-zenith satellite system (quasi-zenith satellite system, QZSS), and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The cell phone implements display functions through the GPU, the display 294, and the application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 294 and the application processor. Processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
The display 294 is used to display images, videos, and the like. The display 294 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flex light-emitting diode, FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the handset may include 1 or N displays 294, N being a positive integer greater than 1.
The cell phone may implement a photographing function through an ISP, a camera 293, a video codec, a GPU, a display 294, an application processor, and the like. In some embodiments, the cell phone may include 1 or N cameras 293, N being a positive integer greater than 1.
The external memory interface 220 may be used to connect to an external memory card to extend the memory capabilities of the handset. The external memory card communicates with the processor 210 through an external memory interface 220 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
Internal memory 221 may be used to store computer executable program code that includes instructions. The processor 210 executes various functional applications of the cellular phone and data processing by executing instructions stored in the internal memory 221. The internal memory 221 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the handset (e.g., audio data, phonebook, etc.), etc. In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The handset may implement audio functions through an audio module 270, speaker 270A, receiver 270B, microphone 270C, headphone interface 270D, and an application processor, etc. Such as music playing, recording, etc.
It will be appreciated that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the positioning apparatus. In other embodiments of the application, the positioning device may include more or less components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The following describes in detail the positioning method provided by the embodiment of the present application with reference to fig. 3A, 3B, 4A, and 4B.
In the embodiment of the application, the positioning device can prompt the user with the position of the target object according to the dynamically adjusted FOV, improving the positioning accuracy and positioning efficiency of the positioning device.
In an embodiment of the present application, the effective FOV of the positioning device, which refers to the FOV that can be used to prompt the position of the target object, is described as the first FOV. For example: for a positioning device with a display screen, where the positioning device is capable of displaying a FOV on the display screen, the FOV displayed by the positioning device is described as the first FOV. The first FOV may be derived from the actual FOV of the positioning device, and embodiments of the present application describe the actual FOV of the positioning device as the second FOV.
As shown in fig. 5, the method provided by the embodiment of the application includes the following steps S101 to S103:
S101, the positioning device determines a second FOV.
Wherein the second FOV is the actual FOV of the positioning device.
Optionally, the positioning device determines the second FOV according to a second parameter, the second parameter comprising one or more of: hardware information of the positioning device, environment information of the environment in which the positioning device is located, and the network frequency band of the positioning device.
Optionally, the hardware information of the positioning device includes a combination of antennas used by the positioning device.
Optionally, the environment information in which the positioning device is located includes a complex environment and a simple environment. Optionally, the positioning device determines whether the environment is a complex environment or a simple environment according to a channel estimation result: when the channel estimation result is greater than a channel threshold, the environment is a complex environment; when the channel estimation result is less than the channel threshold, the environment is a simple environment.
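A minimal sketch of this threshold rule, assuming a scalar channel-estimation metric and an illustrative threshold value (neither the concrete metric nor the threshold is specified here):

```python
def classify_environment(channel_metric, channel_threshold=0.5):
    """Classify the channel environment from a channel-estimation metric:
    above the threshold -> complex environment, otherwise simple environment.
    The metric and the default threshold are illustrative assumptions."""
    return "complex" if channel_metric > channel_threshold else "simple"
```

In practice the metric might be derived from multipath spread or signal-quality statistics; the sketch only captures the greater-than / less-than decision described above.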
Optionally, the network frequency band of the positioning device includes a frequency band used by a network to which the positioning device is connected.
In the present application, the positioning device determines the second FOV based on one or more parameters. When the value of the parameter changes, the determined second FOV also changes. Thus, the second FOV determined in the embodiments of the present application is dynamically adjusted, rather than fixed.
In one possible implementation, the hardware information of the positioning device affects the size of the second FOV. Different hardware information in the positioning device results in a different second FOV. For example, a different second FOV is determined based on a different antenna combination in the positioning device. When the antenna combination in the positioning device is antenna combination A, the positioning performance of the positioning device is good, and the second FOV of the positioning device is larger. When the antenna combination in the positioning device is antenna combination B, the positioning performance of the positioning device is poor, and the second FOV of the positioning device is smaller. Reference is made in particular to the following detailed description.
In one possible implementation, the environmental information in which the positioning device is located affects the size of the second FOV. And when the environmental information of the positioning equipment is different, determining that the obtained second FOV is different. For example, when the positioning device is in a complex environment, the positioning error of the positioning device is large, and the second FOV of the positioning device is smaller. When the positioning equipment is in a simple environment, the positioning error of the positioning equipment is small, and the second FOV of the positioning equipment is larger.
In one possible implementation, the network frequency band of the positioning device affects the size of the second FOV. Different network bands used by the positioning device result in different second FOVs. For example, the second FOV differs when the positioning device uses band 36 versus band 163 of the Wi-Fi 5G network. When the positioning device uses band 163, the second FOV of the positioning device is larger due to the wider bandwidth of band 163. When the positioning device uses band 36, the second FOV of the positioning device is smaller due to the narrower bandwidth of band 36.
In other possible implementations, the positioning device determines the second FOV based on a combination of parameters, such as the pose of the positioning device, the hardware information, the environment information in which the positioning device is located, and the network frequency band.
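A sketch of how such a combination of parameters might map to a second FOV. Every angle value, parameter name, and adjustment rule below is an illustrative assumption; only the qualitative trends come from the description above (a better antenna combination, a simpler environment, and a wider bandwidth each widen the FOV):

```python
def second_fov(antenna_combo, environment, band_bandwidth_mhz):
    """Illustrative dynamic determination of the actual (second) FOV in
    degrees from hardware, environment, and network-band parameters.
    All numeric values are assumptions, not from the specification."""
    fov = 90 if antenna_combo == "A" else 60   # better antennas -> wider FOV
    if environment == "complex":
        fov -= 20                              # complex channel shrinks the FOV
    if band_bandwidth_mhz >= 80:
        fov += 10                              # wider bandwidth -> wider FOV
    return max(fov, 0)
```

The point of the sketch is that the second FOV is recomputed whenever any input parameter changes, rather than being a fixed angle.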
In one possible implementation, the positioning device may prompt the user to adjust the position, angle, etc. of the positioning device to obtain a second FOV covering a larger direction range.
Optionally, the positioning device may prompt the user to adjust the position, angle, direction, etc. of the positioning device by means of voice, text, signal light prompt, dynamic effect prompt, etc. to obtain the second FOV with a larger direction range.
S102, the positioning equipment determines a first FOV according to the second FOV and the first parameter.
Wherein the first parameter comprises: the pose of the positioning device.
In general, the target object is located above the horizontal ground, and is rarely located underground. The attitude of the positioning device can thus be represented by its positional relationship to the ground.
The pose of the positioning device may include: an upright posture, an inclined posture, and a horizontal posture. In the upright posture, the positioning device is approximately perpendicular to the horizontal plane, and the included angle between the FOV side of the positioning device and the horizontal ground is generally in the range of 60 degrees to 100 degrees. In the inclined posture, there is an inclination angle between the positioning device and the horizontal plane, and the included angle between the FOV side of the positioning device and the horizontal ground is generally in the range of 30 degrees to 60 degrees. In the horizontal posture, the positioning device is approximately parallel to the horizontal plane, and the included angle between the FOV side of the positioning device and the horizontal ground is generally within the range of -10 degrees to 30 degrees.
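The angle ranges above can be summarized in a small classifier; the function name and the string labels are assumptions for illustration:

```python
def classify_pose(fov_side_angle_deg):
    """Map the included angle between the device's FOV side and the
    horizontal ground (in degrees) to a pose, using the ranges above."""
    if 60 <= fov_side_angle_deg <= 100:
        return "upright"
    if 30 <= fov_side_angle_deg < 60:
        return "inclined"
    if -10 <= fov_side_angle_deg < 30:
        return "horizontal"
    return "unknown"   # outside the ranges given in the description
```

Boundary angles are resolved here by half-open intervals, which is one possible reading of the overlapping range endpoints in the text.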
In the application, the positioning device determines the first FOV according to the second FOV and the first parameter. When the value of the first parameter is changed or the second FOV is changed, the determined first FOV is also changed. Thus, the first FOV determined in the embodiments of the present application is dynamically adjusted, rather than fixed.
In one possible implementation, the pose of the positioning device affects the size of the first FOV. The first FOV determined when the positioning device is in different poses is different. For example, when the positioning device is in the upright posture, the second FOV of the positioning device can cover the largest range of direction angles, and thus the first FOV derived by the positioning device from the second FOV is the largest; when the positioning device is in the inclined posture, the second FOV of the positioning device can cover a smaller range of direction angles, so the first FOV obtained by the positioning device from the second FOV is smaller; when the positioning device is in the horizontal posture, the second FOV of the positioning device can only cover the smallest range of direction angles, so the first FOV obtained by the positioning device from the second FOV is the smallest.
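A sketch of deriving the first FOV from the second FOV and the pose, following the ordering described above (upright largest, horizontal smallest). The scale factors are illustrative assumptions, since the specification gives only the qualitative ordering:

```python
def first_fov(second_fov_deg, pose):
    """Derive the effective (first) FOV in degrees from the actual
    (second) FOV and the device pose. Scale factors are illustrative:
    upright keeps the full FOV; inclined and horizontal poses cover
    progressively smaller direction ranges."""
    scale = {"upright": 1.0, "inclined": 0.75, "horizontal": 0.5}
    return second_fov_deg * scale.get(pose, 0.5)
```

Because both inputs are recomputed as the device moves, the first FOV that guides the user adjusts dynamically rather than staying at a preset fixed angle.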
S103, the positioning device prompts the user with the position of the target object according to the first FOV.
Wherein the target object may be an article or a device. Such as the smart watch, water cup of fig. 1A, dome lamp, desk lamp of fig. 2, etc.
In one possible implementation, if the positioning device is provided with a display screen, the positioning device displays the first FOV on the display screen to prompt the user with the location of the target object. For example: when the positioning device is applied to scenario 1, the mobile phone is provided with a display screen and can display the first FOV; for example, an arrow is displayed on the display screen of the mobile phone in scenario 1 to prompt the user with the direction of the target object. The direction the arrow points in is the direction of the target object. Further, the user can find the target object based on the guidance of the FOV.
Alternatively, when the positioning method is applied to scenario 2 and the positioning device is a mobile phone with a display screen, the mobile phone can display the first FOV, determine the positions of a plurality of target devices according to the first FOV, and generate an electronic map from those positions, where the electronic map is used to prompt the user of the positions of the target devices. For example, in scenario 2, when the desk lamp is located within the direction range indicated by the first FOV of the mobile phone, the mobile phone can determine the position of the desk lamp, generate an electronic map according to that position, and display the desk lamp in the generated map. The user can then tap the desk lamp in the electronic map to control the physical desk lamp.
With the positioning method described above, the position of the target object is located automatically according to the dynamically changing first FOV of the positioning device, which improves positioning accuracy and efficiency.
For example, in the prior art for scenario 1, the direction range indicated by the first FOV stays fixed even when the direction range indicated by the second FOV of the positioning device changes, so positioning may be inaccurate, misleading the user or guiding the user inefficiently. In this application, the first FOV of the positioning device is dynamically adjusted according to one or more parameters, so the target object can be located accurately and efficiently.
The positioning method provided by the embodiment of the application is described in detail below with respect to the above scenario 1.
In one embodiment, the method provided by the embodiment of the present application may be applied to scenario 1, which is described below taking the positioning device being a mobile phone as an example.
The embodiment of the application can be applied to an application program in the mobile phone. Taking the application as a positioning application as an example, as shown in fig. 6 (a), the user may click the icon of the positioning application on the mobile phone to trigger the mobile phone to jump to the interface 30 shown in fig. 6 (b). The interface 30 displays a plurality of items or devices that can be searched for, such as a smart television, a smart lamp, a smart watch, and a water cup; these items or devices may be connected to the mobile phone in advance. Illustratively, the user may click the smart watch to be located in the interface 30 to trigger the mobile phone to jump to the interface 31 shown in fig. 6 (c) or fig. 6 (d), and the interface 31 may display the first FOV. Optionally, because the mobile phone establishes connections with the items or devices in advance, when an item or device is located within the direction range indicated by the first FOV, the mobile phone can learn the name of that item or device. As shown in fig. 6 (c), only the smart cup is within the direction range indicated by the first FOV and the smart watch is not, so the mobile phone displays the prompt text "please rotate the phone to search for objects" in the interface 31, prompting the user to adjust the posture of the mobile phone to search for the smart watch; the mobile phone may also play the voice prompt "please rotate the phone to search for objects" for the same purpose. For example, when the user holds the mobile phone facing east, the FOV of the mobile phone faces east; the user then rotates according to the prompt, and when the user faces north, the FOV of the mobile phone faces north.
As shown in fig. 6 (d), when the smart watch is located within the direction range indicated by the first FOV, the mobile phone may prompt the user of the direction of the smart watch with an arrow and display the prompt text "please advance in the direction of the arrow" to guide the user to the smart watch; the mobile phone may also play the prompt "please advance in the direction of the arrow" by voice.
Alternatively, the color of the first FOV may be different from the background color of the phone interface 31, for example, the color of the first FOV is green, and the background color of the phone interface 31 is gray, so as to prompt the user about the direction range indicated by the first FOV. When the smart watch is not located in the direction range indicated by the first FOV, the first FOV is unchanged in color; when the smart watch is located within the range of directions indicated by the first FOV, the first FOV may change color to prompt the user that the smart watch is now located within the range of directions indicated by the first FOV. For example, when the smart watch is not located within the range of directions indicated by the first FOV, the first FOV color is always green; when the smart watch is located in the direction range indicated by the first FOV, the first FOV is changed from green to red so as to prompt the user that the smart watch is located in the direction range indicated by the first FOV.
Alternatively, the first FOV displayed by the handset interface 31 may be dynamically changed.
In one possible implementation, the hardware condition of the positioning device affects the size of the first FOV. When the hardware condition in the handset changes, the handset can dynamically determine a different first FOV. For example, when the combination of the mobile phone antennas changes, the mobile phone can dynamically determine a different first FOV.
For example, suppose the mobile phone has an antenna 1, an antenna 2, and an antenna 3. When the mobile phone uses antennas 1 and 2 (antenna combination A), its performance is good: even when receiving incoming-wave signals within a large FOV for AOA positioning, the AOA positioning error remains below the error threshold. Therefore, with antenna combination A, the second FOV of the mobile phone is larger, and its first FOV is correspondingly larger. When the mobile phone uses antennas 1 and 3 (antenna combination B), its performance is poorer: the mobile phone can keep the AOA positioning error below the error threshold only when receiving incoming-wave signals within a smaller second FOV. Therefore, with antenna combination B, the second FOV of the mobile phone is smaller, and its first FOV is correspondingly smaller. Other antenna combinations may also be available in the mobile phone, which is not limited in the embodiments of the present application.
As shown in fig. 7, when the antenna combination in the handset changes from antenna combination a to antenna combination B, the first FOV of the handset decreases.
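The selection logic described above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: the per-combination AOA error values and the 5-degree error threshold are hypothetical, chosen only to show how the widest second FOV whose estimated error stays below the threshold is kept.

```python
# Hypothetical estimated AOA error (degrees) at candidate FOV widths,
# per antenna combination. Combination A (antennas 1+2) performs well;
# combination B (antennas 1+3) performs worse, as in the description.
AOA_ERROR = {
    "A": {120: 3.0, 90: 2.0, 60: 1.2},
    "B": {120: 9.0, 90: 6.0, 60: 4.0},
}

def second_fov(combination: str, error_threshold_deg: float = 5.0) -> int:
    """Return the widest candidate FOV whose AOA error is acceptable."""
    acceptable = [fov for fov, err in
                  sorted(AOA_ERROR[combination].items(), reverse=True)
                  if err < error_threshold_deg]
    if not acceptable:
        raise ValueError("no FOV meets the error threshold")
    return acceptable[0]
```

With these assumed values, combination A keeps the full 120-degree candidate FOV, while combination B is reduced to 60 degrees, mirroring the shrinking FOV of fig. 7.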
In one possible implementation, the network frequency band of the positioning device affects the size of the first FOV. When the network to which the mobile phone is connected changes, the mobile phone may dynamically determine a different first FOV. For example, when the frequency band of the network (e.g., Wi-Fi, cellular) that the mobile phone is using changes, the mobile phone may dynamically determine a different first FOV.
For example, when the handset antenna is connected to different frequency bands of the Wi-Fi 5G network, the first FOV of the handset is also different.
For example, band 36 of the Wi-Fi 5G network is a lower-frequency 5.18 GHz band with a narrow 20 MHz bandwidth, while band 163 of the Wi-Fi 5G network is a higher-frequency 5.815 GHz band with a wide 160 MHz bandwidth. When the mobile phone antenna switches from band 36 to band 163, the two frequency bands are far apart, so the second FOV of the mobile phone differs, and the first FOV of the mobile phone therefore also differs.
In another possible implementation, the environmental information in which the positioning device is located affects the size of the first FOV. Therefore, when the environment of the mobile phone changes, the mobile phone can dynamically determine different first FOVs.
Alternatively, the environment in which the mobile phone is located may be classified into a complex environment and a simple environment. When the channel complexity of the environment in which the mobile phone is located is high, the environment is a complex environment; in a complex environment, the AOA positioning error of the mobile phone is larger and the second FOV of the mobile phone is smaller, so the first FOV is correspondingly smaller. When the channel complexity of the environment is low, the environment is a simple environment; in a simple environment, the AOA positioning error is smaller and the second FOV is larger, so the first FOV is correspondingly larger.
As shown in fig. 8, the first FOV of the handset decreases when the environment in which the handset is located changes from a simple environment to a complex environment.
Optionally, the mobile phone can determine the channel complexity of the environment where the mobile phone is located through the channel estimation result of the environment where the mobile phone is located, so as to determine whether the environment where the mobile phone is located is a complex environment or a simple environment.
Optionally, the mobile phone can determine the multipath complexity of the environment according to the channel estimation result, and determine whether the environment of the mobile phone is a complex environment or a simple environment according to the multipath complexity of the environment.
The more divergent the channels in the scenario in which the mobile phone is located, the more complex the environment (a complex environment); the more concentrated the channels, the simpler the environment (a simple environment). Likewise, the larger the channel differences among the plurality of antennas of the mobile phone, the more complex the environment; the smaller the differences, the simpler the environment. And the larger the differences among the channel estimates on the plurality of subcarriers, the more complex the environment; the smaller the differences, the simpler the environment.
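As a rough illustration of these heuristics, the spread of channel estimates across subcarriers and across antennas can be turned into a complex/simple decision. This is a hedged sketch only: the magnitude-only channel representation and the spread threshold are assumptions, not values from the patent.

```python
import statistics

def is_complex_environment(csi, spread_threshold=0.5):
    """Classify the environment from per-subcarrier channel estimates.

    csi: one list per antenna, each containing per-subcarrier channel
    magnitudes. Larger spread across subcarriers or across antennas is
    taken to mean richer multipath, i.e. a complex environment.
    The 0.5 threshold is a hypothetical tuning value.
    """
    # Spread across subcarriers, averaged over antennas.
    subcarrier_spread = statistics.mean(
        statistics.pstdev(antenna) for antenna in csi)
    # Spread across antennas, averaged over subcarriers.
    antenna_spread = statistics.mean(
        statistics.pstdev(column) for column in zip(*csi))
    return max(subcarrier_spread, antenna_spread) > spread_threshold
```

A flat, concentrated channel response classifies as simple; strongly varying magnitudes across subcarriers or antennas classify as complex, matching the divergence heuristics above.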
As shown in fig. 9, the channels in the environment in which the mobile phone is located may be decomposed into a multi-path channel model by a multiple signal classification (MUSIC) algorithm, a beamforming algorithm, or the like. Alternatively, the wireless signals in the channel model may be divided into line of sight (LOS) transmissions and non-line of sight (NLOS) transmissions: the environment is simple when the wireless signal is transmitted over LOS, and complex when it is transmitted over NLOS. Moreover, the more concentrated the channel model distribution, the simpler the environment; the more dispersed the distribution, the more complex the environment.
Generally, when the mobile phone is located indoors, the channel estimation result is usually complex because an indoor environment contains relatively many objects, such as a smart television, a smart lamp, and a smart door lock. Therefore, an indoor environment can be considered a complex environment. When the mobile phone is located outdoors, the channel estimation result is simple because there are fewer objects in an outdoor environment. Therefore, an outdoor environment can be considered a simple environment.
Therefore, alternatively, the mobile phone can determine whether the environment in which the mobile phone is located is a complex environment or a simple environment by judging whether the mobile phone is located indoors or outdoors.
Alternatively, the mobile phone may determine whether the mobile phone is located in an indoor environment or an outdoor environment through a positioning module such as a global positioning system (global positioning system, GPS).
In another possible implementation, the posture of the positioning device affects the size of the first FOV. Thus, the mobile phone may determine a different first FOV according to its posture.
Illustratively, the second FOV is a conical region extending straight ahead from the antenna position at the center of the back of the mobile phone. The mobile phone detects target objects above the ground in the horizontal direction, and the first FOV of the mobile phone is the above-ground FOV directly in front of the user. So when the mobile phone is in an upright posture, the first FOV of the mobile phone is the largest; when the mobile phone is in a tilted posture, the first FOV is smaller; and when the mobile phone is in a horizontal posture, the first FOV is the smallest.
Illustratively, as shown in fig. 10 (a), the second FOV of the handset is 60 degrees. When the mobile phone is upright, the second FOV is positioned right in front of the mobile phone antenna and parallel to the horizontal ground, and the first FOV of the mobile phone is 60 degrees.
As shown in fig. 10 (b), the second FOV of the mobile phone is 60 degrees. When the mobile phone is tilted downward by 30 degrees, the second FOV points in the direction 30 degrees below the front of the mobile phone antenna, and the angle between the second FOV and the horizontal ground is smaller than 60 degrees, for example 50 degrees. At this time, the first FOV relative to the direction directly in front of the mobile phone antenna is reduced, because the part of the second FOV below the ground is excluded. Therefore, the first FOV of the mobile phone is reduced to 50 degrees.
As shown in fig. 10 (c), the second FOV of the mobile phone is 60 degrees. When the mobile phone is tilted downward by 80 degrees, the second FOV points in the direction 80 degrees below the front of the mobile phone antenna, and the angle between the second FOV and the horizontal ground is smaller than 60 degrees, for example 30 degrees. At this time, the first FOV relative to the direction directly in front of the mobile phone antenna is reduced, because the part of the second FOV below the ground is excluded. Therefore, the first FOV of the mobile phone is reduced to 30 degrees.
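The example values of fig. 10 (60 degrees with no tilt, 50 degrees at a 30-degree downward tilt, 30 degrees at an 80-degree downward tilt) can be captured as a lookup with linear interpolation between the sample points. The interpolation itself is an assumption: the description only gives these three points.

```python
# (downward tilt in degrees, resulting first FOV in degrees),
# taken from the fig. 10 examples with a 60-degree second FOV.
TILT_TO_FIRST_FOV = [(0, 60), (30, 50), (80, 30)]

def first_fov_for_tilt(tilt_deg: float) -> float:
    """First FOV for a given downward tilt, linearly interpolated
    between the sample points of fig. 10 (an assumed model)."""
    points = TILT_TO_FIRST_FOV
    if tilt_deg <= points[0][0]:
        return points[0][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if tilt_deg <= x1:
            return y0 + (y1 - y0) * (tilt_deg - x0) / (x1 - x0)
    return points[-1][1]  # beyond the last sample point
```

For example, a 55-degree tilt falls halfway between the 30-degree and 80-degree samples and yields a 40-degree first FOV under this interpolation.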
Alternatively, the handset may determine the attitude of the handset through an inertial measurement unit (inertial measurement unit, IMU), such as determining whether the attitude of the handset is an upright attitude, a tilted attitude, or a horizontal attitude.
The mobile phone comprehensively determines its first FOV according to the posture of the mobile phone, the antenna condition, and the complexity of the environment. Illustratively, the first FOV may be as shown in table 1 below.
TABLE 1

Posture (back-to-ground angle)   Antenna combination   Environment   First FOV
Upright (60° to 100°)            Group A               Complex       120°
Upright (60° to 100°)            Group A               Simple        160°
Upright (60° to 100°)            Group B               Complex       100°
Upright (60° to 100°)            Group B               Simple        120°
Tilted (30° to 60°)              Group A               Complex       90°
Tilted (30° to 60°)              Group A               Simple        140°
Tilted (30° to 60°)              Group B               Complex       60°
Tilted (30° to 60°)              Group B               Simple        90°
Horizontal (-10° to 30°)         Group A               Complex       60°
Horizontal (-10° to 30°)         Group A               Simple        100°
Horizontal (-10° to 30°)         Group B               Complex       30°
Horizontal (-10° to 30°)         Group B               Simple        90°
For example, as shown in table 1, when the angle between the back of the mobile phone and the ground is 60° to 100° (inclusive), the posture of the mobile phone is the upright posture. In the upright posture, when the antenna combination is antenna group A, the first FOV of the mobile phone is 120 degrees in a complex environment and 160 degrees in a simple environment; when the antenna combination is antenna group B, the first FOV is 100 degrees in a complex environment and 120 degrees in a simple environment.
When the angle between the back of the mobile phone and the ground is 30° to 60° (including 30°), the posture of the mobile phone is the tilted posture. In the tilted posture, when the antenna combination is antenna group A, the first FOV is 90 degrees in a complex environment and 140 degrees in a simple environment; when the antenna combination is antenna group B, the first FOV is 60 degrees in a complex environment and 90 degrees in a simple environment.
When the angle between the back of the mobile phone and the ground is -10° to 30° (including -10°), the posture of the mobile phone is the horizontal posture. In the horizontal posture, when the antenna combination is antenna group A, the first FOV is 60 degrees in a complex environment and 100 degrees in a simple environment; when the antenna combination is antenna group B, the first FOV is 30 degrees in a complex environment and 90 degrees in a simple environment.
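The decision logic of table 1 can be written as a lookup keyed by posture, antenna group, and environment. This is a minimal Python sketch using the angle ranges and FOV values stated above; boundary handling at the shared endpoints follows the inclusive ranges given in the text.

```python
# First FOV (degrees) keyed by (posture, antenna group, environment),
# transcribed from table 1 of the description.
FIRST_FOV_TABLE = {
    ("upright",    "A", "complex"): 120, ("upright",    "A", "simple"): 160,
    ("upright",    "B", "complex"): 100, ("upright",    "B", "simple"): 120,
    ("tilted",     "A", "complex"): 90,  ("tilted",     "A", "simple"): 140,
    ("tilted",     "B", "complex"): 60,  ("tilted",     "B", "simple"): 90,
    ("horizontal", "A", "complex"): 60,  ("horizontal", "A", "simple"): 100,
    ("horizontal", "B", "complex"): 30,  ("horizontal", "B", "simple"): 90,
}

def posture_from_angle(back_to_ground_deg: float) -> str:
    """Map the angle between the phone back and the ground to a posture."""
    if 60 <= back_to_ground_deg <= 100:
        return "upright"
    if 30 <= back_to_ground_deg < 60:
        return "tilted"
    if -10 <= back_to_ground_deg < 30:
        return "horizontal"
    raise ValueError("angle outside the ranges of table 1")

def first_fov(back_to_ground_deg, antenna_group, environment):
    posture = posture_from_angle(back_to_ground_deg)
    return FIRST_FOV_TABLE[(posture, antenna_group, environment)]
```

For example, a phone held at 75 degrees with antenna group A in a complex environment gets a 120-degree first FOV.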
The division of the FOV in the embodiments of the present application is not particularly limited; in practice, the FOV may be divided into more or fewer levels as required.
Optionally, the mobile phone can prompt the user to adjust the posture of the mobile phone so that the direction range indicated by the first FOV becomes larger, improving positioning efficiency. For example, when the mobile phone is in a tilted posture, the mobile phone prompts the user to bring it into an upright posture, which enlarges the direction range indicated by the first FOV and improves positioning efficiency.
Illustratively, as shown in fig. 11 (a), the first FOV of the mobile phone is largest when the mobile phone is in an upright posture; at this time, the posture of the mobile phone does not need to be adjusted. As shown in fig. 11 (b), the first FOV of the mobile phone is smaller when the mobile phone is in a tilted posture; at this time, the mobile phone can remind the user to raise the mobile phone into an upright posture. For example, the mobile phone interface may display the prompt text "please lift the mobile phone", or the mobile phone may voice-prompt the user with "please lift the mobile phone". In this way, the first FOV of the mobile phone covers the target object, improving positioning efficiency.
Further, the mobile phone can prompt the user to adjust the direction of the mobile phone so that the first FOV sweeps a larger direction range, guiding the user to find the target object as soon as possible and improving positioning efficiency.
For example, the mobile phone may prompt the user to move the mobile phone at a speed that ensures the first FOV of the mobile phone covers the entire environment. Optionally, when the user turns 360 degrees in place, the mobile phone can prompt the user about the rotation speed. For example, as shown in fig. 12A (a), the user holds the mobile phone and turns 360 degrees in place; when the first FOV of the mobile phone is small or the user rotates too fast, as shown in fig. 12A (b), the mobile phone interface displays the text prompt "slow down a bit", or plays the voice prompt "slow down a bit", to remind the user to rotate more slowly and ensure that the first FOV of the mobile phone covers the entire environment.
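One simple way to pick the prompted rotation speed is to bound it so that consecutive AOA measurements overlap. This sweep model and its parameters are assumptions for illustration, not taken from the patent.

```python
def max_rotation_speed(first_fov_deg: float, measurement_interval_s: float,
                       overlap: float = 0.5) -> float:
    """Upper bound on the user's rotation speed (degrees per second).

    Assumed sweep model: between two consecutive measurements the phone
    may turn by at most a fraction (1 - overlap) of the first FOV, so
    that every direction falls inside the FOV for at least one
    measurement. The 50% default overlap is a hypothetical choice.
    """
    return first_fov_deg * (1.0 - overlap) / measurement_interval_s
```

Under these assumptions, a 60-degree first FOV with one measurement every 0.5 seconds allows at most 60 degrees per second; a smaller first FOV lowers the bound, which is why the phone asks the user to slow down.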
Optionally, when the user holds the mobile phone and moves forward, the mobile phone can prompt the user about the forward speed. The embodiment of the application does not limit how the user moves the mobile phone.
In one possible implementation, the positioning device may not only position the target object, but also position the target user.
Optionally, the target user may carry a target object, and the positioning device locates the target user according to the target object carried by the target user. For example, when the user wears a smart watch, the user can be located by locating the smart watch.
In one possible implementation, the handset may display a second FOV of the handset in the handset interface using augmented reality (augmented reality, AR) or the like. As shown in fig. 12B, a picture of the current scene is displayed in the AR interface of the mobile phone, and the current second FOV of the mobile phone is virtually displayed. When the user moves the mobile phone, the second FOV direction of the mobile phone will change, and the virtual second FOV direction in the AR interface of the mobile phone will also change. The user may move the handset until the target object is within the virtual second FOV of the handset.
Optionally, when the mobile phone searches for the target object, the mobile phone may prompt the user to move the mobile phone by voice prompts or on-screen prompts (such as text prompts or color-change prompts), so as to search for the target object. For details, refer to the prompting methods used when the mobile phone interface displays the first FOV.
In another embodiment, the method provided by the embodiment of the application can be applied to the scene 2.
As shown in fig. 13A, another method provided by the embodiment of the present application includes the following steps S201 to S202:
S201, the positioning device determines a second FOV according to the second parameter.
Optionally, the second parameter includes hardware information of the positioning device, environment information of the positioning device, and a network frequency band of the positioning device. Reference is made specifically to the content of step S101.
S202, the positioning device prompts the user of the position of the target object according to the second FOV.
Wherein the target object may be an article or a device. Such as the smart watch, water cup of fig. 1A, dome lamp, desk lamp of fig. 2, etc.
In one possible implementation, if the positioning device (e.g., a sound box or a gateway) does not have a display screen, the positioning device determines the positions of the plurality of target devices according to the second FOV and sends these positions to other electronic devices that have a display screen. The electronic devices with display screens then generate and display an electronic map according to the positions of the target devices, where the electronic map is used to prompt the user of the positions of the target devices. For example, in scenario 2, when the user is outdoors, the positions of devices such as the intelligent lamp, the intelligent television, and the intelligent door lock are located through the second FOV of the indoor sound box, and an electronic map is generated and displayed according to these positions to prompt the user of the positions of the target devices. The user can then control these target devices through the electronic map.
With this positioning method, the position of the target object is located automatically according to the dynamically changing second FOV of the positioning device, which improves positioning accuracy and efficiency.
For example, in the prior art for scenario 2, the user needs to manually set the positions of the target devices in the electronic map, which is inefficient and error-prone, especially when there are many target devices. In this application, the position of the target object is located automatically according to the second FOV of the positioning device and the electronic map is generated automatically, which saves considerable manual effort and improves efficiency.
The positioning method provided by the embodiment of the application is described in detail below with respect to the above scenario 2.
In the scene 2, the related art needs a user to manually set the position of the target object in the electronic map, and the application can automatically generate the electronic map according to the FOV of the positioning equipment, so that the user can conveniently remotely control the target equipment through the electronic map.
As shown in fig. 13B, the scene includes a sound box, an intelligent lamp, an intelligent television, an intelligent door lock, and other devices. The other devices, such as the intelligent lamp, intelligent television, and intelligent door lock, may be located through the second FOV of the sound box. The sound box can thus be regarded as the positioning device, and the intelligent lamp, intelligent television, intelligent door lock, and the like can be regarded as the target devices. The sound box can then generate an electronic map according to the located positions of the target devices, and the user can conveniently control devices such as the sound box, the intelligent lamp, the intelligent television, and the intelligent door lock remotely through the electronic map.
For example, when the user is located outdoors, the indoor target devices may be located by a positioning device such as an indoor sound box; an indoor electronic map may be generated according to the positions of the indoor target devices, and the map may be displayed on a mobile phone. For example, the positions of devices such as the intelligent lamp, the intelligent television, and the intelligent door lock are located through the indoor sound box, and the devices are placed at their positions in the electronic map.
As another example, when the user is located in the living room, the plurality of devices in the study can be located through a positioning device such as the study sound box; according to the positions of the plurality of devices in the study, an electronic map of the study can be generated, and the map can be displayed on a mobile phone or on the living-room television.
Optionally, in the scene 2, the positioning device may also be a mobile phone, and the mobile phone may display a second FOV, determine the positions of the multiple target devices according to the second FOV, and generate an electronic map according to the positions of the multiple target devices, where the electronic map is used to prompt the user about the positions of the target devices. For example, when the target device is located within the range of directions indicated by the second FOV of the mobile phone, the mobile phone may determine the location of the target device, and generate the electronic map according to the location of the target device.
Then, the user can remotely control the target device in the electronic map through the electronic map. For example, the user mobile phone interface displays the electronic map, and clicks a desk lamp of a study room in the electronic map, so that the desk lamp can be turned on. In this way, the location device can automatically locate the location of the target device and display it in the electronic map without requiring the user to manually input the location of the target device in the map. And a user can conveniently remotely control the target equipment through the electronic map.
The following description takes as an example the positioning device being a sound box, the target devices being smart home devices, and the sound box locating the positions of the smart home devices.
As shown in fig. 13B, when the positioning device is a sound box, the second FOV of the sound box may be a conical region extending straight ahead from the antenna position at the center of the front of the sound box. Optionally, the target device may be a smart light bulb, a smart television, a smart door lock, or the like.
Optionally, when the sound box starts positioning, the sound box sends an indication message to each device to notify the target devices to enter the positioning mode. After receiving the indication message, each device responds to it and sends a periodic signal to the sound box; from this periodic signal the sound box can collect data about the target device. Optionally, the data about the target device may include the distance and direction between the sound box and the target device, the number of target devices, and the like. The sound box determines the positional relationship between itself and the target devices according to the collected data.
The positional relationship between the sound box and the target device may include: the target device is relatively horizontal to the sound box, relatively upward from the sound box, or relatively vertical to the sound box. "Relatively horizontal" means the sound box and the target device are approximately level, i.e., the angle between the line connecting them and the horizontal ground is within the range of -10 degrees to 30 degrees. "Relatively upward" means the target device is slightly higher than the sound box, i.e., the angle between the connecting line and the horizontal ground is within the range of 30 degrees to 60 degrees. "Relatively vertical" means the target device is well above the sound box, i.e., the angle between the connecting line and the horizontal ground is within the range of 60 degrees to 90 degrees.
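The three angle ranges above map directly onto a classifier. A minimal sketch follows; boundary handling exactly at 30 and 60 degrees is an assumption, since the text does not specify which range the shared endpoints belong to.

```python
def positional_relationship(elevation_deg: float) -> str:
    """Classify the sound-box-to-target elevation angle.

    Per the description: -10 to 30 degrees is relatively horizontal,
    30 to 60 degrees is relatively upward, 60 to 90 degrees is
    relatively vertical. Endpoints 30 and 60 are assigned to the
    higher range here (an assumption).
    """
    if -10 <= elevation_deg < 30:
        return "horizontal"
    if 30 <= elevation_deg < 60:
        return "upward"
    if 60 <= elevation_deg <= 90:
        return "vertical"
    raise ValueError("elevation outside the described ranges")
```

For example, a target whose connecting line makes a 45-degree angle with the ground is classified as relatively upward from the sound box.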
Alternatively, the periodic signal sent by the target device may be a bluetooth signal, a Wi-Fi signal, an Ultra Wide Band (UWB) signal, or the like.
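The exchange described above (an indication message, then periodic reports the sound box turns into positions) can be sketched with hypothetical message types. All class and field names here are invented for illustration and are not from the patent; a real device would emit a periodic Bluetooth/Wi-Fi/UWB signal rather than return a report object.

```python
from dataclasses import dataclass

@dataclass
class EnterPositioningMode:
    """Indication message the sound box sends to each target device."""
    session_id: int

@dataclass
class PeriodicReport:
    """Data the sound box collects from a target's periodic signal."""
    device_name: str
    distance_m: float      # e.g. derived from UWB ranging
    elevation_deg: float   # angle of the connecting line to the ground

class TargetDevice:
    def __init__(self, name, distance_m, elevation_deg):
        self.name = name
        self._pos = (distance_m, elevation_deg)

    def on_indication(self, msg: EnterPositioningMode) -> PeriodicReport:
        # A real device would start emitting its periodic signal here;
        # for simplicity this sketch returns a single report directly.
        return PeriodicReport(self.name, *self._pos)

class SoundBox:
    def __init__(self):
        self.reports = {}

    def start_positioning(self, devices, session_id=1):
        msg = EnterPositioningMode(session_id)
        for report in (d.on_indication(msg) for d in devices):
            self.reports[report.device_name] = report
```

The sound box would then feed the collected reports into the positional-relationship logic and the electronic-map generation described above.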
In one possible implementation, when the positional relationship between the sound box and the target devices changes, the direction indicated by the second FOV of the sound box may change, resulting in a change in the number of target devices covered by the direction range indicated by the second FOV, and thereby affecting the number of target devices contained in the electronic map generated from the second FOV.
Optionally, when there are a plurality of target devices, the positional relationship between the sound box and the plurality of target devices is determined according to the center position of the plurality of target devices.
For example, as shown in fig. 14, the target device 1 is located 2 meters ahead of the sound box at the same level, and the target device 2 is located 20 degrees to the left, also 2 meters ahead at the same level; the center position of the two target devices is then 10 degrees to the left and 2 meters ahead of the sound box. Alternatively, the center position of a plurality of target devices may be obtained by calculating the median and standard-deviation range of the device positions, or by a clustering algorithm or the like, which is not particularly limited in the embodiments of the present application.
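The median option mentioned above can be applied directly to the fig. 14 example. In this minimal sketch each target is an (azimuth, distance) pair relative to the sound box, with negative azimuth meaning "to the left", a sign convention assumed here.

```python
import statistics

def center_position(targets):
    """Per-coordinate median of target positions relative to the sound box.

    targets: list of (azimuth_deg, distance_m) pairs; negative azimuth
    means the target is to the left of straight ahead.
    """
    azimuths = [azimuth for azimuth, _ in targets]
    distances = [distance for _, distance in targets]
    return statistics.median(azimuths), statistics.median(distances)
```

The fig. 14 example: one target straight ahead at 2 meters and another 20 degrees to the left at 2 meters yield a center 10 degrees to the left at 2 meters.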
Optionally, the positional relationship between the sound box and the target devices may affect the number of target devices covered by the direction range indicated by the second FOV of the sound box, and thus the number of target devices contained in the electronic map generated from the second FOV. Illustratively, the second FOV is the conical region extending straight ahead from the antenna position at the center of the sound box, and the sound box detects target devices above the ground in the horizontal direction. So when the target devices are level with the sound box, the direction range indicated by the second FOV covers more target devices; when the target devices are relatively upward from the sound box, it covers fewer target devices; and when the target devices are relatively vertical to the sound box, it covers even fewer target devices.
In one possible implementation, when a hardware condition in the enclosure changes, the second FOV of the enclosure changes, resulting in a change in the number of target devices covered by the directional range indicated by the second FOV of the enclosure.
Illustratively, a change in the antenna combination in the speaker causes the second FOV of the speaker to change. For details, refer to the earlier description of how the second FOV of the mobile phone changes when the hardware conditions in the mobile phone change.
In one possible implementation, when the network frequency band used by the speaker changes, the second FOV of the speaker changes, resulting in a change in the number of target devices covered by the direction range indicated by the second FOV of the speaker.
Illustratively, when the network frequency band to which the speaker antenna is connected changes (for example, the 5 GHz frequency band of Wi-Fi), the speaker may dynamically determine a different second FOV. For details, refer to the earlier description of how the second FOV of the mobile phone changes when the network frequency band of the mobile phone changes.
In one possible implementation, when the environment in which the speaker is located changes, the second FOV of the speaker changes, resulting in a change in the number of target devices covered by the direction range indicated by the second FOV of the speaker.
Illustratively, when the speaker moves from a complex environment to a simple environment, the second FOV of the speaker becomes larger, so the direction range it indicates covers more target devices. For details, refer to the earlier description of how the second FOV of the mobile phone changes when the environment of the mobile phone changes.
For example, if 16 target devices are located near the speaker, the number of target devices covered by the direction range indicated by the second FOV of the speaker, as determined by the positional relationship between the speaker and the target devices, the antenna configuration of the speaker, and the complexity of the environment, may be as shown in Table 2 below.
TABLE 2

Positional relationship      Antenna group   Environment   Target devices covered
Horizontal (-10° to 30°)     A               Complex       12
Horizontal (-10° to 30°)     A               Simple        16
Horizontal (-10° to 30°)     B               Complex       10
Horizontal (-10° to 30°)     B               Simple        12
Offset (30° to 60°)          A               Complex       9
Offset (30° to 60°)          A               Simple        14
Offset (30° to 60°)          B               Complex       6
Offset (30° to 60°)          B               Simple        9
Vertical (60° to 100°)       A               Complex       6
Vertical (60° to 100°)       A               Simple        10
Vertical (60° to 100°)       B               Complex       3
Vertical (60° to 100°)       B               Simple        9
For example, as shown in Table 2, when the included angle between the front of the speaker and the line connecting the target device and the speaker is between -10° and 30° (including -10°), the positional relationship between the speaker and the target device is horizontal. In this case, with antenna group A in a complex environment, the direction range indicated by the second FOV of the speaker covers 12 target devices; with antenna group A in a simple environment, it covers 16 target devices; with antenna group B in a complex environment, it covers 10 target devices; and with antenna group B in a simple environment, it covers 12 target devices.
When the included angle between the front of the speaker and the line connecting the target device and the speaker is between 30° and 60° (including 30°), the target device is offset relative to the speaker. In this case, with antenna group A in a complex environment, the direction range indicated by the second FOV of the speaker covers 9 target devices; with antenna group A in a simple environment, it covers 14 target devices; with antenna group B in a complex environment, it covers 6 target devices; and with antenna group B in a simple environment, it covers 9 target devices.
When the included angle between the front of the speaker and the line connecting the target device and the speaker is between 60° and 100° (including 60° and 100°), the target device is vertical relative to the speaker. In this case, with antenna group A in a complex environment, the direction range indicated by the second FOV of the speaker covers 6 target devices; with antenna group A in a simple environment, it covers 10 target devices; with antenna group B in a complex environment, it covers 3 target devices; and with antenna group B in a simple environment, it covers 9 target devices.
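The mapping described above can be captured as a small lookup, sketched below in Python. The angle boundaries follow the text; the function names and the string labels are illustrative additions, not part of the patent.

```python
# Coverage counts from Table 2: (relationship, antenna group, environment) -> devices covered
COVERAGE = {
    ("horizontal", "A", "complex"): 12, ("horizontal", "A", "simple"): 16,
    ("horizontal", "B", "complex"): 10, ("horizontal", "B", "simple"): 12,
    ("offset",     "A", "complex"):  9, ("offset",     "A", "simple"): 14,
    ("offset",     "B", "complex"):  6, ("offset",     "B", "simple"):  9,
    ("vertical",   "A", "complex"):  6, ("vertical",   "A", "simple"): 10,
    ("vertical",   "B", "complex"):  3, ("vertical",   "B", "simple"):  9,
}

def positional_relationship(angle_deg):
    """Classify the included angle (degrees) between the speaker front
    and the line connecting the target device and the speaker."""
    if -10 <= angle_deg < 30:
        return "horizontal"
    if 30 <= angle_deg < 60:
        return "offset"
    if 60 <= angle_deg <= 100:
        return "vertical"
    raise ValueError("angle outside the ranges covered by Table 2")

def devices_covered(angle_deg, antenna_group, environment):
    """Number of target devices covered by the second FOV per Table 2."""
    return COVERAGE[(positional_relationship(angle_deg), antenna_group, environment)]
```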
The embodiments of the present application do not specifically limit the number of target devices covered by the direction range indicated by the second FOV.
The speaker can accurately locate only the target devices within the direction range indicated by its second FOV. Therefore, to improve the positioning efficiency of the speaker, the number of target devices within that direction range should be increased: the speaker can prompt the user to move the speaker so that the direction range indicated by its second FOV covers more target devices. As a result, the electronic map generated from the second FOV of the speaker contains more target devices.
In one possible implementation, after determining the position of the target device relative to the positioning device, the positioning device may prompt the user to move the position of the positioning device, so that the direction range indicated by the second FOV of the positioning device covers more target devices, and positioning efficiency is improved.
Optionally, the speaker can indicate, through signal lights and/or direction lights, the number of target devices covered by the direction range indicated by its second FOV, and prompt the user to move the speaker so that this number increases. As shown in fig. 15, the upper surface of the speaker includes three signal lights and two direction lights.
Alternatively, the colors and the number of the signal lights that are lit can be determined from the ratio of the current second FOV of the speaker to the maximum FOV the speaker can achieve. Illustratively, when the ratio is less than 100% and greater than or equal to 80%, all three signal lights are lit green; the second FOV of the speaker is large and covers many target devices. When the ratio is less than 80% and greater than or equal to 50%, two signal lights are lit orange and the third is off; the second FOV of the speaker is smaller and covers fewer target devices. When the ratio is less than 50%, one signal light is lit red and the other two are off; the second FOV of the speaker is at its smallest and covers the fewest target devices.
Alternatively, the colors and the number of the signal lights that are lit can be determined from the ratio of the number of target devices within the direction range indicated by the second FOV of the speaker to the number of all target devices in the scene. Illustratively, when the ratio is less than 100% and greater than or equal to 80%, all three signal lights are lit green; the direction range indicated by the second FOV covers most target devices, and the speaker need not be moved. When the ratio is less than 80% and greater than or equal to 50%, two signal lights are lit orange and the third is off; fewer target devices fall within the direction range, and the speaker may be moved. For example, if multiple target devices lie to the left of the second FOV of the speaker, the speaker can be rotated to the left so that the direction range indicated by its second FOV covers more target devices. When the ratio is less than 50%, one signal light is lit red and the other two are off; too few target devices fall within the direction range, and the speaker should be moved. Again, if multiple target devices lie to the left of the second FOV of the speaker, the speaker can be rotated to the left so that the direction range indicated by its second FOV covers more target devices.
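The threshold scheme above maps a ratio to a signal-light state. A minimal sketch in Python follows; the function name is hypothetical, and the same mapping applies whether the ratio is the current-to-maximum-FOV ratio or the covered-to-total-device ratio described above.

```python
def signal_lights(ratio):
    """Map a ratio in [0, 1] to (number of lit lights, color).

    Thresholds follow the text: >= 80% -> three green lights,
    50%-80% -> two orange lights, < 50% -> one red light.
    """
    if ratio >= 0.8:
        return 3, "green"   # large FOV / most devices covered
    if ratio >= 0.5:
        return 2, "orange"  # smaller FOV / fewer devices covered
    return 1, "red"         # smallest FOV / fewest devices covered
```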
Optionally, when there are multiple target devices and the angle between the direction from the speaker to the center position of the target devices and the nearest boundary line of the second FOV of the speaker is greater than a preset value, the direction light on that side is turned on, prompting the user to rotate the speaker toward the center position of the target devices so that the second FOV covers more target devices. For example, as shown in fig. 16, when the angle between the direction from the speaker to the center position of the target devices and the nearest (left) boundary line of the second FOV of the speaker is greater than 10 degrees, the left direction light is turned on, prompting the user to rotate the speaker to the left.
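The direction-light decision can be sketched as follows, assuming angles are measured from the speaker's forward direction with positive values to the left, and that the second FOV spans a symmetric range of ±fov_half_angle_deg about that direction. The symmetric-FOV assumption and all names are illustrative, not taken from the patent.

```python
def direction_light(center_angle_deg, fov_half_angle_deg, threshold_deg=10.0):
    """Decide which direction light (if any) to turn on.

    `center_angle_deg` is the direction from the speaker to the center
    position of the target devices. If it lies more than `threshold_deg`
    beyond the nearer FOV boundary, the light on that side is lit to
    prompt the user to rotate the speaker toward the targets.
    """
    if center_angle_deg - fov_half_angle_deg > threshold_deg:
        return "left"
    if -center_angle_deg - fov_half_angle_deg > threshold_deg:
        return "right"
    return None  # center already within (or close to) the second FOV
```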
In one possible implementation, the signal lights and/or direction lights described above may also be displayed in a mobile phone interface. For example, as shown in fig. 17, the signal lights and/or direction lights may be displayed in the location display interface 32 of the mobile phone application. Optionally, a text prompt may also be displayed on the mobile phone to prompt the user to move the speaker, such as the text prompt "rotate the speaker to the left".
In one possible implementation, the second FOV of the speaker is displayed on the mobile phone using augmented reality (AR) or a similar technology. As shown in fig. 18, the AR interface of the mobile phone displays the picture of the current scene and renders a virtual representation of the current second FOV of the speaker. When the user moves the speaker, the direction of its second FOV changes, and the virtual second FOV in the mobile phone AR interface changes accordingly. The user can move the speaker until the virtual second FOV covers more target devices.
In another possible scenario, the positioning device may locate not only a target object but also a target user, so that the positioning device can control smart home devices according to the position of the target user. For example, when the positioning device detects that the target user enters the bedroom, the smart light in the bedroom is turned on; when it detects that the target user leaves the bedroom, the smart light in the bedroom is automatically turned off.
Optionally, the positioning device may be a sound box, and the sound box may position the target user.
Alternatively, the speaker may emit a detection signal that is reflected off the target user. The speaker can locate the target user from the reflected detection signal and thus learn the target user's movement path. By monitoring this movement path, the speaker determines the center position of the target user. For example, as shown in fig. 19, the center position may be the center point of the area covered by the target user's movement path, at a height of one half the target user's height, and this center position may be taken as the position of the target user. The user can then move the speaker so that the position of the target user lies at the center of the direction range indicated by the second FOV of the speaker, improving positioning accuracy.
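The center-position estimate from a monitored movement path can be sketched as below, assuming the path is a sequence of (x, y) floor positions in meters; the function name and representation are illustrative assumptions.

```python
def user_center(path_points, user_height_m):
    """Estimate the target user's center position from a movement path.

    Returns (x, y, z): the center point of the bounding box of the path
    on the floor, at a height of one half the user's height, following
    the fig. 19 example.
    """
    xs = [x for x, _ in path_points]
    ys = [y for _, y in path_points]
    center_x = (min(xs) + max(xs)) / 2
    center_y = (min(ys) + max(ys)) / 2
    return center_x, center_y, user_height_m / 2
```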
One or more of the interfaces described above are exemplary, and in other embodiments, other interface designs are possible.
Optionally, some operations in the above method embodiments may be combined, and/or the order of some operations may be changed. The execution order of the steps in each flow is merely exemplary; other execution orders are possible, and the order shown is not the only order in which the operations may be performed. Those of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. In addition, details of the processes in one embodiment apply in a similar manner to other embodiments, and different embodiments may be used in combination.
Moreover, some steps in method embodiments may be equivalently replaced with other possible steps. Or some steps in method embodiments may be optional and may be deleted in some usage scenarios. Or other possible steps may be added to the method embodiments.
Moreover, the method embodiments described above may be implemented alone or in combination.
Further embodiments of the application provide an apparatus, which may be the second electronic device or the first electronic device or a component in the second electronic device (such as a chip system) as described above.
The apparatus may include: a display screen, a memory, and one or more processors. The memory is for storing computer program code, the computer program code comprising computer instructions. When the processor executes the computer instructions, the electronic device may perform the functions or steps performed by the mobile phone in the above-described method embodiments. The structure of the electronic device may refer to the structure of the electronic device shown in fig. 4B.
The core structure of the electronic device may be represented as the structure shown in fig. 20, and the electronic device includes: a processing module 201, an input module 202, a storage module 203, a display module 204 and a communication module 205.
The processing module 201 may include at least one of a central processing unit (CPU), an application processor (AP), or a communication processor (CP). The processing module 201 may perform operations or data processing related to control and/or communication of at least one of the other elements of the electronic device. Optionally, the processing module 201 is configured to support the first electronic device 100 in executing S102-S103 in fig. 5; and/or to support the first electronic device 200 in executing S201-S202 in fig. 13A.
The input module 202 is configured to obtain an instruction or data input by a user and transmit the obtained instruction or data to other modules of the electronic device. Specifically, the input mode of the input module 202 may include touch, gesture, proximity to the screen, and the like, as well as voice input. For example, the input module may be the screen of the electronic device: it acquires an input operation of the user, generates an input signal according to the acquired input operation, and transmits the input signal to the processing module 201. Optionally, the input module 202 is configured to obtain the name of the target object that the user wants to locate; reference may be made to the mobile phone interface shown in fig. 6 (b).
The storage module 203 may include volatile memory and/or nonvolatile memory, and is used for storing instructions or data related to at least one of the other modules of the user terminal device. Optionally, the storage module 203 is configured to store information about the articles or devices to which the first electronic device 100 is connected.
The display module 204 may include, for example, a liquid crystal display (LCD), a light-emitting diode (LED) display, an organic light-emitting diode (OLED) display, a microelectromechanical systems (MEMS) display, or an electronic paper display, and is used for displaying content viewable by the user (e.g., text, images, video, icons, symbols, etc.). Optionally, the display module 204 is configured to enable the first electronic device 100 to display an interface as shown in fig. 6.
The communication module 205 is used to support the personal terminal in communicating with other personal terminals (via a communication network). For example, the communication module may be connected to a network via wireless or wired communication to communicate with other personal terminals or a network server. The wireless communication may employ at least one of cellular communication protocols, such as long term evolution (LTE), long term evolution-advanced (LTE-A), code division multiple access (CDMA), wideband code division multiple access (WCDMA), universal mobile telecommunications system (UMTS), wireless broadband (WiBro), or global system for mobile communications (GSM). The wireless communication may also include short-range communication, which may include at least one of wireless fidelity (Wi-Fi), Bluetooth, near field communication (NFC), magnetic secure transmission (MST), or GNSS. Optionally, the communication module 205 is configured to support communication between the first electronic device and the second electronic device; for example, reference may be made to the system diagram shown in fig. 4A.
The apparatus shown in fig. 20 may also include more, fewer, or split portions of the components, or have other arrangements of components, as embodiments of the application are not limited in this respect.
The embodiment of the present application also provides a chip system, as shown in fig. 21, which includes at least one processor 211 and at least one interface circuit 212. The processor 211 and the interface circuit 212 may be interconnected by wires. For example, interface circuit 212 may be used to receive signals from other devices (e.g., a memory of an electronic apparatus). For another example, interface circuit 212 may be used to send signals to other devices (e.g., processor 211). Illustratively, the interface circuit 212 may read instructions stored in the memory and send the instructions to the processor 211. The instructions, when executed by the processor 211, may cause the electronic device to perform the various steps of the embodiments described above. Of course, the system-on-chip may also include other discrete devices, which are not particularly limited in accordance with embodiments of the present application.
The embodiment of the application also provides a computer storage medium, which comprises computer instructions, when the computer instructions run on the electronic equipment, the electronic equipment is caused to execute the functions or steps executed by the mobile phone in the embodiment of the method.
The embodiment of the application also provides a computer program product which, when run on a computer, causes the computer to execute the functions or steps executed by the mobile phone in the above method embodiment.
It will be apparent to those skilled in the art from this description that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above. The specific working processes of the above-described systems, devices and modules may refer to the corresponding processes in the foregoing method embodiments, which are not described herein.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and the division of modules or units, for example, is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the method described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely illustrative of specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (25)

1. A positioning method, applied to a positioning device, the method comprising:
Determining a second field of view FOV of the positioning device; the second FOV is an actual FOV of the positioning device;
determining a first FOV of the positioning device according to the second FOV and the first parameter; the first parameter comprises a pose of a positioning device;
and prompting the user of the position of the target object according to the first FOV.
2. The method of claim 1 wherein the value of the first FOV varies as the first parameter varies.
3. The method of claim 1 or 2, wherein the determining a second FOV of the positioning device comprises:
Determining the second FOV of the positioning device according to a second parameter, the second parameter comprising one or more of: hardware information of the positioning equipment, environment information of the positioning equipment and network frequency bands of the positioning equipment.
4. A method according to claim 3 wherein the value of the second FOV varies as the second parameter varies.
5. The method according to any one of claims 1-4, wherein the method is applied in a positioning device provided with a display screen; the prompting the user of the position of the target object according to the first FOV includes:
the positioning device displays the first FOV on the display screen, and prompts the position of the target object to the user through picture prompt and/or voice prompt.
6. The method according to any one of claims 1-5, further comprising:
And prompting a user to adjust the gesture of the positioning equipment through at least one of a picture prompt, a voice prompt and a signal lamp prompt.
7. The method of any of claims 1-6, wherein the attitude of the positioning apparatus comprises an upright attitude, a tilted attitude, a horizontal attitude.
8. The method according to any of claims 1-7, wherein the hardware information of the positioning device comprises antenna information of the positioning device.
9. The method according to any one of claims 1-8, wherein the environmental information in which the positioning device is located includes a simple environment and a complex environment; the simple environment is an environment when the channel complexity is smaller than a threshold value, and the complex environment is an environment when the channel complexity is larger than the threshold value.
10. The method according to any of claims 1-9, wherein the network frequency band of the positioning device comprises frequency band information used by a network to which the positioning device is connected.
11. A method of positioning, the method comprising:
determining a second field of view FOV of the positioning device according to a second parameter; the second parameter includes one or more of: hardware information of the positioning device, environment information of the positioning device, and the network frequency band of the positioning device; the second FOV is an actual FOV of the positioning device;
and prompting the user of the position of the target object according to the second FOV.
12. The method of claim 11 wherein the value of the second FOV varies as the second parameter varies.
13. The method according to claim 11 or 12, wherein the method is applied in a positioning device provided with a display screen, and wherein the prompting the user of the position of the target object according to the second FOV comprises:
displaying the second FOV on the positioning device through an augmented reality (AR) technology, and prompting the user of the position of the target object through a picture prompt and/or a voice prompt.
14. The method according to claim 11 or 12, wherein the method is applied in a positioning device provided with a display screen, and wherein the prompting the user of the position of the target object according to the second FOV comprises:
the positioning device determines the position of at least one target object according to the second FOV;
and generating and displaying an electronic map according to the position of the at least one target object so as to facilitate a user to control the target object positioned at the target position according to the electronic map, wherein the electronic map is used for prompting the user of the position of the target object.
15. The method according to claim 11 or 12, wherein the method is applied in a positioning device without a display screen, and wherein the prompting the user of the position of the target object according to the second FOV comprises:
and displaying a second FOV of the positioning device on other devices with display screens through an Augmented Reality (AR) technology, so that the other devices with display screens can determine the position of at least one target object according to the second FOV, and generating and displaying an electronic map according to the position of the at least one target object, wherein the electronic map is used for prompting a user of the position of the target object, and the user can control the target object positioned at the target position according to the electronic map.
16. The method according to claim 11 or 12, wherein the method is applied in a positioning device without a display screen, and wherein the prompting the user of the position of the target object according to the second FOV comprises:
the positioning device determines the position of at least one target object according to the second FOV;
The positioning device sends the position of the at least one target object to other devices with display screens so that the other devices with display screens can generate and display an electronic map according to the position of the at least one target object, so that a user can control the target object located at the target position according to the electronic map, and the electronic map is used for prompting the user of the position of the target object.
17. The method according to any one of claims 11-16, further comprising:
Adjusting the position, direction or posture of the positioning device according to the second FOV and the first parameter so as to determine the position of at least one target object according to the adjusted second FOV of the positioning device; the first parameter comprises the position relation between the positioning equipment and the target object;
And the number of the target objects determined by the second FOV of the positioning equipment after adjustment is larger than or equal to the number of the target objects determined by the second FOV of the positioning equipment before adjustment.
18. The method according to any one of claims 11-17, wherein when the target object is a plurality of, the positional relationship of the positioning device with the target object comprises: positional relationship between the central positions of the plurality of target objects and the position of the positioning device.
19. The method according to any one of claims 11-18, further comprising:
prompting a user, through at least one of a visual prompt, a voice prompt or an indicator-light prompt, to adjust the position, direction or posture of the positioning device, so as to adjust the positional relationship between the positioning device and the target object.
20. The method of any of claims 11-19, wherein the positional relationship between the positioning device and the target object comprises: the positioning device being relatively horizontal to the target object, the target object being offset relative to the positioning device, or the target object being vertical relative to the positioning device.
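The three geometric relations listed in claim 20 can be sketched as a coarse classification by elevation angle between device and target. The 15° and 75° thresholds and the function name are illustrative assumptions only; the disclosure does not define numeric boundaries.

```python
import math

def classify_relation(dx, dy, dz, level_deg=15.0, vertical_deg=75.0):
    # dx, dy: horizontal offsets from device to target; dz: height
    # difference. Elevation angle decides which of the three claim-20
    # relations applies (thresholds are invented for this sketch).
    horizontal = math.hypot(dx, dy)
    elevation = math.degrees(math.atan2(abs(dz), horizontal))
    if elevation <= level_deg:
        return "relatively horizontal"
    if elevation >= vertical_deg:
        return "vertical"
    return "offset"
```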
21. The method according to any of claims 11-20, wherein the hardware information of the positioning device comprises antenna information of the positioning device.
22. The method according to any one of claims 11-21, wherein the information about the environment in which the positioning device is located includes a simple environment and a complex environment; the simple environment is an environment in which the channel complexity is less than a threshold, and the complex environment is an environment in which the channel complexity is greater than the threshold.
23. The method according to any of claims 11-22, wherein the network frequency band of the positioning device comprises frequency band information used by the network to which the positioning device is connected.
24. An electronic device, comprising: a processor and a memory for storing computer program code comprising computer instructions which, when read from the memory by the processor, cause the electronic device to perform the positioning method of any of claims 1-10 or 11-23.
25. A computer readable storage medium having instructions stored therein, which when run on an electronic device, cause the electronic device to perform the positioning method of any of claims 1-10 or 11-23.
CN202211376773.0A 2022-11-04 2022-11-04 Positioning method and electronic equipment Pending CN117991185A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211376773.0A CN117991185A (en) 2022-11-04 2022-11-04 Positioning method and electronic equipment

Publications (1)

Publication Number Publication Date
CN117991185A true CN117991185A (en) 2024-05-07

Family

ID=90895984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211376773.0A Pending CN117991185A (en) 2022-11-04 2022-11-04 Positioning method and electronic equipment

Country Status (1)

Country Link
CN (1) CN117991185A (en)

Similar Documents

Publication Publication Date Title
CN108075784B (en) Electronic equipment and method for transmitting wireless signal thereof
US20160119770A1 (en) Method for scanning neighboring devices and electronic device thereof
KR20170058763A (en) Method and electronic device regarding wireless communication using wireless local area network
KR20180098077A (en) Electronic device and method for determining a sutable location of access point device thereof
WO2021098442A1 (en) Positioning interaction method and apparatus
CN115706601A (en) Transmission control method and related device in satellite communication system
EP4171135A1 (en) Device control method, and related apparatus
US20230026812A1 (en) Device Positioning Method and Related Apparatus
CN112134995A (en) Method, terminal and computer readable storage medium for searching application object
CN112400346A (en) Server apparatus and method for collecting location information of other apparatus
CN113676238A (en) Method for determining angle of arrival and related product
CN116156417A (en) Equipment positioning method and related equipment thereof
CN113242349B (en) Data transmission method, electronic equipment and storage medium
WO2017210971A1 (en) Thermal imaging method and device based on mobile terminal and mobile terminal
KR20180121178A (en) Method for wireless connection and electronic device thereof
CN114095542B (en) Display control method and electronic equipment
CN114201738B (en) Unlocking method and electronic equipment
WO2022228059A1 (en) Positioning method and apparatus
KR101549027B1 (en) Mobile device and method for controlling the mobile device
CN117991185A (en) Positioning method and electronic equipment
CN116806013A (en) Message transmission method and corresponding terminal
CN115150646B (en) Method for displaying control window of second electronic equipment and first electronic equipment
CN115016298A (en) Intelligent household equipment selection method and terminal
WO2023024873A1 (en) Display method, electronic device, and system
US20240171802A1 (en) First electronic device and method for displaying control window of second electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination