CN118244273A - Multi-device cooperation method and electronic device - Google Patents

Multi-device cooperation method and electronic device

Info

Publication number
CN118244273A
CN118244273A
Authority
CN
China
Prior art keywords
positioning result
user
positioning
signals
equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211667722.3A
Other languages
Chinese (zh)
Inventor
孙渊
杨玉涛
朱孟波
蔡双林
程力
惠少博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202211667722.3A priority Critical patent/CN118244273A/en
Publication of CN118244273A publication Critical patent/CN118244273A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/02Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems using reflection of acoustic waves
    • G01S15/06Systems determining the position data of a target
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/18Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/023Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Acoustics & Sound (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

A multi-device cooperation method and an electronic device relate to the field of positioning technology, and enable a plurality of devices to cooperatively position an object to be positioned, improving positioning accuracy. The method includes: receiving a plurality of signals associated with an object to be positioned, and determining a first positioning result and/or a second positioning result of the object to be positioned according to the plurality of signals; and acquiring a third positioning result, where the third positioning result is determined from the first positioning result and the second positioning result.

Description

Multi-device cooperation method and electronic device
Technical Field
The present application relates to the field of positioning technologies, and in particular, to a multi-device collaboration method and an electronic device.
Background
With the rapid development of the smart home, the spatial perception capability of smart devices has become an important development direction. Once smart devices are equipped with spatial perception, spatial interaction between users and devices, and among the devices themselves, can greatly improve the smart-home experience.
In a smart home, an ultrasonic positioning method can be used to locate each device. Ultrasonic positioning includes active positioning and passive positioning. In an active positioning scheme, as shown in fig. 1, a device (e.g., a mobile phone) actively transmits ultrasonic waves through an ultrasonic transmitter; the waves are reflected by a reflecting object (e.g., a speaker) and received back by the device. From the characteristics of the transmitted and received ultrasonic waves, the device can determine parameters such as the distance and angle between itself and the reflector. In a passive positioning scheme, a device receives ultrasonic waves from other devices and positions those devices based on characteristics of the received waves (e.g., cross-correlation, phase difference).
In smart-home scenarios, the growing number of devices can affect the accuracy of ultrasonic positioning. For example, after a transmitting device emits an ultrasonic wave, an intervening device may block the wave and alter the direction, intensity, and other properties of the reflected wave, so that positioning other devices from the distorted reflection deviates from their true positions. Thus, under the influence of multipath effects, non-line-of-sight propagation, and similar factors, current ultrasonic positioning is prone to deviation and low accuracy.
Disclosure of Invention
According to the multi-device cooperation method and the electronic device provided by the present application, an object to be positioned can be positioned cooperatively by a plurality of devices, thereby improving positioning accuracy.
In order to achieve the above object, the present application provides the following technical solutions:
in a first aspect, a multi-device cooperation method is provided, applied to a first device or a component (such as a chip system) capable of implementing a function of the first device, the method includes:
receiving a plurality of signals associated with an object to be positioned, and determining a first positioning result and/or a second positioning result of the object to be positioned according to the plurality of signals;
Acquiring a third positioning result; the third positioning result is determined from the first positioning result and the second positioning result.
In this way, the first device can determine the first positioning result and/or the second positioning result of the object to be positioned from the signals associated with it, and can correct the positioning result of the object according to the first positioning result and the second positioning result to obtain a more accurate third positioning result, improving positioning accuracy.
In one possible design, the plurality of signals includes a reflected wave signal of the first device after the transmission signal is reflected by the object to be positioned and/or a reflected wave signal of the second device after the transmission signal is reflected by the object to be positioned.
In one possible design, determining the first positioning result and/or the second positioning result of the object to be positioned according to the plurality of signals includes:
According to the information carried by the signals, determining that the signals are reflected wave signals of the first equipment after the transmission signals of the first equipment are reflected by the object to be positioned and/or reflected wave signals of the second equipment after the transmission signals of the second equipment are reflected by the object to be positioned;
and determining the first positioning result and/or the second positioning result according to the reflected wave signal of the first equipment after the transmitting signal of the first equipment is reflected by the object to be positioned and/or the reflected wave signal of the second equipment after the transmitting signal of the second equipment is reflected by the object to be positioned.
Illustratively, take the first device as a television and the second device as a speaker, and the signal as an ultrasonic signal. The television may transmit an ultrasonic signal (e.g., carrying the coordinates of the television and the device identifier of the television). In some examples, after the ultrasonic signal is reflected by the object to be positioned, the reflected-wave signal is received by the television, which can calculate the position of the object to be positioned from that reflected-wave signal. In other examples, after the ultrasonic signal sent by the television is reflected by the object to be positioned, the reflected-wave signal is received by the speaker; from the television coordinates and device identifier carried by the signal, the speaker knows that it corresponds to the ultrasonic signal sent by the television. The speaker can then calculate the position of the object to be positioned from the coordinates of the television and the reflected-wave signal.
It can be seen that, for the first device, the signal received by the first device may be a reflected wave signal obtained by reflecting the signal sent by the first device by the object to be positioned. The signal received by the first device may also be a reflected wave signal obtained by reflecting a signal sent by another device (such as the second device) by the object to be positioned. The first device may determine a plurality of positioning results according to a plurality of reflected wave signals associated with the object to be positioned, and correct the plurality of positioning results to improve positioning accuracy. In addition, the plurality of reflected wave signals can be reflected wave signals corresponding to the transmitted signals of different devices, so that the positioning accuracy of the positioning result can be further improved.
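The two cases above differ in geometry: a device receiving the echo of its own signal (monostatic) measures a round-trip range, while a device receiving the echo of another device's signal (bistatic) measures the total transmitter-to-object-to-receiver path, which constrains the object to an ellipse with the two devices as foci. A minimal numeric sketch, assuming the speed of sound in air and idealized time-of-flight measurements (the function names are illustrative, not from the patent):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def monostatic_range(round_trip_s: float) -> float:
    """Distance to the reflector when the transmitter and receiver
    are the same device: the wave travels out and back."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

def bistatic_path(total_flight_s: float) -> float:
    """Total transmitter->object->receiver path length when another
    device sent the signal; the object lies on an ellipse with the
    two devices as its foci."""
    return SPEED_OF_SOUND * total_flight_s

# Example: a 10 ms round trip puts the reflector about 1.715 m away.
print(monostatic_range(0.010))
```

Combining two or more such constraints (e.g., one ellipse per transmitting device) is what lets multiple devices pin down the object's position.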
In one possible design, a first signal of the plurality of signals is a reflected wave signal of the first device after the transmission signal of the first device is reflected by the object to be positioned, and the first signal carries at least one of the following information: coordinates of the first device, an identification of the first device;
And/or, a second signal in the plurality of signals is a reflected wave signal of the second device after the transmitting signal of the second device is reflected by the object to be positioned, and the second signal carries at least one of the following information: coordinates of the second device, an identification of the second device.
Illustratively, take the first device as a television and the second device as a speaker, and the signal as an ultrasonic signal. The television may transmit an ultrasonic signal (e.g., carrying the coordinates of the television and the device identifier of the television). In some examples, after the ultrasonic signal is reflected by the object to be positioned, a reflected-wave signal (the first signal) is received by the television, where the reflected-wave signal carries the coordinates of the television and the device identifier of the television. The television may calculate the position of the object to be positioned from this reflected-wave signal (the first signal).
Therefore, the first device can know that the reflected wave signal is the reflected wave signal corresponding to the transmitted signal of the first device or the reflected wave signal corresponding to the transmitted signal of the second device according to the information carried by the reflected wave signal, and accordingly, the object to be positioned is cooperatively positioned, and positioning accuracy is improved.
In one possible design, the plurality of signals are further used to determine at least one fourth positioning result in addition to the first positioning result and the second positioning result;
Obtaining a third positioning result, comprising: and determining the third positioning result according to the first positioning result, the second positioning result and the at least one fourth positioning result.
Optionally, the plurality of signals include a reflected wave signal of the third device after the transmission signal of the third device is reflected by the object to be positioned.
Illustratively, take the object to be positioned as user A. Speaker A measures user A at position 1 (an example of the first positioning result), the television measures user A at position 2 (an example of the second positioning result), and speaker B measures user A at position 4 (an example of the fourth positioning result). Speaker A, speaker B, and the television can then determine the corrected position 3' of user A (an example of the third positioning result) from the coordinate information of position 1, position 2, and position 4, so as to improve positioning accuracy.
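The patent does not fix a particular fusion rule for combining the individual measurements into the corrected position. As one minimal sketch under that assumption, a (weighted) mean of the per-device estimates is a simple choice; `fuse_positions` and its weights are illustrative names, not from the patent:

```python
def fuse_positions(estimates, weights=None):
    """Fuse several (x, y) estimates of the same object into one
    corrected position via a weighted mean. Weights could reflect
    each device's confidence (e.g., signal quality); equal by default."""
    if weights is None:
        weights = [1.0] * len(estimates)
    total = sum(weights)
    x = sum(w * p[0] for w, p in zip(weights, estimates)) / total
    y = sum(w * p[1] for w, p in zip(weights, estimates)) / total
    return (x, y)

# Position 1 from speaker A, position 2 from the TV, position 4 from
# speaker B; the fused point plays the role of the corrected position 3'.
p3 = fuse_positions([(1.0, 2.0), (1.2, 2.2), (0.8, 2.1)])
print(p3)  # roughly (1.0, 2.1)
```

A real implementation might instead intersect range/angle constraints or run a least-squares solve, but the averaging step conveys the idea that several noisy estimates yield one corrected result.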
In one possible design, after the third positioning result is obtained, the method further includes:
Transmitting a first message to a second device, the first message being used to instruct the second device to transmit a signal;
Receiving a reflected wave signal of the second equipment after the transmitting signal of the second equipment is reflected by the object to be positioned;
when the reflected wave signal meets a first condition, determining that the third positioning result passes the verification;
the first condition includes any one or more of the following: the direction of the reflected wave signal is within a first range of directions and the angle of the reflected wave signal is within a first range of angles.
Therefore, the accuracy of the positioning result can be improved by checking the third positioning result.
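The verification step above can be sketched as a simple range check on the fresh reflected wave. The concrete direction and angle ranges below are illustrative assumptions (the patent only requires that they be a "first direction range" and "first angle range" consistent with the third positioning result); the function name is also hypothetical:

```python
def verify_third_result(direction_deg: float, angle_deg: float,
                        dir_range=(80.0, 100.0),
                        ang_range=(-10.0, 10.0)) -> bool:
    """Check the first condition: the reflected wave triggered by the
    second device's transmission should arrive from a direction and at
    an angle consistent with the third positioning result. The patent
    allows using either or both checks; here we require both."""
    in_dir = dir_range[0] <= direction_deg <= dir_range[1]
    in_ang = ang_range[0] <= angle_deg <= ang_range[1]
    return in_dir and in_ang

print(verify_third_result(90.0, 0.0))   # consistent echo: result verified
print(verify_third_result(30.0, 0.0))   # echo from the wrong direction
```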
In one possible design, determining the first positioning result and/or the second positioning result of the object to be positioned according to the plurality of signals includes: determining the first positioning result according to the plurality of signals;
the method further comprises the steps of: the second positioning result is received from a second device.
In the method, the first device can position the object to be positioned by itself to obtain a preliminary first positioning result (such as obtaining the direction angle and distance of the object to be positioned relative to itself), and can obtain a second positioning result of the object to be positioned from other devices (second devices). The first device may calibrate its own determined first positioning result according to the second positioning results of the other devices, so as to obtain a third positioning result that is more accurate.
In one possible design, determining the first positioning result and/or the second positioning result of the object to be positioned according to the plurality of signals includes: and determining the first positioning result and the second positioning result according to the plurality of signals.
Illustratively, the first device is a television, the second device is a speaker, and the signal is an ultrasonic signal. In some examples, the television may transmit an ultrasonic signal (e.g., carrying the coordinates of the television and the device identifier of the television). After the ultrasonic signal is reflected by the object to be positioned, the reflected-wave signal (the first signal) is received by the television, carrying the coordinates of the television and the device identifier of the television. The television may calculate a first positioning result of the object to be positioned from this reflected-wave signal (the first signal). In other examples, the speaker may transmit an ultrasonic signal (e.g., carrying the coordinates of the speaker and the device identifier of the speaker). After that signal is reflected by the object to be positioned, the reflected-wave signal can be received by the television, carrying the coordinates and device identifier of the speaker. The television can calculate a second positioning result of the object to be positioned from this reflected-wave signal and the coordinates of the speaker.
In one possible design, after the third positioning result is obtained, the method further includes:
and presenting first prompt information, wherein the first prompt information is used for prompting the third positioning result of the object to be positioned.
For example, the first device may prompt the positioning result of the object to be positioned by means such as an interface or voice, so as to assist the user to quickly find the object to be positioned.
In one possible design, the object to be positioned comprises a mobile device; after the third positioning result is obtained, the method further comprises:
and sending the third positioning result to the mobile device, so that the mobile device adjusts its movement path.
In this way, the co-location result can improve the accuracy of the travel route of the mobile device (such as a robot).
In one possible design, the object to be positioned comprises a mobile device; after the third positioning result is obtained, the method further comprises:
and sending the third positioning result to the mobile device, so that the mobile device displays a control interface associated with the third positioning result.
By way of example, the mobile device may be a handheld device such as a cell phone, a watch, or a Bluetooth button. The handheld device may switch to and display the device control interface corresponding to the living room according to the third positioning result (the user is in the living room), so that the user can quickly control the devices in the corresponding space.
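This room-to-interface switch amounts to a lookup keyed by the located room. A minimal sketch, where the room names and device lists are hypothetical placeholders rather than anything specified in the patent:

```python
# Hypothetical mapping from the room named by the third positioning
# result to the control interface the handheld should display.
ROOM_INTERFACES = {
    "living_room": ["TV", "speaker A", "ceiling light"],
    "bedroom": ["bedside lamp", "air conditioner"],
}

def interface_for(located_room: str):
    """Return the list of controllable devices for the room the user
    was located in; an empty list if the room is unknown."""
    return ROOM_INTERFACES.get(located_room, [])

print(interface_for("living_room"))  # controls shown in the living room
```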
In one possible design, the object to be positioned includes a useful sound source or noise source; after the third positioning result is obtained, the method further comprises:
According to the third positioning result, adjusting the coverage range of the pickup beam;
wherein the coverage area of the pickup beam satisfies either of the following conditions: aiming at the area where the useful sound source is located, or avoiding the area where the noise source is located.
The coverage area of the sound pick-up beam being directed at the area where the useful sound source is located may mean that the coverage area of the sound pick-up beam is exactly equal to the area where the useful sound source is located, or the coverage area of the sound pick-up beam may be larger than the area where the useful sound source is located. The coverage area of the pickup beam can be said to be aligned with the area where the useful sound source is located, as long as the pickup beam picks up most or all of the useful audio of the useful sound source and meets the listening needs of the user.
The coverage area of the pickup beam avoiding the area where the noise source is located may refer to the coverage area of the pickup beam completely avoiding the area where the noise source is located, or the coverage area of the pickup beam may partially overlap with the area where the noise source is located. As long as the pickup beam cannot pick up the audio of the noise source or picks up less audio of the noise source, the listening requirement of the user can be met, and the coverage area of the pickup beam can be said to avoid the area where the noise source is located.
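Modeling the pickup beam as an angular sector makes both conditions easy to state: the useful source's direction should fall inside the sector, and the noise source's direction outside it. A minimal sketch under that simplification (the beam width, function names, and the requirement that both conditions hold at once are illustrative assumptions; as the text notes, partial overlap with the noise region may still be acceptable):

```python
def steer_beam(source_dir_deg: float, width_deg: float = 30.0):
    """Center a pickup beam of the given angular width on the
    direction of the useful sound source."""
    return (source_dir_deg - width_deg / 2, source_dir_deg + width_deg / 2)

def beam_ok(beam, source_dir_deg: float, noise_dir_deg: float) -> bool:
    """Accept the beam if it covers the useful source's direction and
    the noise source's direction lies outside it. Angles are assumed
    already normalized so the sector does not wrap past 360."""
    lo, hi = beam
    covers_source = lo <= source_dir_deg <= hi
    avoids_noise = not (lo <= noise_dir_deg <= hi)
    return covers_source and avoids_noise

beam = steer_beam(90.0)            # sector (75.0, 105.0)
print(beam_ok(beam, 90.0, 200.0))  # True: covers the source, avoids the noise
```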
In one possible design, the object to be positioned is a first user, the method further comprising:
Receiving a device control instruction of the first user;
And determining a response device for responding to the device control instruction according to the device control instruction and the third positioning result, wherein the response device is the device closest to the first user in a plurality of selectable response devices, and the plurality of selectable response devices comprise the first device.
Illustratively, a television (an example of the first device) cooperates with other devices to locate the user. Suppose that, according to the multi-device co-location result for the user, speaker A is the closest to the user among the devices capable of receiving and responding to voice commands; speaker A then responds to the user's voice command and turns on the lamp. By accurately locating the user through multi-device cooperation, the user's smart-home control instruction can be answered by the positioning device (such as a speaker) closest to the user, rather than by every positioning device. This improves response accuracy and avoids the power consumption of multiple positioning devices all responding to the same instruction.
In one possible design, the method further comprises:
receiving a device control instruction of a second user;
And if the distance between the second user and the first device is greater than a threshold value, not responding to the device control instruction of the second user.
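The two designs above reduce to a nearest-device selection with a distance cutoff: pick the candidate responder closest to the user, and respond to nobody if even the closest device is beyond a threshold. A minimal sketch; the 5 m threshold, coordinates, and function name are illustrative assumptions:

```python
import math

def pick_responder(user_pos, devices, max_dist=5.0):
    """devices: mapping of device name -> (x, y) coordinates.
    Return the device closest to the user's co-located position, or
    None if even the closest device exceeds the distance threshold
    (so no device responds to the control instruction)."""
    name, pos = min(devices.items(),
                    key=lambda kv: math.dist(user_pos, kv[1]))
    return name if math.dist(user_pos, pos) <= max_dist else None

devices = {"speaker A": (1.0, 1.0), "TV": (4.0, 0.0), "speaker B": (6.0, 5.0)}
print(pick_responder((0.0, 0.0), devices))    # speaker A answers the command
print(pick_responder((50.0, 50.0), devices))  # None: user too far, no response
```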
In one possible design, the object to be positioned is a first user; after the third positioning result is obtained, the method further comprises:
And presenting second prompt information according to the third positioning result of the first user and the audio preference information of the first user, wherein the second prompt information is used for recommending the listening position associated with the audio preference information.
In this manner, the first user can listen to audio from a recommended seating position in the corresponding listening area, enhancing the listening effect.
In one possible design, the object to be positioned is a first user; after a third positioning result is obtained; the method further comprises the steps of:
determining that a fourth device near the third positioning result provides a first service for the first user;
obtaining a fifth positioning result of the first user;
and streaming the first service from the fourth device to a fifth device near the fifth positioning result.
In this method, when a change in the user's position is detected, a fifth device near the user can be invoked to take over the first service, achieving seamless streaming of the first service and improving human-computer interaction efficiency.
In one possible design, the first service includes any one or more of the following: audio service, video service, telephony service.
In one possible design, streaming the first service from the fourth device to a fifth device near the fifth positioning result includes:
acquiring a circulation intention parameter; the circulation intention parameter comprises at least one of the following parameters: the speed of the first user, the acceleration of the first user and the face orientation of the first user;
And if the circulation intention parameter indicates to circulate the first service, the first service is circulated to the fifth device.
For example, the first device may determine whether the user is staying at a position from at least one circulation intention parameter such as the user's position, acceleration, and speed. When the user is determined to be staying at a position, the first device continues the content the user was previously viewing or listening to. When the user has merely passed through a location without intending to stay there, the first device does not continue the content, reducing the power consumption the handoff would otherwise cause.
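The stay-versus-pass-through decision can be sketched as a threshold test on the circulation intention parameters. The speed threshold and the face-orientation check below are illustrative assumptions; the patent only lists speed, acceleration, and face orientation as candidate parameters:

```python
def should_stream(speed_mps: float, facing_device: bool,
                  speed_threshold: float = 0.3) -> bool:
    """Decide whether to hand the first service over to the nearby
    fifth device. A user who is nearly stationary and facing the
    device is assumed to be staying; a fast-moving user is assumed
    to be passing through, so no handoff occurs."""
    return speed_mps < speed_threshold and facing_device

print(should_stream(0.1, True))   # user stopped facing the device: stream
print(should_stream(1.5, False))  # user walking past: do not stream
```

In practice the parameters would be derived from the sequence of positioning results between the third and fifth positioning results, as the next design describes.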
In one possible design, the circulation intent parameter is determined from a plurality of positioning results between the first user moving from the third positioning result to the fifth positioning result.
In one possible design, the signals include at least one of the following: ultrasonic signals, bluetooth signals, wireless fidelity Wi-Fi signals.
In this way, the multi-device co-location method improves the accuracy of the positioning result through cooperation among a plurality of devices. Moreover, co-location can be completed by the ultrasonic components already present in the positioning devices (such as smart-home devices), without deploying additional sensors. And because the positioning result need not be obtained from an external sensor, communication delay is reduced and positioning efficiency improved.
In a second aspect, the present application provides a multi-device cooperation apparatus for use with a first device or a component (such as a chip system) capable of implementing a function of the first device, the apparatus comprising:
A communication module for receiving a plurality of signals associated with an object to be positioned;
the processing module is used for determining a first positioning result and/or a second positioning result of the object to be positioned according to the plurality of signals;
The processing module is also used for acquiring a third positioning result; the third positioning result is determined from the first positioning result and the second positioning result.
In one possible design, the plurality of signals includes a reflected wave signal of the first device after the transmission signal is reflected by the object to be positioned and/or a reflected wave signal of the second device after the transmission signal is reflected by the object to be positioned.
In one possible design, determining the first positioning result and/or the second positioning result of the object to be positioned according to the plurality of signals includes:
According to the information carried by the signals, determining that the signals are reflected wave signals of the first equipment after the transmission signals of the first equipment are reflected by the object to be positioned and/or reflected wave signals of the second equipment after the transmission signals of the second equipment are reflected by the object to be positioned;
and determining the first positioning result and/or the second positioning result according to the reflected wave signal of the first equipment after the transmitting signal of the first equipment is reflected by the object to be positioned and/or the reflected wave signal of the second equipment after the transmitting signal of the second equipment is reflected by the object to be positioned.
In one possible design, a first signal of the plurality of signals is a reflected wave signal of the first device after the transmission signal of the first device is reflected by the object to be positioned, and the first signal carries at least one of the following information: coordinates of the first device, an identification of the first device;
And/or, a second signal in the plurality of signals is a reflected wave signal of the second device after the transmitting signal of the second device is reflected by the object to be positioned, and the second signal carries at least one of the following information: coordinates of the second device, an identification of the second device.
In one possible design, the plurality of signals are also used to determine at least one fourth positioning result in addition to the first positioning result and the second positioning result;
Obtaining a third positioning result, comprising: and determining the third positioning result according to the first positioning result, the second positioning result and the at least one fourth positioning result.
In one possible design, the communication module is further configured to:
After the third positioning result is obtained, a first message is sent to the second device, wherein the first message is used for indicating the second device to send a signal;
Receiving a reflected wave signal of the second equipment after the transmitting signal of the second equipment is reflected by the object to be positioned;
the processing module is further used for determining that the third positioning result passes the verification when the reflected wave signal meets a first condition;
the first condition includes any one or more of the following: the direction of the reflected wave signal is within a first range of directions and the angle of the reflected wave signal is within a first range of angles.
In one possible design, determining the first positioning result and/or the second positioning result of the object to be positioned according to the plurality of signals includes: determining the first positioning result according to the plurality of signals;
the communication module is further configured to: the second positioning result is received from a second device.
In one possible design, the apparatus further comprises a display module;
The display module is further used for presenting first prompt information after the third positioning result is obtained, wherein the first prompt information is used for prompting the third positioning result of the object to be positioned.
In one possible design, the object to be positioned comprises a mobile device;
and the communication module is further used for sending the third positioning result to the mobile device after the third positioning result is obtained, so that the mobile device adjusts its movement path.
In one possible design, the object to be positioned comprises a mobile device;
and the communication module is further used for sending the third positioning result to the mobile device after the third positioning result is obtained, so that the mobile device displays a control interface associated with the third positioning result.
In one possible design, the object to be positioned includes a useful sound source or noise source;
The processing module is also used for adjusting the coverage range of the pickup beam according to the third positioning result after the third positioning result is obtained;
wherein the coverage area of the pickup beam satisfies either of the following conditions: aiming at the area where the useful sound source is located, or avoiding the area where the noise source is located.
In one possible design, the object to be positioned is a first user, the apparatus further comprising an input module;
The input module is configured to receive a device control instruction of the first user;
and the processing module is further configured to determine, according to the device control instruction and the third positioning result, a response device for responding to the device control instruction, where the response device is the device closest to the first user among a plurality of selectable response devices, and the plurality of selectable response devices include the first device.
In one possible design, the input module is further configured to receive a device control instruction of a second user;
and the processing module is further configured not to respond to the device control instruction of the second user if the distance between the second user and the first device is greater than a threshold.
In one possible design, the object to be positioned is a first user;
and the display module is further configured to present, after the third positioning result is obtained, second prompt information according to the third positioning result of the first user and audio preference information of the first user, where the second prompt information is used to recommend a listening position associated with the audio preference information.
In one possible design, the object to be positioned is a first user;
the processing module is further used for:
after the third positioning result is obtained, determining a fourth device near the third positioning result to provide a first service for the first user;
obtaining a fifth positioning result of the first user;
and streaming the first service from the fourth device to a fifth device near the fifth positioning result.
In one possible design, the first service includes any one or more of the following: audio service, video service, telephony service.
In one possible design, streaming the first service from the fourth device to a fifth device near the fifth positioning result includes:
acquiring a streaming intention parameter, where the streaming intention parameter includes at least one of the following parameters: the speed of the first user, the acceleration of the first user, and the face orientation of the first user;
and if the streaming intention parameter indicates that the first service is to be streamed, streaming the first service to the fifth device.
In one possible design, the streaming intention parameter is determined from a plurality of positioning results obtained while the first user moves from the third positioning result to the fifth positioning result.
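As a purely illustrative sketch of the streaming-intent decision described above (function names and the speed threshold are assumptions, not part of the claimed method), the intent can be estimated from consecutive positioning results sampled while the first user moves, combined with the face orientation:

```python
import math

def streaming_intent(track, facing_fifth_device, speed_threshold=0.5):
    """Decide whether to stream the first service to the fifth device.
    track: positioning results as (x, y, z, timestamp) tuples sampled while
    the first user moves; facing_fifth_device: whether the user's face is
    oriented toward the fifth device. The threshold (m/s) is illustrative."""
    if len(track) < 2 or not facing_fifth_device:
        return False
    (x0, y0, z0, t0), (x1, y1, z1, t1) = track[-2], track[-1]
    # Speed estimated from the two most recent positioning results.
    speed = math.dist((x0, y0, z0), (x1, y1, z1)) / (t1 - t0)
    return speed > speed_threshold
```

In practice, the acceleration could be estimated the same way from three or more samples; this sketch uses only speed and face orientation.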
In one possible design, the signals include at least one of the following: ultrasonic signals, bluetooth signals, wireless fidelity Wi-Fi signals.
In a third aspect, an apparatus is provided for inclusion in a first device, the apparatus having a function of implementing the behavior of the first device in any one of the methods of the above aspects and possible implementations. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes at least one module or unit corresponding to the function, such as a receiving module or unit, a measuring module or unit, a transmitting module or unit, and the like.
In a fourth aspect, an apparatus is provided for inclusion in a second device, the apparatus having a function of implementing the behavior of the second device in any one of the methods of the above aspects and possible implementations. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes at least one module or unit corresponding to the function, such as a receiving module or unit, a measuring module or unit, a transmitting module or unit, a computing unit, and the like.
In a fifth aspect, there is provided a computer readable storage medium comprising computer instructions which, when run on a first device, cause the first device to perform the method as described in the above aspects and any one of the possible implementations.
In a sixth aspect, there is provided a computer readable storage medium comprising computer instructions which, when run on a second device, cause the second device to perform the method as described in the above aspects and any one of the possible implementations.
A seventh aspect provides a computer program product which, when run on a computer, causes the computer to perform the method as described in any one of the possible implementations of the aspects described above.
An eighth aspect provides a system on a chip comprising a processor which, when executing instructions, performs the method as described in any one of the possible implementations of the aspects above.
It can be understood that, for the advantages achieved by the methods, apparatuses, computer-readable storage media, and computer program products provided in the second to eighth aspects, reference may be made to the advantages of the first aspect and any of its possible implementations; details are not repeated here.
Drawings
Fig. 1 is a schematic view of a scenario of an ultrasonic positioning method according to an embodiment of the present application;
Fig. 2A is a schematic diagram of a positioning system according to the present application;
Fig. 2B is a schematic structural diagram of a first device according to the present application;
Fig. 3 is a schematic diagram of a full-house coordinate system provided by the present application;
Fig. 4A and Fig. 4B are schematic diagrams of the ultrasonic positioning principle provided by the present application;
Fig. 5 and Fig. 6 are schematic diagrams of a scenario of a co-location method provided by the present application;
Fig. 7 is a schematic view of a scenario of a co-location method provided by the present application;
Fig. 8 is a schematic view of a scenario of a co-location method provided by the present application;
Fig. 9A, Fig. 9B, and Fig. 9C are schematic flow diagrams of a multi-device collaboration method provided by the present application;
Fig. 9D is a schematic view of a scenario of the co-location method provided by the present application;
Fig. 10 is a schematic diagram of some object coordinates in a full-house coordinate system provided by the present application;
Fig. 11 is a schematic view of a scenario in which the path of a mobile device is adjusted according to a co-location result provided by the present application;
Fig. 12 is a schematic diagram of a method for switching the control interface of a mobile device according to a co-location result provided by the present application;
Fig. 13 and Fig. 14 are schematic views of scenarios in which device manipulation is performed according to a co-location result provided by the present application;
Fig. 15 and Fig. 16 are schematic diagrams of a method for avoiding noise sources according to co-location results provided by the present application;
Fig. 17 and Fig. 18 are schematic diagrams of scenarios of a noise reduction method provided by the present application;
Fig. 19 to Fig. 21 are schematic views of scenarios in which listening positions are recommended according to co-location results provided by the present application;
Fig. 22 and Fig. 23 are schematic diagrams of scenarios of co-space detection according to co-location results provided by the present application;
Fig. 24 to Fig. 27 are schematic diagrams of scenarios in which service streaming is performed according to a co-location result provided by the present application;
Fig. 28 is a schematic structural diagram of an electronic device according to the present application;
Fig. 29 is a schematic diagram of a chip system according to the present application.
Detailed Description
In the description of the embodiments of the present application, unless otherwise indicated, "/" means or, for example, a/B may represent a or B; "and/or" herein is merely an association relationship describing an association object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g." in an embodiment should not be taken as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
The following describes in detail the technical solution provided by the embodiments of the present application with reference to the accompanying drawings.
In the related art, a device can locate other devices through Bluetooth, ultrasound, Wi-Fi, and other technologies. However, as electronic devices multiply, the wireless environment becomes more and more complex: obstacles between devices cause non-line-of-sight propagation and multipath effects of wireless signals, so current positioning accuracy is poor.
In order to improve device positioning accuracy, an embodiment of the present application provides a multi-device cooperation method. A positioning device may locate an object to be positioned by itself to obtain a preliminary positioning result (for example, the direction, angle, and distance of the object to be positioned relative to the positioning device), and may also obtain a positioning result of the object to be positioned from another positioning device. The positioning device can then calibrate its own positioning result according to the positioning results of the other devices to obtain a more accurate positioning result. The technical solution of the embodiments of the present application can be applied to various scenarios requiring positioning, such as indoor scenarios (e.g., home scenarios) and outdoor scenarios. Fig. 2A is a schematic diagram of a positioning system according to an embodiment of the present application. The positioning system includes a first device 100 and a second device 200.
The first device 100 includes an ultrasonic transceiver. The ultrasonic transceiver may be a device having both transmitting and receiving functions, a device having only an ultrasonic transmitting function, or a device having only an ultrasonic receiving function. Optionally, the ultrasonic transceiver includes, but is not limited to, a microphone array. The first device 100 may calculate the position of a device, object, or human body based on the ultrasonic signals reflected by that device, object, or human body.
In some scenarios, the first device 100 may be referred to as a positioning device for cooperating with the second device 200 for accurate positioning of other devices.
The first device 100 may also respond to control instructions of a user and perform corresponding actions according to the control instructions. In this case, the first device 100 may be referred to as a responding device.
For example, the first device 100 may be a smart speaker, a smart television, an air purifier, a humidifier, a smart light (such as a ceiling lamp, a desk lamp, or an aroma lamp), a desktop computer, a router, a smart socket, a water dispenser, a refrigerator, a smart switch, a smart door lock, customer premises equipment (CPE), a tablet computer, a mobile phone, and the like; the embodiment of the present application does not limit the specific form of the first device 100.
Referring to fig. 2B, a schematic structural diagram of the first device 100 is shown.
As shown in fig. 2B, the first device 100 may include a processor 110, a memory 120, a universal serial bus (universal serial bus, USB) interface 130, a power module 140, an ultrasound module 150, a wireless communication module 160, and the like.
It should be understood that the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the first apparatus 100. In other embodiments of the application, the first device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware. In addition, the interfacing relationship between the modules illustrated in the embodiment of the present application is only schematically illustrated, and does not constitute a unique limitation on the achievable structure of the first device 100. In other embodiments of the present application, the first device 100 may also use a different interface from that of fig. 2B, or a combination of multiple interfaces.
Processor 110 may include one or more processing units; the different processing units may be separate devices or may be integrated into one or more processors. For example, the processor 110 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present application, such as one or more digital signal processors (DSPs) or one or more field-programmable gate arrays (FPGAs).
Memory 120 may be used to store computer-executable program code, which includes instructions. The memory 120 may also store data processed by the processor 110, such as the calculated position and attitude of the second device 200. In addition, memory 120 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or universal flash storage (UFS). The processor 110 performs various functional applications and data processing of the first device 100 by executing instructions stored in the memory 120 and/or instructions stored in a memory provided in the processor.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the first device 100, or may be used to transfer data between the first device 100 and a peripheral device.
The power module 140 is used to supply power to various components of the first device 100, such as the processor 110 and the memory 120.
The ultrasound module 150 may provide a solution based on ultrasonic technology applied on the first device 100. In an embodiment of the present application, the ultrasound module 150 includes an antenna module for transmitting and/or receiving ultrasonic signals; the first device 100 may transmit and/or receive ultrasonic signals through the antenna module. For example, in an active positioning scheme, the first device 100 may transmit an ultrasonic signal through the antenna module and receive signals reflected from other objects through the antenna module, so as to perform positioning according to characteristics of the reflected signals (e.g., the incoming-wave direction). In a passive positioning scheme, the first device 100 may receive ultrasonic signals from other devices through the antenna module and perform positioning according to characteristics of the received ultrasonic signals.
Optionally, the first device 100 may further include a wireless communication module 160 to provide solutions for wireless communication applied on the first device 100, including wireless local area networks (WLAN) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via an antenna, modulates and filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency-modulate and amplify it, and convert it into electromagnetic waves for radiation via the antenna.
The second device 200 has an ultrasonic transceiver. The ultrasonic transceiver may be a device having both transmitting and receiving functions, a device having only an ultrasonic transmitting function, or a device having only an ultrasonic receiving function. The second device 200 may calculate the position of a device, object, or human body based on the ultrasonic signals reflected by that device, object, or human body. For example, the second device 200 may transmit an ultrasonic signal that is reflected by the first device 100; the second device 200 receives the reflected signal and locates the first device 100 based on the characteristics of the reflected signal.
In some aspects, the ultrasound transceiver in the first device 100 and/or the second device 200 may also be replaced with other wireless signal transceivers. The first device 100 may locate the object to be located according to a plurality of wireless signals associated with the object to be located received by the wireless signal transceiver. For example, the wireless signal transceiver may be an Ultra Wideband (UWB) transceiver.
In some scenarios, the second device 200 may be referred to as a positioning device for cooperating with the first device 100 for accurate positioning of other devices.
The second device 200 may also respond to control instructions of the user and perform corresponding actions according to the control instructions. In this case, the second device 200 may be referred to as a responding device.
The second device 200 may be a mobile phone, a remote controller, a wearable electronic device (a smart watch, a smart band, VR glasses, etc.), a tablet computer, a personal digital assistant (PDA), a game handle, a mouse, etc.; the embodiment of the present application does not limit the specific form of the second device 200. The first device 100 and the second device 200 may be the same or different types of devices.
In some solutions, the first device 100 and the second device 200 may be peer devices: both can calculate the positioning result of the object to be positioned. Alternatively, the second device 200 may send the direction and angle information required for positioning to the first device 100, and the first device 100 calculates the positioning result of the object to be positioned; or the first device 100 may send such information to the second device 200, and the second device 200 calculates the positioning result.
Optionally, for the structure of the second device 200, reference may be made to the structure of the first device 100. For example, the second device 200 may include more or fewer components than the first device 100, or combine certain components, or split certain components, or arrange the components differently.
Optionally, the positioning system shown in Fig. 2A may further include a third electronic device 300 in addition to the first device 100 and the second device 200. The first device 100 may perform ultrasonic positioning on the third electronic device 300 to obtain positioning result 1. The second device 200 may perform ultrasonic positioning on the third electronic device 300 to obtain positioning result 2. The first device 100 may correct the positioning result of the third electronic device 300 based on positioning result 1 and positioning result 2. In this way, the first device 100 can correct its own positioning result based on the positioning results of other devices (such as the second device 200), thereby reducing the positioning deviation and obtaining a more accurate positioning result. Similarly, the second device 200 may correct the positioning result of the third electronic device 300 based on positioning result 1 and positioning result 2.
The following describes in detail a multi-device cooperation method provided by the embodiment of the present application, taking an example that a positioning method is applied to a home scene.
In some embodiments, during installation, maintenance, and commissioning of certain devices in a whole house, an installation engineer may configure coordinates for those devices. In other embodiments, coordinates may also be configured for certain devices in the whole house by, for example, a hub device in the whole house. Optionally, these coordinates may be configured based on a unified full-house coordinate system. By way of example, Fig. 3 shows a full-house coordinate system and some known device coordinates under it, where O is the origin, X is the X axis, Y is the Y axis, and Z is the Z axis. The coordinates of the sound box in the secondary bedroom are (x1, y1, z1), the coordinates of the television in the master bedroom are (x2, y2, z2), and the coordinates of the sound box on the left side of the television in the living room are (x6, y6, z6). In the embodiment of the present application, devices with known coordinates can be used to jointly position an object with unknown coordinates. Alternatively, coordinate systems may be established based on other rules, such as establishing separate coordinate systems for different spatial ranges (e.g., rooms).
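As a purely illustrative sketch (the coordinates above are symbolic, so the values and device names below are assumptions), known device coordinates under a unified full-house coordinate system can be kept in a simple registry, from which inter-device distances usable for joint positioning can be computed:

```python
import math

# Hypothetical coordinates in the full-house frame (meters); the text
# leaves x1..z6 symbolic, so these values are illustrative only.
DEVICE_COORDS = {
    "sound_box_secondary_bedroom": (1.0, 4.0, 0.8),  # (x1, y1, z1)
    "tv_master_bedroom": (5.0, 4.5, 1.2),            # (x2, y2, z2)
    "sound_box_living_room": (3.0, 1.0, 0.8),        # (x6, y6, z6)
}

def distance(a, b):
    """Euclidean distance between two points in the full-house coordinate system."""
    return math.dist(a, b)

# Baseline between two known-coordinate devices, usable in co-location.
b_edge = distance(DEVICE_COORDS["tv_master_bedroom"],
                  DEVICE_COORDS["sound_box_living_room"])
```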
Alternatively, the object to be positioned with unknown coordinates may be a device or an object or a human body, and the type of the object to be positioned is not limited in the embodiment of the present application.
First, the principle of ultrasonic positioning by a single device is described. Taking ultrasonic passive positioning as an example, in some examples, as shown in Fig. 4A, a television transmits a plurality of ultrasonic signals, which a sound box receives. Optionally, as shown in Fig. 4B, a microphone array may be disposed in the sound box. The microphone array includes a plurality of microphones, each of which may receive an ultrasonic signal. The sound box can determine the distance d and the angle θ between itself and the television through a positioning algorithm, according to the time differences of the ultrasonic signals received by the microphone array.
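The time-difference principle can be illustrated with the simplest case of two microphones a known distance apart: under a far-field assumption, the angle of arrival follows sin θ = s·Δt/d. This is a minimal sketch under that assumption, not the actual positioning algorithm of the embodiment:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s; an assumed value at room temperature

def arrival_angle(delta_t, mic_spacing):
    """Far-field angle of arrival (degrees) from the time difference
    delta_t (seconds) between two microphones spaced mic_spacing meters
    apart: sin(theta) = SPEED_OF_SOUND * delta_t / mic_spacing."""
    s = SPEED_OF_SOUND * delta_t / mic_spacing
    s = max(-1.0, min(1.0, s))  # clamp against measurement noise
    return math.degrees(math.asin(s))
```

With more than two microphones, an array can combine several such pairwise estimates to obtain both the angle θ and the distance d.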
The following describes a multi-device co-location method in which an object with unknown coordinates is located by two devices with known coordinates: one device with known coordinates is a television, the other is sound box A, and the object to be positioned is a person.
For example, as shown in Fig. 5, sound box A locates user A using an ultrasonic active positioning or passive positioning technology and determines that user A is located at position 1, the coordinates of user A are (x1', y1', z1'), and the distance between sound box A and position 1 is c1. Similarly, the television locates user A using an ultrasonic active or passive positioning technology and determines that user A is located at position 2, the coordinates of user A are (x2', y2', z2'), and the distance between the television and position 2 is d3.
Sound box A may then negotiate with the television to correct the positioning results. For example, as shown in Fig. 6, the television may send sound box A its measured coordinate information for user A (the coordinate information of position 2). Similarly, sound box A may send the television its own measured coordinate information for user A (the coordinate information of position 1). Sound box A can calculate the distance d1 between itself and position 2 according to its own coordinates and the television's measured coordinate information for user A. Sound box A can also calculate the deviation angle γ according to the coordinate information of position 1, the coordinate information of position 2, and its own coordinate information; the deviation angle γ is the angle between the line from sound box A to position 1 and the line from sound box A to position 2. Sound box A can then calculate the distance c2 between position 1 and position 2 from c1, d1, and γ, where c2² = c1² + d1² − 2 × c1 × d1 × cos γ.
Alternatively, sound box A may directly calculate c2 from the coordinates of position 1 (an example of the first positioning result) and position 2 (an example of the second positioning result). The embodiment of the present application does not limit the specific calculation method of c2.
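Both routes to c2 described above can be sketched as follows (variable names are assumptions); with consistent inputs the two results agree:

```python
import math

def c2_from_law_of_cosines(c1, d1, gamma_deg):
    """c2 from the triangle relation c2^2 = c1^2 + d1^2 - 2*c1*d1*cos(gamma),
    where gamma is the deviation angle at sound box A."""
    g = math.radians(gamma_deg)
    return math.sqrt(c1 * c1 + d1 * d1 - 2 * c1 * d1 * math.cos(g))

def c2_from_coordinates(pos1, pos2):
    """c2 computed directly as the distance between the two preliminary
    positioning results (position 1 and position 2)."""
    return math.dist(pos1, pos2)
```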
After sound box A calculates the distance c2 between position 1 and position 2, it may send the information of the distance c2 to the television; alternatively, the television may calculate c2 by itself according to the coordinate information of position 1 and position 2. The television can calculate the distance d2 between itself and position 1 according to the coordinate information of position 1 and its own coordinate information. The television may also calculate the deviation angle α based on the distances d2, d3, and c2, where the deviation angle α is the angle between the line from the television to position 1 and the line from the television to position 2.
Alternatively, the television may calculate the deviation angle α according to the following formula: cos α = (d2² + d3² − c2²) / (2 × d2 × d3).
After the deviation angles γ and α are calculated, sound box A and the television can determine the final positioning result of user A according to the coordinates of position 1 and position 2 and the deviation angles γ and α. Illustratively, as shown in Fig. 6, the corrected position of user A is position 3 (an example of the third positioning result), and position 3 may be located within the shaded area. The shaded area is the overlap of the regions within the deviation angles γ and α.
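Since the deviation angle at the television is determined by the three distances d2, d3, and c2, it can be recovered with the law of cosines; the sketch below is an illustration with assumed variable names:

```python
import math

def deviation_angle(d2, d3, c2):
    """Deviation angle alpha (degrees) at the television between its lines
    to position 1 (length d2) and position 2 (length d3), where c2 is the
    opposite side: cos(alpha) = (d2^2 + d3^2 - c2^2) / (2*d2*d3)."""
    cos_a = (d2 * d2 + d3 * d3 - c2 * c2) / (2 * d2 * d3)
    cos_a = max(-1.0, min(1.0, cos_a))  # clamp against rounding error
    return math.degrees(math.acos(cos_a))
```

The same function, applied at sound box A with c1, d1, and c2, recovers the deviation angle γ.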
Optionally, the distance Δt between the position 1 and the position 3 satisfies the following formula 1:
where δ is a calibration parameter used for calibration according to the temperature sensor and/or the humidity sensor, θ is a distance compensation parameter, and s_sound is the propagation speed of the ultrasonic signal.
Or in other examples, the distance Δt between position 2 and position 3 satisfies equation 1 above.
According to the above method, a plurality of devices cooperatively position an object and correct the positioning result of a single device, so that the influence of obstacles blocking ultrasonic propagation can be reduced as much as possible and the positioning accuracy improved.
The above description takes cooperative positioning by two devices with known coordinates as an example; in other embodiments, positioning may be performed cooperatively by three or more devices with known coordinates. Fig. 7 shows an example of a method of locating an object to be positioned by three devices cooperatively. Sound box A measures that user A is at position 1, the television measures that user A is at position 2, and sound box B measures that user A is at position 4. In this example, sound box A, the television, and sound box B may calculate the deviation angles γ, α, and β, respectively, according to the above scheme; the three devices may then determine the corrected position 3' of user A (an example of the third positioning result) according to the coordinate information of position 1 (an example of the first positioning result), position 2 (an example of the second positioning result), and position 4 (an example of the fourth positioning result), and the deviation angles γ, α, and β. Position 3' may be located in the overlapping region of the deviation angles γ, α, and β.
The method of co-locating with a plurality of positioning devices is not limited to the cases listed above. Illustratively, as shown in Fig. 8, sound box A measures that user A is at position 1 and the television measures that user A is at position 2. The distance between sound box A and position 1 is c1, and the distance between the television and position 2 is d3. The angle between the line from position 1 to sound box A (the line corresponding to c1) and the line from the television to sound box A (the b edge) is assumed to be 45°, and the angle between the line from position 2 to sound box A (the line corresponding to c2) and the line from the television to sound box A (the b edge) is assumed to be 40°. Sound box A can acquire the positioning result of the television and calibrate its own positioning result accordingly. For example, sound box A may adjust corrected position 3 according to the angular relationship between position 2 and itself and the angular relationship between position 1 and itself, so that the angle between the line (the a edge) from corrected position 3 to sound box A and the line (the b edge) from sound box A to the television satisfies the condition (for example, less than 45° and greater than 40°), and the distance between corrected position 3 and the television is equal to d3.
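The angle-window-plus-distance correction above can be sketched in 2D by intersecting a ray from sound box A (at a bearing inside the window) with the circle of radius d3 around the television; choosing the window midpoint as the bearing is an assumption of this illustration:

```python
import math

def corrected_position(b, d3, bearing_deg):
    """Place sound box A at the origin and the television at (b, 0).
    Intersect the ray from A at bearing_deg (measured from the A-television
    line) with the circle of radius d3 centered on the television; return
    the nearer intersection, or None if the ray misses the circle."""
    th = math.radians(bearing_deg)
    # Along the ray P = t*(cos th, sin th):
    #   t^2 - 2*b*cos(th)*t + (b^2 - d3^2) = 0
    half_b = b * math.cos(th)
    disc = half_b * half_b - (b * b - d3 * d3)
    if disc < 0:
        return None
    t = half_b - math.sqrt(disc)
    if t < 0:
        t = half_b + math.sqrt(disc)  # nearer root is behind A; take the other
    if t < 0:
        return None
    return (t * math.cos(th), t * math.sin(th))
```

For the Fig. 8 example, the bearing would be taken between 40° and 45° (e.g., 42.5°); the returned point then lies exactly d3 from the television while its bearing from sound box A satisfies the angle condition.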
Taking the case where the multiple devices include a television and sound box A as an example, Fig. 9A shows a flow of the multi-device co-location method. As shown in Fig. 9A, the method includes:
S101, the television determines the position 2 of the object to be positioned based on ultrasonic positioning.
Illustratively, taking the object to be positioned being a user as an example, as shown in Fig. 6, the television preliminarily calculates, based on ultrasonic positioning, that the user is located at position 2.
Taking the object to be positioned being a device and the television performing ultrasonic active positioning as an example: as a possible implementation, the television sends an ultrasonic signal, which can be received by the device to be positioned and reflected back to the television. Optionally, the device to be positioned may add its own identity information to the reflected wave signal. In this way, after the television receives the reflected wave signal, it can determine, according to the identity information carried in the signal, that the reflected wave signal comes from the device to be positioned, and position the device according to the reflected wave signal.
Taking, as an example, the case where the object to be positioned is a user and the television performs positioning in an active ultrasonic mode: as a possible implementation, when the user needs to be positioned, the television can send a specific ultrasonic signal, which is reflected back to the television by the user. The television may determine that a reflected wave signal comes from the user based on the characteristics of that signal. For example, the television transmits a plurality of ultrasonic signals, which are reflected back by different parts of the user's body, and the resulting reflected wave signals have different characteristics, such as different reflection angles. In such an implementation, the identity information of the user is implicitly carried in the plurality of reflected wave signals. The television may determine the user's physical profile (parameters such as height and build) from the characteristics of the reflected wave signals (e.g., reflection angles), and thus the identity of the user. The television may then locate the user based on the reflected wave signals from the user.
In some scenes, users differ in height and build, so the reflected wave signals they reflect back to the positioning device have different characteristics, and the positioning device can distinguish reflected wave signals from different users according to the characteristics of each signal.
Or, in some scenes, there is a certain distance between multiple users, so the direction, angle, and other characteristics of the reflected wave signals from different users differ, and the positioning device can distinguish reflected wave signals from different users accordingly.
Or, in some scenarios, the positioning device may distinguish reflected wave signals from different users in combination with other approaches; the embodiments of the present application do not limit the manner of distinguishing reflected wave signals from different users.
In some embodiments, the reflected wave signals received by the television include reflected waves from the object to be positioned and reflected wave signals from interfering objects. In this case, the television may determine, from the identity information (device identity or user identity) carried by the plurality of reflected wave signals, which signal comes from the object to be positioned, so as to avoid the influence of the other reflected wave signals on the positioning result.
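A minimal Python sketch of this identity-based filtering; the dictionary fields are hypothetical, standing in for whatever identity information the reflected wave carries.

```python
def echoes_from_target(echoes, target_identity):
    """Keep only reflected-wave records whose carried identity matches the
    object to be positioned, discarding echoes from interfering objects."""
    return [e for e in echoes if e.get("identity") == target_identity]
```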
S102, the sound box A determines the position 1 of the object to be positioned based on ultrasonic positioning.
Illustratively, taking the object to be positioned as a user, as shown in FIG. 6, the sound box A preliminarily calculates, based on ultrasonic positioning, that the user is located at position 1.
S103, the sound box A sends the relevant information of the position 1 to the television.
Optionally, the information related to the position 1 includes coordinate information of the position 1.
Alternatively, the plurality of positioning devices that co-locate may be positioning devices that are close to one another. In this way, the signal propagation delay between positioning devices can be reduced, and the influence of obstacles on the accuracy of ultrasonic positioning can be reduced.
S104, the television determines the position 3 according to the related information of position 2 and the related information of position 1.
As a possible implementation, position 3 may be determined by a master device selected from the television and the sound box A. The master device may be the device with higher computing power, or the device with higher positioning accuracy. Or the master device may be selected according to other policies. For example, the positioning device with more positioning devices nearby serves as the master device; such a master device can determine the final positioning result from the preliminary positioning results of more nearby positioning devices, further improving positioning accuracy.
As one possible implementation, each positioning device may detect the positioning devices in its vicinity, such as by scanning, and broadcast the detection result to the other positioning devices. In this way, the positioning devices learn which devices are near one another. The plurality of positioning devices may then, based on a preset rule, select as the master device the positioning device with the most positioning devices nearby.
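The neighbour-count election could look like the following sketch; the device names and the tie-breaking rule are assumptions, since the patent only requires a preset rule that all devices apply consistently to the broadcast results.

```python
def elect_master(neighbour_counts):
    """Given {device_name: number of nearby positioning devices}, pick the
    device with the most neighbours; ties are broken by sorted device name so
    every device elects the same master from the same broadcast results."""
    return max(sorted(neighbour_counts), key=lambda d: neighbour_counts[d])
```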
In the example shown in FIG. 9A, the television serves as the master device, and the television calculates the final positioning result of the user (position 3) from its own positioning result (position 2) and the positioning result of sound box A (position 1).
S105, the television sends the relevant information of the position 3 to the sound box A.
Optionally, the information related to the position 3 includes coordinate information of the position 3.
In this way, in the multi-device co-location method, the accuracy of the positioning result can be improved through the co-location of the television and sound box A. In addition, co-location can be completed through the ultrasonic devices built into the positioning devices (such as smart home devices), without deploying additional sensors. Moreover, since the positioning result does not need to be obtained from an external sensor, communication delay can be reduced and positioning efficiency improved.
The embodiment of the present application further provides a multi-device cooperation method. As shown in FIG. 9B, step S105 shown in FIG. 9A may be replaced by S201 and S202, in which the final position is determined by negotiation according to the related information of position 2 and the related information of position 1. That is, the final positioning result is no longer determined by a single master device; instead, each of the plurality of positioning devices calculates the final positioning result.
As shown in fig. 9B, the method includes the steps of:
S101, the television determines the position 2 of the object to be positioned based on ultrasonic positioning.
S102, the sound box A determines the position 1 of the object to be positioned based on ultrasonic positioning.
S103, the sound box A sends the relevant information of the position 1 to the television.
S104, the television determines the position 3 according to the related information of position 2 and the related information of position 1.
Specific implementation of S101-S104 can be seen in the above embodiments, and will not be described here again.
S201, the television sends the related information of position 2 to the sound box A.
The execution sequence of S103 and S201 is not limited in the embodiment of the application.
S202, the sound box A determines the position 4 according to the related information of position 2 and the related information of position 1.
For the specific implementation in which sound box A corrects its preliminary positioning result according to the preliminary positioning result calculated by the television to obtain position 4, refer to the implementation in which the television calculates position 3; details are not repeated here.
Optionally, after each of the plurality of positioning devices calculates a positioning result of the object to be positioned, the respective results may be reported to a selected computing device, which calculates a final positioning result from the plurality of results. For example, the computing device may average the results, combine them with weights, or combine them probabilistically to obtain the final positioning result. Other algorithms or methods for obtaining the final positioning result may also be used; the embodiments of the present application do not limit this.
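As one hedged example of the averaging and weighting mentioned above, the computing device could fuse preliminary results like this; the per-result weight is an assumption, e.g. reflecting each device's positioning accuracy.

```python
def fuse_positions(results):
    """Fuse preliminary positioning results given as (x, y, weight) tuples
    into one final position by weighted averaging."""
    total = sum(w for _, _, w in results)
    x = sum(px * w for px, _, w in results) / total
    y = sum(py * w for _, py, w in results) / total
    return (x, y)
```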
The embodiment of the present application also provides a method for verifying a co-location result: after the plurality of positioning devices cooperatively position the object to be positioned, the positioning result can be verified. As shown in FIG. 9C, the method includes the following steps:
S301, the television sends a verification notice to the sound box A.
Optionally, the verification notification (an example of the first message) carries the television's positioning result for the object to be positioned (coordinate information of position 3).
After correcting its own calculated positioning result according to the positioning result of sound box A and obtaining position 3 of the object to be positioned, the television sends a verification notification or verification request to sound box A, to notify or request verification of the accuracy of position 3.
S302, the sound box A sends an ultrasonic signal.
Optionally, the ultrasonic signal carries identity information of sound box A. The identity information includes, but is not limited to, any of the following: the device identifier of sound box A, and the coordinates of sound box A.
As a possible implementation, after sound box A receives the verification notification or verification request from the television and determines that the television's positioning result needs to be verified, sound box A may send an ultrasonic signal carrying its identity information (such as a device ID and coordinates).
As a possible implementation manner, the sound box a may send an ultrasonic signal according to a positioning result (such as coordinate information of the position 3) carried by the verification notification. For example, an ultrasound signal is transmitted at an angle towards the area in which position 3 is located.
S303, the television receives a reflected wave signal from the object to be positioned.
After the ultrasonic signal sent by the sound box A is reflected by the object to be positioned, the reflected wave signal can be received by a television. The television can determine that the reflected wave signal is the reflected wave signal corresponding to the ultrasonic signal sent by the sound box A according to the identity information of the sound box A carried by the reflected wave signal.
S304, if the reflected wave signal satisfies the first condition, the television determines that the position 3 is correct.
Optionally, the first condition includes any of the following: the angle of the reflected wave signal is within a first angle range; the direction of the reflected wave signal is within a first direction range. The first angle range and the first direction range are determined by the television according to the coordinates of sound box A and position 3.
For example, as shown in FIG. 9D, after the television calculates position 3 of the object to be positioned by the above co-location method, it may calculate, according to the coordinates of sound box A and position 3, the first direction range and the first angle range in which an ultrasonic signal sent by sound box A would reach the television after being reflected by an object at position 3. Assume the first angle range is (43 degrees, 47 degrees) and the first direction range is the right side. Then, after sound box A sends an ultrasonic signal and its reflection from the object to be positioned reaches the television, the television can check whether the direction of the reflected wave signal is the right side and whether its angle of arrival falls within (43 degrees, 47 degrees). If both hold, position 3 is accurate, or at least within the fault-tolerance range; that is, the object to be positioned is at position 3 or near position 3. Thus, by verifying position 3, the accuracy of the positioning result can be improved.
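The first-condition check of S304 reduces to a simple range test. A sketch, with the (43°, 47°) range and right-side direction from the example hard-coded as illustrative defaults:

```python
def position_verified(angle_deg, direction,
                      angle_range=(43.0, 47.0), expected_direction="right"):
    """True if the reflected wave's angle of arrival lies within the first
    angle range and its direction within the first direction range."""
    lo, hi = angle_range
    return direction == expected_direction and lo < angle_deg < hi
```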
FIG. 10 illustrates a full house coordinate system positioned using a multi-device co-location method in accordance with an embodiment of the present application. The coordinates of the object to be positioned (user or device) can be calculated through the co-positioning method between the positioning devices.
The multi-device cooperation method can be applied to various scenes in which the object to be positioned needs to be positioned accurately. Several possible applicable scenarios are presented below.
Scene one: the robots are co-located by the plurality of locating devices, and plan moving paths according to the co-locating results
For example, multiple positioning devices may co-locate the robot. As shown in fig. 11, when the robot moves to the living room, the speaker of the living room and the router of the restaurant may cooperatively locate the robot and transmit the cooperative result to the robot. After the robot receives the co-location result, the travel route can be adjusted according to the co-location result so as to avoid larger deviation of the travel route. For example, the preset travel route of the robot is the direction indicated by the arrow in fig. 11, and moves from the kitchen to the bathroom. However, the current co-location result indicates that the robot is currently located at the position a, and the robot can move to the position B according to the co-agreement result and move from the position B to the toilet along the route indicated by the arrow. Therefore, according to the co-positioning result, the accuracy of the robot travel route can be improved.
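The route correction in scene one amounts to steering from the co-located position back toward the next planned waypoint (position B in the figure). A minimal sketch, with positions as (x, y) pairs and the step size in arbitrary units, both illustrative assumptions:

```python
import math

def correction_step(current, waypoint, step):
    """Move `step` units from the co-located position toward the planned
    waypoint, so the travel route's deviation stays bounded."""
    dx, dy = waypoint[0] - current[0], waypoint[1] - current[1]
    dist = math.hypot(dx, dy)
    if dist <= step:
        return waypoint                      # close enough: snap to waypoint
    return (current[0] + dx / dist * step, current[1] + dy / dist * step)
```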
Scene II: the multiple devices can cooperatively position the central control screen, and the central control screen switches the control interface according to the cooperative positioning result
The positioning devices can cooperatively position the central control screen and send a cooperative positioning result to the central control screen. For example, as shown in fig. 12 (a), the central control screen is located in a living room, and a plurality of positioning devices in the living room can cooperatively position the central control screen and send a cooperative positioning result to the central control screen. The central control screen can display a control interface 401 corresponding to the living room according to the co-location result (in the living room). Optionally, the control interface 401 may include an identification of at least one device in the living room. The user can control the corresponding equipment through the identification of the corresponding equipment, for example, the user clicks the identification of the ceiling light band, and the central control screen can control the ceiling light band to execute corresponding actions in response to the operation of the user.
Then, when the central control screen is moved to the dining room, a plurality of positioning devices in the dining room can cooperatively position the central control screen and send the co-location result to it. The central control screen can display the control interface 402 corresponding to the dining room according to the co-location result (in the dining room). Optionally, the control interface 402 may include an identification of at least one device in the dining room.
The above description mainly uses a central control screen as an example, and the central control screen may be replaced by other handheld devices, such as a mobile phone, a watch, and a bluetooth button (button) device. For example, the Bluetooth button device may be provided with a plurality of buttons, and different buttons may have different functions. The user can manipulate the corresponding buttons to control execution of the corresponding functions. In some examples, when the Bluetooth button device is located in a bedroom, a button provided thereon may have a function of manipulating the device in the bedroom. When the Bluetooth button device is located in the living room, the buttons provided on the Bluetooth button device can have the function of controlling the devices in the living room. The Bluetooth button device can control and switch the function corresponding to a certain button according to the co-location result.
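The interface switching in scene two is essentially a lookup from the co-location result (a room) to that room's device list. A sketch with hypothetical room and device names; a real implementation would read this from the home configuration:

```python
ROOM_INTERFACES = {
    # Hypothetical configuration; in practice this comes from the home setup.
    "living_room": ["ceiling_light_strip", "television", "sound_box_a"],
    "dining_room": ["pendant_lamp", "router"],
}

def interface_for(co_location_result):
    """Return the device identifiers the central control screen (or Bluetooth
    button device) should expose for the room it was located in."""
    return ROOM_INTERFACES.get(co_location_result, [])
```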
Scene III: after the multi-equipment is co-located, controlling the intelligent household equipment according to the co-location result
In some cases, the user needs to use and control the smart home device.
In some examples, when the user wants to use and control a smart home device, the positioning device nearest to the user can respond to the user's control instruction and control the smart home device. Taking control by voice command as an example, as shown in FIG. 13, the user issues the voice command "Xiaoyi, turn on the light", and multiple devices in the whole house (such as sound box A in the living room and sound box B in the bedroom) can receive this device-control voice command. In some examples, after sound box A and sound box B receive the voice command, the user may be located based on the multi-device cooperation method described above. For example, sound box A and sound box B may co-locate the user. Or sound box A may co-locate the user with other devices of known coordinates in the living room, and sound box B may co-locate the user with other devices in the bedroom. The embodiments of the present application do not limit the type or number of devices that co-locate together with sound box A or sound box B.
As shown in FIG. 13, assume that, according to the multi-device co-location result for the user, sound box A is nearest to the user among the devices capable of receiving and responding to the voice command; sound box A then responds to the user's voice command and controls the light to turn on. In this way, with the user accurately positioned through multi-device cooperation, the user's smart home control instruction can be answered by the positioning device nearest the user (such as a sound box), rather than by all positioning devices. This improves response accuracy and avoids the power consumption that would be caused by multiple positioning devices all responding to the instruction.
In addition, having the positioning device nearest the user respond to the smart home control instruction not only achieves the control effect the user expects, but also improves the signal-to-noise ratio of the picked-up sound, yielding a better pickup effect and improving human-machine interaction efficiency.
The above description takes the case where both sound box A and sound box B receive the voice command, so that whether sound box A or sound box B responds must be determined according to the distances between sound boxes A and B and the user. In other examples, if sound box A receives the user's voice command and no other positioning device is near sound box A, sound box A can directly control the light to turn on, without calculating the distance between itself and the user.
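Selecting the responding device in scene three can be sketched as a nearest-device rule with an optional response threshold; the threshold, device names, and flat (x, y) coordinates are assumptions for illustration.

```python
import math

def pick_responder(device_positions, user_position, max_range=None):
    """Among devices that heard the command, return the one nearest the
    co-located user; devices beyond max_range (if given) do not respond."""
    def dist(p):
        return math.hypot(p[0] - user_position[0], p[1] - user_position[1])
    candidates = {name: dist(pos) for name, pos in device_positions.items()
                  if max_range is None or dist(pos) <= max_range}
    if not candidates:
        return None                      # no device close enough to respond
    return min(sorted(candidates), key=lambda name: candidates[name])
```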
In other examples, the location of the user may be sensed by the multi-device co-location method described above, and the user's device-control intent determined from the user's precise location. For example, after the user returns home at night, once multi-device co-location (such as by standby sound boxes A and B) detects that the user has entered the home, one of the devices can control the living-room light to turn on. Thereafter, if multi-device co-location detects the user moving to the bedroom, one of the devices can control the bedroom light to turn on.
Optionally, the control flow of the smart home device may be executed by a positioning device closest to the smart home device to be controlled among the plurality of positioning devices. Or executing the control flow of the intelligent household equipment through the positioning equipment closest to the user in the plurality of positioning equipment. Or select a positioning device for controlling the smart home device using other methods.
In other examples, multiple users may want to control smart home devices in the same period of time. In this case, the multiple users can each be positioned by the multi-device co-location method, and the smart home control flow executed according to the positioning results of the multiple users. For example, as shown in FIG. 14, users A and B may issue voice commands at the same time, and sound boxes A and B may receive both commands. Sound boxes A and B may use the co-location method described above to ultrasonically locate users A and B. If sound box A is closest to user A, sound box A responds to user A's voice command and plays song A. Optionally, since sound box A is farther from user B (e.g., beyond a threshold), sound box A may not respond to user B's voice command. Optionally, since sound box A knows the positions of sound box B and user B, sound box A can reduce its playback volume to avoid interfering with the audio played by sound box B in the adjacent room.
Similarly, if sound box B is closest to user B, sound box B plays song B in response to user B's voice command. In addition, the sound box B can also reduce the playing volume so as not to interfere the audio played by the sound box A in the adjacent room. In this example, the plurality of positioning devices may determine the accurate positions of the plurality of users by co-positioning, and the positioning device (may simply be referred to as a responding device) for responding to each user may be determined according to the accurate positions of the plurality of users. Each response device executes the control instruction of the intelligent home device, and the control instructions are not mutually interfered.
In other examples, the distances of the plurality of positioning devices from the user may be similar. In this case, the responding device may be determined according to one or more policies, such as the priority of each positioning device, whether its screen is on, and the relative direction or angle information between the positioning device and the user. For example, in a home theater scene where each sound box is at a similar distance from the user, the sound box that should respond can be determined according to a preset policy. As another example, in a home theater scene, the sound box that should respond can be determined according to the relative direction information between the sound boxes and the user, to improve the listening effect.
Scene four: positioning the pickup object through cooperation of multiple devices, and adjusting the direction and/or coverage of the pickup beam according to the result of the cooperative positioning
In general, objects and persons can be classified, according to whether they are pickup targets of the current device, into objects to be picked up and noise source objects. An object to be picked up is an object whose sound the device needs to pick up in the current scene. A noise source object is an object whose sound does not need to be picked up in the current scene but which can create acoustic interference in the pickup scene.
Taking the karaoke scene as an example, the pickup device needs to pick up a person's sound signal. The person is the object whose sound is to be picked up in this scene (the object to be picked up); the sounds of other objects may interfere with the person's sound, and those other objects are objects whose sound does not need to be picked up in the current scene (noise source objects).
Optionally, the object to be picked up may be, but is not limited to, a person, thing, or device whose sound is to be picked up. The noise source object may be, but is not limited to, a noise source thing, a noise source person, a noise source device, etc.
Taking the karaoke scene as an example, the user's sound signal is the useful sound, and the sound of other objects is noise. Illustratively, as shown in FIG. 15 (a), the initial pickup direction of the pickup device may be 360-degree omnidirectional, and the pickup device sound box A may locate the accurate position of the user through the multi-device cooperation method described above. As shown in FIG. 15 (b), sound box A can then precisely adjust the direction and/or coverage of the pickup beam according to the user's position, so that it directionally picks up the user's sound. In this way, the power consumption of sound box A can be reduced and the pickup efficiency improved.
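Switching from omnidirectional pickup to a beam aimed at the co-located user can be sketched as computing a bearing; the 30° beam width and flat (x, y) coordinates are assumed parameters, not values from the patent.

```python
import math

def steer_pickup_beam(pickup_position, user_position, width_deg=30.0):
    """Return (centre_angle_deg, width_deg) of a pickup beam centred on the
    co-located user, replacing 360-degree omnidirectional pickup."""
    dx = user_position[0] - pickup_position[0]
    dy = user_position[1] - pickup_position[1]
    centre = math.degrees(math.atan2(dy, dx)) % 360.0
    return centre, width_deg
```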
Scene five: positioning a noise source through multi-equipment cooperation, and performing noise reduction treatment according to the cooperative positioning result
Taking the karaoke scene as an example, the user's sound signal is the useful sound, and the sound of other objects is noise. Illustratively, as in FIG. 16 (a), pickup device sound box A may locate the position of a noise source through the multi-device cooperation method described above; for example, sound box A may cooperate with other nearby sound boxes to determine the position of a nearby noise source. Assuming that, through co-location, sound box A determines that noise source sound box B is located within the area covered by sound box A's pickup beam, sound box A can notify sound box B to play directionally. Sound box B then plays its audio directionally according to sound box A's instruction, so that the played audio is not picked up by sound box A. In this way, the influence of sound box B's playback audio (noise for sound box A in the current scene) on sound box A's pickup of useful sound signals can be reduced.
In other embodiments, if the sound box a determines that the sound box B is located in the coverage area of the sound pickup beam of the sound box a through the above co-positioning method, the sound box a may adjust the direction and/or coverage area of the sound pickup beam, so that the sound box B is no longer located in the coverage area of the sound pickup beam. Illustratively, as in (a) of fig. 16, sound box a determines that noise source sound box B is located within the coverage area of its pickup beam. As in fig. 16 (B), sound box a may adjust the direction and/or coverage of the sound pickup beam such that sound box B is no longer within the coverage of the sound pickup beam.
In other embodiments, if the sound box a determines that the noise source sound box B is located in the coverage area of the sound pick-up beam of the sound box a, the sound box a may adjust the direction and/or coverage area of the sound pick-up beam so that the playing audio of the sound box B is not picked up by the sound box a. Under such circumstances, the sound box B may be located in the coverage area of the sound pick-up beam adjusted by the sound box a, but because the sound box B may directionally play audio or other factors, the sound box a cannot pick up the audio (noise) played by the sound box B, so that the receiving effect of the sound box a on the useful sound signal will not be affected.
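Deciding whether noise source sound box B lies within sound box A's pickup beam is an angular sector test. A sketch under the assumption that coverage is judged by bearing alone (range ignored):

```python
import math

def in_pickup_beam(pickup_position, target_position, centre_deg, width_deg):
    """True if the target's bearing from the pickup device falls inside the
    beam sector [centre - width/2, centre + width/2]."""
    dx = target_position[0] - pickup_position[0]
    dy = target_position[1] - pickup_position[1]
    bearing = math.degrees(math.atan2(dy, dx)) % 360.0
    diff = (bearing - centre_deg + 180.0) % 360.0 - 180.0  # wrap to (-180, 180]
    return abs(diff) <= width_deg / 2.0
```

If the test is true for the noise source, the beam centre and/or width can be changed until it is false, as in FIG. 16 (b).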
Alternatively, as in FIG. 17 (a), after sound box A detects a noise source (such as sound box B), it may send a notification message to sound box B, notifying sound box B to send its playback audio information to sound box A. After sound box B receives the notification message, as shown in FIG. 17 (b), it sends the audio being played (or to be played) to sound box A, and sound box A performs noise-reduction processing on the audio played by sound box B. In this way, the noise source is located through multi-device cooperation and noise reduction is performed according to its position, which can reduce the interference of the audio played by sound box B (noise for sound box A in the current scene) with sound box A's pickup of useful sound signals.
Optionally, the noise-reduction processing includes, but is not limited to, acoustic echo cancellation (AEC) processing. FIG. 18 shows the interaction between sound boxes A and B and the flow in which sound box A performs noise-reduction processing on the audio of noise source sound box B.
Optionally, clock synchronization may be performed between the pickup device and the noise source device to align the phase of the noise audio, so that the useful audio is not mistakenly removed by AEC due to a mismatch between the useful audio and the noise.
In the above process, the noise source device and the pickup device cooperatively realize cross-device AEC processing. For example, sound box A can treat the audio played by sound box B as external noise and perform AEC processing on it, reducing the influence of sound box B's audio on sound box A's pickup effect. In this way, in scenarios such as voice wake-up, the noise-reduction processing can improve the speech recognition effect when the useful signal and the noise come from the same direction (where interference is large).
The above description mainly takes the case where the pickup device performs the AEC processing. In other embodiments, the AEC processing may be performed by the sound-playing device (or playback device); for the specific flow, refer to the related art, which is not repeated here. Or the pickup device and the playback device may perform AEC processing cooperatively, to improve the noise-reduction effect.
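A toy normalized-LMS echo canceller illustrating the idea behind cross-device AEC: the pickup device subtracts an adaptive estimate of the reference audio (received from the noise source device) from its microphone signal. This is a simplified sketch, not the AEC used in the patent; a real AEC also handles delay estimation, double-talk, and the clock synchronization discussed above.

```python
def nlms_echo_cancel(mic, ref, taps=4, mu=0.5):
    """Normalized LMS: adaptively estimate how `ref` (the noise source's
    audio) leaks into `mic`, and return the residual (cleaned) signal."""
    w = [0.0] * taps
    out = []
    for n, d in enumerate(mic):
        x = [ref[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))      # echo estimate
        e = d - y                                      # residual sample
        norm = sum(xi * xi for xi in x) + 1e-8         # regularized input power
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]
        out.append(e)
    return out
```

Fed a microphone signal that is a pure copy of the reference, the residual decays toward zero as the filter converges.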
Scene four: after the multi-equipment is co-located, setting audio playing parameters according to the co-location result
Typically, different users have different parameter preferences for the same audio. For example, some users prefer to listen with a stereo sound effect, while others prefer a non-stereo sound effect. To improve user experience in this scene, different audio playing effects can be provided for different users in different areas, to meet the listening needs of different users.
Taking a whole-house scene as an example, optionally, the feature information of the family members and the audio preference information of each family member may be configured in the central device of the whole house. The feature information of a family member is used to identify that member, and includes, but is not limited to, personal information, voice information, and physical information. Optionally, the audio playing parameters include, but are not limited to: sound effect, volume, language, and timbre.
The audio preference information of the family members is used for representing the preference condition of the family members on the audio playing parameters. For example, family member 1 prefers stereo sound and family member B prefers non-stereo sound. Such as children, preference for children's voices or parent's voices, etc. For another example, elderly persons or children prefer non-stereo sound effects to meet the hearing protection needs. For another example, family member 1 prefers audio in English and family member 2 prefers audio in Chinese.
Illustratively, table 1 shows audio preference information for each family member:
TABLE 1
Family member    Audio preference information
Grandpa          Non-stereo sound effect, low volume
Grandma          Non-stereo sound effect, low volume
Dad              Stereo sound effect
Mom              Stereo sound effect
Child            Non-stereo sound effect, low volume
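For illustration, the configuration in Table 1 can be modeled as a simple lookup on the central device. The member keys, field names, and the default used when the table leaves a value unspecified (e.g., no volume for dad and mom) are assumptions of this sketch.

```python
# Hypothetical encoding of Table 1 on the central device; field names are illustrative.
AUDIO_PREFS = {
    "grandpa": {"effect": "non-stereo", "volume": "low"},
    "grandma": {"effect": "non-stereo", "volume": "low"},
    "dad":     {"effect": "stereo",     "volume": "normal"},  # volume assumed
    "mom":     {"effect": "stereo",     "volume": "normal"},  # volume assumed
    "child":   {"effect": "non-stereo", "volume": "low"},
}

def playback_params(member: str) -> dict:
    """Resolve a member's audio playing parameters, falling back to a safe default."""
    return AUDIO_PREFS.get(member, {"effect": "non-stereo", "volume": "normal"})
```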
In some embodiments, the playing device may locate a family member by cooperating with other devices, and may also obtain that member's feature information. The playing device can then determine a viewing position suitable for the member according to the member's position and feature information, and prompt the member with the viewing-position information.
Illustratively, as shown in fig. 19 (a), the television detects that the family member dad wants to watch the video it is playing, and locates dad through the multi-device cooperation method described above, for example by co-locating with sound box A. Suppose the co-location shows that dad is seated at position 1 of the sofa. According to dad's audio preference information, the television determines that dad prefers listening with a stereo sound effect, while the current position 1 is only suitable for a normal sound effect. The television may then prompt dad to sit at a position where the stereo sound effect can be heard (e.g., position 2 of the sofa).
Optionally, as shown in fig. 19 (a), the television may prompt dad by voice to change seats, for example by playing the voice prompt "the current seat corresponds to the normal sound effect; please move to position 2 on the right to hear the stereo sound effect". As shown in fig. 19 (b), after determining through co-location with other devices (such as sound box A) that dad is seated at position 2 of the sofa, the television may prompt "you are seated in the correct position; please enjoy the stereo sound effect". In this way, the television can locate a family member by cooperating with other devices and guide the member to the corresponding seat according to the member's audio preference information, thereby improving the member's listening effect.
Optionally, the television may also prompt dad to move through an interface. As in fig. 20 (a), the television displays the interface 101 (an example of the second prompt information), which schematically shows the positions of the television, the sofa, and the family member dad. Optionally, before dad moves to position 2, the television may display a prompt 102 "please continue to move right" on the interface 101, guiding dad exactly to position 2. After dad moves to position 2, as in fig. 20 (b), the television may display a prompt 103 "you are seated in the correct position; please enjoy the stereo sound effect" on the interface 101.
Optionally, the television may support multiple sets of audio playing parameters and multi-channel playback. In some embodiments, the television may control audio parameters such as the direction and volume of the audio played by its speakers according to the seating positions of the family members, so as to form different listening areas with different listening (audio playing) effects. For example, one listening area may provide a stereo sound effect and another a normal (non-stereo) sound effect; the volume of some listening areas can be increased while that of others is reduced, to meet the hearing-protection needs of the elderly or children; or one listening area plays Chinese audio while another plays English audio. Family members can then listen at their respective seats in the corresponding listening areas for a better listening effect. Optionally, a family member may also wear headphones, so as to listen immersively without being disturbed by the audio of other listening areas.
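The zone formation and seat guidance described above might look like the following sketch. The zone layout (position 1 of the sofa rendering a non-stereo effect, position 2 a stereo effect) follows figs. 19-21; the function name and prompt strings are assumptions.

```python
# Assumed mapping from sofa seats to the sound effect the TV renders there.
ZONE_EFFECT = {"position 1": "non-stereo", "position 2": "stereo"}

def seating_prompt(preferred_effect: str, current_zone: str) -> str:
    """Guide a family member toward the listening zone matching their preference."""
    for zone, effect in ZONE_EFFECT.items():
        if effect == preferred_effect:
            if zone == current_zone:
                return "already seated in the correct position"
            return f"please move to {zone} to hear {effect} audio"
    return "no matching listening area"
```

For example, a stereo-preferring member detected at position 1 would be prompted to move to position 2, matching the behavior of figs. 19 and 20.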
The above description mainly takes the television prompting a family member, through an interface or voice, to adjust the listening seat as an example. In other embodiments, other electronic devices may issue the prompt. For example, a corresponding application can be installed on a family member's portable device, such as a mobile phone or tablet, and the member can adjust the listening position directly through the application so as to sit at the most suitable listening position, meeting the user's personalized listening needs.
The above description mainly takes one family member watching as an example; in some cases, multiple family members may watch at the same time. The television may identify the identity of each member through co-location or in other manners; the embodiment of the present application does not limit the identification method. The television can then calculate a listening area suitable for each member according to that member's audio preference information and prompt each member to adjust the seating position, so as to improve the listening effect.
For example, as shown in fig. 21, when a child and dad watch a video together, the television may prompt the child to sit at position 1 of the sofa (where normal, low-volume audio can be heard) and prompt dad to sit at position 2 (where stereo, higher-volume audio can be heard). In this way, different family members each obtain their preferred listening effect.
In some embodiments, if the television detects that one of several family members watching the video has left the current seat and has not returned for a period of time, it may automatically adjust the listening areas, for example by no longer applying the audio playing parameters of the listening area in which that member was previously seated.
Alternatively, in some embodiments, if the television detects that a new member wants to join in watching the video, it may calculate a listening area suitable for the new member according to the new member's identity and audio preference information, and prompt the new member to sit at the corresponding position of that listening area, so as to improve the new member's listening effect.
Through the above method, the user can be located through multi-device cooperation, and different listening effects can be provided for the seating positions of different users, meeting users' personalized listening needs.
The above mainly takes a home scenario as an example to describe the method of adaptively adjusting the audio playing effect; it can be understood that the method is also applicable to other scenarios.
For example, in an in-vehicle scenario, a vehicle may include multiple intelligent headrest seats, each with speakers mounted on both sides of the headrest for playing audio. Multiple components within the vehicle may cooperate to detect the seating position A of user A and the seating position B of user B. Suppose user A's audio preference information indicates a preference for a stereo sound effect at high volume, while user B's indicates a preference for a non-stereo sound effect at low volume. When the vehicle plays audio A (such as the audio of the vehicle-mounted radio), the speakers on both sides of the seat at position A can be controlled to play audio A with a stereo sound effect at high volume, while the speakers on both sides of the seat at position B play audio A with a non-stereo sound effect at low volume, so as to meet the listening requirements of different users. In addition, because the speakers on both sides of a seat are close to that user and play at low leakage, the audio is usually heard only by the user in that seat and does not disturb users in other seats.
For another example, in a cinema or other public playing scenario, multiple positioning devices can cooperatively locate a user and adaptively adjust the audio playing effect according to the co-location result.
Scene five: after multi-device co-location, same-room detection is performed according to the co-location result
Same-room detection refers to detecting whether multiple devices are located in the same space, where the same space can be understood as the same room. In some scenarios, a device may sit at the boundary of two spaces, such as the junction of a kitchen and a living room; in this case, accurate positioning of the device is required.
In some embodiments, multiple positioning devices use their respective ultrasonic positioning capabilities to cooperatively locate a device at unknown coordinates and correct the co-location result according to the foregoing embodiments, and then determine which space the located device belongs to according to the co-location result and the spaces in which the positioning devices themselves are located. Optionally, the space of the positioning device closest to the located device may be taken as the space of the located device.
Illustratively, as in fig. 22, the positioning devices 1-3 cooperate to locate the positioned device 4. Assuming that, among the positioning devices 1-3, the positioning device 1 is closest to the positioned device 4, the space (living room) in which the positioning device 1 is located may be regarded as the space in which the positioned device 4 is located. Similarly, the positioning devices 2 and 3 cooperate to locate the positioned device 5. Assuming that, of the positioning devices 2 and 3, the positioning device 3 is closest to the positioned device 5, the space (bedroom) in which the positioning device 3 is located may be regarded as the space in which the positioned device 5 is located. The positioned devices 4 and 5 are therefore not in the same space.
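The nearest-locator rule of fig. 22 can be sketched as follows; the coordinates and room labels below are invented for illustration.

```python
import math

def assign_space(target_pos, locators):
    """Room of the positioning device nearest to the located device."""
    nearest = min(locators, key=lambda loc: math.dist(loc["pos"], target_pos))
    return nearest["room"]

# Hypothetical positioning-device layout for a whole house.
LOCATORS = [
    {"name": "positioning device 1", "pos": (1.0, 1.0, 1.0), "room": "living room"},
    {"name": "positioning device 2", "pos": (6.0, 1.0, 1.0), "room": "living room"},
    {"name": "positioning device 3", "pos": (8.0, 4.0, 1.0), "room": "bedroom"},
]
```

Two located devices are judged to be in the same room exactly when `assign_space` returns the same room for both.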
In other embodiments, after the positional relationship between the positioning devices and the positioned device is determined, map information may be generated, and the space in which the positioned device is located may be determined according to the map information. As shown in fig. 23, after the positioning device 3 locates the positioned device 5 by cooperating with the positioning device 2, a map may be generated presenting, for example, the position coordinates (x3, y3, z3) of the positioning device 3 in the whole house, the relative positional relationship between the positioning device 3 and the positioned device 5, and the position coordinates (x5, y5, z5) of the positioned device 5. From this map, the positioning device 3 knows that the positioned device 5 at (x5, y5, z5) is located in the bedroom.
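The map-based lookup can be sketched with axis-aligned room extents in map coordinates; the floor-plan boxes below are assumptions for illustration.

```python
# Assumed floor plan: room -> ((xmin, ymin), (xmax, ymax)) in map coordinates.
ROOMS = {
    "living room": ((0.0, 0.0), (6.0, 4.0)),
    "bedroom":     ((6.0, 0.0), (10.0, 4.0)),
}

def room_of(x: float, y: float):
    """Determine from the map which room a co-location coordinate falls in."""
    for room, ((x0, y0), (x1, y1)) in ROOMS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return room
    return None  # outside every mapped room
```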
Optionally, the map shown in fig. 23 may also present the positions of other positioned devices (such as positioned device 4) and other positioning devices (such as positioning devices 1 and 2).
Through the above same-room detection method, multiple positioned devices can be accurately located by multiple positioning devices in cooperation, and whether the positioned devices are in the same space can then be judged accurately according to the co-location result, improving the accuracy of same-room detection.
Optionally, the positioning devices may send the co-location result to a central control screen device. According to the co-location result, the central control screen device can display the devices in the same space on the control interface corresponding to that space. For example, as in fig. 12 (a), the central control screen may present the identifications of the devices located in the living room on interface 401, and as in fig. 12 (b), the identifications of the devices located in the dining room on interface 402. In this way, the central control screen accurately displays the device identifications in the corresponding space, which helps the user control the devices in that space and improves device-control efficiency.
Scene six: after multi-device co-location, call continuation is realized according to the co-location result
In this scenario, multiple positioning devices can cooperatively locate the user, and based on the user's position and the space in which the user is located, the call is automatically transferred to a device in that space for pickup and playing.
Illustratively, as in fig. 24, the mobile phone 2401 is in the bedroom while the user has moved to the living room (one example of the fifth positioning result). When the mobile phone 2401 (one example of the fourth device) receives a video call request (one example of the first service) from another mobile phone, the user may not hear the incoming-call alert, for example because the phone 2401 in the bedroom is muted, or the user hears it but finds it inconvenient to go to the bedroom to fetch the phone. In this case, when the mobile phone 2401 senses that the user is not nearby, it may send a continuation message to peripheral devices so that the call can be transferred to a device closer to the user. Optionally, the mobile phone 2401 may send the continuation message to multiple nearby devices via multicast.
In some examples, the television 2402 in the living room directly or indirectly receives the continuation message and, by cooperating with other devices, determines that the user is relatively close to it, so the television 2402 can automatically take over the call of the mobile phone 2401. Optionally, the television 2402 may send a response message directly or indirectly to the mobile phone 2401; after receiving the response message, the mobile phone 2401 may send the call information directly or indirectly to the television 2402. The call information includes the audio and video of the other party during the call. In this way, the call is automatically transferred from the mobile phone in the bedroom to the television in the living room, allowing the user to conveniently continue the call on a closer device.
According to the above method, the user's position is accurately located by multiple positioning devices, and a device capable of handling the call can be invoked near the user's exact position, realizing "sound follows the user"; the user can answer nearby, and human-computer interaction efficiency is improved.
Compared with the related-art scheme in which a sensor serves as the positioning device to sense and record position information and then separately invokes a playing device and a pickup device, which introduces extra signaling and delay, in the technical scheme of the embodiment of the present application the positioning device can respond directly after positioning, for example by itself acting as the playing device and the pickup device. The playing and pickup devices do not need to be invoked separately, which reduces the continuation delay such separate invocation would cause.
In some embodiments, after the positioning devices cooperatively locate the user, if several positioning devices are at similar distances from the user, or several positioning devices all meet the call-continuation condition, the positioning device the user habitually uses may be selected for call continuation. For example, assuming the tablet and the television in the living room are at similar distances from the user, the television on which the user frequently makes video calls can be selected to continue the video call, so as to match the user's usage habits and improve the continuation experience.
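The device-selection step described above (nearest device to the user, with the habitually used device preferred when distances are similar) could be sketched like this; the tolerance value and data shapes are assumptions.

```python
import math

def pick_continuation_device(user_pos, candidates, habitual=None, tol=0.5):
    """Pick the call-continuation device: nearest to the user, but prefer the
    habitually used device when its distance is within `tol` of the nearest."""
    dist = {c["name"]: math.dist(c["pos"], user_pos) for c in candidates}
    nearest = min(dist, key=dist.get)
    if habitual in dist and dist[habitual] - dist[nearest] <= tol:
        return habitual
    return nearest

# Hypothetical living-room candidates at similar distances from the user.
devices = [{"name": "television", "pos": (2.0, 1.0)},
           {"name": "tablet", "pos": (2.2, 1.3)}]
```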
The above description takes a video call as an example; it should be understood that the call continued by the above method may also be a voice call or the like. The video call or voice call may be an operator call, i.e., a mobile call carried over an operator network, or a non-operator (network) call, such as a voice over IP (VoIP) call.
The above description also takes call continuation as an example. It should be understood that, as the user moves through the spaces of the whole house, the content continued between devices may also be a media stream, such as audio or video, so as to meet the user's continuation needs for various kinds of information. The embodiment of the present application does not limit the content that can be continued.
Optionally, the devices can realize the transfer of continued content through local streaming or cloud streaming. Taking audio-stream continuation as an example, when device 1 wants to transfer an audio stream to device 2, device 1 may send the audio stream to device 2 directly. As another example, for call continuation, the mobile phone 1 can transfer the audio and video streams of the call to the television directly, or the call server may forward them to the television.
Scene seven: after multi-device co-location, judging whether to transfer content according to the co-location result
Multiple positioning devices can cooperatively locate the user and calculate the user's circulation intention parameter so as to determine the user's continuation intention. The circulation intention parameter includes, but is not limited to, any one or more of the following: the user's acceleration, speed, and face orientation. For example, the positioning devices can judge whether the user is staying at a certain position according to the user's position, acceleration, and speed. When it is determined that the user is staying at a position, a positioning device can continue the content the user was previously watching/listening to; when it is determined that the user is only passing through a position and does not intend to stay there, the positioning device does not continue the content, reducing the power consumption that continuation would cause.
Illustratively, as shown in fig. 25, the user small Y is watching an online class in the children's room and wants to go to the kitchen to fetch something. The television in the children's room may send a continuation message, which multiple positioning devices in the living room can receive directly or indirectly. When small Y moves into the living room, those positioning devices can cooperatively locate small Y and calculate small Y's moving speed and acceleration. If the calculation determines that small Y is only passing through the living room, the living-room television does not continue the online-class content small Y was watching, reducing the power consumption that mistaken continuation would cause.
Similarly, multiple positioning devices in the kitchen may receive, directly or indirectly, the continuation message sent by the children's-room television. When small Y moves into the kitchen, the kitchen's positioning devices can cooperatively locate small Y and calculate the moving speed and acceleration. If the calculation determines that small Y is only passing through the kitchen, a screen device in the kitchen (such as the refrigerator) does not continue the online-class content, again reducing the power consumption that mistaken continuation would cause. Thus, throughout small Y's movement from the children's room to the kitchen, small Y never intends to stay anywhere and therefore has no continuation need, so neither the living-room nor the kitchen devices continue the online class, avoiding the high power consumption of frequent, short-lived continuation.
As a further example, as shown in fig. 26, small Y stays in the living room after coming out of the kitchen. After the positioning devices in the living room detect that small Y is staying there, they determine that small Y needs to continue the previously watched video.
In some examples, if the positioning devices include a screen device, that screen device (e.g., the television) may continue the online-class content small Y was previously watching. In this way, the playing device automatically continues the content the user was watching/listening to, simplifying the user's operations and improving the human-computer interaction experience.
Alternatively, in some examples, if the positioning devices do not include a screen device, they may instruct another screen device to continue the online-class content.
In another example shown in fig. 26, after the positioning devices in the living room detect both that small Y is staying in the living room and that small Y's face is oriented toward a screen device (such as the television), they determine that small Y needs to continue watching the video and can instruct the television to continue the previously watched online-class content. Again, the playing device automatically continues the content, simplifying the user's operations and improving the human-computer interaction experience.
In the above method, the positioning devices can cooperatively and accurately judge the user's continuation intention and intelligently transfer and continue content such as audio/video accordingly; continuation is more accurate, the probability of mistaken continuation is lower, and the continuation experience is improved.
In some embodiments, the user's acceleration, speed, etc. may be calculated by a device that does not need to light up a screen (such as a switch) or an always-powered device (such as a sound box), which then notifies the television or another device whether to wake up and continue the content the user was previously watching/listening to. This reduces the power consumption the television or other device would incur by frequently powering on to detect, judging that no continuation is needed, and then shutting down or entering standby.
In some embodiments, the positioning devices may further determine whether to perform audio/video continuation according to the user's state and usage habits. For example, user A (e.g., an elderly person) is accustomed to continuing videos on a large-screen device. After user A watches a video in the bedroom, and the positioning devices detect that user A has moved to the living room and stayed there, they determine that user A needs to continue watching the video.
In some embodiments, the positioning devices may determine the device used for continuation (which may simply be called the continuation device) according to a certain policy. In some examples, if the positioning devices include a large-screen device, that device may directly act as the continuation device and play the video, so that user A can watch on the kind of large-screen device he or she is used to. As another example, if the positioning devices include several large-screen devices available for continuation, they may determine the final continuation device from among them according to the user's usage habits or other policies.
As a further example, if the positioning devices do not include a large-screen device, for instance none of them has a screen or their configured screens are small, the positioning devices may instruct another large-screen device to continue playing the video.
Optionally, the circulation intention parameter is determined according to multiple positioning results, for example the positioning results obtained while the user moves from the third positioning result to the fifth positioning result. Specifically, during this movement the user's circulation intention parameter may be determined from the third and fifth positioning results alone, or from the third positioning result, the fifth positioning result, and other intermediate positioning results.
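One plausible way to derive a circulation intention parameter from successive co-location results is to compute speed from timestamped positions and treat a near-zero recent speed as an intention to stay. The threshold is an assumption of this sketch; acceleration and face orientation, which the embodiment also mentions, are omitted for brevity.

```python
import math

def circulation_intent(track, stay_speed=0.3):
    """track: [(x, y, t), ...] positioning results in time order.
    Returns 'stay' if the most recent speed is below `stay_speed` (m/s),
    otherwise 'pass-through'."""
    speeds = [math.dist((x0, y0), (x1, y1)) / (t1 - t0)
              for (x0, y0, t0), (x1, y1, t1) in zip(track, track[1:])]
    return "stay" if speeds and speeds[-1] < stay_speed else "pass-through"
```

A user who walks through the living room at roughly 1 m/s is classified as passing through and the content is not continued, whereas a user whose recent displacement is near zero triggers continuation.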
Optionally, the positioning device may also calculate the user's circulation intention parameter in combination with other sensors, cameras, or the like.
Scene eight: after multi-device co-location, adjusting the continued content according to the co-location result
In this scenario, the audio played by the playing device can be processed based on the co-location result for the user, so as to guarantee the user's listening effect.
The positioning devices can cooperatively locate the user and continue the content the user is listening to/watching according to the user's position. Illustratively, as in fig. 27, the user starts listening to song 1 played by sound box 2501. Then, when co-location by multiple positioning devices (such as the television 2502 and the sound box 2503 in the master bedroom) detects that the user has moved to the master bedroom, the sound box 2503 can take over playing the song stream from sound box 2501, and the television 2502 can play the music video (MV) corresponding to the song. In this way, the user can enjoy the song in the master bedroom.
Optionally, the sound box 2503 may adjust the sound parameters of the playing audio so that it sounds consistent with, or close to, how it sounded when played by sound box 2501. Optionally, the playing parameters of the audio include, but are not limited to, sound pressure level and volume.
Optionally, a sound box can adjust the playing parameters of the audio according to its distance from the user. For example, when the user is in the living room, the distance between sound box 2501 and the user is d1, sound box 2501 plays at 50 dB (decibels), and the user hears 45 dB. After the user moves to the master bedroom, the distance between sound box 2503 and the user is d2 (d2 is greater than d1); to keep the user's hearing level at or close to 45 dB, sound box 2503 may increase its playing volume, for example to 55 dB.
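The 50 dB to 55 dB adjustment in this example is consistent with the free-field distance law, under which the level at the listener drops by 20·log10(d2/d1) dB as distance grows. A minimal sketch (the free-field law is a simplifying assumption; real rooms add reverberation, which the next paragraphs address):

```python
import math

def compensated_volume_db(base_db: float, d_ref: float, d_new: float) -> float:
    """Playback level needed at distance d_new to reproduce the listener-side
    level that base_db produced at distance d_ref (free-field 20*log10 law)."""
    return base_db + 20.0 * math.log10(d_new / d_ref)

# Moving from ~1.8 m (living room) to ~3.2 m (master bedroom) needs about +5 dB:
compensated_volume_db(50.0, 1.8, 3.2)   # ≈ 55 dB
```

The example distances 1.8 m and 3.2 m are assumptions chosen so the result matches the 50 dB → 55 dB figures in the text.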
Optionally, a sound box can adjust the playing parameters of the audio according to the user's perceived volume. For example, at the same playing volume and the same distance from the user, a sound box may sound louder to the user than a mobile phone. Based on this, devices of different types can adjust the volume after the audio is continued so that the user's perceived volume matches the perceived volume before continuation.
Optionally, a sound box can adjust the playing parameters of the audio according to the room's reverberation coefficient and the number of devices in the room. For example, when there are multiple playing devices in the room, the volume and other parameters of each can be adjusted so that their cooperative playing effect matches the user's listening habits.
In this way, the positioning device can cooperate with other devices to accurately locate the user and adjust the parameters of the played audio according to the user's exact position, achieving balanced hearing for the user and improving the listening effect and experience.
In some embodiments, the positioning device may also obtain the user's personalized audio parameter preferences and adjust the playing parameters according to the co-location result and those preferences.
Optionally, the positioning device may identify a specific user, for example identifying the user and the user's pose and form (e.g., height, body shape) based on ultrasonic technology; alternatively, the positioning device may identify the user's identity using other methods such as a camera.
For example, after user A is located in the living room and user A's identity is identified, the positioning devices can adjust the playing parameters of the living-room sound box according to the co-location result so as to match user A's listening habits. Similarly, after user B is located in the dining room and user B's identity is identified, the positioning devices can adjust the playing parameters of the dining-room sound box according to the co-location result so as to match user B's listening habits.
The above description mainly takes a home scenario as an example of the scenarios to which the embodiment of the present application applies; the technical solution of the embodiment is also applicable to other scenarios positioned based on wireless signals, for example positioning scenarios in shops and factories. The wireless-signal positioning technique may also be, but is not limited to, positioning based on Bluetooth, on Wi-Fi, or on the wireless signals of a mobile network such as a cellular network.
The above scenarios are mainly described taking co-location as an example; in some of them, positioning may also be performed in other manners, including, but not limited to, single-device positioning. In some scenarios, co-location is used for objects that need to be positioned precisely, while single-device positioning can be used for sub-scenarios or objects that do not require precise positioning.
In addition, the above description mainly takes as an example the case where each positioning device in the positioning system performs positioning using ultrasonic technology. In some embodiments, the positioning devices in the positioning system may all use another, same positioning technology. Alternatively, the positioning devices in the positioning system may be based on different positioning technologies, and devices using different positioning technologies may also perform co-positioning. Alternatively, a positioning device may cooperate with another device or component, such as a sensor, to improve positioning accuracy. For example, some devices are equipped with microphone arrays and can perform positioning using ultrasonic technology, while some devices are configured with Bluetooth modules and can perform positioning using Bluetooth technology. Device 1 may perform preliminary positioning on object 1 to be positioned using ultrasonic technology to obtain positioning result 1, and receive positioning result 2 (e.g., a positioning result based on Bluetooth technology) of object 1 from device 2 or another device; device 1 may then determine a final positioning result 3 of object 1 according to positioning result 1 and positioning result 2.
Similarly, device 2 may perform preliminary positioning on object 2 to be positioned using Bluetooth technology to obtain positioning result 4, and receive positioning result 5 (e.g., a positioning result based on ultrasonic technology) of object 2 from device 1; device 2 may then determine a final positioning result 6 of object 2 according to positioning result 4 and positioning result 5.
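One simple way to combine two positioning results obtained by different technologies, such as results 1 and 2 above, is a confidence-weighted average. The patent does not specify the fusion rule, so the weighting scheme and the numeric weights below are illustrative assumptions only.

```python
def fuse_positions(pos1, w1, pos2, w2):
    """Confidence-weighted average of two (x, y) positioning results.

    pos1/pos2: (x, y) coordinates from two positioning technologies.
    w1/w2: assumed confidence weights for each result.
    """
    total = w1 + w2
    x = (pos1[0] * w1 + pos2[0] * w2) / total
    y = (pos1[1] * w1 + pos2[1] * w2) / total
    return (x, y)


# Device 1's ultrasonic result (assumed more reliable here, weight 0.8)
# fused with a Bluetooth result received from device 2 (weight 0.2).
result3 = fuse_positions((2.0, 3.0), 0.8, (2.5, 3.5), 0.2)  # -> (2.1, 3.1)
```

In practice the weights could be derived from each technology's estimated accuracy; the final result 3 then leans toward the more reliable measurement.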
The one or more interfaces described above are exemplary, and other interface designs are also possible. The present application does not limit the specific design of the interfaces or the manner of switching between interfaces.
The above embodiments may be combined, and the combined solutions may be implemented. Optionally, some operations in the flows of the method embodiments are combined, and/or the order of some operations is changed. The order of execution of the steps in each flow is merely exemplary and is not limiting; other execution orders are possible between the steps, and the described order is not intended to be the only order in which the operations may be performed. Those of ordinary skill in the art will recognize a variety of ways to reorder the operations described herein. In addition, it should be noted that process details described in one embodiment also apply to other embodiments in a similar manner, and different embodiments may be used in combination.
Moreover, some steps in the method embodiments may be equivalently replaced with other possible steps. Alternatively, some steps in the method embodiments may be optional and may be omitted in some usage scenarios. Alternatively, other possible steps may be added to the method embodiments. Alternatively, the execution subject of some steps in the method embodiments (such as a functional module) may be replaced with another execution subject.
Moreover, the method embodiments described above may be implemented alone or in combination.
Further embodiments of the present application provide an apparatus, which may be the first device, the second electronic device, or the like described above. The apparatus may include a memory and one or more processors. Optionally, the apparatus may further include a display screen. The display screen, the memory, and the processor are coupled. The memory is configured to store computer program code, and the computer program code includes computer instructions. When the processor executes the computer instructions, the apparatus may perform the functions or steps performed by the corresponding devices in the method embodiments described above. For the structure of the apparatus, reference may be made to the electronic device shown in fig. 28.
The core structure of the device may be represented as the structure shown in fig. 28, where the device includes: a processing module 1301, an input module 1302, a storage module 1303, and a display module 1304.
The processing module 1301 may include at least one of a central processing unit (CPU), an application processor (AP), or a communication processor (CP). The processing module 1301 may perform operations or data processing related to control and/or communication of at least one other element of the electronic device. Specifically, the processing module 1301 may be configured to control the content displayed on the home screen according to a certain trigger condition. The processing module 1301 is further configured to process input instructions or data and determine a display style according to the processed data.
The input module 1302 is configured to obtain instructions or data input by a user and transmit the obtained instructions or data to other modules of the electronic device. Specifically, the input mode of the input module 1302 may include touch, gesture, proximity, or voice input. For example, the input module may be the screen of the electronic device, which acquires a user's input operation, generates an input signal according to the acquired input operation, and transmits the input signal to the processing module 1301.
The storage module 1303 may include a volatile memory and/or a nonvolatile memory. The storage module is configured to store instructions or data related to at least one other module of the electronic device; in particular, the storage module may record the positioning result of the object to be positioned.
The display module 1304 may include, for example, a liquid crystal display (LCD), a light-emitting diode (LED) display, an organic light-emitting diode (OLED) display, a microelectromechanical systems (MEMS) display, or an electronic paper display, and is configured to display content viewable by the user (e.g., text, images, videos, icons, and symbols).
Optionally, the apparatus further includes a communication module 1305 configured to support communication between the apparatus and other devices (via a communication network). For example, the communication module may be connected to a network via wireless or wired communication to communicate with other devices or a network server. The wireless communication may employ at least one of cellular communication protocols such as Long Term Evolution (LTE), LTE-Advanced (LTE-A), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Universal Mobile Telecommunications System (UMTS), Wireless Broadband (WiBro), or Global System for Mobile Communications (GSM). The wireless communication may also include short-range communication, which may include at least one of wireless fidelity (Wi-Fi), Bluetooth, near field communication (NFC), magnetic secure transmission (MST), or GNSS.
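The module composition described above (fig. 28) can be sketched as a minimal object whose input path flows through processing, storage, and display. The class and method names and the wiring between modules are assumptions for illustration; the patent only names the modules, not their interfaces.

```python
class Device:
    """Toy model of the fig. 28 structure: input -> processing -> storage/display."""

    def __init__(self):
        # Storage module 1303: may record, e.g., positioning results.
        self.storage = {}

    def handle_input(self, signal):
        # Input module 1302 passes the input signal to the processing module.
        result = self.process(signal)
        # Storage module 1303 records the processed result.
        self.storage["last_result"] = result
        # Display module 1304 presents user-viewable content.
        return self.display(result)

    def process(self, signal):
        # Processing module 1301: process the input and determine what to show.
        return f"processed:{signal}"

    def display(self, result):
        return f"shown:{result}"
```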
It should be noted that each functional module of the apparatus may perform one or more steps of the above-described method embodiments.
Embodiments of the present application also provide a chip system, as shown in fig. 29, including at least one processor 1401 and at least one interface circuit 1402. The processor 1401 and the interface circuit 1402 may be interconnected by wires. For example, the interface circuit 1402 may be used to receive signals from other devices (e.g., a memory of the electronic device). For another example, the interface circuit 1402 may be used to send signals to other devices (e.g., the processor 1401). Illustratively, the interface circuit 1402 may read instructions stored in the memory and send the instructions to the processor 1401. The instructions, when executed by the processor 1401, may cause the electronic device to perform the steps of the embodiments described above. Of course, the chip system may also include other discrete devices, which is not specifically limited in the embodiments of the present application.
Embodiments of the present application also provide a computer storage medium including computer instructions which, when run on an electronic device, cause the electronic device to perform the functions or steps performed by the mobile phone in the foregoing method embodiments.
Embodiments of the present application also provide a computer program product which, when run on a computer, causes the computer to perform the functions or steps performed by the mobile phone in the foregoing method embodiments.
It will be apparent to those skilled in the art from the foregoing description that, for convenience and brevity, only the division into the above functional modules is illustrated. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative. The division into modules or units is merely a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections via some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and a part displayed as a unit may be one physical unit or multiple physical units, which may be located in one place or distributed across multiple places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The foregoing storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing is merely a description of specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any change or substitution within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (21)

1. A multi-device collaboration method, applied to a first device, the method comprising:
receiving a plurality of signals associated with an object to be positioned, and determining a first positioning result and/or a second positioning result of the object to be positioned according to the plurality of signals; and
acquiring a third positioning result, wherein the third positioning result is determined according to the first positioning result and the second positioning result.
2. The method according to claim 1, wherein the plurality of signals comprise a reflected wave signal formed after a transmission signal of the first device is reflected by the object to be positioned and/or a reflected wave signal formed after a transmission signal of a second device is reflected by the object to be positioned.
3. The method according to claim 1 or 2, wherein the determining a first positioning result and/or a second positioning result of the object to be positioned according to the plurality of signals comprises:
determining, according to information carried in the plurality of signals, that the plurality of signals are reflected wave signals formed after a transmission signal of the first device is reflected by the object to be positioned and/or reflected wave signals formed after a transmission signal of a second device is reflected by the object to be positioned; and
determining the first positioning result and/or the second positioning result according to the reflected wave signal formed after the transmission signal of the first device is reflected by the object to be positioned and/or the reflected wave signal formed after the transmission signal of the second device is reflected by the object to be positioned.
4. The method according to claim 2 or 3, wherein a first signal of the plurality of signals is a reflected wave signal formed after the transmission signal of the first device is reflected by the object to be positioned, and the first signal carries at least one of the following information: coordinates of the first device, and an identification of the first device;
and/or a second signal of the plurality of signals is a reflected wave signal formed after the transmission signal of the second device is reflected by the object to be positioned, and the second signal carries at least one of the following information: coordinates of the second device, and an identification of the second device.
5. The method according to any one of claims 1-4, wherein in addition to the first positioning result and the second positioning result, the plurality of signals are further used to determine at least one fourth positioning result; and
the acquiring a third positioning result comprises: determining the third positioning result according to the first positioning result, the second positioning result, and the at least one fourth positioning result.
6. The method according to any one of claims 1-5, wherein after the third positioning result is acquired, the method further comprises:
sending a first message to a second device, the first message being used to instruct the second device to transmit a signal;
receiving a reflected wave signal formed after the transmission signal of the second device is reflected by the object to be positioned; and
when the reflected wave signal satisfies a first condition, determining that the third positioning result passes verification;
wherein the first condition includes any one or more of the following: the direction of the reflected wave signal is within a first direction range, and the angle of the reflected wave signal is within a first angle range.
7. The method according to any one of claims 1-6, wherein the determining a first positioning result and/or a second positioning result of the object to be positioned according to the plurality of signals comprises: determining the first positioning result according to the plurality of signals; and
the method further comprises: receiving the second positioning result from a second device.
8. The method according to any one of claims 1-7, wherein after the third positioning result is acquired, the method further comprises:
presenting first prompt information, wherein the first prompt information is used to prompt the third positioning result of the object to be positioned.
9. The method according to any one of claims 1-8, wherein the object to be positioned comprises a movable device; and after the third positioning result is acquired, the method further comprises:
sending the third positioning result to the movable device, so that the movable device adjusts a moving path.
10. The method according to any one of claims 1-8, wherein the object to be positioned comprises a movable device; and after the third positioning result is acquired, the method further comprises:
sending the third positioning result to the movable device, so that the movable device displays a control interface associated with the third positioning result.
11. The method according to any one of claims 1-8, wherein the object to be positioned comprises a useful sound source or a noise source; and after the third positioning result is acquired, the method further comprises:
adjusting a coverage range of a pickup beam according to the third positioning result;
wherein the coverage range of the pickup beam satisfies any of the following: aiming at the area where the useful sound source is located, or avoiding the area where the noise source is located.
12. The method according to any one of claims 1-8, wherein the object to be positioned is a first user, and the method further comprises:
receiving a device control instruction of the first user; and
determining, according to the device control instruction and the third positioning result, a response device for responding to the device control instruction, wherein the response device is the device closest to the first user among a plurality of selectable response devices, and the plurality of selectable response devices include the first device.
13. The method according to claim 12, wherein the method further comprises:
receiving a device control instruction of a second user; and
if the distance between the second user and the first device is greater than a threshold, refraining from responding to the device control instruction of the second user.
14. The method according to any one of claims 1-8, wherein the object to be positioned is a first user; and after the third positioning result is acquired, the method further comprises:
presenting second prompt information according to the third positioning result of the first user and audio preference information of the first user, wherein the second prompt information is used to recommend a listening position associated with the audio preference information.
15. The method according to any one of claims 1-8, wherein the object to be positioned is a first user; and after the third positioning result is acquired, the method further comprises:
determining that a fourth device near the third positioning result provides a first service for the first user;
acquiring a fifth positioning result of the first user; and
streaming the first service from the fourth device to a fifth device near the fifth positioning result.
16. The method of claim 15, wherein the first service comprises any one or more of the following: audio service, video service, telephony service.
17. The method according to claim 15 or 16, wherein the streaming the first service from the fourth device to a fifth device near the fifth positioning result comprises:
acquiring a streaming intention parameter, wherein the streaming intention parameter comprises at least one of the following parameters: a speed of the first user, an acceleration of the first user, and a face orientation of the first user; and
if the streaming intention parameter indicates to stream the first service, streaming the first service to the fifth device.
18. The method according to claim 17, wherein the streaming intention parameter is determined according to a plurality of positioning results obtained while the first user moves from the third positioning result to the fifth positioning result.
19. The method according to any one of claims 1-18, wherein the signals comprise at least one of the following: an ultrasonic signal, a Bluetooth signal, or a wireless fidelity (Wi-Fi) signal.
20. An electronic device, comprising a memory and one or more processors, wherein the memory is coupled to the processors; and the memory is configured to store computer program code, the computer program code comprising computer instructions which, when executed by the one or more processors, cause the electronic device to perform the method of any one of claims 1-19.
21. A computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1-19.
CN202211667722.3A 2022-12-23 2022-12-23 Multi-device cooperation method and electronic device Pending CN118244273A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211667722.3A CN118244273A (en) 2022-12-23 2022-12-23 Multi-device cooperation method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211667722.3A CN118244273A (en) 2022-12-23 2022-12-23 Multi-device cooperation method and electronic device

Publications (1)

Publication Number Publication Date
CN118244273A true CN118244273A (en) 2024-06-25

Family

ID=91557137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211667722.3A Pending CN118244273A (en) 2022-12-23 2022-12-23 Multi-device cooperation method and electronic device

Country Status (1)

Country Link
CN (1) CN118244273A (en)

Similar Documents

Publication Publication Date Title
CN110972033B (en) System and method for modifying audio data
US8472632B2 (en) Dynamic sweet spot tracking
CN102355748B (en) For determining method and the handheld device of treated audio signal
CN106028226B (en) Sound playing method and equipment
EP3202160B1 (en) Method of providing hearing assistance between users in an ad hoc network and corresponding system
US20150358768A1 (en) Intelligent device connection for wireless media in an ad hoc acoustic network
US9462109B1 (en) Methods, systems, and devices for transferring control of wireless communication devices
WO2021037129A1 (en) Sound collection method and apparatus
WO2018127915A1 (en) An audio communication system and method
TW201728857A (en) Lighting and sound system
WO2015191787A2 (en) Intelligent device connection for wireless media in an ad hoc acoustic network
CN110072177B (en) Space division information acquisition method and device and storage medium
WO2022056126A1 (en) Wearable audio device within a distributed audio playback system
WO2022242405A1 (en) Voice call method and apparatus, electronic device, and computer readable storage medium
EP3376781B1 (en) Speaker location identifying system, speaker location identifying device, and speaker location identifying method
JP2006229738A (en) Device for controlling wireless connection
CN118244273A (en) Multi-device cooperation method and electronic device
CN118259289A (en) Multi-device cooperation method and electronic device
US20220382508A1 (en) Playback Device with Conforming Capacitive Touch Sensor Assembly
EP4160565A1 (en) A system for locating an electronic accessory device
US20210399578A1 (en) Wireless charger for playback devices
CN117083882A (en) Information processing device, information processing method, and program
US20070041598A1 (en) System for location-sensitive reproduction of audio signals
US20190302838A1 (en) Display device and method for operating display device
US20240111041A1 (en) Location-based audio configuration systems and methods

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination