WO2024041341A1 - Method for emitting and receiving sound signals and detecting relative positions between devices - Google Patents

Method for emitting and receiving sound signals and detecting relative positions between devices

Info

Publication number
WO2024041341A1
WO2024041341A1 (application PCT/CN2023/110850; CN2023110850W)
Authority
WO
WIPO (PCT)
Prior art keywords
sound signal
speaker
microphone
plane
relative position
Prior art date
Application number
PCT/CN2023/110850
Other languages
English (en)
French (fr)
Inventor
李世明
孟姝彤
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2024041341A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00: Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/18: Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
    • G01S5/26: Position of receiver fixed by co-ordinating a plurality of position lines defined by path-difference measurements

Definitions

  • the present application relates to the field of terminal technology, and in particular to methods for emitting and receiving sound signals and for detecting relative positions between devices.
  • the relative position between two terminal devices can be determined through special components such as radar, but the cost of using special components such as radar to detect the relative position between devices is often very high.
  • the present application provides a method for emitting and receiving sound signals and detecting relative positions between devices, which can reduce the cost of detecting relative positions between devices.
  • embodiments of the present application provide a method for emitting a sound signal, which is applied to a first device, where the first device at least includes a first speaker, a second speaker, and a third speaker;
  • the first speaker and the second speaker are located on the first side of the first plane, the third speaker is located on the second side of the first plane, the first speaker and the third speaker are located on the third side of the second plane, and the second speaker is located on the fourth side of the second plane; the first plane and the second plane are not parallel;
  • the methods include:
  • the first device emits a first sound signal through the first speaker and the second speaker, and emits a second sound signal through the third speaker;
  • in response to detecting the first event, the first device switches to emitting a third sound signal through the first speaker and the third speaker, and emitting a fourth sound signal through the second speaker.
  • the first device at least includes a first speaker, a second speaker and a third speaker.
  • the first speaker and the second speaker are located on the first side of the first plane, and the third speaker is located on the second side of the first plane; the first speaker and the third speaker are located on the third side of the second plane, and the second speaker is located on the fourth side of the second plane; the first plane and the second plane are not parallel.
  • the first device may first emit the first sound signal through the first speaker and the second speaker and emit the second sound signal through the third speaker, and, in response to detecting the first event, switch to emitting the third sound signal through the first speaker and the third speaker and emitting the fourth sound signal through the second speaker. That is to say, in response to the first event, the relative positions between the speakers that emit the sound signals can be switched, so that the relative position between the first device and other devices can be updated, improving the accuracy of detecting the relative positions between the first device and other devices.
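The speaker-group switching described above can be sketched as follows. This is an illustrative sketch only; the speaker identifiers, event names, and data structures are assumptions, not part of the application.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpeakerLayout:
    group_a: tuple  # speakers emitting the first (or third) sound signal
    group_b: tuple  # speakers emitting the second (or fourth) sound signal

# Initial mode: first sound signal from speakers 1 and 2, second from speaker 3.
INITIAL = SpeakerLayout(group_a=("spk1", "spk2"), group_b=("spk3",))
# Switched mode: third sound signal from speakers 1 and 3, fourth from speaker 2.
SWITCHED = SpeakerLayout(group_a=("spk1", "spk3"), group_b=("spk2",))

# The first-event types listed in the text (names are assumptions).
FIRST_EVENTS = {"posture_changed", "display_mode_changed",
                "connection_established", "device_discovered",
                "first_request_received", "second_request_received"}

def select_layout(event: str, current: SpeakerLayout = INITIAL) -> SpeakerLayout:
    """Switch the emitting speaker groups when a first event is detected."""
    return SWITCHED if event in FIRST_EVENTS else current
```

The point of the switch is that the pair of speakers emitting a common signal changes from one plane's side to the other's, so a receiving device gains position information along a second axis.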
  • the first device further includes a fourth speaker located on the second side of the first plane and on the fourth side of the second plane;
  • the first device emits a first sound signal through the first speaker and the second speaker, and emits a second sound signal through the third speaker, including:
  • the first device emits the first sound signal through the first speaker and the second speaker, and emits the second sound signal through the third speaker and the fourth speaker;
  • the first device switching to emit the third sound signal through the first speaker and the third speaker and to emit the fourth sound signal through the second speaker includes:
  • in response to detecting the first event, the first device switches to emitting the third sound signal through the first speaker and the third speaker, and emitting the fourth sound signal through the second speaker and the fourth speaker.
  • At least one of the first sound signal and the second sound signal is the same as at least one of the third sound signal and the fourth sound signal.
  • the first event includes at least one of the following:
  • the posture of the first device changes
  • the display mode of the first device changes
  • the first device establishes a communication connection with the second device
  • the first device discovers the second device
  • the first device receives a first request sent by the second device, the first request is used to trigger the first device to detect the relative position relationship between the first device and the second device;
  • the first device receives a second request sent by the second device, and the second request is used to request to trigger the first device to switch the sound mode.
  • the first event includes a change in posture of the second device.
  • the display mode may include landscape display and portrait display, and the change in the display mode may include changing from landscape display to portrait display, or from portrait display to landscape display.
  • the display mode may include main screen display and secondary screen display, and the change in the display mode may include changing from main screen display to secondary screen display, or from secondary screen display to main screen display.
  • the display mode may include split-screen display and full-screen display, and the display mode change may include changing from split-screen display to full-screen display, or from full-screen display to split-screen display.
  • the posture of the first device or the second device changes, which may include movement, shaking, rotation, etc.
  • the second event may include an event in which the posture change amplitude of the first device or the second device is greater than a preset amplitude threshold, where the amplitude threshold may be used to indicate how sensitive the operation of detecting the relative position between the first device and the second device is to posture changes of the first device or the second device.
  • if the amplitude threshold is small, the relative position relationship between the first device and the second device is detected even when the posture of the first device or the second device changes by a small amplitude, so the relative position relationship is detected frequently; if the amplitude threshold is large, the relative position relationship is detected only when a large posture change of the first device or the second device is detected, so it is detected less frequently.
  • the second event may be that the first device rotates at an angle greater than or equal to 90 degrees, or the second device rotates at an angle greater than or equal to 90 degrees.
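As an illustration of the amplitude-threshold check described above (the 90-degree figure is taken from the example in the text; the function and variable names are assumptions):

```python
AMPLITUDE_THRESHOLD_DEG = 90.0  # a larger threshold means less frequent detection

def should_redetect(rotation_deg: float,
                    threshold_deg: float = AMPLITUDE_THRESHOLD_DEG) -> bool:
    """Trigger re-detection of the relative position only when the posture
    change (here measured as a rotation angle) reaches the threshold."""
    return abs(rotation_deg) >= threshold_deg
```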
  • the first device may switch the way of emitting the sound signal in response to detecting the first event. Therefore, when a first event occurs, such as the posture of the first device changing, the display mode of the first device changing, the first device establishing a communication connection with the second device, the first device discovering the second device, the first device receiving the first request or the second request sent by the second device, or the posture of the second device changing, the other device can promptly and accurately determine the relative position between itself and the first device based on the switched sounding mode, improving the accuracy of detecting the relative position between it and the first device.
  • the display screen of the first device includes a set of relatively long sides and a set of relatively short sides;
  • the first plane and the second plane are perpendicular to each other, and the first plane and the second plane are perpendicular to the plane where the display screen is located;
  • the first plane is parallel to the longer sides and the second plane is parallel to the shorter sides; or, the first plane is parallel to the shorter sides and the second plane is parallel to the longer sides.
  • the first device may emit the first sound signal through the first speaker and the second speaker, emit the second sound signal through the third speaker (and the fourth speaker), emit the third sound signal through the first speaker and the third speaker, and emit the fourth sound signal through the second speaker (and the fourth speaker).
  • embodiments of the present application provide a method for receiving a sound signal, applied to a first device, where the first device at least includes a first microphone, a second microphone, and a third microphone;
  • the first microphone and the second microphone are located on the first side of the first plane, the third microphone is located on the second side of the first plane, the first microphone and the third microphone are located on the third side of the second plane, and the second microphone is located on the fourth side of the second plane; the first plane and the second plane are not parallel;
  • the methods include:
  • the first device receives a first sound signal through the first microphone and the second microphone, and receives a second sound signal through the third microphone;
  • in response to detecting the first event, the first device switches to receiving a third sound signal through the first microphone and the third microphone, and receiving a fourth sound signal through the second microphone.
  • the first device at least includes a first microphone, a second microphone and a third microphone.
  • the first microphone and the second microphone are located on the first side of the first plane, and the third microphone is located on the second side of the first plane; the first microphone and the third microphone are located on the third side of the second plane, and the second microphone is located on the fourth side of the second plane; the first plane and the second plane are not parallel.
  • the first device may first receive the first sound signal through the first microphone and the second microphone and receive the second sound signal through the third microphone, and, in response to detecting the first event, switch to receiving the third sound signal through the first microphone and the third microphone and receiving the fourth sound signal through the second microphone. That is to say, in response to the first event, the relative positions between the microphones that receive the sound signals can be switched, so that the relative position between the first device and other devices can be updated, improving the accuracy of detecting the relative positions between the first device and other devices.
  • the first device further includes a fourth microphone located on the second side of the first plane and on the fourth side of the second plane;
  • the first device receives a first sound signal through the first microphone and the second microphone, and receives a second sound signal through the third microphone, including:
  • the first device receives the first sound signal through the first microphone and the second microphone, and receives the second sound signal through the third microphone and the fourth microphone;
  • the first device switching to receive the third sound signal through the first microphone and the third microphone and to receive the fourth sound signal through the second microphone includes:
  • in response to detecting the first event, the first device switches to receiving the third sound signal through the first microphone and the third microphone, and receiving the fourth sound signal through the second microphone and the fourth microphone.
  • the first event includes at least one of the following:
  • the posture of the first device changes
  • the display mode of the first device changes
  • the first device establishes a communication connection with the second device
  • the first device discovers the second device
  • the first device receives a first request sent by the second device, the first request is used to trigger the first device to detect the relative position relationship between the first device and the second device;
  • the first device receives a second request sent by the second device, and the second request is used to request to trigger the first device to switch the sound-receiving mode.
  • the first event includes a change in posture of the second device.
  • the first device may switch the way of receiving the sound signal in response to detecting the first event. Therefore, when a first event occurs, such as the posture of the first device changing, the display mode of the first device changing, the first device establishing a communication connection with the second device, the first device discovering the second device, the first device receiving the first request or the second request sent by the second device, or the posture of the second device changing, the first device can promptly and accurately determine the relative position between itself and other devices based on the switched sound-receiving mode, improving the accuracy of detecting the relative positions between the first device and other devices.
  • the display screen of the first device includes a set of relatively long sides and a set of relatively short sides;
  • the first plane and the second plane are perpendicular to each other, and the first plane and the second plane are perpendicular to the plane where the display screen is located;
  • the first plane is parallel to the longer sides and the second plane is parallel to the shorter sides; or, the first plane is parallel to the shorter sides and the second plane is parallel to the longer sides.
  • the first device may receive the first sound signal through the first microphone and the second microphone, receive the second sound signal through the third microphone (and the fourth microphone), receive the third sound signal through the first microphone and the third microphone, and receive the fourth sound signal through the second microphone (and the fourth microphone).
  • embodiments of the present application provide a method for detecting relative positions between devices, which can be applied to a system including a first device and a second device, the first device including at least a first speaker, a second speaker and a third speaker;
  • the first speaker and the second speaker are located on the first side of the first plane, the third speaker is located on the second side of the first plane, the first speaker and the third speaker are located on the third side of the second plane, and the second speaker is located on the fourth side of the second plane; the first plane and the second plane are not parallel;
  • the methods include:
  • the first device emits a first sound signal through the first speaker and the second speaker, and emits a second sound signal through the third speaker;
  • the second device receives the first sound signal and the second sound signal
  • the second device determines a first relative position between the second device and the first device based on the first arrival time of the first sound signal and the second arrival time of the second sound signal.
  • the first device includes at least a first speaker, a second speaker and a third speaker.
  • the first speaker and the second speaker are located on the first side of the first plane, and the third speaker is located on the second side of the first plane; the first speaker and the third speaker are located on the third side of the second plane, and the second speaker is located on the fourth side of the second plane; the first plane and the second plane are not parallel.
  • the first device can emit a first sound signal through the first speaker and the second speaker, and can emit a second sound signal through the third speaker.
  • the second device can receive the first sound signal and the second sound signal and, based on the first arrival time of the first sound signal and the second arrival time of the second sound signal, determine the first relative position between the second device and the first device. That is to say, the relative position between devices can be accurately detected through speakers and microphones without relying on components such as radar, which reduces the cost of detecting the relative position between devices.
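The determination from arrival times might be sketched as follows, assuming the two sound signals are emitted at the same sounding moment so that the arrival-time difference reflects only the path-length difference. The function name, tolerance value, and coarse position labels are assumptions, not part of the application.

```python
SPEED_OF_SOUND = 343.0  # m/s, approximate

def first_relative_position(t_arrival_1: float, t_arrival_2: float,
                            tolerance_s: float = 1e-4) -> str:
    """Coarse relative position from the two arrival times (in seconds).

    If signal 1 arrives first, the receiver is closer to the side of the
    first plane whose speakers emitted signal 1, and vice versa.
    """
    dt = t_arrival_1 - t_arrival_2
    if abs(dt) <= tolerance_s:
        return "centered"  # both signals travelled a similar distance
    return "side_of_signal_1" if dt < 0 else "side_of_signal_2"
```

At the assumed sound speed, a tolerance of 1e-4 s corresponds to a path-length difference of roughly 3.4 cm.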
  • the first device emits a first sound signal through the first speaker and the second speaker, and emits a second sound signal through the third speaker, including:
  • in response to detecting the second event, the first device emits the first sound signal through the first speaker and the second speaker, and emits the second sound signal through the third speaker.
  • the second event includes any of the following:
  • the first device establishes a communication connection with the second device
  • the first device discovers the second device
  • the first device receives a first request sent by the second device, and the first request is used to trigger the first device to detect the relative position relationship between the first device and the second device.
  • the second event includes a change in posture of the second device.
  • the first device may emit sound signals through specific speakers in response to detecting the second event. Therefore, when a second event occurs, such as the first device establishing a communication connection with the second device, the first device discovering the second device, the first device receiving the first request sent by the second device, or the posture of the second device changing, the other device can promptly and accurately determine its position relative to the first device based on the sound signals emitted by the first device.
  • the method further includes:
  • the second device notifies the first device of the first relative position.
  • the method further includes:
  • the first device and the second device cooperate to process a first task in a first cooperation mode corresponding to the first relative position.
  • the first device and the second device collaborate to process a first task in a first collaboration mode corresponding to the first relative position, including:
  • when the first relative position is that the second device is on the left side of the first device, the first device displays a first interface and the second device displays a second interface; otherwise, the first device displays the first interface and the second device displays a third interface; both the second interface and the third interface are associated with the first interface.
  • the first device and the second device can collaboratively process the first task based on the first collaboration mode corresponding to the first relative position, that is, the first task is collaboratively processed in a manner corresponding to the first relative position, thereby improving the efficiency of collaboratively processing the first task.
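One way to key a collaboration mode on the detected relative position is a simple lookup, as in this hypothetical sketch; the position keys and interface names are illustrative, and only the left-side case is explicit in the text.

```python
# Mapping from a detected relative position to the interface each device
# shows. Only "second_device_on_left" follows directly from the text; the
# other entry is an illustrative assumption.
COLLABORATION_MODES = {
    "second_device_on_left":   {"first_device": "first_interface",
                                "second_device": "second_interface"},
    "second_device_elsewhere": {"first_device": "first_interface",
                                "second_device": "third_interface"},
}

def pick_mode(relative_position: str) -> dict:
    """Select the collaboration mode corresponding to the relative position."""
    return COLLABORATION_MODES[relative_position]
```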
  • the method further includes:
  • in response to detecting the first event, the first device switches to emitting a third sound signal through the first speaker and the third speaker, and emitting a fourth sound signal through the second speaker;
  • the second device receives the third sound signal and the fourth sound signal
  • the second device determines a second relative position between the second device and the first device based on the third arrival time of the third sound signal and the fourth arrival time of the fourth sound signal.
  • the first event includes at least one of the following:
  • the posture of the first device changes
  • the display mode of the first device changes
  • the first device receives a second request sent by the second device, and the second request is used to trigger the first device to switch the sounding mode.
  • the first device may switch the way of emitting the sound signal in response to detecting the first event, so that when a first event occurs, such as the posture of the first device changing, the display mode of the first device changing, or the first device receiving the second request sent by the second device, the other device can promptly and accurately determine its position relative to the first device based on the switched sounding mode, improving the accuracy of detecting the relative position between it and the first device.
  • At least one of the first sound signal and the second sound signal is the same as at least one of the third sound signal and the fourth sound signal.
  • the first sounding moment and the second sounding moment are the same, and the sound characteristics of the first sound signal and the second sound signal are different; or, the first sounding moment and the second sounding moment are different, and the sound characteristics of the first sound signal and the second sound signal are the same;
  • the first sounding moment is the moment at which the first device emits the first sound signal, and the second sounding moment is the moment at which the first device emits the second sound signal.
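The first distinguishing scheme above (same sounding moment, different sound characteristics) can be illustrated by generating two bursts at different near-ultrasonic frequencies; the sample rate, burst length, and frequencies here are assumed values, not specified by the application.

```python
import numpy as np

FS = 48_000      # sample rate in Hz (assumed)
DURATION = 0.05  # 50 ms burst (assumed)

def burst(freq_hz: float) -> np.ndarray:
    """A short near-ultrasonic burst; frequency is the distinguishing feature."""
    t = np.arange(int(FS * DURATION)) / FS
    return np.sin(2 * np.pi * freq_hz * t)

# Same sounding moment, different sound characteristics:
signal_1 = burst(20_000)  # first sound signal at 20 kHz (assumed frequency)
signal_2 = burst(21_000)  # second sound signal at 21 kHz (assumed frequency)
```

Under the second scheme the two signals would instead be identical bursts emitted at different, known sounding moments, with the receiver separating them in time rather than in frequency.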
  • the display screen of the first device includes a set of relatively long sides and a set of relatively short sides;
  • the first plane and the second plane are perpendicular to each other, and the first plane and the second plane are perpendicular to the plane where the display screen is located;
  • the first plane is parallel to the longer sides and the second plane is parallel to the shorter sides; or, the first plane is parallel to the shorter sides and the second plane is parallel to the longer sides.
  • the first sound signal and the second sound signal are ultrasonic signals.
  • the third sound signal and the fourth sound signal are ultrasonic signals.
  • the first device further includes a fourth speaker located on the second side of the first plane and on the fourth side of the second plane;
  • the first device emits a first sound signal through the first speaker and the second speaker, and emits a second sound signal through the third speaker, including:
  • the first device emits the first sound signal through the first speaker and the second speaker, and emits the second sound signal through the third speaker and the fourth speaker;
  • the first device emits a third sound signal through the first speaker and the third speaker, and emits a fourth sound signal through the second speaker, including:
  • the first device emits the third sound signal through the first speaker and the third speaker, and emits the fourth sound signal through the second speaker and the fourth speaker.
  • the first device may emit the first sound signal through the first speaker and the second speaker, emit the second sound signal through the third speaker (and the fourth speaker), emit the third sound signal through the first speaker and the third speaker, and emit the fourth sound signal through the second speaker (and the fourth speaker);
  • the second device determines the first relative position between the second device and the first device based on the first arrival time of the first sound signal and the second arrival time of the second sound signal, and the third device determines the fifth relative position between the third device and the first device based on the third arrival time of the third sound signal and the fourth arrival time of the fourth sound signal.
  • the first device can emit multiple sets of sound signals through multiple sets of speakers, so that multiple devices can determine their relative positions to the first device, which greatly improves the efficiency of detecting the relative positions of the devices.
  • embodiments of the present application provide a method for detecting relative positions between devices, applied to a system including a first device and a second device, where the first device at least includes a first microphone, a second microphone and a third microphone;
  • the first microphone and the second microphone are located on the first side of the first plane, the third microphone is located on the second side of the first plane, the first microphone and the third microphone are located on the third side of the second plane, and the second microphone is located on the fourth side of the second plane; the first plane and the second plane are not parallel;
  • the methods include:
  • the second device emits a first sound signal and a second sound signal
  • the first device receives the first sound signal through the first microphone and the second microphone, and receives the second sound signal through the third microphone;
  • the first device determines a first relative position between the second device and the first device based on the first arrival time of the first sound signal and the second arrival time of the second sound signal.
  • the first device at least includes a first microphone, a second microphone and a third microphone.
  • the first microphone and the second microphone are located on the first side of the first plane, and the third microphone is located on the second side of the first plane; the first microphone and the third microphone are located on the third side of the second plane, and the second microphone is located on the fourth side of the second plane; the first plane and the second plane are not parallel.
  • the second device may emit the first sound signal and the second sound signal.
  • the first device may receive the first sound signal through the first microphone and the second microphone, receive the second sound signal through the third microphone, and determine the first relative position between the second device and the first device based on the first arrival time of the first sound signal and the second arrival time of the second sound signal. That is to say, the relative position between devices can be accurately detected through speakers and microphones without relying on components such as radar, which reduces the cost of detecting the relative position between devices.
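One common way for a receiving device to obtain an arrival time from a microphone recording is cross-correlation against the known emitted waveform. This technique and the sample-index convention are assumptions for illustration, not something the application specifies.

```python
import numpy as np

def arrival_index(recording: np.ndarray, template: np.ndarray) -> int:
    """Sample index at which the known emitted waveform best aligns with
    the microphone recording; dividing by the sample rate gives a time."""
    corr = np.correlate(recording, template, mode="valid")
    return int(np.argmax(corr))
```

Running the same search for each sound signal on each microphone yields the first and second arrival times that the position determination above compares.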
  • the first device receives the first sound signal through the first microphone and the second microphone, and receives the second sound signal through the third microphone, including:
  • the first device receives the first sound signal through the first microphone and the second microphone and the second sound signal through the third microphone in response to detecting the second event.
  • the second event includes any of the following:
  • the first device establishes a communication connection with the second device
  • the first device discovers the second device
  • the first device receives a first request sent by the second device, and the first request is used to trigger the first device to detect the relative position relationship between the first device and the second device.
  • the second event includes a change in posture of the second device.
  • the first device may receive sound signals through specific microphones in response to detecting the second event. Therefore, when a second event occurs, such as the first device establishing a communication connection with the second device, the first device discovering the second device, the first device receiving the first request sent by the second device, or the posture of the second device changing, the first device can promptly and accurately determine the relative position between itself and the second device based on the received sound signals.
  • the method further includes:
  • the second device emits a third sound signal and a fourth sound signal
  • in response to detecting the first event, the first device switches to receiving the third sound signal through the first microphone and the third microphone, and receiving the fourth sound signal through the second microphone;
  • the first device determines a second relative position between the second device and the first device based on the third arrival time of the third sound signal and the fourth arrival time of the fourth sound signal.
  • the first event includes at least one of the following:
  • the posture of the first device changes
  • the display mode of the first device changes
  • the first device receives a second request sent by the second device, and the second request is used to trigger the first device to switch the sound-receiving mode.
  • the first device may switch the way of receiving the sound signal in response to detecting the first event, so that when a first event occurs, such as the posture of the first device changing, the display mode of the first device changing, or the first device receiving the second request sent by the second device, the first device can promptly and accurately determine the relative position between itself and other devices based on the switched sound-receiving mode, improving the accuracy of detecting the relative positions between the first device and other devices.
  • the first device further includes a fourth microphone located on the second side of the first plane and on the fourth side of the second plane;
  • the first device receives a first sound signal through the first microphone and the second microphone, and receives a second sound signal through the third microphone, including:
  • the first device receives the first sound signal through the first microphone and the second microphone, and receives the second sound signal through the third microphone and the fourth microphone;
  • the first device receives a third sound signal through the first microphone and the third microphone, and receives a fourth sound signal through the second microphone, including:
  • the first device receives the third sound signal through the first microphone and the third microphone, and receives the fourth sound signal through the second microphone and the fourth microphone.
  • the first device may receive the first sound signal emitted by the second device through the first microphone and the second microphone, receive the second sound signal emitted by the second device through the third microphone (and the fourth microphone), receive the third sound signal emitted by the third device through the first microphone and the third microphone, and receive the fourth sound signal emitted by the third device through the second microphone (and the fourth microphone). The first device can then determine the first relative position between the second device and the first device according to the first arrival time of the first sound signal and the second arrival time of the second sound signal, and determine the fifth relative position between the third device and the first device according to the third arrival time of the third sound signal and the fourth arrival time of the fourth sound signal. That is, the first device can receive multiple sets of sound signals through multiple groups of microphones, thereby being able to determine the relative positions between multiple devices and the first device, which greatly improves the efficiency of detecting the relative positions of the devices.
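As a rough illustration of how one listener can resolve several peer devices from pairs of arrival times, consider the sketch below. The function name, the ambiguity margin, and all numeric times are hypothetical, chosen only to show the comparison; none of them come from the application itself.

```python
def closer_group(t_signal_a, t_signal_b, margin_s=1e-4):
    # Compare the arrival times of the two sound signals of one pair.
    # Returns the speaker group the peer is closer to, or None when the
    # difference is inside the ambiguity margin.
    delta = t_signal_b - t_signal_a
    if abs(delta) < margin_s:
        return None
    return "group A" if delta > 0 else "group B"

# One pair of arrival times per peer lets a single device resolve
# several peers in one pass (hypothetical times in seconds).
peers = {"device B": (0.0102, 0.0108), "device C": (0.0111, 0.0105)}
sides = {name: closer_group(ta, tb) for name, (ta, tb) in peers.items()}
```

Because only the difference of two arrival times is compared, the listener does not need a clock synchronized with the emitter, which is one reason pairwise signal comparison is attractive here.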
  • embodiments of the present application provide a system.
  • the system includes a first device and a second device.
  • the first device includes at least a first speaker, a second speaker, and a third speaker.
  • the first speaker and the second speaker are located on the first side of the first plane
  • the third speaker is located on the second side of the first plane
  • the first speaker and the third speaker are located on the third side of the second plane
  • the second speaker is located on the fourth side of the second plane
  • the first plane and the second plane are not parallel;
  • the first device is configured to emit a first sound signal through the first speaker and the second speaker, and emit a second sound signal through the third speaker;
  • the second device is configured to receive the first sound signal and the second sound signal, and to determine the first relative position between the second device and the first device according to the first arrival time of the first sound signal and the second arrival time of the second sound signal.
  • embodiments of the present application provide a system.
  • the system includes a first device and a second device.
  • the first device includes at least a first microphone, a second microphone, and a third microphone.
  • the first microphone and the second microphone are located on the first side of the first plane
  • the third microphone is located on the second side of the first plane
  • the first microphone and the third microphone are located on the third side of the second plane
  • the second microphone is located on the fourth side of the second plane
  • the first plane and the second plane are not parallel;
  • the second device is configured to emit a first sound signal and a second sound signal
  • the first device is configured to receive the first sound signal through the first microphone and the second microphone, and to receive the second sound signal through the third microphone; the first device is configured to determine the first relative position between the second device and the first device according to the first arrival time of the first sound signal and the second arrival time of the second sound signal.
  • embodiments of the present application provide a device that has the function of implementing the terminal-device behavior in the above aspects and in the possible implementations of the above aspects.
  • Functions can be implemented by hardware, or by hardware executing corresponding software.
  • Hardware or software includes one or more modules or units corresponding to the above functions. For example, a transceiver module or unit, a processing module or unit, an acquisition module or unit, etc.
  • embodiments of the present application provide a terminal device, including: a memory and a processor.
  • the memory is used to store a computer program; the processor, when calling the computer program, is used to execute the method described in any one of the first aspects or the method described in any one of the second aspects.
  • embodiments of the present application provide a chip system.
  • the chip system includes a processor, the processor is coupled to a memory, and the processor executes a computer program stored in the memory to implement the method described in any one of the first aspects or the method described in any one of the second aspects.
  • the chip system may be a single chip or a chip module composed of multiple chips.
  • embodiments of the present application provide a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed by a processor, the method described in any one of the first aspects or any one of the second aspects is implemented.
  • embodiments of the present application provide a computer program product which, when run on a terminal device, causes the terminal device to execute the method described in any one of the first aspects or any one of the second aspects.
  • Figure 1 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • Figure 2 is a schematic structural diagram of another terminal device provided by an embodiment of the present application.
  • Figure 3 is a schematic structural diagram of another terminal device provided by an embodiment of the present application.
  • Figure 4 is a system architecture diagram provided by an embodiment of the present application.
  • Figure 5 is another system architecture diagram provided by an embodiment of the present application.
  • Figure 6 is another system architecture diagram provided by an embodiment of the present application.
  • Figure 7 is another system architecture diagram provided by an embodiment of the present application.
  • Figure 8 is another system architecture diagram provided by an embodiment of the present application.
  • Figure 9 is another system architecture diagram provided by an embodiment of the present application.
  • Figure 10 is a schematic diagram of a sound signal propagation process provided by an embodiment of the present application.
  • Figure 11 is a schematic diagram of a sound signal propagation process provided by an embodiment of the present application.
  • Figure 12 is a schematic flowchart of a method for detecting relative positions between devices provided by an embodiment of the present application.
  • Figure 13 is a schematic flowchart of another method for detecting relative positions between devices provided by an embodiment of the present application.
  • Figure 14 is a flow chart of another method for detecting relative positions between devices provided by an embodiment of the present application.
  • Figure 15 is a flow chart of a method of grouping speakers provided by an embodiment of the present application.
  • Figure 16 is a schematic structural diagram of another terminal device provided by an embodiment of the present application.
  • Figure 17 is a schematic diagram of a display mode provided by an embodiment of the present application.
  • Figure 18 is a schematic diagram of another display mode provided by an embodiment of the present application.
  • Figure 19 is a schematic flowchart of another method for detecting relative positions between devices provided by an embodiment of the present application.
  • Figure 20 is a schematic flowchart of another method for detecting relative positions between devices provided by an embodiment of the present application.
  • Figure 21 is a flow chart of another method for detecting relative positions between devices provided by an embodiment of the present application.
  • Figure 22 is a flow chart of a method of grouping microphones provided by an embodiment of the present application.
  • Figure 23 is a schematic structural diagram of another terminal device provided by an embodiment of the present application.
  • Figure 24 is a schematic flowchart of a method for multi-device collaborative processing of tasks provided by an embodiment of the present application.
  • Figure 25 is a schematic diagram of a collaboration scenario provided by an embodiment of the present application.
  • Figure 26 is a schematic diagram of another collaboration scenario provided by the embodiment of the present application.
  • Figure 27 is a schematic diagram of another collaboration scenario provided by the embodiment of the present application.
  • Figure 28 is a schematic diagram of another collaboration scenario provided by the embodiment of the present application.
  • Figure 29 is a schematic diagram of another collaboration scenario provided by the embodiment of the present application.
  • Figure 30 is a schematic diagram of another collaboration scenario provided by the embodiment of the present application.
  • Figure 31 is a schematic diagram of another collaboration scenario provided by the embodiment of the present application.
  • Figure 32 is a schematic flowchart of a method for emitting a sound signal provided by an embodiment of the present application.
  • Figure 33 is a schematic flowchart of a method for receiving sound signals provided by an embodiment of the present application.
  • Figure 34 is a schematic flowchart of another method for detecting relative positions between devices provided by an embodiment of the present application.
  • Figure 35 is a schematic flowchart of another method for detecting relative positions between devices provided by an embodiment of the present application.
  • the method for detecting relative positions between devices can be applied to mobile phones, tablet computers, wearable devices, vehicle-mounted devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), and other terminal devices. The embodiments of this application do not place any restrictions on the specific types of terminal devices.
  • FIG. 1 is a schematic structural diagram of an example terminal device 100 provided by an embodiment of the present application.
  • the terminal device 100 may include a processor 110, a memory 120, a communication module 130, an acoustic-electric transducer 140, a sensor 150, and the like.
  • the processor 110 may include one or more processing units, and the memory 120 is used to store program codes and data.
  • the processor 110 can execute computer execution instructions stored in the memory 120 to control and manage the actions of the terminal device 100 .
  • the communication module 130 may be used for communication between various internal modules of the terminal device 100, or communication between the terminal device 100 and other external terminal devices, etc.
  • the communication module 130 may include an interface, etc., such as a USB interface.
  • the USB interface may be an interface that complies with the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, etc.
  • the USB interface can be used to connect a charger to charge the terminal device 100, and can also be used to transmit data between the terminal device 100 and peripheral devices. It can also be used to connect headphones to play audio through them. This interface can also be used to connect other terminal devices, such as AR devices.
  • the communication module 130 may include an audio device, a radio frequency circuit, a Bluetooth chip, a wireless fidelity (Wi-Fi) chip, a near-field communication (NFC) module, etc., and may realize interaction between the terminal device 100 and other terminal devices in a variety of different ways.
  • the acoustic-electric transducer 140 can be used for mutual conversion between sound signals and electrical signals.
  • acoustoelectric transducer 140 may be used to convert acoustic signals to electrical signals, and/or to convert electrical signals to acoustic signals.
  • when the acoustic-electric transducer 140 is used to convert electrical signals into sound signals, the acoustic-electric transducer 140 may also be called a speaker 141; when the acoustic-electric transducer 140 is used to convert sound signals into electrical signals, the acoustic-electric transducer 140 may also be called a microphone 142.
  • the terminal device 100 may include multiple acoustic and electrical transducers, and the multiple acoustic and electrical transducers may be distributed at different locations of the terminal device 100 .
  • multiple acoustic and electrical transducers 140 may be distributed on the frame of the terminal device 100 .
  • the plurality of acoustoelectric transducers 140 may be evenly or symmetrically distributed on the terminal device 100 .
  • the frame of the terminal device 100 may be a rectangle, and the frame includes two opposite longer sides and two opposite shorter sides.
  • the terminal device 100 may include at least three speakers 141, located at different positions of the terminal device 100 such that there exist at least two planes, where for each plane there is at least one speaker 141 on each side of the plane, and any two of the planes are not parallel.
  • the at least two planes include plane a and plane b
  • the display screen 160 of the terminal device 100 includes a set of relatively long sides and a set of relatively short sides
  • plane a and plane b are perpendicular to each other
  • plane a and plane b are perpendicular to the plane where the display screen 160 is located; plane a is parallel to the longer sides and plane b is parallel to the shorter sides, or plane a is parallel to the shorter sides and plane b is parallel to the longer sides.
  • the terminal device 100 includes a display screen 160, speakers a, speakers b, speakers c, and speakers d.
  • Speaker a, speaker b, speaker c and speaker d are arranged on a set of shorter frames of the terminal device 100 .
  • Plane a and plane b are perpendicular, and plane a and plane b are perpendicular to the plane where the display screen 160 is located.
  • speaker a and speaker c are located on the left side of plane a
  • speaker b and speaker d are located on the right side of plane a.
  • speaker a and speaker b are located on the upper side of plane b, and speaker c and speaker d are located on the lower side of plane b.
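The placement described above can be sketched as a simple side-of-plane test. The coordinates below are hypothetical, chosen only so that plane a corresponds to x = 0 and plane b to y = 0 in the display plane; they are not taken from the application.

```python
# Hypothetical (x, y) coordinates of the four speakers in the display
# plane; plane a is taken as x = 0 and plane b as y = 0, matching the
# layout described in the text.
speakers = {"a": (-1.0, 1.0), "b": (1.0, 1.0), "c": (-1.0, -1.0), "d": (1.0, -1.0)}

# Side-of-plane tests: the sign of each coordinate decides the side.
left_of_plane_a = {name for name, (x, y) in speakers.items() if x < 0}
upper_of_plane_b = {name for name, (x, y) in speakers.items() if y > 0}
```

With two non-parallel planes, each speaker gets a unique (side, side) label, which is what later allows a listener to resolve both the vertical and the horizontal axis from the emitted signals.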
  • the terminal device 100 may include at least three microphones 142, located at different positions of the terminal device 100 such that there exist at least two planes, where for each plane there is at least one microphone 142 on each side of the plane, and any two of the planes are not parallel.
  • the at least two planes include plane c and plane d
  • the display screen 160 of the terminal device 100 includes a set of relatively long sides and a set of relatively short sides
  • plane c and plane d are perpendicular to each other
  • plane c and plane d are perpendicular to the plane where the display screen 160 is located; plane c is parallel to the longer sides and plane d is parallel to the shorter sides, or plane c is parallel to the shorter sides and plane d is parallel to the longer sides.
  • the terminal device 100 includes a display screen 160, a microphone a, a microphone b, a microphone c, and a microphone d.
  • Microphone a, microphone b, microphone c and microphone d are arranged on a set of shorter frames of the terminal device 100 .
  • Plane c and plane d are perpendicular to each other, and plane c and plane d are perpendicular to the plane where the display screen 160 is located.
  • microphone a and microphone c are located on the left side of plane c
  • microphone b and microphone d are located on the right side of plane c.
  • microphone a and microphone b are located on the upper side of plane d
  • microphone c and microphone d are located on the lower side of plane d.
  • the terminal device includes at least three speakers 141 and at least three microphones 142
  • the number of the at least three speakers 141 and the number of the at least three microphones 142 may be the same or different.
  • the positions of the at least three speakers 141 and the positions of the at least three microphones 142 may be the same or different.
  • the terminal device includes speaker a, speaker b, speaker c, and speaker d, and also includes microphone a, microphone b, microphone c, and microphone d.
  • speaker a and microphone a are at the same position
  • speaker b and microphone b are at the same position
  • speaker c and microphone c are at the same position
  • speaker d and microphone d are at the same position.
  • Plane a and plane c are the same plane
  • plane b and plane d are the same plane.
  • speaker a, speaker c, microphone a, and microphone c are located on the left side of plane a, and speaker b, speaker d, microphone b, and microphone d are located on the right side of plane a.
  • speaker a, speaker b, microphone a, and microphone b are located on the upper side of plane b, and speaker c, speaker d, microphone c, and microphone d are located on the lower side of plane b.
  • the sensor 150 may be used to detect the posture of the terminal device 100.
  • sensors 150 may include gyroscope sensors, acceleration sensors, distance sensors, etc.
  • the gyro sensor can be used to determine the motion posture of the terminal device 100 .
  • the gyro sensor can detect the angular velocity of the terminal device 100 around three axes (i.e., the x, y, and z axes).
  • the acceleration sensor can detect the acceleration of the terminal device 100 in various directions (generally three axes). When the terminal device 100 is stationary, the magnitude and direction of gravity can be detected. In some examples, the acceleration sensor can be used for switching between horizontal and vertical screens.
  • the terminal device 100 may determine whether the terminal device 100 has moved based on the distance change measured by the distance sensor, or determine whether other terminal devices near the terminal device 100 have moved.
  • the distance sensor may include a light sensor.
  • the terminal device 100 may also include a display screen 160, and the display screen 160 may display images or videos in the human-computer interaction interface.
  • the embodiment of the present application does not specifically limit the structure of the terminal device 100 .
  • the terminal device 100 may also include more or less components than shown in the figures, or combine some components, or split some components, or arrange different components.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the software system of the terminal device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiments of this application do not limit the type of the software system.
  • Figure 4 is a system architecture diagram provided by an embodiment of the present application.
  • the system includes device A 210 and device B 220. Both device A 210 and device B 220 may be the aforementioned terminal device 100.
  • Figure 4 can be understood as a top view of device A 210 and device B 220, and the positional relationship type between device A 210 and device B 220 can be an up-down relationship.
  • Device A 210 may be the terminal device 100 as shown in Figure 2.
  • Device B 220 may include microphone e.
  • the microphone e can be distributed at any position on the device B 220.
  • the microphone e may be distributed on the frame of device B 220.
  • for example, microphone e is distributed on the shorter left side of device B 220.
  • device B 220 is located above device A 210.
  • the distance d1 from speaker a to microphone e of device B 220 and the distance d2 from speaker b to microphone e are smaller than the distance d3 from speaker c to microphone e and the distance d4 from speaker d to microphone e.
  • it can be understood that when device B 220 is located below device A 210 (not shown in Figure 4), the distance d1 from speaker a to microphone e and the distance d2 from speaker b to microphone e are greater than the distance d3 from speaker c to microphone e and the distance d4 from speaker d to microphone e.
  • because the sound signals propagate from each speaker to microphone e at the same propagation speed v, the size relationship between the propagation times T1, T2, T3, and T4 is consistent with the size relationship between d1, d2, d3, and d4. That is, based on the propagation times of the sound signals from speaker a, speaker b, speaker c, and speaker d to microphone e, the distances between those speakers and microphone e can be compared, thereby determining whether device B 220 is above or below device A 210.
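The equivalence of comparing times and comparing distances can be shown with a short sketch. The speed value and the times below are illustrative, not from the application; the point is only that with a common propagation speed v, d = v · T, so the two comparisons always agree.

```python
SPEED_OF_SOUND = 343.0  # m/s, an approximate value at room temperature

def side_of_device(t_upper_group, t_lower_group):
    # With a common propagation speed, d = v * T, so comparing the
    # propagation times is equivalent to comparing the distances.
    d_upper = SPEED_OF_SOUND * t_upper_group
    d_lower = SPEED_OF_SOUND * t_lower_group
    return "above" if d_upper < d_lower else "below"

# A shorter time to the upper speaker group means device B is above
# device A (hypothetical times in seconds).
side = side_of_device(0.0030, 0.0042)
```

Note that only the relative ordering matters here, so the exact value of the propagation speed does not need to be known for the up/down decision itself.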
  • device A 210 can emit sound signal A through speaker group A and sound signal B through speaker group B, where speaker group A can include at least one of speaker a and speaker b. Device B 220 receives sound signal A and sound signal B through microphone e, determines the propagation time TA of sound signal A from speaker group A to microphone e and the propagation time TB of sound signal B from speaker group B to microphone e, and then, based on the difference between TA and TB, determines whether microphone e is closer to speaker group A or to speaker group B.
  • if microphone e is closer to speaker group A, then device B 220 is closer to the location of speaker group A in device A 210; combined with whether speaker group A is above or below device A 210, it can be determined whether device B 220 is above or below device A 210. Similarly, if microphone e is closer to speaker group B, then device B 220 is closer to the location of speaker group B in device A 210; combined with whether speaker group B is above or below device A 210, it can be determined whether device B 220 is above or below device A 210.
  • speaker group A includes speaker a and speaker b
  • speaker group B includes speaker c and speaker d
  • TA can be positively related to T1 and T2
  • TB can be positively related to T3 and T4
  • the positive correlation between TA and T1 and T2 may be the same as the positive correlation between TB and T3 and T4.
  • for example, TA = (T1 + T2) / 2 and TB = (T3 + T4) / 2.
  • it should be noted that device B 220 can directly determine TA, instead of determining T1 and T2 and then determining TA based on T1 and T2; similarly, device B 220 can directly determine TB, instead of determining T3 and T4 and then determining TB based on T3 and T4.
  • the way to determine TA and TB can refer to the method shown in Figure 10 and Figure 11 below.
  • the difference between the above-mentioned sound signal A and sound signal B may include a difference in emission time and/or a difference in sound characteristics (such as frequency), so that the receiving device can distinguish the propagation durations of sound signal A and sound signal B.
  • for example, the emission moments of sound signal A and sound signal B are the same, and the sound characteristics of sound signal A and sound signal B are different; or, the emission moments of sound signal A and sound signal B are different, and the sound characteristics of sound signal A and sound signal B are the same.
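One common way to realize the "different sound characteristics" case is to give each signal its own frequency and detect each arrival with a matched filter (cross-correlation). The sketch below simulates this with NumPy; the sample rate, frequencies, and sample offsets are illustrative assumptions, not values from the application.

```python
import numpy as np

FS = 48_000  # sample rate in Hz (illustrative)

def tone(freq_hz, dur_s=0.01):
    # A pure sine burst at the given frequency.
    t = np.arange(int(FS * dur_s)) / FS
    return np.sin(2 * np.pi * freq_hz * t)

def arrival_index(recording, template):
    # Matched filter: the lag with maximum correlation marks where the
    # template starts inside the recording.
    corr = np.correlate(recording, template, mode="valid")
    return int(np.argmax(corr))

# Simulate a recording in which signal A (19 kHz) starts at sample 100
# and signal B (21 kHz) starts at sample 300.
sig_a, sig_b = tone(19_000), tone(21_000)
rec = np.zeros(4_000)
rec[100:100 + sig_a.size] += sig_a
rec[300:300 + sig_b.size] += sig_b
ia, ib = arrival_index(rec, sig_a), arrival_index(rec, sig_b)
```

Because the two frequencies are nearly orthogonal over the burst length, each template peaks only at its own signal, so the receiver can measure both arrival times from a single recording.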
  • in some embodiments, device A 210 may also include a microphone and/or more speakers, and device B 220 may also include a speaker and/or more microphones.
  • Figure 5 is a system architecture diagram provided by an embodiment of the present application.
  • the position relationship type of device A 210 and device B 220 in Figure 5 can be a left-right relationship.
  • device B 220 is located to the right of device A 210.
  • the distance d1 from speaker a to microphone e of device B 220 and the distance d3 from speaker c to microphone e are greater than the distance d2 from speaker b to microphone e and the distance d4 from speaker d to microphone e.
  • it can be understood that when device B 220 is located to the left of device A 210 (not shown in Figure 5), the distance d1 from speaker a to microphone e and the distance d3 from speaker c to microphone e are smaller than the distance d2 from speaker b to microphone e and the distance d4 from speaker d to microphone e.
  • speaker group A may include at least one of speaker a and speaker c
  • speaker group B may include at least one of speaker b and speaker d.
  • device B 220 receives sound signal A and sound signal B through microphone e, determines the propagation time TA of sound signal A from speaker group A to microphone e and the propagation time TB of sound signal B from speaker group B to microphone e, and then, based on the difference between TA and TB, determines whether microphone e is closer to speaker group A or to speaker group B. If microphone e is closer to speaker group A, then device B 220 is closer to the location of speaker group A in device A 210; combined with whether speaker group A is on the left or right side of device A 210, it can be determined whether device B 220 is to the left or right of device A 210.
  • similarly, if microphone e is closer to speaker group B, then device B 220 is closer to the location of speaker group B in device A 210; combined with whether speaker group B is on the left or right side of device A 210, it can be determined whether device B 220 is to the left or right of device A 210.
  • for example, when speaker group A is on the left side of device A 210 and speaker group B is on the right side of device A 210, device B 220 is located to the right of device A 210 when device B 220 is closer to the location of speaker group B; or, it can be understood that device B 220 is located to the left of device A 210 when device B 220 is closer to the location of speaker group A.
  • speaker group A includes speaker a and speaker c
  • speaker group B includes speaker b and speaker d
  • TA can be positively related to T1 and T3
  • TB can be positively related to T2 and T4
  • the positive correlation between TA and T1 and T3 may be the same as the positive correlation between TB and T2 and T4.
  • for example, TA = (T1 + T3) / 2 and TB = (T2 + T4) / 2.
  • alternatively, when speaker group A includes only speaker a and speaker group B includes only speaker b, TA may be T1 and TB may be T2.
  • it should be noted that device B 220 can directly determine TA, instead of determining T1 and T3 and then determining TA based on T1 and T3; similarly, device B 220 can directly determine TB, instead of determining T2 and T4 and then determining TB based on T2 and T4.
  • the way to determine TA and TB can refer to the method shown in Figure 10 below.
  • FIG. 6 is a system architecture diagram provided by an embodiment of the present application.
  • the system shown in FIG. 6 can be understood as a combination of the systems shown in FIG. 4 and FIG. 5 .
  • the system in Figure 6 also includes device C 230.
  • Device C 230 can be the terminal device 100 mentioned above.
  • Device C 230 includes microphone f.
  • the position relationship type of device A 210 and device B 220 can be an up-down relationship.
  • the positional relationship type of device A 210 and device C 230 can be a left-right relationship.
  • Device A 210 emits sound signal A through speaker group A, and emits sound signal B through speaker group B.
  • Device B 220 receives sound signal A and sound signal B through microphone e, determines the propagation time TA of sound signal A from device A 210 to device B 220 and the propagation time TB of sound signal B from device A 210 to device B 220, and determines, according to the size relationship between TA and TB, whether device B 220 is above or below device A 210.
  • Device A 210 also emits sound signal C through speaker group C and sound signal D through speaker group D.
  • Device C 230 receives sound signal C and sound signal D through microphone f, determines the propagation time TC of sound signal C from device A 210 to device C 230 and the propagation time TD of sound signal D from device A 210 to device C 230, and determines, according to the size relationship between TC and TD, whether device C 230 is to the left or right of device A 210.
  • speaker group A may include at least one of speaker a and speaker b
  • speaker group B may include at least one of speaker c and speaker d
  • speaker group C may include at least one of speaker a and speaker c, and speaker group D may include at least one of speaker b and speaker d.
  • device A 210 may include multiple pairs of speaker groups, each speaker group may include at least one speaker, and each pair of speaker groups is located on the two sides of a certain plane, such that the positions of the pair of speaker groups conform to a type of positional relationship, for example an up-down relationship or a left-right relationship.
  • Each pair of speaker groups can emit a set of sound signals.
  • a set of sound signals includes two sound signals, each emitted by the speakers of a different speaker group. The emission moments and/or sound characteristics of the two sound signals are different, so that other devices can determine their specific relative positions with respect to device A 210 based on the two sound signals, thereby improving the efficiency of detecting the relative positions between devices.
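Combining the results of two such pairs, one pair resolving the up-down axis and the other the left-right axis, yields a full quadrant. A minimal sketch with hypothetical propagation times, assuming the first pair of speaker groups sits above/below and the second sits left/right of the emitting device:

```python
def quadrant(t_upper, t_lower, t_left, t_right):
    # Each pair of speaker groups resolves one axis; two pairs on
    # non-parallel planes together resolve a quadrant relative to the
    # emitting device.
    vertical = "above" if t_upper < t_lower else "below"
    horizontal = "left" if t_left < t_right else "right"
    return vertical, horizontal

# Hypothetical times: closer to the upper and the right speaker groups.
pos = quadrant(0.0030, 0.0042, 0.0051, 0.0024)
```

This is why the application requires the two planes to be non-parallel: two parallel planes would resolve the same axis twice and leave the other axis undetermined.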
  • the difference between the sound signal C and the sound signal D can be the same as the difference between the sound signal A and the sound signal B.
  • FIG. 7 is a system architecture diagram provided by an embodiment of the present application.
  • the location relationship type of device A 210 and device B 220 can be an up-down relationship.
  • Device A 210 may be the terminal device 100 shown in Figure 3.
  • Device B 220 may include speaker e.
  • the speaker e can be located at any position on device B 220. In some examples, the speaker e may be located on the frame of device B 220. For example, as shown in Figure 7, the speaker e is located on the shorter left side of device B 220.
  • device B 220 is located above device A 210.
  • the distance d1 from microphone a to speaker e of device B 220 and the distance d2 from microphone b to speaker e are smaller than the distance d3 from microphone c to speaker e and the distance d4 from microphone d to speaker e.
  • it can be understood that when device B 220 is located below device A 210 (not shown in Figure 7), the distance d1 from microphone a to speaker e and the distance d2 from microphone b to speaker e are greater than the distance d3 from microphone c to speaker e and the distance d4 from microphone d to speaker e.
  • the propagation speed v is also the same.
  • The size relationship between T1, T2, T3 and T4 is consistent with the size relationship between d1, d2, d3 and d4. That is, the distances between microphone a, microphone b, microphone c, microphone d and speaker e can be determined based on the propagation times of the sound signal from speaker e to microphone a, microphone b, microphone c and microphone d, and it can then be determined whether device B 220 is above or below device A 210.
  • device B 220 can emit sound signal A and sound signal B through speaker e.
  • Device A 210 receives sound signal A through microphone group A and receives sound signal B through microphone group B, where microphone group A may include at least one of microphone a and microphone b, and microphone group B may include at least one of microphone c and microphone d. Device A 210 determines the propagation time TA for sound signal A to propagate from speaker e to microphone group A and the propagation time TB for sound signal B to propagate from speaker e to microphone group B, and then determines, based on the difference between TA and TB, whether speaker e is closer to microphone group A or to microphone group B. If speaker e is closer to microphone group A, then device B 220 is closer to the location of microphone group A in device A 210; combined with whether microphone group A is at the top or the bottom of device A 210, it can be determined whether device B 220 is above or below device A 210.
  • For example, when microphone group A includes microphone a and microphone b, and microphone group B includes microphone c and microphone d, TA can be positively related to T1 and T2, and TB can be positively related to T3 and T4, where the positive correlation between TA and T1, T2 is the same as the positive correlation between TB and T3, T4. For example, TA = (T1+T2)/2 and TB = (T3+T4)/2.
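As a minimal sketch of the averaging rule above (the function name and the propagation-time values are illustrative assumptions, not from the original), the up-down decision for the Figure 7 arrangement can be written as:

```python
# Sketch: decide whether the emitter (speaker e of device B 220) is nearer
# the upper microphone group (a, b) or the lower one (c, d) by comparing
# the mean propagation times TA = (T1+T2)/2 and TB = (T3+T4)/2.

def vertical_position(t1, t2, t3, t4):
    """Return 'above' if the emitter is nearer microphones a and b,
    'below' if it is nearer microphones c and d."""
    ta = (t1 + t2) / 2  # mean propagation time to microphone group A (top)
    tb = (t3 + t4) / 2  # mean propagation time to microphone group B (bottom)
    return "above" if ta < tb else "below"

# Hypothetical times in milliseconds: the top microphones hear the
# signal sooner, so device B 220 is above device A 210.
print(vertical_position(1.0, 1.1, 2.0, 2.2))  # above
```

Any monotonic aggregation of the group's times would work equally well here; the average is just the example the description itself gives.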
  • In other embodiments, device A 210 may also include speakers and/or more microphones, and device B 220 may also include microphones and/or more speakers.
  • FIG. 8 is a system architecture diagram provided by an embodiment of the present application.
  • the position relationship type of device A 210 and device B 220 in Figure 8 can be a left-right relationship.
  • device B 220 is located to the right of device A 210.
  • The distance d1 from microphone a to speaker e of device B 220 and the distance d3 from microphone c to speaker e are greater than the distance d2 from microphone b to speaker e and the distance d4 from microphone d to speaker e. It can be understood that when device B 220 is located to the left of device A 210, the distance d1 from microphone a to speaker e and the distance d3 from microphone c to speaker e are smaller than the distance d2 from microphone b to speaker e and the distance d4 from microphone d to speaker e.
  • Device B 220 emits sound signal A and sound signal B through speaker e. Device A 210 receives sound signal A through microphone group A and receives sound signal B through microphone group B, where microphone group A may include at least one of microphone a and microphone c, and microphone group B may include at least one of microphone b and microphone d. Device A 210 determines the propagation time TA for sound signal A to propagate from speaker e to microphone group A and the propagation time TB for sound signal B to propagate from speaker e to microphone group B, and then determines, based on the difference between TA and TB, whether speaker e is closer to microphone group A or to microphone group B. If speaker e is closer to microphone group A, then device B 220 is closer to the location of microphone group A in device A 210; combined with whether microphone group A is on the left or the right of device A 210, it can be determined whether device B 220 is to the left or right of device A 210. Similarly, if speaker e is closer to microphone group B, then device B 220 is closer to the location of microphone group B in device A 210; combined with whether microphone group B is on the left or the right of device A 210, it can be determined whether device B 220 is to the left or right of device A 210.
  • For example, when microphone group A includes microphone a and microphone c, and microphone group B includes microphone b and microphone d, TA can be positively related to T1 and T3, and TB can be positively related to T2 and T4, where the positive correlation between TA and T1, T3 is the same as the positive correlation between TB and T2, T4. For example, TA = (T1+T3)/2 and TB = (T2+T4)/2.
  • FIG. 9 is a system architecture diagram provided by an embodiment of the present application.
  • the system shown in Figure 9 can be understood as a combination of the systems shown in Figures 7 and 8.
  • the system in Figure 9 also includes device C 230.
  • Device C 230 can be the terminal device 100 mentioned above.
  • Device C 230 includes speaker D.
  • In Figure 9, the position relationship type of device A 210 and device B 220 can be an up-down relationship, and the position relationship type of device A 210 and device C 230 can be a left-right relationship.
  • Device B 220 emits sound signal A and sound signal B through speaker e.
  • Device C 230 emits sound signal C and sound signal D through speaker D.
  • Device A 210 receives sound signal A and sound signal B through microphone group A and microphone group B, determines the propagation time TA for sound signal A to propagate from device B 220 to device A 210 and the propagation time TB for sound signal B to propagate from device B 220 to device A 210, and determines whether device B 220 is above or below device A 210 based on the size relationship between TA and TB.
  • Device A 210 also receives sound signal C and sound signal D through microphone group C and microphone group D, determines the propagation time TC for sound signal C to propagate from device C 230 to device A 210 and the propagation time TD for sound signal D to propagate from device C 230 to device A 210, and determines whether device C 230 is to the left or right of device A 210 based on the size relationship between TC and TD.
  • Microphone group A may include at least one of microphone a and microphone b; microphone group B may include at least one of microphone c and microphone d; microphone group C may include at least one of microphone a and microphone c; and microphone group D may include at least one of microphone b and microphone d.
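The four groupings above reuse the same four per-microphone propagation times. As a hedged sketch of the Figure 9 arrangement (the function name and time values are illustrative assumptions), both axes can be resolved from a single measurement per microphone:

```python
# Sketch: microphones a, b sit at the top of device A 210 and c, d at the
# bottom; a, c sit on the left and b, d on the right. Grouping the same
# four propagation times two different ways resolves both axes at once.

def relative_position(t_a, t_b, t_c, t_d):
    """t_x is the propagation time from the emitting device to microphone x."""
    vertical = "above" if (t_a + t_b) / 2 < (t_c + t_d) / 2 else "below"
    horizontal = "left" if (t_a + t_c) / 2 < (t_b + t_d) / 2 else "right"
    return vertical, horizontal

# Hypothetical times: the emitter is nearest microphone d (bottom right).
print(relative_position(2.4, 2.0, 2.0, 1.0))  # ('below', 'right')
```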
  • device A 210 may include multiple pairs of microphone groups.
  • Each microphone group may include at least one microphone.
  • Each pair of microphone groups is located on both sides of a certain plane, where the plane makes the relative position between the pair of microphone groups conform to a type of positional relationship, such as an up-down relationship or a left-right relationship.
  • Each pair of microphone groups can receive a set of sound signals.
  • A set of sound signals includes two sound signals, each received by a microphone of a different microphone group. The two sound signals have different sounding times and/or sound characteristics, so that the specific relative position between another device and device A 210 can be determined based on the two sound signals, which improves the efficiency of detecting relative positions between devices.
  • In the above description, the step of determining the relative position between device A 210 and device B 220 is performed by the device that receives the sound signal. It can be understood that, in actual applications, the device that receives the sound signal can also send the data required to determine the relative position between device A 210 and device B 220 to another device, and the other device then determines the relative position between device A 210 and device B 220 based on the received data.
  • For example, device B 220 can send the arrival time of sound signal A and the arrival time of sound signal B to device A 210; device A 210 then determines the propagation time TA of sound signal A and the propagation time TB of sound signal B based on the arrival time of sound signal A and the arrival time of sound signal B, and determines the relative position of device B 220 and device A 210 based on the size relationship between TA and TB.
  • Alternatively, device A 210 can send the arrival time of sound signal A and the arrival time of sound signal B to device B 220; device B 220 then determines the propagation time TA of sound signal A and the propagation time TB of sound signal B based on the arrival time of sound signal A and the arrival time of sound signal B, and determines the relative position of device B 220 and device A 210 based on the size relationship between TA and TB.
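Whichever device performs the computation, the arithmetic is identical. A minimal sketch (the function name is an assumption, and all timestamps are assumed to share one clock) of deriving TA and TB from a common sounding time and the two arrival times:

```python
# Sketch: with one shared sounding time, TA and TB are simple time
# differences, and their order alone fixes which speaker group the
# receiving device is closer to.

def closer_group(sounding_time, arrival_time_1, arrival_time_2):
    """arrival_time_1/2 are the arrival times of sound signals A and B."""
    ta = arrival_time_1 - sounding_time  # propagation duration of signal A
    tb = arrival_time_2 - sounding_time  # propagation duration of signal B
    return "A" if ta < tb else "B"

# Hypothetical timestamps in seconds on a shared clock:
print(closer_group(10.000, 10.004, 10.006))  # A
```

Note that when the two sounding times are equal, the subtraction cancels and comparing the raw arrival times is sufficient, which is exactly the comparison the description uses later.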
  • The above-mentioned sound signal A, sound signal B, sound signal C and sound signal D may be ultrasonic waves or infrasound waves, thereby reducing the possibility of the user hearing them, reducing or avoiding disturbance to the user during the process of detecting relative positions between devices, and improving user experience.
  • device A 210 includes speaker group A, and speaker group A includes speaker a, that is, speaker group A only includes one speaker.
  • device A 210 includes speaker group A.
  • Speaker group A includes speaker a and speaker c. That is, speaker group A includes multiple speakers.
  • Device A 210 emits sound signal A through speaker a and speaker c at sounding time 1, and device B 220 receives sound signal A twice at different times.
  • When device B 220 receives any sound signal, it can determine the arrival time of the sound signal based on the moment with the largest amplitude in the sound signal. Of course, in practical applications, device B 220 can also determine the arrival time of the sound signal through other methods.
  • Device A 210 can notify device B 220 of sounding time 1, and the embodiment of this application does not limit the manner in which device A 210 notifies device B 220 of sounding time 1. For example, device A 210 can send sounding time 1 to device B 220 through near-field communication such as Bluetooth or Wi-Fi.
  • Alternatively, device A 210 can carry sounding time 1 in sound signal A by modulating sound signal A, and emit sound signal A when sounding time 1 is reached; device B 220 then demodulates sound signal A to obtain sounding time 1.
  • The way in which device A 210 determines the propagation duration TA of sound signal A may be similar to or the same as the way in which device B 220 determines the propagation duration TA of sound signal A in the systems shown in Figures 4-6.
  • device B 220 includes microphone group A, and microphone group A includes microphone a, that is, microphone group A only includes one microphone.
  • device B 220 includes microphone group A, which includes microphone a and microphone c, that is, microphone group A includes multiple microphones.
  • The manner in which device A 210 or device B 220 determines the propagation duration TB of sound signal B may be the same as or similar to the manner in which TA is determined.
  • FIG. 12 is a flow chart of a method for detecting relative positions between devices provided by an embodiment of the present application. This method can be used in the system shown in any one of Figures 4 to 6. It should be noted that this method is not limited to the specific order described in Figure 12 and below. It should be understood that in other embodiments, the order of some steps of the method can be exchanged according to actual needs, or some of the steps can be omitted or deleted. The method includes the following steps:
  • Device A 210 detects the second event.
  • The second event is used to trigger the detection of the relative position relationship between device A 210 and device B 220. It should be noted that the second event may be an event set in advance.
  • the second event may include device A 210 establishing a communication connection with device B 220.
  • The relative position relationship between device A 210 and device B 220 may affect the operation of device A 210 and/or device B 220. Therefore, device A 210 can detect the relative position relationship between device A 210 and device B 220.
  • the second event may include device A 210 discovering device B 220.
  • device A 210 may determine to discover device B 220 when receiving a specific signal sent by device B 220.
  • For example, device A 210 may determine that device B 220 is discovered when receiving a beacon frame sent by device B 220, where the beacon frame can be used to indicate information such as the hotspot name of device B 220.
  • device A 210 may determine to discover device B 220 when device B 220 changes from an offline state to an online state.
  • device A 210 can also discover device B 220 through other methods. The embodiment of this application does not limit the way in which device A 210 discovers device B 220.
  • the second event may include device A 210 creating a first task, where the first task may be a task performed depending on the relative position relationship between device A 210 and device B 220. In some examples, the first task may be a task that device A 210 and device B 220 jointly process.
  • the second event may include device A 210 receiving a first request sent by device B 220, wherein the first request may be used to trigger detection of the relative position relationship between device A 210 and device B 220.
  • the second event may include a change in the posture of device A 210.
  • The change in posture may include movement, shaking, rotation, etc. of device A 210. Since a change in the posture of device A 210 may cause the relative position between device A 210 and device B 220 to change, the relative position relationship between device A 210 and device B 220 can be detected when the posture of device A 210 changes.
  • In some examples, the second event may include an event in which the posture change amplitude of device A 210 is greater than a preset amplitude threshold, where the amplitude threshold indicates how sensitive the operation of detecting the relative position between device A 210 and device B 220 is to changes in the posture of device A 210. When the amplitude threshold is small, device A 210 detects the relative position relationship between device A 210 and device B 220 upon a small change in its posture, that is, the frequency of detecting the relative position relationship between device A 210 and device B 220 is relatively high; when the amplitude threshold is large, device A 210 detects the relative position relationship between device A 210 and device B 220 only upon a large change in its posture, that is, the frequency of detecting the relative position relationship is relatively low.
  • the second event may be that device A 210 rotates by an angle greater than or equal to 90 degrees. It should be noted that the embodiment of the present application does not limit the attitude change mode of device A 210 and the size of the amplitude threshold.
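A minimal sketch of this trigger condition (the 90-degree value comes from the example above, but the function name and the bare-angle interface are illustrative assumptions):

```python
# Sketch: gate re-detection of the relative position on a posture change
# whose amplitude exceeds a preset threshold, e.g. a rotation of at
# least 90 degrees.

ANGLE_THRESHOLD_DEG = 90  # assumed amplitude threshold

def second_event_triggered(rotation_deg):
    """True when the rotation amplitude warrants a fresh detection."""
    return abs(rotation_deg) >= ANGLE_THRESHOLD_DEG

print(second_event_triggered(120))  # True
print(second_event_triggered(30))   # False
```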
  • In some examples, the second event may include a change in the posture of device B 220. Since a change in the posture of device B 220 may also cause the relative position between device A 210 and device B 220 to change, device A 210 can detect the relative position relationship between device A 210 and device B 220 when detecting that the posture of device B 220 changes. For example, the second event may be an event in which the posture change amplitude of device B 220 is greater than a preset amplitude threshold.
  • In some examples, the second event may include a change in the display mode of device A 210.
  • the display mode may include landscape display and portrait display, and the change in the display mode may include changing from landscape display to portrait display, or from portrait display to landscape display.
  • the display mode may include main screen display and secondary screen display, and the change in the display mode may include changing from main screen display to secondary screen display, or from secondary screen display to main screen display.
  • In some examples, the display mode may include split-screen display and full-screen display, and the change in the display mode may include changing from split-screen display to full-screen display, or from full-screen display to split-screen display.
  • device A 210 can also perform at least some of the following steps under other circumstances to detect the relative position between device A 210 and device B 220, so S1201 is an optional step.
  • Device A 210 determines speaker group A and speaker group B.
  • speaker group A and speaker group B may each include at least one speaker, and the number of speakers included in speaker group A and speaker group B may be less than or equal to the total number of speakers included in device A 210. Speaker group A and speaker group B may be located on both sides of a certain plane respectively.
  • The relative position between speaker group A and speaker group B is the third relative position, and the positional relationship type matched by the third relative position is the first position relationship type.
  • the first positional relationship type may be any of a plurality of positional relationship types into which the at least three speakers included in the device A 210 can be divided.
  • the multiple position relationship types may include at least one of top-bottom relationship and left-right relationship.
  • In some examples, the multiple position relationship types may also include other position relationship types, such as at least one of an upper-left/lower-right relationship and a lower-left/upper-right relationship.
  • the third relative position may be a relative position matching the first position relationship type.
  • the first positional relationship type includes two relative positions, and the third relative position may be any one of the two relative positions.
  • the third relative position between speaker group A and speaker group B may be that speaker group A is above speaker group B, and the first positional relationship type matching the third relative positional relationship may be an up-down relationship.
  • the third relative position between speaker group A and speaker group B may be that speaker group A is to the left of speaker group B, and the first positional relationship type matching the third relative positional relationship may be a left-right relationship.
  • device A 210 may first determine speaker group A and speaker group B.
  • The determined relative position between speaker group A and speaker group B is the third relative position, and the positional relationship type matched by the third relative position is the first position relationship type.
  • Alternatively, device A 210 may first determine the first position relationship type, and then determine speaker group A and speaker group B based on the first position relationship type, where the relative position between speaker group A and speaker group B is the third relative position.
  • In some examples, device A 210 stores multiple pairs of speaker groups, where each pair includes two speaker groups respectively located on both sides of a plane, and device A 210 can determine speaker group A and speaker group B among the multiple pairs of speaker groups.
  • device A 210 groups all or some of the at least three speakers included in device A 210 to determine speaker group A and speaker group B.
  • S1202 is an optional step.
  • Device A 210 emits sound signal A through speaker group A, and emits sound signal B through speaker group B.
  • The sounding time 1 at which device A 210 emits sound signal A and the sounding time 2 at which it emits sound signal B can be the same. In that case, in order to facilitate device B 220 distinguishing sound signal A from sound signal B, the sound characteristics of sound signal A and the sound characteristics of sound signal B can be different. In some examples, the frequency of sound signal A and the frequency of sound signal B may be different.
  • Alternatively, the sounding time 1 at which device A 210 emits sound signal A and the sounding time 2 at which it emits sound signal B may be different. Since sounding time 1 and sounding time 2 are different, device B 220 can already distinguish sound signal A from sound signal B based on sounding time 1 and sounding time 2; therefore, the frequency of sound signal A and the frequency of sound signal B can be the same or different.
  • In order to facilitate device B 220 receiving sound signal A and sound signal B more accurately and to improve the accuracy of subsequently detecting the relative position between device A 210 and device B 220, device A 210 can also send first configuration information to device B 220.
  • the first configuration information can be used to indicate the way to emit the sound signal A and the sound signal B; and/or the first configuration information can be used to indicate the sound characteristics of the sound signal A and the sound characteristics of the sound signal B; and/or , the first configuration information may be used to indicate the sounding time 1 when the sound signal A is issued and the sounding time 2 when the sound signal B is issued.
  • the first configuration information can also be used to indicate more or less information related to the relative position between detection devices.
  • For example, the first configuration information can be used to indicate the first position relationship type and/or the third relative position between speaker group A and speaker group B.
  • device A 210 may carry the first configuration information in sound signal A and/or sound signal B through modulation.
  • At least part of the information included in the first configuration information may also be sent by device A 210 to device B 220 in advance, or device B 220 may obtain at least part of the information through other methods. Therefore, device A 210 does not necessarily need to send this at least part of the information to device B 220 each time sound signal A and sound signal B are emitted.
  • For example, device A 210 can send information such as the sound characteristics of sound signal A and sound signal B and the manner of emitting sound signal A and sound signal B to device B 220 in advance, or the relevant technical personnel can preset, on device B 220 before it leaves the factory, the sound characteristics of sound signal A and sound signal B and the manner in which sound signal A and sound signal B are emitted.
  • device B 220 determines the first relative position between device A 210 and device B 220 based on the arrival time 1 of the received sound signal A and the arrival time 2 of the received sound signal B.
  • Since device A 210 emits sound signal A and sound signal B through speaker group A and speaker group B at the third relative position, device B 220 can, based on the arrival time 1 of the received sound signal A and the arrival time 2 of the received sound signal B, determine the size relationship between the propagation time TA of sound signal A and the propagation time TB of sound signal B, thereby determining the distance from device B 220 to speaker group A and the distance from device B 220 to speaker group B, and then, based on the third relative position of speaker group A and speaker group B and the first position relationship type, accurately determine the first relative position between device A 210 and device B 220. The first relative position matches the first position relationship type.
  • In some examples, device B 220 may receive sound signal A and sound signal B through the same microphone (such as microphone e), and arrival time 1 and arrival time 2 are the times at which that microphone receives sound signal A and sound signal B respectively.
  • Device B 220 may identify sound signal A and sound signal B through an autocorrelation algorithm. In some examples, device B 220 can obtain the sound characteristics of sound signal A and the sound characteristics of sound signal B sent by device A 210. When the similarity between the sound characteristics of a received sound signal and the sound characteristics of sound signal A is greater than a preset similarity threshold, it is determined that sound signal A has been received; when the similarity between the sound characteristics of a received sound signal and the sound characteristics of sound signal B is greater than the preset similarity threshold, it is determined that sound signal B has been received.
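One possible reading of this matching step, as a sketch under stated assumptions (a plain normalized correlation stands in for whatever correlation the device actually computes; the templates and the 0.9 threshold are illustrative, not from the original):

```python
import math

# Sketch: match a received frame against the known templates of sound
# signal A and sound signal B using normalized correlation, accepting a
# match only when the similarity exceeds a threshold.

def similarity(received, template):
    """Normalized dot-product similarity of two equal-length frames."""
    dot = sum(r * t for r, t in zip(received, template))
    norm = math.sqrt(sum(r * r for r in received)) * math.sqrt(sum(t * t for t in template))
    return dot / norm if norm else 0.0

def identify(received, template_a, template_b, threshold=0.9):
    """Return 'A' or 'B' when a template matches, else None."""
    if similarity(received, template_a) > threshold:
        return "A"
    if similarity(received, template_b) > threshold:
        return "B"
    return None

print(identify([1, 0, 1, 0], [1, 0, 1, 0], [0, 1, 0, 1]))  # A
```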
  • device B 220 may determine the time corresponding to the maximum amplitude of sound signal A as arrival time 1, and determine the time corresponding to the maximum amplitude of sound signal B as arrival time 2.
  • device B 220 can also determine arrival time 1 and arrival time 2 through other methods, as long as the method for determining arrival time 1 and the method for determining arrival time 2 are the same.
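The max-amplitude rule can be sketched as follows (the sampling rate, capture start time, and function name are illustrative assumptions); as the text notes, any alternative rule works as long as it is applied to both signals consistently:

```python
# Sketch: take the arrival time of a signal as the timestamp of its
# largest-magnitude sample within the captured frame.

def arrival_time(samples, sample_rate_hz, capture_start_s):
    """Return the timestamp (seconds) of the peak-amplitude sample."""
    peak_index = max(range(len(samples)), key=lambda i: abs(samples[i]))
    return capture_start_s + peak_index / sample_rate_hz

# Hypothetical 1 kHz capture starting at t = 0: the peak is sample 2.
print(arrival_time([0.0, 0.3, 0.9, 0.2], 1000, 0.0))  # 0.002
```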
  • device B 220 may receive the first configuration information sent by device A 210. In some examples, device B 220 can demodulate sound signal A and/or sound signal B, thereby obtaining the first configuration information carried in sound signal A or sound signal B.
  • device B 220 can compare the arrival time 1 and the arrival time 2. If arrival time 1 is less than arrival time 2, that is, TA is less than TB, then device B 220 can determine that device B 220 is closer to speaker group A than device B 220 to speaker group B. If arrival time 1 is greater than arrival time 2, that is, TA is greater than TB, then device B 220 can determine that device B 220 is closer to speaker group B than device B 220 is to speaker group A.
  • Alternatively, device B 220 can compare the propagation time TA of sound signal A with the propagation time TB of sound signal B, and likewise determine whether device B 220 is closer to speaker group A or to speaker group B.
  • For example, if the first position relationship type is an up-down relationship and speaker group A is located above speaker group B, then when device B 220 is closer to speaker group A than to speaker group B, device B 220 is above device A 210; conversely, when device B 220 is closer to speaker group B than to speaker group A, device B 220 is below device A 210.
  • Similarly, if the first position relationship type is a left-right relationship and speaker group A is located to the left of speaker group B, then when device B 220 is closer to speaker group B than to speaker group A, device B 220 is on the right side of device A 210; conversely, when device B 220 is closer to speaker group A than to speaker group B, device B 220 is on the left side of device A 210.
  • device B 220 may receive sound signal A and sound signal B through multiple microphones.
  • In some examples, the distance between the multiple microphones is negligible. For example, the multiple microphones are different microphone units in the same microphone array, and the duration for the same sound signal to reach the multiple microphones is almost the same, so the processing can still proceed as if sound signal A and sound signal B were received by the same microphone.
  • In other examples, the distance between the multiple microphones is relatively large, and the time required for the same sound signal to reach different microphones differs significantly. Device B 220 can then receive sound signal A and sound signal B through the multiple microphones respectively and obtain multiple first relative positions. When the multiple first relative positions are the same, the first relative position is determined to be valid.
  • Otherwise, the multiple first relative positions are determined to be invalid, and a new first relative position is re-determined. That is, the reliability of the first relative position can be ensured by receiving sound signal A and sound signal B through multiple microphones, determining a first relative position for each, and comparing the results.
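A minimal sketch of this consistency check (the result encoding and function name are illustrative assumptions):

```python
# Sketch: the first relative position is valid only if every
# per-microphone determination agrees; otherwise it is discarded and a
# new determination must be made.

def consensus(positions):
    """Return the agreed relative position, or None on disagreement."""
    if positions and all(p == positions[0] for p in positions):
        return positions[0]
    return None

print(consensus(["above", "above", "above"]))  # above
print(consensus(["above", "below", "above"]))  # None
```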
  • In other examples, device B 220 can receive sound signal A and sound signal B respectively through two of the microphones, and correct arrival time 1 and/or arrival time 2 based on the distance between the two microphones to obtain a new arrival time 1 and a new arrival time 2, thereby reducing or avoiding the influence of the positions of the two microphones on receiving sound signal A and/or sound signal B. The relative position between device B 220 and device A 210 can then be determined based on the new arrival time 1 and the new arrival time 2.
  • The first relative position between device A 210 and device B 220 can be determined based on the arrival time 1 of sound signal A and the arrival time 2 of sound signal B with reference to the time of arrival (TOA) technique.
  • device B 220 sends the first relative position to device A 210.
  • S1204 is an optional step.
  • device A 210 may, in response to detecting the second event, determine speaker group A and speaker group B, and the third relative position between speaker group A and speaker group B matches the first position relationship type.
  • The device B 220 can determine, based on the arrival time 1 of sound signal A and the arrival time 2 of sound signal B, the first relative position between device A 210 and device B 220 that matches the first position relationship type. This enables accurate detection of the relative position between devices through speakers and microphones, without relying on components such as radar, reducing the cost of detecting relative positions between devices.
  • When device A 210 performs S1203 to play sound signal A and sound signal B, it may continue playing until the second event is detected next time. In some examples, device A 210 may stop playing sound signal A and sound signal B when a third preset time period has elapsed after starting to play sound signal A and sound signal B in S1203. Of course, in practical applications, device A 210 can also determine the timing for stopping playback through other methods; for example, device A 210 can stop playing sound signal A and sound signal B when receiving, from device B 220, the first relative position between device A 210 and device B 220.
  • FIG. 13 is a flow chart of a method for detecting relative positions between devices provided by an embodiment of the present application. Among them, this method can be used in the system shown in any one of Figures 4 to 6.
  • Device A 210 can emit multiple sets of sound signals through multiple pairs of speaker groups, so that the relative positions between device A 210 and multiple devices at different locations, such as device B 220 and device C 230, can be determined. It should be noted that this method is not limited to the specific order described in Figure 13 and below. It should be understood that in other embodiments, the order of some steps of the method can be exchanged according to actual needs, or some of the steps can be omitted or deleted.
  • the method includes the following steps:
  • Device A 210 determines speaker group A, speaker group B, speaker group C, and speaker group D.
• Speaker group C and speaker group D may each include at least one speaker, and the number of speakers in each of them may be less than or equal to the total number of speakers included in device A 210. Speaker group C and speaker group D may be located on opposite sides of a certain plane.
• The relative position between speaker group C and speaker group D is the fourth relative position, and the positional relationship type matched by the fourth relative position is the second positional relationship type.
• The third relative position is different from the fourth relative position, and the second positional relationship type is different from the first positional relationship type.
• For example, the first positional relationship type between speaker group A and speaker group B is an up-down relationship, and the third relative position is that speaker group A is above speaker group B; the second positional relationship type between speaker group C and speaker group D is a left-right relationship, and the fourth relative position is that speaker group C is to the left of speaker group D.
• In addition, speaker group A and speaker group C (or speaker group D) may share at most some of the same speakers, and speaker group B and speaker group C (or speaker group D) may likewise share at most some of the same speakers. For example, speaker group A and speaker group C both include speaker a; speaker group A and speaker group D both include speaker b; speaker group B and speaker group C both include speaker c; and speaker group B and speaker group D both include speaker d.
• The method by which device A 210 determines speaker group A and speaker group B in S1301 can be the same as the method of determining speaker group A and speaker group B in S1202. The method by which device A 210 determines speaker group C and speaker group D in S1301 may be the same as or similar to the method of determining speaker group A and speaker group B in S1202, and is not described again here.
  • Device A 210 emits sound signal A through speaker group A, emits sound signal B through speaker group B, emits sound signal C through speaker group C, and emits sound signal D through speaker group D.
• The sound characteristics of sound signal A, sound signal B, sound signal C, and sound signal D are all different from one another.
  • device A 210 may simultaneously emit sound signal A through speaker group A, sound signal B through speaker group B, sound signal C through speaker group C, and sound signal D through speaker group D.
• In other examples, device A 210 may first emit sound signal A through speaker group A and sound signal B through speaker group B; then, after a first preset time period, emit sound signal C through speaker group C and sound signal D through speaker group D; then, after another first preset time period, emit sound signal A and sound signal B again, and so on, thereby cyclically emitting sound signal A, sound signal B, sound signal C, and sound signal D. It should be noted that the embodiments of the present application do not limit how the first preset duration is determined or how long it is.
  • device B 220 determines the first relative position between device A 210 and device B 220 based on the arrival time 1 of the received sound signal A and the arrival time 2 of the received sound signal B.
• The method by which device B 220 performs S1303a to determine the first relative position between device A 210 and device B 220 based on arrival time 1 of sound signal A and arrival time 2 of sound signal B can be the same as the method of performing S1204 to determine the first relative position between device A 210 and device B 220 based on arrival time 1 of sound signal A and arrival time 2 of sound signal B, and is not described again here.
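• The comparison underlying S1303a can be reduced to comparing the two propagation times. The following minimal Python sketch is an illustration only; the function name, the zero-default sounding times, and the returned strings are assumptions, not part of this application.

```python
# Illustrative sketch: a receiving device compares the propagation times of two
# sound signals to decide which speaker group it is nearer to.
# All names and defaults are assumptions, not taken from this application.

def relative_side(arrival_1: float, arrival_2: float,
                  sounding_1: float = 0.0, sounding_2: float = 0.0) -> str:
    """Decide which speaker group the receiving device is closer to.

    arrival_1 / arrival_2: arrival times of sound signal A / sound signal B.
    sounding_1 / sounding_2: the known sounding times of the two signals.
    """
    ta = arrival_1 - sounding_1  # propagation time TA of sound signal A
    tb = arrival_2 - sounding_2  # propagation time TB of sound signal B
    if ta < tb:
        return "closer to speaker group A"
    if ta > tb:
        return "closer to speaker group B"
    return "equidistant"

# With speaker group A above speaker group B (up-down relationship), a smaller
# TA suggests the receiving device is on speaker group A's side of the plane.
print(relative_side(0.010, 0.012))
```
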
• In some embodiments, device A 210 sends first configuration information to device B 220. The first configuration information can be used to indicate the sound characteristics of sound signal A and the sound characteristics of sound signal B, so that device B 220 can identify sound signal A based on its sound characteristics, identify sound signal B based on its sound characteristics, and ignore sound signal C and sound signal D. The first configuration information can also be used to indicate other information; for example, it can indicate the manner of emitting sound signal A and sound signal B, and/or the sounding time 1 at which sound signal A is emitted and the sounding time 2 at which sound signal B is emitted.
  • device C 230 determines the fifth relative position between device A 210 and device C 230 based on the arrival time 3 of the received sound signal C and the arrival time 4 of the received sound signal D.
  • the fifth relative position matches the second position relationship type.
• The method by which device C 230 performs S1303b to determine the fifth relative position between device A 210 and device C 230 based on arrival time 3 of sound signal C and arrival time 4 of sound signal D can be the same as or similar to the method by which device B 220 performs S1303a to determine the first relative position between device A 210 and device B 220 based on arrival time 1 of sound signal A and arrival time 2 of sound signal B, and is not described again here.
• In some embodiments, device A 210 sends second configuration information to device C 230. The second configuration information can be used to indicate the sound characteristics of sound signal C and the sound characteristics of sound signal D, so that device C 230 can identify sound signal C based on its sound characteristics, identify sound signal D based on its sound characteristics, and ignore sound signal A and sound signal B. The second configuration information can also be used to indicate other information; for example, it can indicate the manner of emitting sound signal C and sound signal D, and/or the sounding time 3 at which sound signal C is emitted and the sounding time 4 at which sound signal D is emitted.
• It should be noted that device A 210 can send at least part of the second configuration information to device C 230 in advance, or device C 230 can obtain at least part of this information in other ways; device A 210 therefore does not need to send that information to device C 230 every time it emits sound signal C and sound signal D.
• In some embodiments, device A 210 may send fourth configuration information to device B 220 and device C 230. The fourth configuration information may be used to indicate the manner of emitting sound signal A, sound signal B, sound signal C, and sound signal D; and/or the sound characteristics of sound signal A, sound signal B, sound signal C, and sound signal D; and/or the sounding time 1 at which sound signal A is emitted, the sounding time 2 at which sound signal B is emitted, the sounding time 3 at which sound signal C is emitted, and the sounding time 4 at which sound signal D is emitted.
  • Device B 220 may receive the sound signal C and the sound signal D, and then determine a second relative position between device B 220 and device A 210, where the second relative position matches the second position relationship type.
  • Device C 230 may receive the sound signal C and the sound signal D, and then determine a fifth relative position between device C 230 and device A 210, where the fifth relative position matches the second position relationship type.
• It should be noted that device A 210 can send at least part of the fourth configuration information to device B 220 and device C 230 in advance, or device B 220 and device C 230 can obtain at least part of this information in other ways; device A 210 therefore does not need to send that information to device B 220 and device C 230 every time it emits sound signal A, sound signal B, sound signal C, and sound signal D.
• In this embodiment of the present application, device A 210 can emit multiple sets of sound signals through multiple groups of speakers, where the relative positions between different pairs of speaker groups correspond to different position relationship types. In this way, each of multiple devices that are in different position relationship types with device A 210 can determine its relative position with respect to device A 210, which greatly improves the efficiency of detecting relative positions between devices.
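• One way to give each speaker group a distinguishable sound characteristic is to assign each group a distinct (for example, near-ultrasonic) frequency; a receiving device can then measure the energy at each known frequency and ignore signals it was not configured to listen for. The pure-Python sketch below uses a Goertzel-style detector; the frequencies, sample rate, and all names are illustrative assumptions, not specified by this application.

```python
import math

def tone_energy(samples, freq, rate):
    """Goertzel-style energy of frequency `freq` (Hz) in `samples` at `rate` Hz."""
    coeff = 2.0 * math.cos(2.0 * math.pi * freq / rate)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

rate = 48000           # assumed sample rate
n = 480                # 10 ms analysis window
# A pure tone standing in for sound signal A's assumed 19 kHz characteristic.
tone = [math.sin(2 * math.pi * 19000 * i / rate) for i in range(n)]

# Energy at the emitted frequency dwarfs energy at another group's frequency,
# so a receiver can keep sound signal A and ignore, say, sound signal C.
assert tone_energy(tone, 19000, rate) > 10 * tone_energy(tone, 21000, rate)
```
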
  • FIG. 14 is a flow chart of a method for detecting relative positions between devices provided by an embodiment of the present application.
  • this method can be used for device A 210 in any of the systems shown in Figures 4 to 6.
• Device A 210 can continue to execute the method shown in Figure 14 after the method shown in Figure 12, changing the manner in which sound signals are emitted so that the relative position between device A 210 and device B 220 is updated. It should be noted that this method is not limited to the specific order described in Figure 14 and below; in other embodiments, the order of some steps can be exchanged according to actual needs, or some steps can be omitted or deleted.
  • the method includes the following steps:
  • Device A 210 detects the first event.
  • the first event can be used to trigger device A 210 to switch the sound mode. It should be noted that the first event may be an event set in advance.
  • the first event may include a change in the posture of device A 210.
• the first event may include a change in the posture of device B 220.
  • the first event may include a change in the display mode of device A 210.
  • the first event may include device A 210 receiving a second request sent by device B 220, and the second request is used to request device A 210 to switch the sound mode.
• It should be noted that S1206 is an optional step. For example, in some examples, device A 210 may continue with at least some of the following steps when a second preset time period has elapsed after S1203. It should also be noted that the embodiment of the present application does not limit how the second preset duration is determined or how long it is.
  • Device A 210 determines speaker group C and speaker group D.
• Speaker group C and speaker group D may each include at least one speaker, and the number of speakers in each of them may be less than or equal to the total number of speakers included in device A 210. Speaker group C and speaker group D may be located on opposite sides of a certain plane.
• The relative position between speaker group C and speaker group D is the fourth relative position, and the positional relationship type matched by the fourth relative position is the second positional relationship type.
• The third relative position is different from the fourth relative position, and the second positional relationship type is different from the first positional relationship type.
  • S1207 is an optional step.
  • device A 210 emits sound signal C through speaker group C, and emits sound signal D through speaker group D.
• The manner in which device A 210 emits sound signal C through speaker group C and sound signal D through speaker group D in S1208 can be the same as or similar to the manner in which device A 210 emits sound signal A through speaker group A and sound signal B through speaker group B in S1203, and is not described again here.
  • device B 220 determines the second relative position between device A 210 and device B 220 based on the arrival time 3 of the received sound signal C and the arrival time 4 of the received sound signal D.
  • the second relative position may match the second position relationship type.
• The manner in which device B 220 performs S1209 to determine the second relative position between device A 210 and device B 220 based on arrival time 3 of sound signal C and arrival time 4 of sound signal D may be the same as or similar to the manner in which device C 230 determines the fifth relative position between device A 210 and device C 230 based on arrival time 3 of sound signal C and arrival time 4 of sound signal D in S1303b.
  • device B 220 sends the second relative position to device A 210.
  • S1210 is an optional step.
• In this embodiment of the present application, device A 210 can switch the manner of emitting sound signals in response to detecting the first event, so that the relative position between device B 220 and device A 210 can be updated, which improves the accuracy of detecting the relative position between device B 220 and device A 210.
  • the first positional relationship type matched by the third relative position is a left-right relationship.
  • Device B 220 determines a first relative position between device B 220 and device A 210 based on sound signal A and sound signal B. However, it may be that device B 220 is above or below device A 210 at this time, and it may be difficult for device B 220 to determine the first relative position based on sound signal A and sound signal B. Therefore, device B 220 sends a second request to device A 210.
  • the second request is used to indicate that device B 220 failed to determine the relative position with device A 210, or the second request is used to request device A 210 to switch the sound mode.
• When device A 210 receives the second request, it plays sound signal C through speaker group C and sound signal D through speaker group D, where the fourth relative position between speaker group C and speaker group D is, for example, that speaker group C is above speaker group D.
  • Device B 220 determines the second relative position based on sound signal C and sound signal D.
  • FIG. 15 is a flow chart of a method of grouping speakers according to an embodiment of the present application.
• This method can be used for device A 210 in any of the systems shown in Figures 4 to 6. It should be noted that this method is not limited to the specific sequence shown in Figure 15 and below; in other embodiments, the order of some steps can be exchanged according to actual needs, or some steps can be omitted or deleted.
  • the method includes the following steps:
  • Device A 210 determines the first state of Device A 210.
• When device A 210 is in different states, the relative positions of the speakers on device A 210 may also differ. For example, as shown in Figure 16, speaker a and speaker c are on the left side of device A 210 and speaker b and speaker d are on the right side of device A 210; when device A 210 is rotated 90 degrees clockwise, speaker c and speaker d are on the left side of device A 210 and speaker a and speaker b are on the right side of device A 210. Therefore, to make it easier for device A 210 to determine the current relative position of each speaker in device A 210, device A 210 may determine the first state of device A 210.
  • the first state may be used to indicate the state that device A 210 is in.
• The first state of device A 210 may include a first posture of device A 210 or a first display mode of device A 210, and device A 210 can determine its first posture through the sensor 150 in device A 210.
  • the first posture of device A 210 and the first display mode may or may not correspond.
• For example, when the screen rotation switch of device A 210 is turned on and the plane of the display screen of device A 210 is perpendicular or nearly perpendicular to the horizontal plane (that is, device A 210 is placed vertically), the first display mode of device A 210 corresponds to the first posture, and the positive direction of the content displayed on the display screen of device A 210 is parallel or nearly parallel to the direction of gravity of device A 210. For example, if the plane containing coordinate axis y and coordinate axis z is the horizontal plane and the direction of coordinate axis x is parallel to and opposite the direction of gravity, then when the plane of the display screen is perpendicular to the horizontal plane, the positive direction of the content displayed on the display screen is opposite to the direction of gravity.
  • Device A 210 may determine the corresponding first display mode based on the first gesture.
• When the screen rotation switch of device A 210 is turned off, or the plane of the display screen of device A 210 is parallel or nearly parallel to the horizontal plane (that is, device A 210 is placed horizontally), the first display mode of device A 210 may not correspond to the first posture, and the current first display mode of device A 210 may be a preset display mode or a user-specified display mode. For example, if the plane containing coordinate axis y and coordinate axis z is the horizontal plane and the direction of coordinate axis x is parallel to and opposite the direction of gravity, then the plane of the display screen of device A 210 is parallel to the horizontal plane, and the positive direction of the content displayed on the display screen of device A 210 is perpendicular to the direction of gravity of device A 210.
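• The posture-to-display-mode decision described above can be sketched as a check on where gravity points relative to the screen plane. The axis convention (gx, gy in the screen plane, gz along the screen normal), the thresholds, and the mode names below are assumptions for illustration only.

```python
# Illustrative sketch: choose the first display mode from the gravity vector
# measured in the device frame. Axis conventions and names are assumptions.

def first_display_mode(gx: float, gy: float, gz: float,
                       rotation_switch_on: bool,
                       default_mode: str = "portrait") -> str:
    """gx, gy: gravity components in the screen plane; gz: along the normal."""
    in_plane = (gx * gx + gy * gy) ** 0.5
    # Screen (nearly) parallel to the horizontal plane, or rotation disabled:
    # fall back to a preset or user-specified display mode.
    if not rotation_switch_on or in_plane < abs(gz):
        return default_mode
    # Otherwise the display mode follows the posture: the screen edge that
    # gravity points toward becomes the bottom of the displayed content.
    if abs(gy) >= abs(gx):
        return "portrait" if gy < 0 else "portrait-inverted"
    return "landscape" if gx < 0 else "landscape-inverted"

print(first_display_mode(0.0, -9.8, 0.0, True))  # held upright
print(first_display_mode(0.0, 0.0, -9.8, True))  # lying flat: preset mode
```
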
  • Device A 210 determines the first location relationship type.
• The first position relationship type may be used to indicate the type of positional relationship matched by the relative position between device A 210 and device B 220. When device A 210 emits sound signals through a pair of speaker groups matching the first position relationship type, another device in the first position relationship type with device A 210 can determine, based on the sound signals, the specific relative position between that device and device A 210 that matches the first position relationship type.
  • device A 210 may obtain the first location relationship type sent by device B 220. In some examples, device A 210 may obtain the first location relationship type from the first request or the second request sent by device B 220.
• In other embodiments, device A 210 may determine the first location relationship type based on the first task. In some examples, device A 210 may determine the first location relationship type based on the application corresponding to the first task. For example, when the first task needs to be processed on the basis of device A 210 and device B 220 being in a left-right relationship, the first location relationship type may be a left-right relationship; when the first task needs to be processed on the basis of device A 210 and device B 220 being in an up-down relationship, the first location relationship type may be an up-down relationship.
• It should be noted that device A 210 can also execute S1502 first to determine the first location relationship type and then execute S1501 to determine the first state, or execute S1501 and S1502 at the same time to determine the first state and the first location relationship type. This embodiment of the present application does not limit the order in which device A 210 determines the first location relationship type and the first state.
  • Device A 210 determines speaker group A and speaker group B based on the first state and the first position relationship type.
• Device A 210 may, based on the relative position actually required between a pair of speaker groups as indicated by the first position relationship type, and on the relative position of each speaker on device A 210 when device A 210 is in the first state, accurately determine speaker group A and speaker group B, where the third relative position between speaker group A and speaker group B matches the first positional relationship type.
• For example, device A 210 receives the first request sent by device B 220, and the first position relationship type carried in the first request is an up-down relationship; device A 210 can then determine that two speaker groups in an up-down relationship need to be divided.
  • Device A 210 is in landscape mode. Based on the landscape mode, it can be determined that speaker a and speaker b are currently at the top, and speaker c and speaker d are currently at the bottom. Therefore, it is determined that speaker group A includes speaker a and speaker b, and speaker group B includes speaker c and speaker d.
• For another example, the first location relationship type is still an up-down relationship, but device A 210 is in portrait mode. Based on the portrait mode, it can be determined that speaker a and speaker c are currently at the top and speaker b and speaker d are currently at the bottom; therefore, speaker group A includes speaker a and speaker c, and speaker group B includes speaker b and speaker d.
• In some embodiments, device A 210 may store a correspondence between display modes, positional relationship types, and speaker groups, where the correspondence includes the speaker group A and speaker group B corresponding to the first display mode and the first positional relationship type. When device A 210 determines the first display mode and the first position relationship type, it can determine speaker group A and speaker group B from this correspondence. In some examples, the correspondence between display modes, positional relationship types, and speakers stored by device A 210 can be as shown in Table 1 below.
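• Such a stored correspondence can be sketched as a lookup table keyed by (display mode, position relationship type). The up-down entries below follow the landscape and portrait examples given earlier; the left-right entries are illustrative assumptions added for symmetry, and none of this reproduces the actual Table 1.

```python
# Illustrative Table-1-style correspondence: (display mode, relationship type)
# -> (speaker group A, speaker group B). Left-right rows are assumptions.
CORRESPONDENCE = {
    ("landscape", "up-down"):    ({"a", "b"}, {"c", "d"}),
    ("portrait",  "up-down"):    ({"a", "c"}, {"b", "d"}),
    ("landscape", "left-right"): ({"a", "c"}, {"b", "d"}),  # assumed
    ("portrait",  "left-right"): ({"a", "b"}, {"c", "d"}),  # assumed
}

def determine_speaker_groups(display_mode: str, relationship_type: str):
    """Look up speaker group A and speaker group B for the current state."""
    return CORRESPONDENCE[(display_mode, relationship_type)]

group_a, group_b = determine_speaker_groups("landscape", "up-down")
print(group_a, group_b)
```
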
• In other embodiments, device A 210 may store a correspondence between postures, positional relationship types, and speaker groups, where the correspondence includes the speaker group A and speaker group B corresponding to the first posture and the first position relationship type. When device A 210 determines the first posture and the first position relationship type, it can determine speaker group A and speaker group B from this correspondence. In some examples, the correspondence between postures, positional relationship types, and speakers stored by device A 210 can be as shown in Table 2 below.
• In still other embodiments, device A 210 may store the first coordinates of each speaker on device A 210, where the first coordinates may be the coordinates of the speaker on device A 210 when device A 210 is in a preset second posture; that is, the first coordinates can be understood as the absolute coordinates of the speaker on device A 210. When device A 210 is in the first posture, the first coordinates corresponding to the second posture may be transformed into second coordinates corresponding to the first posture, and the second coordinates can be understood as relative coordinates corresponding to the first posture. After device A 210 determines the second coordinates of each speaker, it can determine speaker group A and speaker group B based on the second coordinates of each speaker and the first position relationship type.
• For example, the posture of device A 210 in Figure 4 is the second posture, and the manufacturer of device A 210 calibrates the second posture and the first coordinates of each speaker before device A 210 leaves the factory: the first coordinate of speaker a can be (-1, 1), the first coordinate of speaker b can be (1, 1), the first coordinate of speaker c can be (-1, -1), and the first coordinate of speaker d can be (1, -1). Device A 210 can determine, based on the first coordinates of each speaker, that speaker a is at the upper left, speaker b is at the upper right, speaker c is at the lower left, and speaker d is at the lower right.
• When device A 210 is rotated 90 degrees clockwise into the first posture, device A 210 transforms the first coordinates of each speaker based on the first posture: the second coordinate of speaker a is (1, 1), the second coordinate of speaker b is (1, -1), the second coordinate of speaker c is (-1, 1), and the second coordinate of speaker d is (-1, -1). Device A 210 can then determine, based on the second coordinates of each speaker, that speaker a is at the upper right, speaker b is at the lower right, speaker c is at the upper left, and speaker d is at the lower left.
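• The coordinate transformation in this example can be sketched as repeated 90-degree clockwise rotations of each speaker's calibrated first coordinates. This is a minimal illustration; the function names and the dictionary layout are assumptions.

```python
# Illustrative sketch: transform calibrated first coordinates (second posture)
# into second coordinates for a posture reached by clockwise 90-degree turns.

def rotate_cw(point):
    """Rotate a point 90 degrees clockwise about the origin: (x, y) -> (y, -x)."""
    x, y = point
    return (y, -x)

def second_coordinates(first_coords, quarter_turns):
    """Apply `quarter_turns` clockwise rotations to every speaker coordinate."""
    out = {}
    for speaker, point in first_coords.items():
        for _ in range(quarter_turns % 4):
            point = rotate_cw(point)
        out[speaker] = point
    return out

# Calibrated first coordinates from the example above.
first = {"a": (-1, 1), "b": (1, 1), "c": (-1, -1), "d": (1, -1)}
print(second_coordinates(first, 1))
# speaker a moves from the upper left (-1, 1) to the upper right (1, 1)
```
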
• In this embodiment of the present application, device A 210 can, based on the relative position actually required between a pair of speaker groups as indicated by the first position relationship type, and on the relative position of each speaker on device A 210 when device A 210 is in the first state, accurately determine speaker group A and speaker group B. This improves the accuracy of grouping the speakers, so that the relative position between device A 210 and device B 220 matching the first position relationship type can be accurately determined based on the sound signals emitted by the pair of speaker groups; that is, the accuracy of detecting the relative position between device A 210 and device B 220 is improved.
• FIG. 19 is a flow chart of a method for detecting relative positions between devices provided by an embodiment of the present application. This method can be used in any of the systems shown in Figures 7 to 9. It should be noted that this method is not limited to the specific order described in Figure 19 and below; in other embodiments, the order of some steps can be exchanged according to actual needs, or some steps can be omitted or deleted. The method includes the following steps:
  • Device A 210 detects the second event.
• It should be noted that device A 210 can also perform at least some of the following steps in other circumstances to detect the relative position between device A 210 and device B 220, so S1901 is an optional step.
  • Device A 210 determines microphone group A and microphone group B.
  • microphone group A and microphone group B may each include at least one microphone, and the number of microphones included in microphone group A and microphone group B may be less than or equal to the total number of microphones included in device A 210.
  • Microphone group A and microphone group B may be located on both sides of a certain plane respectively.
• The relative position between microphone group A and microphone group B is the third relative position, and the positional relationship type matched by the third relative position is the first positional relationship type.
• The first positional relationship type may be any one of the at least one positional relationship type into which the at least three microphones included in device A 210 can be grouped.
• For example, the third relative position between microphone group A and microphone group B may be that microphone group A is above microphone group B, in which case the first positional relationship type matched by the third relative position may be an up-down relationship; or the third relative position may be that microphone group A is to the left of microphone group B, in which case the first positional relationship type matched by the third relative position may be a left-right relationship.
• In some embodiments, device A 210 may first determine microphone group A and microphone group B; the relative position between the determined microphone group A and microphone group B is the third relative position, and the positional relationship type matched by the third relative position is the first positional relationship type. In other embodiments, device A 210 may first determine the first position relationship type and then determine microphone group A and microphone group B based on the first position relationship type, where the relative position between microphone group A and microphone group B is the third relative position.
• In some embodiments, device A 210 stores multiple pairs of microphone groups, where each pair includes two microphone groups located on opposite sides of a plane, and device A 210 can determine microphone group A and microphone group B from among the multiple pairs of microphone groups. In other embodiments, device A 210 groups all or some of the at least three microphones included in device A 210, thereby determining microphone group A and microphone group B.
  • the method for device A 210 to determine microphone group A and microphone group B may refer to the method shown in Figure 22 below.
  • the manner in which device A 210 performs S1902 to determine microphone group A and microphone group B may be similar to the manner in which device A 210 performs S1202 to determine speaker group A and speaker group B.
  • S1902 is an optional step.
  • device B 220 sends sound signal A and sound signal B.
• In some embodiments, device A 210 may send a third request to device B 220 when it determines microphone group A and microphone group B, where the third request is used to request that sound signal A and sound signal B be emitted. Device B 220 receives the third request and, in response, emits sound signal A and sound signal B.
  • device B 220 can first send out sound signal A and sound signal B, and send a first request to device A 210, and device A 210 performs S1901 when receiving the first request.
  • device B 220 may emit sound signal A and sound signal B through one speaker (such as speaker e).
• In other embodiments, device B 220 includes multiple speakers and the distance between the multiple speakers is negligible; for example, the multiple speakers are different speaker units in the same speaker array, and the time required for the same sound signal to travel from each speaker unit to the same microphone is almost the same. Device B 220 can then emit sound signal A and sound signal B through more than one of the multiple speakers.
• In some embodiments, device B 220 may send the first configuration information to device A 210. For the first configuration information and the manner of sending it, refer to the related description in S1202 above.
  • device A 210 determines the first relative position between device A 210 and device B 220 based on the arrival time 1 of sound signal A received through microphone group A and the arrival time 2 of sound signal B received through microphone group B.
• Since device A 210 receives sound signal A through microphone group A and sound signal B through microphone group B, device A 210 can, based on arrival time 1 of sound signal A and arrival time 2 of sound signal B, determine the relative magnitudes of the propagation time TA of sound signal A and the propagation time TB of sound signal B, and thereby the distance between device B 220 and microphone group A and the distance between device B 220 and microphone group B. Device A 210 can then, based on the third relative position between microphone group A and microphone group B and the first positional relationship type, accurately determine the first relative position between device A 210 and device B 220, where the first relative position matches the first position relationship type.
  • the way device A 210 identifies sound signal A and sound signal B, and the way device A 210 determines the first relative position between device A 210 and device B 220 based on arrival time 1 and arrival time 2, can be the same as or similar to the way device B 220 identifies sound signal A and sound signal B in the aforementioned S1204 and determines the first relative position based on arrival time 1 and arrival time 2, so the details are not repeated here.
  • device A 210 sends the first relative position to device B 220.
  • S1905 is an optional step.
  • device A 210 may, in response to detecting the second event, determine microphone group A and microphone group B such that the third relative position between microphone group A and microphone group B matches the first positional relationship type.
  • Device B 220 emits sound signal A and sound signal B; device A 210 receives sound signal A through microphone group A and sound signal B through microphone group B, and determines, based on arrival time 1 of sound signal A and arrival time 2 of sound signal B, the first relative position between device A 210 and device B 220 that matches the first positional relationship type. That is, the relative position between the devices is detected through acoustic-electric transducers, without the need to rely on components such as radar, which reduces the cost of detecting relative positions between devices.
  • In some examples, when performing S1904 to receive sound signal A and sound signal B, device A 210 may continue receiving until the second event is next detected. In other examples, device A 210 may stop receiving sound signal A and sound signal B after a third preset time period has elapsed since it started receiving them in S1904. Of course, in practical applications device A 210 can also determine the time to stop receiving sound signal A and sound signal B in other ways; for example, it can stop receiving them once it has determined the first relative position between device A 210 and device B 220.
  • FIG. 20 is a flow chart of a method for detecting relative positions between devices provided by an embodiment of the present application.
  • this method can be used in the system shown in any one of Figures 7 to 9.
  • Device A 210 can receive multiple sets of sound signals through multiple pairs of microphone groups, so that the locations of multiple devices, such as device B 220 and device C 230, can be determined.
  • this method is not limited to the specific sequence shown in Figure 20 and described below. It should be understood that in other embodiments, the order of some steps of the method can be exchanged according to actual needs, or some steps can be omitted or deleted.
  • the method includes the following steps:
  • Device A 210 determines microphone group A, microphone group B, microphone group C, and microphone group D.
  • microphone group C and microphone group D may each include at least one microphone, and the number of microphones included in microphone group C and microphone group D may be less than or equal to the total number of microphones included in device A 210.
  • Microphone group C and microphone group D may be located on both sides of a certain plane respectively.
  • the relative position between microphone group C and microphone group D is the fourth relative position, and the positional relationship type matched by the fourth relative position is the second positional relationship type.
  • the third relative position is different from the fourth relative position
  • the second positional relationship type is different from the first positional relationship type.
  • For example, the first positional relationship type between microphone group A and microphone group B is an up-down relationship, and the third relative position is that microphone group A is above microphone group B; the second positional relationship type between microphone group C and microphone group D is a left-right relationship, and the fourth relative position is that microphone group C is to the left of microphone group D.
  • The way device A 210 determines microphone group A and microphone group B in S2001 can be the same as the way microphone group A and microphone group B are determined among the multiple microphones in S1901; the way it determines microphone group C and microphone group D may be similar or identical to that way, and is not described again here.
  • microphone group A and microphone group C or microphone group D may include at most some of the same microphones
  • microphone group B and microphone group C or microphone group D may include at most some of the same microphones.
  • For example, microphone group A and microphone group C both include microphone a; microphone group A and microphone group D both include microphone b; microphone group B and microphone group C both include microphone c; and microphone group B and microphone group D both include microphone d.
  • device B 220 emits sound signal A and sound signal B.
  • device B 220 sends first configuration information to device A 210.
  • the first configuration information may be used to indicate the sound characteristics of sound signal A and the sound characteristics of sound signal B. Device A 210 may then identify sound signal A based on the sound characteristics of sound signal A, identify sound signal B based on the sound characteristics of sound signal B, and ignore sound signal C and sound signal D.
  • the first configuration information can also be used to indicate other information.
  • the first configuration information can be used to indicate the way sound signal A and sound signal B are emitted; and/or the first configuration information can be used to indicate sounding time 1 at which sound signal A is emitted and sounding time 2 at which sound signal B is emitted.
  • the way in which the device B 220 sends out the sound signal A and the sound signal B in S2002a can be the same as the way in which the device B 220 sends out the sound signal A and the sound signal B in the aforementioned S1903.
  • device C 230 sends out sound signal C and sound signal D.
  • device C 230 may send second configuration information to device A 210.
  • the second configuration information may be used to indicate the sound characteristics of sound signal C and the sound characteristics of sound signal D. Device A 210 may then identify sound signal C based on the sound characteristics of sound signal C, identify sound signal D based on the sound characteristics of sound signal D, and ignore sound signal A and sound signal B.
  • the second configuration information can also be used to indicate other information.
  • the second configuration information can be used to indicate the way sound signal C and sound signal D are emitted; and/or the second configuration information can be used to indicate sounding time 3 at which sound signal C is emitted and sounding time 4 at which sound signal D is emitted.
  • the sound characteristics of sound signal A, the sound characteristics of sound signal B, the sound characteristics of sound signal C, and the sound characteristics of sound signal D are each different.
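Because the four signals have distinct sound characteristics, the receiver can tell them apart before timing them. A minimal sketch, assuming (hypothetically) that each signal is a pure tone at a distinct frequency and that identification is done by finding the dominant frequency of the received samples; the frequencies and sample rate below are illustrative, not from the patent:

```python
import numpy as np

FS = 48_000  # sample rate in Hz (assumed)

# Assumed sound characteristics: one distinct tone per signal.
SIGNAL_FREQS = {"A": 19_000, "B": 20_000, "C": 21_000, "D": 22_000}

def identify_signal(samples, fs=FS):
    """Return the label of the known tone closest to the dominant frequency."""
    spectrum = np.abs(np.fft.rfft(samples))
    dominant = np.fft.rfftfreq(len(samples), 1 / fs)[np.argmax(spectrum)]
    return min(SIGNAL_FREQS, key=lambda k: abs(SIGNAL_FREQS[k] - dominant))

t = np.arange(4096) / FS
tone_c = np.sin(2 * np.pi * 21_000 * t)  # synthetic "sound signal C"
print(identify_signal(tone_c))  # → C
```

With such a scheme, microphone groups A/B would keep only signals identified as A or B and discard C and D, as described above.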
  • the way device C 230 emits sound signal C and sound signal D in S2002b can be the same as the way device B 220 emits sound signal A and sound signal B in S2002a.
  • Device A 210 determines the first relative position between device A 210 and device B 220 based on the arrival time 1 of sound signal A received through microphone group A and the arrival time 2 of sound signal B received through microphone group B, and determines the fifth relative position between device A 210 and device C 230 based on the arrival time 3 of sound signal C received through microphone group C and the arrival time 4 of sound signal D received through microphone group D.
  • the first relative position matches the first positional relationship type
  • the fifth relative position matches the second positional relationship type
  • The way device A 210 determines the first relative position between device A 210 and device B 220 based on arrival time 1 of sound signal A received through microphone group A and arrival time 2 of sound signal B received through microphone group B can be the same as the corresponding way in S1904; the way device A 210 determines the fifth relative position between device A 210 and device C 230 based on arrival time 3 of sound signal C received through microphone group C and arrival time 4 of sound signal D received through microphone group D can be similar to that way, and is not repeated here.
  • Device A 210 can receive multiple sets of sound signals through multiple sets of microphone groups, where the relative positions between different sets of microphone groups correspond to different positional relationship types. Device A 210 can therefore determine the relative positions between multiple devices and device A 210 under multiple positional relationship types, which greatly improves the efficiency of detecting the relative positions of the devices.
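The multi-device flow of Figure 20 can be sketched as a configuration that pairs each remote device with its own microphone-group pair and positional relationship type. All names, times, and labels below are illustrative assumptions, not values from the patent:

```python
# Hypothetical configuration: each remote device is bound to a pair of
# microphone groups and the positional relationship type they detect.
PAIRS = {
    "device B": {"groups": ("mic group A", "mic group B"), "type": "up-down"},
    "device C": {"groups": ("mic group C", "mic group D"), "type": "left-right"},
}

def locate_all(arrivals):
    """arrivals maps device -> (arrival time at first group, at second group)."""
    positions = {}
    for dev, cfg in PAIRS.items():
        t1, t2 = arrivals[dev]
        first, second = cfg["groups"]
        near = first if t1 < t2 else second  # earlier arrival = nearer group
        positions[dev] = f"{cfg['type']}: nearer {near}"
    return positions

print(locate_all({"device B": (0.010, 0.012), "device C": (0.015, 0.011)}))
```

One pass over the configuration resolves both devices' positions, which is the efficiency gain described above.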
  • FIG. 21 is a flow chart of a method for detecting relative positions between devices provided by an embodiment of the present application. This method can be used for device A 210 in any of the systems shown in Figures 7-9. Device A 210 can continue to execute the method shown in Figure 21 after the method shown in Figure 19, thereby changing the way the sound signal is received so that the relative position between device A 210 and device B 220 is updated. It should be noted that this method is not limited to the specific order shown in Figure 21 and described below. It should be understood that in other embodiments, the order of some steps of the method can be exchanged according to actual needs, or some steps can be omitted or deleted. The method includes the following steps:
  • the first event can be used to trigger device A 210 to switch the sound pickup mode. It should be noted that the first event may be an event set in advance.
  • the first event may include a change in the posture of device A 210.
  • the first event may include a change in the posture of device B 220.
  • the first event may include a change in the display mode of device A 210.
  • the first event may include device A 210 receiving a second request sent by device B 220, where the second request is used to request device A 210 to switch the sound pickup mode.
  • S1906 is an optional step.
  • device A 210 may continue to perform at least some of the following steps during a second preset time period after performing S1904. It should also be noted that the embodiment of the present application does not limit the method of determining the second preset duration or the length of the second preset duration.
  • device A 210 determines microphone group C and microphone group D.
  • microphone group C and microphone group D may each include at least one microphone, and the number of microphones included in microphone group C and microphone group D may be less than or equal to the total number of microphones included in device A 210.
  • Microphone group C and microphone group D may be located on both sides of a certain plane respectively. The relative position between microphone group C and microphone group D is the fourth relative position, and the positional relationship type matched by the fourth relative position is the second positional relationship type.
  • the third relative position is different from the fourth relative position
  • the second positional relationship type is different from the first positional relationship type
  • device B 220 sends out sound signal C and sound signal D.
  • the way device B 220 emits sound signal C and sound signal D in S1908 can be the same as or similar to the way device B 220 emits sound signal A and sound signal B in S1903, and is not described again here.
  • device A 210 determines the second relative position between device A 210 and device B 220 based on the arrival time 3 of the sound signal C received through the microphone group C and the arrival time 4 of the sound signal D received through the microphone group D.
  • The way device A 210 determines the second relative position between device A 210 and device B 220 based on the arrival time 3 of sound signal C received through microphone group C and the arrival time 4 of sound signal D received through microphone group D can be similar to the way in S1904 in which device A 210 determines the first relative position between device A 210 and device B 220 based on the arrival time 1 of sound signal A received through microphone group A and the arrival time 2 of sound signal B received through microphone group B, and is not repeated here.
  • device A 210 sends the second relative position to device B 220.
  • S1910 is an optional step.
  • device A 210 can switch the mode of receiving the sound signal in response to detecting the first event, so that device B 220 and device A 210 can update the relative position between them, improving the accuracy of detecting the relative position of device B 220 and device A 210.
  • FIG. 22 is a flow chart of a method of grouping microphones provided by an embodiment of the present application. This method can be used for device A 210 in any of the systems shown in Figures 7 to 9. It should be noted that this method is not limited to the specific order shown in Figure 22 and described below. It should be understood that in other embodiments, the order of some steps of the method can be exchanged according to actual needs, or some steps can be omitted or deleted. The method includes the following steps:
  • Device A 210 determines the first state of Device A 210.
  • When device A 210 is in different states, the relative position of each microphone on device A 210 may also differ. For example, microphone a and microphone c are on the left side of device A 210 and microphone b and microphone d are on the right side; when device A 210 is rotated 90 degrees clockwise, microphone c and microphone d are on the left side of device A 210 and microphone a and microphone b are on the right side, as shown in Figure 23. Therefore, to enable device A 210 to determine the current relative position of each microphone in device A 210, device A 210 may determine the first state.
  • the first state may be used to indicate the state that device A 210 is in.
  • the first state of device A 210 may include a first posture of device A 210 or a first display mode of device A 210.
  • Device A 210 determines the first location relationship type.
  • the first position relationship type may be used to indicate the position relationship type matched by the relative positions between device A 210 and device B 220.
  • device A 210 can receive sound signals through a pair of microphone groups matching the first positional relationship type, and determine, based on the received sound signals, the specific relative position between device A 210 and device B 220 that matches the first positional relationship type.
  • Device A 210 determines microphone group A and microphone group B based on the first state and the first position relationship type.
  • Based on the first positional relationship type, which indicates the relative position actually required for detecting the position between device A 210 and device B 220, and the relative position of each microphone on device A 210 when device A 210 is in the first state, device A 210 accurately determines microphone group A and microphone group B, where the third relative position between microphone group A and microphone group B matches the first positional relationship type.
  • For example, device A 210 receives the first request sent by device B 220, and the first positional relationship type carried in the first request is an up-down relationship; device A 210 can then determine that two microphone groups in an up-down relationship need to be divided.
  • Device A 210 is in landscape mode. Based on the landscape mode, it can be determined that microphone a and microphone b are currently at the top, and microphone c and microphone d are currently at the bottom. Therefore, it is determined that microphone group A includes microphone a and microphone b, and microphone group B includes microphone c and microphone d.
  • the first location relationship type is still an up-down relationship.
  • Device A 210 is in a portrait posture. Based on the portrait mode, it can be determined that microphone a and microphone c are currently at the top, and microphone b and microphone d are currently at the bottom. Therefore, it is determined that microphone group A includes microphone a and microphone c, and microphone group B includes microphone b and microphone d.
  • device A 210 may store a corresponding relationship between the display mode and the positional relationship type and the microphone group.
  • the correspondence includes microphone group A and microphone group B corresponding to the first display mode and the first positional relationship type; when device A 210 determines the first display mode and the first positional relationship type, it can determine microphone group A and microphone group B from the correspondence.
  • device A 210 may store a correspondence between gestures and position relationship types and microphone groups.
  • the correspondence includes microphone group A and microphone group B corresponding to the first posture and the first positional relationship type; when device A 210 determines the first posture and the first positional relationship type, it can determine microphone group A and microphone group B from the correspondence.
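The stored correspondence described above can be sketched as a simple lookup table. The entries below follow the landscape and portrait examples given earlier in this section (landscape, up-down → group A = {a, b}, group B = {c, d}; portrait, up-down → group A = {a, c}, group B = {b, d}); the table structure itself is an illustrative assumption:

```python
# Hypothetical stored correspondence:
# (display mode, positional relationship type) -> (microphone group A, B)
CORRESPONDENCE = {
    ("landscape", "up-down"): (("a", "b"), ("c", "d")),
    ("portrait",  "up-down"): (("a", "c"), ("b", "d")),
}

def pick_groups(display_mode, relationship_type):
    """Look up the microphone-group pair for the current state and type."""
    return CORRESPONDENCE[(display_mode, relationship_type)]

group_a, group_b = pick_groups("landscape", "up-down")
print(group_a, group_b)  # → ('a', 'b') ('c', 'd')
```

An analogous table keyed on posture instead of display mode covers the posture-based correspondence described above.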
  • device A 210 may store the first coordinates of each microphone on device A 210.
  • the first coordinates may be the coordinates of the microphone on device A 210 when device A 210 is in the preset second posture.
  • the first coordinate can be understood as the absolute coordinate of the microphone on device A 210.
  • the first coordinates corresponding to the second posture may be transformed into second coordinates corresponding to the first posture, and the second coordinates may be understood as relative coordinates corresponding to the first posture.
  • after device A 210 determines the second coordinates of each microphone, it can determine microphone group A and microphone group B based on the second coordinates of each microphone and the first positional relationship type.
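The coordinate-based variant above can be sketched as a planar rotation of the stored first coordinates followed by a split along the relevant axis. The coordinate values and helper names are illustrative assumptions; a real device would derive the rotation angle from its orientation sensors:

```python
import math

# First coordinates: microphone positions when device A 210 is in the
# preset second posture (illustrative absolute coordinates).
FIRST_COORDS = {"a": (-1, 1), "b": (1, 1), "c": (-1, -1), "d": (1, -1)}

def second_coords(angle_deg):
    """Rotate the absolute coordinates into the current (first) posture."""
    th = math.radians(angle_deg)
    return {
        m: (x * math.cos(th) - y * math.sin(th),
            x * math.sin(th) + y * math.cos(th))
        for m, (x, y) in FIRST_COORDS.items()
    }

def split_up_down(coords):
    """Divide microphones above / below the horizontal mid-plane."""
    upper = sorted(m for m, (_, y) in coords.items() if y > 0)
    lower = sorted(m for m, (_, y) in coords.items() if y < 0)
    return upper, lower

print(split_up_down(second_coords(0)))    # → (['a', 'b'], ['c', 'd'])
print(split_up_down(second_coords(-90)))  # → (['a', 'c'], ['b', 'd'])
```

The second call models the device rotated 90 degrees clockwise, reproducing the regrouping described earlier for the portrait case.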
  • The first positional relationship type can indicate the relative position between a pair of microphone groups to be divided, and the first state can be used to determine the current relative position on device A 210 of each microphone included in device A 210. Therefore, device A 210 can accurately determine microphone group A and microphone group B among the multiple microphones based on the first state and the first positional relationship type.
  • FIG. 24 is a flow chart of a method for multi-device collaborative processing of tasks provided by an embodiment of the present application.
  • the multiple devices may include device A 210 and device B 220.
  • Device A 210 may be a tablet computer
  • device B 220 may be a mobile phone.
  • the embodiment of this application only uses device A 210 and device B 220 as an example to illustrate the method of multi-device collaborative processing of tasks, and does not limit the specific content of the task or the number and types of devices that collaboratively process the task.
  • this method is not limited to the specific order shown in Figure 24 and described below. It should be understood that in other embodiments, the order of some steps of the method can be exchanged according to actual needs, or some steps can be omitted or deleted.
  • the method includes the following steps:
  • Device A 210 and device B 220 can discover each other through short-range communication such as Bluetooth and Wi-Fi, and establish a communication connection. It should be noted that the embodiment of this application does not limit the way in which device A 210 and device B 220 discover each other.
  • Device A 210 and Device B 220 determine the first relative position between Device A 210 and Device B 220.
  • device A 210 and device B 220 can determine the first relative position between device A 210 and device B 220 according to the methods and related descriptions shown in the aforementioned Figures 12-15, or according to the methods and related descriptions shown in Figures 19-22.
  • Device A 210 notifies device B 220 of the first collaboration mode.
  • Device A 210 can determine the first collaboration mode corresponding to the first relative position according to the first relative position, and notify device B 220 of the first collaboration mode.
  • the first collaboration mode can be used to indicate the way device A 210 and device B 220 operate together when device A 210 and device B 220 are in the first relative position.
  • device A 210 stores a correspondence between relative positions and collaboration modes, and obtains the first collaboration mode corresponding to the first relative position from the correspondence according to the first relative position.
  • the correspondence between the relative position and the collaboration mode may be determined in advance by device A 210, and the correspondence includes the correspondence between the first relative position and the first collaboration mode.
  • device A 210 can also determine the first collaboration mode corresponding to the first relative position in other ways; the embodiments of this application do not limit the method of determining the first collaboration mode corresponding to the first relative position.
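The stored correspondence between relative position and collaboration mode can be sketched the same way as the microphone-group table. The keys and mode descriptions below are illustrative assumptions loosely based on the presentation example later in this section, not entries defined by the patent:

```python
# Hypothetical correspondence between relative position and collaboration
# mode, determined in advance by device A 210.
COLLAB_MODES = {
    "B left of A":  "B: directory/tool mode, A: preview mode",
    "B right of A": "A: editing mode, B: note-editing interface",
}

def collaboration_mode(relative_position, default="mirror display"):
    # Fall back to a default mode when no entry is stored for the position.
    return COLLAB_MODES.get(relative_position, default)

print(collaboration_mode("B left of A"))
```

When the relative position is updated (S2405), the same lookup yields the second collaboration mode.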
  • the first task may be any kind of task, and the embodiment of the present application does not limit the type of the first task.
  • the first display content included in the first interface may be at least part of the second display content included in the second interface, so as to display details of the at least part of the display content and/or allow the user to edit at least part of the displayed content.
  • the third display content included in the third interface may be content associated with the second display content included in the second interface.
  • the third display content may be annotations and explanations of the second display content.
  • the first interface may be a subordinate interface of the second interface (or the second interface may be a superior interface of the first interface), and the third interface may be a subordinate interface of the first interface (or the first interface may be a superior interface of the third interface). For any two interfaces, if the two interfaces include an upper-level interface and a lower-level interface (in some examples, the upper-level interface can also be called a parent interface, and the lower-level interface can also be called a sub-interface), then the lower-level interface can be generated based on the upper-level interface or exist depending on the upper-level interface.
  • In one example, the first relative position is that device B 220 is above device A 210, device A 210 displays the first interface, and device B 220 displays the second interface; in another example, the first relative position is that device B 220 is above device A 210, device A 210 displays the first interface, and device B 220 displays the third interface.
  • The collaboration modes in which device A 210 and device B 220 process the first task may be the same or different, and at a given relative position, the collaboration mode in which device A 210 and device B 220 collaboratively process the first task is not limited to the several modes mentioned in the embodiments of this application.
  • device A 210 and device B 220 determine the second relative position between device A 210 and device B 220.
  • device A 210 and device B 220 may perform S2405 to determine the second relative position between device A 210 and device B 220 when the state of device A 210 changes or device B 220 moves.
  • device A 210 and device B 220 can also determine the second relative position between device A 210 and device B 220 at other times.
  • device A 210 notifies device B 220 of the second collaboration mode.
  • device A 210 and device B 220 can determine the first relative position between device A 210 and device B 220, and thereby collaboratively process the first task based on the first collaboration mode corresponding to the first relative position; that is, the way of collaboratively processing the first task corresponds to the first relative position, which improves the reliability and user experience of collaboratively processing the first task.
  • device A 210 and device B 220 can also collaboratively process the first task based on the second collaboration mode corresponding to the second relative position.
  • the way in which device A 210 and device B 220 collaboratively process the first task can match the relative position between device A 210 and device B 220, which improves the accuracy and flexibility of processing the first task and also improves the user experience.
  • device A 210 is currently processing the presentation alone.
  • When device B 220 moves to within a certain distance of device A 210, device B 220 discovers device A 210 and displays the HyperTerminal option included in the control center interface.
  • the icon and name of device A 210, "My Tablet", are displayed in card 1400.
  • When device B 220 receives the user's click operation on the icon of device A 210, device B 220 sends a first request to device A 210, thereby requesting detection of the relative position between device A 210 and device B 220. Please continue to refer to Figure 26.
  • When device A 210 receives the first request, it displays prompt information 1500.
  • the prompt information 1500 includes text such as "Mobile phone P30 requests to join", an agree button, and an ignore button. If device A 210 receives the user's click operation on the ignore button, it ignores the prompt information.
  • If device A 210 receives the user's click operation on the agree button, it determines that the current first display mode of device A 210 is landscape mode and, according to the landscape mode, determines that the first positional relationship type is a left-right relationship. Then, according to the landscape mode and the left-right relationship, device A 210 controls the speakers on its left side (i.e., speaker b and speaker d) to emit sound signal A and the speakers on its right side (i.e., speaker a and speaker c) to emit sound signal B, as shown in Figure 27. Sound signal A can be a left-channel signal, and sound signal B can be a right-channel signal.
  • "Joining" is displayed below the icon of device A 210 in the HyperTerminal card 1400 of device B 220 to prompt the user that device B 220 is joining the first task processed by device A 210.
  • the device B 220 determines that the first relative position is that the device B 220 is on the left side of the device A 210 , and sends the first relative position to the device A 210 .
  • device A 210 determines that the corresponding first collaboration mode is to display a directory interface on device B 220 and display a specific page of document content on device A 210.
  • the display mode of device A 210 can be called preview mode
  • the display mode of device B 220 can be called tool mode.
  • Device A 210 sends the table of contents of the presentation to device B 220, and device B 220 receives and displays the table of contents of the presentation, as shown in Figure 28.
  • device A 210 is displaying page 2 1710 of the presentation.
  • the interface displayed by device B 220 may also include a new button 1720 for adding user-specified content such as pictures.
  • the user moves device B 220 to the right side of device A 210.
  • Device B 220 detects that its posture has changed and therefore sends a second request to device A 210. Device A 210 again emits sound signal A through its current left speakers and controls its current right speakers to emit sound signal B.
  • Device B 220 again determines, based on sound signal A and sound signal B, that the second relative position is that device B 220 is on the right side of device A 210, and sends the second relative position to device A 210.
  • According to the fact that device B 220 is on the right side of device A 210, device A 210 determines that the second collaboration mode is to display the upper-level interface on device A 210 and the lower-level interface on device B 220. Therefore, device B 220 displays the note-editing interface corresponding to the currently displayed page of the presentation, as shown in Figure 29.
  • the display mode of device A 210 may be called editing mode
  • the display mode of device B 220 may be called preview mode.
  • Device B 220 can receive the remark information corresponding to the page submitted by the user in the remark editing interface.
  • Device A 210 switches to portrait mode display according to the posture change, plays sound signal A through the speakers now on the left (i.e., speaker a and speaker b), and plays sound signal B through the speakers now on the right (i.e., speaker c and speaker d), where sound signal A can be a left-channel signal and sound signal B can be a right-channel signal.
  • Device B 220 again determines that the second relative position is that device B is to the left of device A based on sound signal A and sound signal B, and sends the second relative position to device A 210.
  • device A 210 determines that the second collaboration mode is to display, on device B 220, a note editing interface corresponding to the page of the currently displayed presentation, and to display a certain page of the presentation on device A 210 while simultaneously displaying the remark content added by the user on device B 220, as shown in Figure 30.
  • the first task is a document editing task.
  • Device A 210 can display a document editing page interface.
  • Device C 230 is on the left side of device A 210, and device C 230 can collaboratively display comments corresponding to the document edited by device A 210.
  • When device C 230 receives a user click on any annotation, device A 210 can jump to the location of that annotation.
  • the comment may be sent to device B 220 for display.
  • Device B 220 is located on the right side of device A 210.
  • Device B 220 can collaboratively display an insertion tool interface.
  • Device B 220 can receive user-submitted content through the insertion tool interface and insert the content into the document edited by device A 210.
  • For example, when device B 220 receives a user's click operation on any image, it can insert that image into the document edited by device A 210.
  • FIG. 32 is a flow chart of a method for emitting a sound signal provided by an embodiment of the present application.
  • the method can be used in a first device, where the first device includes at least a first speaker, a second speaker and a third speaker; the first speaker and the second speaker are located on the first side of a first plane, and the third speaker is located on the second side of the first plane; the first speaker and the third speaker are located on the third side of a second plane, and the second speaker is located on the fourth side of the second plane; the first plane and the second plane are not parallel.
  • the first device may be device A 210 as shown in Figures 4-6, and the first speaker, the second speaker and the third speaker may be any three of speaker a, speaker b, speaker c and speaker d.
  • device A 210 in the methods shown in Figures 12 to 15 and Figure 24 can emit a sound signal based on the method provided by this embodiment of the present application, and the first device can implement at least part of the operations performed by device A 210. It should be noted that this method is not limited to the specific order described in Figure 32 and below. It should be understood that in other embodiments, the order of some steps of the method can be exchanged according to actual needs, or some of the steps can be omitted or deleted.
  • the method includes the following steps:
  • S3201: The first device emits a first sound signal through the first speaker and the second speaker, and emits a second sound signal through the third speaker.
  • the first device further includes a fourth speaker located on the second side of the first plane and on a fourth side of the second plane.
  • the first device may emit a first sound signal through the first speaker and the second speaker, and emit a second sound signal through the third speaker and the fourth speaker.
  • the fourth speaker may be another one of speaker a, speaker b, speaker c, and speaker d.
  • the first speaker and the second speaker may be speakers in the aforementioned speaker group A
  • the third speaker and the fourth speaker may be speakers in the aforementioned speaker group B
  • the first sound signal may be the aforementioned sound signal A.
  • the second sound signal may be the aforementioned sound signal B.
  • S3202: In response to detecting the first event, the first device switches to emitting a third sound signal through the first speaker and the third speaker, and emitting a fourth sound signal through the second speaker.
  • the first device further includes a fourth speaker located on the second side of the first plane and on the fourth side of the second plane. In response to detecting the first event, the first device switches to emitting the third sound signal through the first speaker and the third speaker, and emitting the fourth sound signal through the second speaker and the fourth speaker.
  • the first speaker and the third speaker may be speakers in the aforementioned speaker group C
  • the second speaker and the fourth speaker may be speakers in the aforementioned speaker group D
  • the third sound signal may be the aforementioned sound signal C.
  • the fourth sound signal may be the aforementioned sound signal D.
  • At least one of the first sound signal and the second sound signal is the same as at least one of the third sound signal and the fourth sound signal.
  • the first event includes at least one of the following: the posture of the first device changes; the display mode of the first device changes; the first device establishes a communication connection with the second device; the first device discovers the second device; the first device receives a first request sent by the second device, where the first request is used to trigger the first device to detect the relative position relationship between the first device and the second device; or the first device receives a second request sent by the second device, where the second request is used to trigger the first device to switch the sound emission mode.
  • the second device may be the device B 220 in the aforementioned Figures 4-6, and the second device may implement at least part of the operations performed by the device B 220 in the methods shown in Figures 12-15 and 24.
  • the display screen of the first device includes a set of relatively long sides and a set of relatively short sides; the first plane and the second plane are perpendicular to each other, and both are perpendicular to the plane of the display screen; the first plane is parallel to the longer sides and the second plane is parallel to the shorter sides, or the first plane is parallel to the shorter sides and the second plane is parallel to the longer sides.
  • the first plane and the second plane may be the aforementioned plane a and plane b in FIG. 2 .
  • the first device includes at least a first speaker, a second speaker and a third speaker; the first speaker and the second speaker are located on the first side of the first plane, the third speaker is located on the second side of the first plane, the first speaker and the third speaker are located on the third side of the second plane, and the second speaker is located on the fourth side of the second plane; the first plane and the second plane are not parallel.
  • the first device may first emit the first sound signal through the first speaker and the second speaker and the second sound signal through the third speaker, and, in response to detecting the first event, switch to emitting the third sound signal through the first speaker and the third speaker and the fourth sound signal through the second speaker. That is to say, in response to the first event, the relative positions between the speakers that emit the sound signals can be switched, so that the relative position between the first device and other devices can be updated, improving the accuracy of detecting the relative position between the first device and other devices.
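The grouping switch described above can be sketched in code. This is an illustrative sketch only: the speaker labels a-d, the coordinate encoding of the two planes, and the signal names are assumptions based on the four-corner layout described for device A 210, not part of the embodiment.

```python
# Sketch of the speaker-group switch in S3201/S3202 (illustrative).
# Each speaker is keyed by (side of first plane, side of second plane).
SPEAKERS = {"a": (0, 0), "b": (0, 1), "c": (1, 0), "d": (1, 1)}

def groups(axis):
    """Split the speakers into two groups along the given plane axis (0 or 1)."""
    g0 = sorted(s for s, pos in SPEAKERS.items() if pos[axis] == 0)
    g1 = sorted(s for s, pos in SPEAKERS.items() if pos[axis] == 1)
    return g0, g1

class Emitter:
    def __init__(self):
        self.axis = 0  # start by grouping across the first plane

    def emit(self):
        g0, g1 = groups(self.axis)
        # One group plays one signal (e.g. A or C), the other plays B or D.
        return {"signal_1": g0, "signal_2": g1}

    def on_first_event(self):
        # e.g. posture change, display-mode change, or a second request:
        # switch to grouping across the other, non-parallel plane.
        self.axis = 1 - self.axis

e = Emitter()
assert e.emit() == {"signal_1": ["a", "b"], "signal_2": ["c", "d"]}
e.on_first_event()
assert e.emit() == {"signal_1": ["a", "c"], "signal_2": ["b", "d"]}
```

Because the two planes are not parallel, regrouping changes which axis the arrival-time comparison resolves, which is what lets a second detection refine the first.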
  • FIG. 33 is a flow chart of a method for receiving a sound signal provided by an embodiment of the present application.
  • the method can be used in a first device, where the first device includes at least a first microphone, a second microphone and a third microphone; the first microphone and the second microphone are located on the first side of a first plane, and the third microphone is located on the second side of the first plane; the first microphone and the third microphone are located on the third side of a second plane, and the second microphone is located on the fourth side of the second plane; the first plane and the second plane are not parallel.
  • the first device may be device A 210 as shown in Figures 7-9, and the first microphone, the second microphone and the third microphone may be any three of microphone a, microphone b, microphone c and microphone d.
  • device A 210 in the methods shown in Figures 19 to 22 and Figure 24 can receive a sound signal based on the method provided by this embodiment of the present application, and the first device can implement at least part of the operations performed by device A 210. It should be noted that this method is not limited to the specific order described in Figure 33 and below. It should be understood that in other embodiments, the order of some steps of the method can be exchanged according to actual needs, or some of the steps can be omitted or deleted.
  • the method includes the following steps:
  • S3301: The first device receives the first sound signal through the first microphone and the second microphone, and receives the second sound signal through the third microphone.
  • the first device further includes a fourth microphone located on the second side of the first plane and on a fourth side of the second plane.
  • the first device may receive the first sound signal through the first microphone and the second microphone, and receive the second sound signal through the third microphone and the fourth microphone.
  • the fourth microphone may be another one of microphone a, microphone b, microphone c, and microphone d.
  • the first microphone and the second microphone may be microphones in the aforementioned microphone group A
  • the third microphone and the fourth microphone may be microphones in the aforementioned microphone group B
  • the first sound signal may be the aforementioned sound signal A.
  • the second sound signal may be the aforementioned sound signal B.
  • S3302: In response to detecting the first event, the first device switches to receiving the third sound signal through the first microphone and the third microphone, and receiving the fourth sound signal through the second microphone.
  • the first device further includes a fourth microphone located on the second side of the first plane and on a fourth side of the second plane.
  • in response to detecting the first event, the first device may switch to receiving the third sound signal through the first microphone and the third microphone, and receiving the fourth sound signal through the second microphone and the fourth microphone.
  • the first microphone and the third microphone may be microphones in the aforementioned microphone group C
  • the second microphone and the fourth microphone may be microphones in the aforementioned microphone group D
  • the third sound signal may be the aforementioned sound signal C.
  • the fourth sound signal may be the aforementioned sound signal D.
  • the first event includes at least one of the following: the posture of the first device changes; the display mode of the first device changes; the first device establishes a communication connection with the second device; the first device discovers the second device; the first device receives a first request sent by the second device, where the first request is used to trigger the first device to detect the relative position relationship between the first device and the second device; or the first device receives a second request sent by the second device, where the second request is used to trigger the first device to switch the sound reception mode.
  • the second device may be the device B 220 in the aforementioned Figures 7-9, and the second device may implement at least part of the operations performed by the device B 220 in the methods shown in Figures 19-22 and 24.
  • the display screen of the first device includes a set of relatively long sides and a set of relatively short sides; the first plane and the second plane are perpendicular to each other, and both are perpendicular to the plane of the display screen; the first plane is parallel to the longer sides and the second plane is parallel to the shorter sides, or the first plane is parallel to the shorter sides and the second plane is parallel to the longer sides.
  • the first plane and the second plane may be the aforementioned plane c and plane d in FIG. 3 .
  • the first device includes at least a first microphone, a second microphone and a third microphone; the first microphone and the second microphone are located on the first side of the first plane, the third microphone is located on the second side of the first plane, the first microphone and the third microphone are located on the third side of the second plane, and the second microphone is located on the fourth side of the second plane; the first plane and the second plane are not parallel.
  • the first device may first receive the first sound signal through the first microphone and the second microphone and the second sound signal through the third microphone, and, in response to detecting the first event, switch to receiving the third sound signal through the first microphone and the third microphone and the fourth sound signal through the second microphone. That is to say, in response to the first event, the relative positions between the microphones that receive the sound signals can be switched, so that the relative position between the first device and other devices can be updated, thereby improving the accuracy of detecting the relative position between the first device and other devices.
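The list of first events above can be sketched as a small dispatcher; the event names are paraphrases of the bullet list, and the single `group_axis` flag standing in for the microphone regrouping is a hypothetical simplification.

```python
# Hypothetical dispatcher for the "first event" list in the embodiment:
# any of these conditions triggers a switch of the receiving microphone groups.
FIRST_EVENTS = {
    "posture_changed",         # posture of the first device changes
    "display_mode_changed",    # e.g. landscape <-> portrait
    "connection_established",  # communication connection with the second device
    "device_discovered",       # first device discovers the second device
    "first_request_received",  # request to detect relative position
    "second_request_received", # request to switch the sound reception mode
}

def handle_event(event, state):
    """Flip the microphone grouping axis when a first event is detected."""
    if event in FIRST_EVENTS:
        state["group_axis"] = 1 - state["group_axis"]
    return state

state = {"group_axis": 0}
state = handle_event("display_mode_changed", state)
assert state["group_axis"] == 1
state = handle_event("unrelated_event", state)
assert state["group_axis"] == 1  # non-first events leave the grouping unchanged
```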
  • FIG. 34 is a flow chart of a method for detecting relative positions between devices provided by an embodiment of the present application.
  • the method can be used in a system including a first device and a second device.
  • the first device includes at least a first speaker, a second speaker and a third speaker.
  • the first speaker and the second speaker are located on the first side of the first plane.
  • the third speaker is located on the second side of the first plane
  • the first speaker and the third speaker are located on the third side of the second plane
  • the second speaker is located on the fourth side of the second plane; the first plane and the second plane are not parallel.
  • the first device may be device A 210 as shown in Figures 4-6, and the first speaker, the second speaker and the third speaker may be any three of speaker a, speaker b, speaker c and speaker d.
  • the second device may be device B 220 in Figures 4-6.
  • the first device can implement at least part of the operations performed by device A 210 in the methods shown in Figures 12-15 and Figure 24, and the second device can implement at least part of the operations performed by device B 220 in the same methods.
  • this method is not limited to the specific order described in Figure 34 and below. It should be understood that in other embodiments, the order of some steps of the method can be exchanged according to actual needs, or some of the steps can be omitted or deleted.
  • the method includes the following steps:
  • S3401: The first device emits a first sound signal through the first speaker and the second speaker, and emits a second sound signal through the third speaker.
  • the first device emits a first sound signal through the first speaker and the second speaker and a second sound signal through the third speaker in response to detecting the second event.
  • the second event includes any of the following: the first device establishes a communication connection with the second device; the first device discovers the second device; the first device receives the first request sent by the second device, the first The request is used to trigger the first device to detect the relative position relationship between the first device and the second device.
  • the first device may emit the second sound signal through the third speaker and the fourth speaker.
  • S3402: The second device receives the first sound signal and the second sound signal.
  • S3403: The second device determines the first relative position between the second device and the first device based on the first arrival time of the first sound signal and the second arrival time of the second sound signal.
  • the first speaker and the second speaker may be speakers in the aforementioned speaker group A
  • the third speaker and the fourth speaker may be speakers in the aforementioned speaker group B
  • the first sound signal may be the aforementioned sound signal A.
  • the second sound signal may be the aforementioned sound signal B
  • the first arrival time may be the arrival time 1 of the aforementioned sound signal A
  • the second arrival time may be the arrival time 2 of the aforementioned sound signal B.
  • In response to detecting the first event, the first device switches to emitting a third sound signal through the first speaker and the third speaker and a fourth sound signal through the second speaker; the second device receives the third sound signal and the fourth sound signal, and determines the second relative position between the second device and the first device based on the third arrival time of the third sound signal and the fourth arrival time of the fourth sound signal.
  • the first event includes at least one of the following: the posture of the first device changes; the display mode of the first device changes; the first device receives a second request sent by the second device, where the second request is used to trigger the first device to switch the sound emission mode.
  • the first speaker and the third speaker may be speakers in the aforementioned speaker group C
  • the second speaker and the fourth speaker may be speakers in the aforementioned speaker group D
  • the third sound signal may be the aforementioned sound signal C.
  • the fourth sound signal may be the aforementioned sound signal D
  • the third arrival time may be the arrival time 3 of the aforementioned sound signal C
  • the fourth arrival time may be the arrival time 4 of the aforementioned sound signal D.
  • the first device emits a third sound signal through the first speaker and the third speaker, and emits a fourth sound signal through the second speaker; the third device receives the third sound signal and the fourth sound signal, and determines the fifth relative position between the third device and the first device based on the third arrival time of the third sound signal and the fourth arrival time of the fourth sound signal.
  • the third device can be the aforementioned device C 230, and the third device can implement at least part of the operations performed by the device C 230 in the methods shown in Figures 13-15 and Figure 24.
  • the first device includes at least a first speaker, a second speaker and a third speaker; the first speaker and the second speaker are located on the first side of the first plane, the third speaker is located on the second side of the first plane, the first speaker and the third speaker are located on the third side of the second plane, and the second speaker is located on the fourth side of the second plane; the first plane and the second plane are not parallel.
  • the first device can emit a first sound signal through the first speaker and the second speaker, and can emit a second sound signal through the third speaker.
  • the second device can receive the first sound signal and the second sound signal, and determine the first relative position between the second device and the first device based on the first arrival time of the first sound signal and the second arrival time of the second sound signal. That is to say, the relative position between devices can be accurately detected through speakers and microphones without relying on components such as radar, which reduces the cost of detecting the relative position between devices.
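The determination in the final step can be sketched as a simple arrival-time comparison. The left/right labels, the decision margin, and its 0.5 ms default are illustrative assumptions; the embodiment only specifies that the relative position follows from the two arrival times.

```python
def relative_position(t_first, t_second, margin_s=0.0005):
    """Classify the receiver's side from two arrival times (in seconds).

    If the first sound signal (emitted by one speaker group, assumed here
    to be the left group) arrives earlier than the second by more than the
    margin, the receiver is on the left; symmetrically for the right;
    otherwise it is roughly centered. The 0.5 ms default margin (about
    17 cm of path difference at 343 m/s) is an illustrative choice, not a
    value from the embodiment.
    """
    dt = t_second - t_first
    if dt > margin_s:
        return "left"
    if dt < -margin_s:
        return "right"
    return "center"

# The second signal arrives 2 ms later, so the receiver is nearer the
# first speaker group.
assert relative_position(0.1000, 0.1020) == "left"
assert relative_position(0.1020, 0.1000) == "right"
assert relative_position(0.1000, 0.1002) == "center"
```

In practice the arrival times would come from matched-filter detection of the two signals in the microphone stream; only their difference matters, so the devices need no shared clock for this left/right decision.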
  • FIG. 35 is a flow chart of a method for detecting relative positions between devices provided by an embodiment of the present application.
  • the method can be used in a system including a first device and a second device.
  • the first device includes at least a first microphone, a second microphone and a third microphone.
  • the first microphone and the second microphone are located on the first side of the first plane.
  • the third microphone is located on the second side of the first plane
  • the first microphone and the third microphone are located on the third side of the second plane
  • the second microphone is located on the fourth side of the second plane; the first plane and the second plane are not parallel.
  • the first device may be device A 210 as shown in Figures 7-9, and the first microphone, the second microphone and the third microphone may be any three of microphone a, microphone b, microphone c and microphone d.
  • the second device may be device B 220 in Figures 7-9.
  • the first device can implement at least part of the operations performed by device A 210 in the methods shown in Figures 19-22 and Figure 24, and the second device can implement at least part of the operations performed by device B 220 in the same methods.
  • this method is not limited to the specific order described in Figure 35 and below. It should be understood that in other embodiments, the order of some steps of the method can be exchanged according to actual needs, or some of the steps can be omitted or deleted.
  • the method includes the following steps:
  • The second device emits the first sound signal and the second sound signal.
  • in response to detecting the second event, the first device receives the first sound signal through the first microphone and the second microphone, and the second sound signal through the third microphone.
  • the second event includes any of the following: the first device establishes a communication connection with the second device; the first device discovers the second device; the first device receives the first request sent by the second device, the first The request is used to trigger the first device to detect the relative position relationship between the first device and the second device.
  • the first device receives the first sound signal through the first microphone and the second microphone, and receives the second sound signal through the third microphone.
  • the first device determines the first relative position between the second device and the first device based on the first arrival time of the first sound signal and the second arrival time of the second sound signal.
  • the first microphone and the second microphone may be microphones in the aforementioned microphone group A
  • the third microphone and the fourth microphone may be microphones in the aforementioned microphone group B
  • the first sound signal may be the aforementioned sound signal A.
  • the second sound signal may be the aforementioned sound signal B
  • the first arrival time may be the arrival time 1 of the aforementioned sound signal A
  • the second arrival time may be the arrival time 2 of the aforementioned sound signal B.
  • the second device emits a third sound signal and a fourth sound signal; in response to detecting the first event, the first device switches to receiving the third sound signal through the first microphone and the third microphone, and receiving the fourth sound signal through the second microphone; the first device then determines the second relative position between the second device and the first device based on the third arrival time of the third sound signal and the fourth arrival time of the fourth sound signal.
  • the first event includes at least one of the following: the posture of the first device changes; the display mode of the first device changes; the first device receives a second request sent by the second device, where the second request is used to trigger the first device to switch the sound reception mode.
  • the first microphone and the third microphone may be microphones in the aforementioned microphone group C
  • the second microphone and the fourth microphone may be microphones in the aforementioned microphone group D
  • the third sound signal may be the aforementioned sound signal C.
  • the fourth sound signal may be the aforementioned sound signal D
  • the third arrival time may be the arrival time 3 of the aforementioned sound signal C
  • the fourth arrival time may be the arrival time 4 of the aforementioned sound signal D.
  • the third device emits a third sound signal and a fourth sound signal; the first device receives the third sound signal through the first microphone and the third microphone, and receives the fourth sound signal through the second microphone; the first device then determines the fifth relative position between the third device and the first device based on the third arrival time of the third sound signal and the fourth arrival time of the fourth sound signal.
  • the third device may be the aforementioned device C 230, and the third device may implement at least part of the operations performed by the device C 230 in the methods shown in Figures 19-22 and Figure 24.
  • the first device includes at least a first microphone, a second microphone and a third microphone; the first microphone and the second microphone are located on the first side of the first plane, the third microphone is located on the second side of the first plane, the first microphone and the third microphone are located on the third side of the second plane, and the second microphone is located on the fourth side of the second plane; the first plane and the second plane are not parallel.
  • the second device may emit the first sound signal and the second sound signal; the first device may receive the first sound signal through the first microphone and the second microphone, receive the second sound signal through the third microphone, and determine the first relative position between the second device and the first device based on the first arrival time of the first sound signal and the second arrival time of the second sound signal. That is to say, the relative position between devices can be accurately detected through speakers and microphones without relying on components such as radar, which reduces the cost of detecting the relative position between devices.
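Combining a detection made before the regrouping with one made after it resolves the relative position along two non-parallel axes, which is the stated benefit of switching groups. The axis labels and the sign convention below are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    t_group1: float  # arrival time of the signal from one group (seconds)
    t_group2: float  # arrival time of the signal from the other group (seconds)

def side(det, margin_s=0.0005):
    """Return +1/-1 for the group the receiver is nearer, 0 if ambiguous."""
    dt = det.t_group2 - det.t_group1
    return 0 if abs(dt) <= margin_s else (1 if dt > 0 else -1)

def two_axis_position(before_switch, after_switch):
    """The first detection resolves the horizontal axis; the detection made
    after the microphone/speaker regrouping resolves the vertical axis."""
    h = {1: "left", -1: "right", 0: "center"}[side(before_switch)]
    v = {1: "above", -1: "below", 0: "level"}[side(after_switch)]
    return h, v

pos = two_axis_position(Detection(0.100, 0.103), Detection(0.104, 0.100))
assert pos == ("left", "below")
```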
  • Embodiments of the present application also provide a terminal device.
  • the terminal device includes a memory and a processor.
  • the memory is used to store a computer program, and the processor is used to, when invoking the computer program, perform the operations performed by device A 210 and/or device B 220 in the methods described in the above method embodiments.
  • the terminal device provided in this embodiment can perform the operations performed by device A 210 and/or device B 220 in the above method embodiments; the implementation principles and technical effects are similar and will not be described again here.
  • embodiments of the present application also provide a chip system.
  • the chip system is provided on a terminal device.
  • the chip system includes a processor.
  • the processor is coupled to a memory, and the processor executes a computer program stored in the memory to implement the operations performed by device A 210 and/or device B 220 in the methods described in the above method embodiments.
  • the chip system may be a single chip or a chip module composed of multiple chips.
  • Embodiments of the present application also provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the operations performed by device A 210 and/or device B 220 in the methods described in the above method embodiments are implemented.
  • Embodiments of the present application also provide a computer program product; when the computer program product runs on a terminal device, the terminal device implements the operations performed by device A 210 and/or device B 220 in the methods described in the above method embodiments.
  • If the above-mentioned integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • this application can implement all or part of the processes in the methods of the above embodiments by instructing relevant hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium.
  • when the computer program is executed by a processor, the steps of each of the above method embodiments may be implemented.
  • the computer program includes computer program code, which may be in the form of source code, object code, executable file or some intermediate form.
  • the computer-readable storage medium may at least include: any entity or apparatus capable of carrying the computer program code to the camera device/terminal device, a recording medium, computer memory, read-only memory (ROM), random access memory (RAM), electrical carrier signals, telecommunication signals, and software distribution media, such as a USB flash drive, removable hard disk, magnetic disk or optical disc.
  • the disclosed apparatuses/devices and methods can be implemented in other ways.
  • the apparatus/device embodiments described above are only illustrative.
  • the division into modules or units is only a logical function division; in actual implementation there may be other division methods, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the coupling, direct coupling or communication connection between components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be in electrical, mechanical or other forms.
  • the term “if” may, depending on the context, be interpreted as “when”, “once”, “in response to determining” or “in response to detecting”.
  • the phrase “if it is determined” or “if [the described condition or event] is detected” may, depending on the context, be interpreted as “once it is determined”, “in response to determining”, “once [the described condition or event] is detected” or “in response to detecting [the described condition or event]”.


Abstract

Methods for emitting and receiving sound signals and for detecting the relative position between devices, relating to the field of terminal technology. The methods are applied to a system in which a first device includes a first speaker, a second speaker and a third speaker; the first speaker and the second speaker are located on a first side of a first plane, the third speaker is located on a second side of the first plane, the first speaker and the third speaker are located on a third side of a second plane, and the second speaker is located on a fourth side of the second plane; the first plane and the second plane are not parallel. The method includes: the first device emits a first sound signal through the first speaker and the second speaker and a second sound signal through the third speaker (S3201, S3401); a second device determines a first relative position between the second device and the first device based on a first arrival time of the first sound signal and a second arrival time of the second sound signal (S3403). This reduces the cost of detecting the relative position between devices.

Description

Methods for emitting and receiving sound signals and for detecting the relative position between devices
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on August 26, 2022, with application number 202211035615.9 and entitled "Methods for emitting and receiving sound signals and for detecting the relative position between devices", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of terminal technology, and in particular to methods for emitting and receiving sound signals and for detecting the relative position between devices.
Background
With the continuous development of terminal technology, various terminal devices have been widely applied, and the types and number of terminal devices owned by users are constantly increasing. To achieve various purposes such as collaboration between devices, it is usually necessary to determine the relative position between terminal devices.
In the prior art, terminal devices can determine the relative position between two terminal devices through dedicated components such as radar, but the cost of detecting the relative position between devices using such dedicated components is often high.
Summary
In view of this, this application provides methods for emitting and receiving sound signals and for detecting the relative position between devices, which can reduce the cost of detecting the relative position between devices.
To achieve the above purpose, in a first aspect, an embodiment of this application provides a method for emitting a sound signal, applied to a first device, where the first device includes at least a first speaker, a second speaker and a third speaker;
the first speaker and the second speaker are located on a first side of a first plane, the third speaker is located on a second side of the first plane, the first speaker and the third speaker are located on a third side of a second plane, and the second speaker is located on a fourth side of the second plane; the first plane and the second plane are not parallel;
the method includes:
the first device emits a first sound signal through the first speaker and the second speaker, and emits a second sound signal through the third speaker;
in response to detecting a first event, the first device switches to emitting a third sound signal through the first speaker and the third speaker, and emitting a fourth sound signal through the second speaker.
在本申请实施例中,第一设备至少包括第一扬声器、第二扬声器和第三扬声器,第一扬声器和第二扬声器位于第一平面的第一侧,第三扬声器位于第一平面的第二侧,第一扬声器和第三扬声器位于第二平面的第三侧,第二扬声器位于第二平面的第四侧;第一平面和第二平面不平行。第一设备可以先通过第一扬声器和第二扬声器发出第一声音信号,通过第三扬声器发出第二声音信号,并响应于检测到第一事件,切换为通过第一扬声器和第三扬声器发出第三声音信号,通过第二扬声器发出第四声音信号。也即是可以响应于第一事件,切换发出声音信号的扬声器之间的相对位置,从而使得可以对第一设备与其他设备之间的相对位置进行更新,提高了检测第一设备与其他设备之间的相对位置的准确性。
在一些示例中,所述第一设备还包括第四扬声器,所述第四扬声器位于所述第一平面的所述第二侧,且位于所述第二平面的所述第四侧;
所述第一设备通过所述第一扬声器和所述第二扬声器发出第一声音信号,通过所述第三扬声器发出第二声音信号,包括:
所述第一设备通过所述第一扬声器和所述第二扬声器发出所述第一声音信号,通过所述第三扬声器和所述第四扬声器发出所述第二声音信号;
所述第一设备响应于检测到第一事件,切换为通过所述第一扬声器和所述第三扬声器发出第三声音信号,通过所述第二扬声器发出第四声音信号,包括:
所述第一设备响应于检测到所述第一事件,切换为通过所述第一扬声器和所述第三扬声器发出所述第三声音信号,通过所述第二扬声器和所述第四扬声器发出所述第四声音信号。
在一些示例中,所述第一声音信号和所述第二声音信号中的至少一个,与所述第三声音信号和所述第四声音信号中的至少一个相同。
在一些示例中,所述第一事件包括以下至少一项:
所述第一设备的姿态发生变化;
所述第一设备的显示模式发生变化;
所述第一设备与第二设备建立通信连接;
所述第一设备发现所述第二设备;
所述第一设备接收到所述第二设备发送的第一请求,所述第一请求用于触发所述第一设备检测所述第一设备和所述第二设备之间的相对位置关系;
所述第一设备接收到所述第二设备发送的第二请求,所述第二请求用于请求触发所述第一设备切换发声模式。
在一些示例中,第一事件包括第二设备的姿态发生变化。
在一些示例中,显示模式可以包括横屏显示和竖屏显示,该显示模式发生变化可以包括从横屏显示变为竖屏显示,或者,从竖屏显示变为横屏显示。在一些示例中,显示模式可以包括主屏显示和副屏显示,该显示模式发生变化可以包括从主屏显示变为副屏显示,或者,从副屏显示变为主屏显示。在一些示例中,显示模式可以包括分屏显示和全屏显示,该显示模式发生变化可以包括从分屏显示变为全屏显示,或者,从全屏显示变为分屏显示。
在一些示例中,第一设备或第二设备的姿态发生变化,可以包括发生移动、晃动和旋转等。在一些示例中,第一事件可以包括第一设备或第二设备的姿态变化幅度大于预设的幅度阈值的事件,其中,该幅度阈值可以用于说明检测第一设备与第二设备之间的相对位置这一操作,对第一设备或第二设备的姿态变化的敏感程度。当该幅度阈值较小时,可以在第一设备或第二设备的姿态发生较小幅度变化时,检测第一设备与第二设备之间的相对位置关系,检测该相对位置关系的频率较大;当该幅度阈值较大时,可以在检测到第一设备或第二设备的姿态发生较大幅度变化时,检测第一设备与第二设备之间的相对位置关系,检测该相对位置关系的频率较小。在一些示例中,第一事件可以为第一设备旋转的角度大于或等于90度,或,第二设备旋转的角度大于或等于90度。
第一设备可以响应于检测到第一事件,切换发出声音信号的方式,因此在第一设备的姿态发生变化、第一设备的显示模式发生变化、第一设备与第二设备建立通信连接、第一设备发现第二设备、第一设备接收到第二设备发送的第一请求、第一设备接收到第二设备发送的第二请求或第二设备的姿态发生变化等第一事件发生时,其他设备能够及时根据切换后的发声方式,准确地确定其与第一设备之间的相对位置,提高了检测其与第一设备之间相对位置的准确性。
在一些示例中,所述第一设备的显示屏包括一组相对的较长边和一组相对的较短边;
所述第一平面和所述第二平面互相垂直,且所述第一平面和所述第二平面与所述显示屏所在的平面垂直;
所述第一平面与所述较长边平行,所述第二平面与所述较短边平行;或者,所述第一平面与所述较短边平行,所述第二平面与所述较长边平行。
在一些示例中,第一设备可以通过所述第一扬声器和所述第二扬声器发出第一声音信号,通过所述第三扬声器(和第四扬声器)发出第二声音信号,且通过所述第一扬声器和所述第三扬声器发出第三声音信号,通过所述第二扬声器(和第四扬声器)发出第四声音信号。
第二方面,本申请实施例提供一种接收声音信号的方法,应用于第一设备,所述第一设备至少包括第一麦克风、第二麦克风和第三麦克风;
所述第一麦克风和所述第二麦克风位于第一平面的第一侧,所述第三麦克风位于所述第一平面的第二侧,所述第一麦克风和所述第三麦克风位于第二平面的第三侧,所述第二麦克风位于所述第二平面的第四侧;所述第一平面和所述第二平面不平行;
所述方法包括:
所述第一设备通过所述第一麦克风和所述第二麦克风接收第一声音信号,通过所述第三麦克风接收第二声音信号;
所述第一设备响应于检测到第一事件,切换为通过所述第一麦克风和所述第三麦克风接收第三声音信号,通过所述第二麦克风接收第四声音信号。
在本申请实施例中,第一设备至少包括第一麦克风、第二麦克风和第三麦克风,第一麦克风和第二麦克风位于第一平面的第一侧,第三麦克风位于第一平面的第二侧,第一麦克风和第三麦克风位于第二平面的第三侧,第二麦克风位于第二平面的第四侧;第一平面和第二平面不平行。第一设备可以先通过第一麦克风和第二麦克风接收第一声音信号,通过第三麦克风接收第二声音信号,并响应于检测到第一事件,切换为通过第一麦克风和第三麦克风接收第三声音信号,通过第二麦克风接收第四声音信号。也即是可以响应于第一事件,切换接收声音信号的麦克风之间的相对位置,从而使得可以对第一设备与其他设备之间的相对位置进行更新,提高了检测第一设备与其他设备之间的相对位置的准确性。
在一些示例中,所述第一设备还包括第四麦克风,所述第四麦克风位于所述第一平面的所述第二侧,且位于所述第二平面的所述第四侧;
所述第一设备通过所述第一麦克风和所述第二麦克风接收第一声音信号,通过所述第三麦克风接收第二声音信号,包括:
所述第一设备通过所述第一麦克风和所述第二麦克风接收所述第一声音信号,通过所述第三麦克风和所述第四麦克风接收所述第二声音信号;
所述第一设备响应于检测到第一事件,切换为通过所述第一麦克风和所述第三麦克风接收第三声音信号,通过所述第二麦克风接收第四声音信号,包括:
所述第一设备响应于检测到第一事件,切换为通过所述第一麦克风和所述第三麦克风接收所述第三声音信号,通过所述第二麦克风和所述第四麦克风接收所述第四声音信号。
在一些示例中,所述第一事件包括以下至少一项:
所述第一设备的姿态发生变化;
所述第一设备的显示模式发生变化;
所述第一设备与第二设备建立通信连接;
所述第一设备发现所述第二设备;
所述第一设备接收到所述第二设备发送的第一请求,所述第一请求用于触发所述第一设备检测所述第一设备和所述第二设备之间的相对位置关系;
所述第一设备接收到所述第二设备发送的第二请求,所述第二请求用于请求触发所述第一设备切换收音模式。
在一些示例中,第一事件包括第二设备的姿态发生变化。
第一设备可以响应于检测到第一事件,切换接收声音信号的方式,因此在第一设备的姿态发生变化、第一设备的显示模式发生变化、第一设备与第二设备建立通信连接、第一设备发现第二设备、第一设备接收到第二设备发送的第一请求、第一设备接收到第二设备发送的第二请求或第二设备的姿态发生变化等第一事件发生时,第一设备能够及时根据切换后的收音方式,准确地确定第一设备与其他设备之间的相对位置,提高了检测第一设备与其他设备之间相对位置的准确性。
在一些示例中,所述第一设备的显示屏包括一组相对的较长边和一组相对的较短边;
所述第一平面和所述第二平面互相垂直,且所述第一平面和所述第二平面与所述显示屏所在的平面垂直;
所述第一平面与所述较长边平行,所述第二平面与所述较短边平行;或者,所述第一平面与所述较短边平行,所述第二平面与所述较长边平行。
在一些示例中,第一设备可以通过所述第一麦克风和所述第二麦克风接收第一声音信号,通过所述第三麦克风(和第四麦克风)接收第二声音信号,且通过所述第一麦克风和所述第三麦克风接收第三声音信号,通过所述第二麦克风(和第四麦克风)接收第四声音信号。
第三方面,本申请实施例提供了一种检测设备间相对位置的方法,应用于包括第一设备和第二设备的系统,所述第一设备至少包括第一扬声器、第二扬声器和第三扬声器;
所述第一扬声器和所述第二扬声器位于第一平面的第一侧,所述第三扬声器位于所述第一平面的第二侧,所述第一扬声器和所述第三扬声器位于第二平面的第三侧,所述第二扬声器位于所述第二平面的第四侧;所述第一平面和所述第二平面不平行;
所述方法包括:
所述第一设备通过所述第一扬声器和所述第二扬声器发出第一声音信号,通过所述第三扬声器发出第二声音信号;
所述第二设备接收所述第一声音信号和所述第二声音信号;
所述第二设备根据所述第一声音信号的第一到达时刻,以及所述第二声音信号的第二到达时刻,确定所述第二设备与所述第一设备之间的第一相对位置。
在本申请实施例中,第一设备至少包括第一扬声器、第二扬声器和第三扬声器,第一扬声器和第二扬声器位于第一平面的第一侧,第三扬声器位于第一平面的第二侧,第一扬声器和第三扬声器位于第二平面的第三侧,第二扬声器位于第二平面的第四侧;第一平面和第二平面不平行。第一设备可以通过第一扬声器和第二扬声器发出第一声音信号,通过第三扬声器发出第二声音信号,第二设备可以接收第一声音信号和第二声音信号,并根据第一声音信号的第一到达时刻,以及第二声音信号的第二到达时刻,确定第二设备与第一设备之间的第一相对位置。即实现了通过扬声器和麦克风准确地检测设备间的相对位置,不需要依赖雷达等组件,降低了检测设备间相对位置的成本。
在一些示例中,所述第一设备通过所述第一扬声器和所述第二扬声器发出第一声音信号,通过所述第三扬声器发出第二声音信号,包括:
所述第一设备响应于检测到第二事件,通过所述第一扬声器和所述第二扬声器发出第一声音信号,通过所述第三扬声器发出第二声音信号。
在一些示例中,所述第二事件包括下述任一项:
所述第一设备与所述第二设备建立通信连接;
所述第一设备发现所述第二设备;
所述第一设备接收到所述第二设备发送的第一请求,所述第一请求用于触发所述第一设备检测所述第一设备和所述第二设备之间的相对位置关系。
在一些示例中,第二事件包括第二设备的姿态发生变化。
第一设备可以响应于检测到第二事件,通过特定的扬声器发出声音信号,因此在第一设备与第二设备建立通信连接、第一设备发现第二设备、第一设备接收到第二设备发送的第一请求或第二设备的姿态发生变化等第二事件发生时,其他设备能够及时根据第一设备所发出的声音信号,准确地确定其与第一设备之间的相对位置。
在一些示例中,所述方法还包括:
所述第二设备向所述第一设备通知所述第一相对位置。
在一些示例中,所述方法还包括:
所述第一设备和所述第二设备以与所述第一相对位置相对应的第一协同模式,协同处理第一任务。
在一些示例中,所述第一设备和所述第二设备以与所述第一相对位置相对应的第一协同模式,协同处理第一任务,包括:
当所述第一相对位置为所述第二设备在所述第一设备左侧时,所述第一设备显示第一界面,所述第二设备显示第二界面;
当所述第一相对位置为所述第二设备在所述第一设备右侧时,所述第一设备显示所述第一界面,所述第二设备显示第三界面;
所述第二界面与所述第一界面相关联,所述第三界面与所述第一界面相关联。
第一设备和第二设备,可以以与第一相对位置相对应的第一协同模式,协同处理第一任务,即协同处理第一任务的方式与第一相对位置对应,提高了协同处理第一任务的可靠性和用户体验。
在一些示例中,所述方法还包括:
所述第一设备响应于检测到第一事件,切换为通过所述第一扬声器和所述第三扬声器发出第三声音信号,通过所述第二扬声器发出第四声音信号;
所述第二设备接收所述第三声音信号和所述第四声音信号;
所述第二设备根据所述第三声音信号的第三到达时刻,以及所述第四声音信号的第四到达时刻,确定所述第二设备与所述第一设备之间的第二相对位置。
在一些示例中,所述第一事件包括以下至少一项:
所述第一设备的姿态发生变化;
所述第一设备的显示模式发生变化;
所述第一设备接收到所述第二设备发送的第二请求,所述第二请求用于触发所述第一设备切换发声模式。
第一设备可以响应于检测到第一事件,切换发出声音信号的方式,因此在第一设备的姿态发生变化、第一设备的显示模式发生变化或第一设备接收到第二设备发送的第二请求等第一事件发生时,其他设备能够及时根据切换后的发声方式,准确地确定其与第一设备之间的相对位置,提高了检测其与第一设备之间相对位置的准确性。
在一些示例中,所述第一声音信号和所述第二声音信号中的至少一个,与所述第三声音信号和所述第四声音信号中的至少一个相同。
在一些示例中,第一发音时刻和第二发音时刻相同,且所述第一声音信号的声音特征和所述第二声音信号的声音特征不同;或,
所述第一发音时刻和所述第二发音时刻不同,且所述第一声音信号的声音特征和所述第二声音信号的声音特征相同;
其中,所述第一发音时刻为所述第一设备发出所述第一声音信号的时刻,所述第二发音时刻为所述第一设备发出所述第二声音信号的时刻。
在一些示例中,所述第一设备的显示屏包括一组相对的较长边和一组相对的较短边;
所述第一平面和所述第二平面互相垂直,且所述第一平面和所述第二平面与所述显示屏所在的平面垂直;
所述第一平面与所述较长边平行,所述第二平面与所述较短边平行;或者,所述第一平面与所述较短边平行,所述第二平面与所述较长边平行。
在一些示例中,所述第一声音信号和所述第二声音信号为超声波信号。
在一些示例中,第三声音信号和第四声音信号为超声波信号。
在一些示例中,所述第一设备还包括第四扬声器,所述第四扬声器位于所述第一平面的所述第二侧,且位于所述第二平面的所述第四侧;
所述第一设备通过所述第一扬声器和所述第二扬声器发出第一声音信号,通过所述第三扬声器发出第二声音信号,包括:
所述第一设备通过所述第一扬声器和所述第二扬声器发出所述第一声音信号,通过所述第三扬声器和所述第四扬声器发出所述第二声音信号;
所述第一设备通过所述第一扬声器和所述第三扬声器发出第三声音信号,通过所述第二扬声器发出第四声音信号,包括:
所述第一设备通过所述第一扬声器和所述第三扬声器发出所述第三声音信号,通过所述第二扬声器和所述第四扬声器发出所述第四声音信号。
在一些示例中,第一设备可以通过第一扬声器和第二扬声器发出第一声音信号,通过第三扬声器(和第四扬声器)发出第二声音信号,且通过第一扬声器和第三扬声器发出第三声音信号,通过第二扬声器(和第四扬声器)发出第四声音信号,第二设备根据第一声音信号的第一到达时刻,以及第二声音信号的第二到达时刻,确定第二设备与第一设备之间的第一相对位置,第三设备根据第三声音信号的第三到达时刻,以及第四声音信号的第四到达时刻,确定第三设备与第一设备之间的第五相对位置。也即是,第一设备可以通过多组扬声器发出多组声音信号,从而使得多个设备,都能够确定其与第一设备之间的相对位置,极大地提高了检测设备相对位置的效率。
第四方面,本申请实施例提供了一种检测设备间相对位置的方法,应用于包括第一设备和第二设备的系统,所述第一设备至少包括第一麦克风、第二麦克风和第三麦克风;
所述第一麦克风和所述第二麦克风位于第一平面的第一侧,所述第三麦克风位于所述第一平面的第二侧,所述第一麦克风和所述第三麦克风位于第二平面的第三侧,所述第二麦克风位于所述第二平面的第四侧;所述第一平面和所述第二平面不平行;
所述方法包括:
所述第二设备发出第一声音信号和第二声音信号;
所述第一设备通过所述第一麦克风和所述第二麦克风接收所述第一声音信号,通过所述第三麦克风接收所述第二声音信号;
所述第一设备根据所述第一声音信号的第一到达时刻,以及所述第二声音信号的第二到达时刻,确定所述第二设备与所述第一设备之间的第一相对位置。
在本申请实施例中,第一设备至少包括第一麦克风、第二麦克风和第三麦克风,第一麦克风和第二麦克风位于第一平面的第一侧,第三麦克风位于第一平面的第二侧,第一麦克风和第三麦克风位于第二平面的第三侧,第二麦克风位于第二平面的第四侧;第一平面和第二平面不平行。第二设备可以发出第一声音信号和第二声音信号。第一设备可以通过第一麦克风和第二麦克风接收第一声音信号,通过第三麦克风接收第二声音信号,并根据第一声音信号的第一到达时刻,以及第二声音信号的第二到达时刻,确定第二设备与第一设备之间的第一相对位置。即实现了通过扬声器和麦克风准确地检测设备间的相对位置,不需要依赖雷达等组件,降低了检测设备间相对位置的成本。
在一些示例中,所述第一设备通过所述第一麦克风和所述第二麦克风接收所述第一声音信号,通过所述第三麦克风接收所述第二声音信号,包括:
所述第一设备响应于检测到第二事件,通过所述第一麦克风和所述第二麦克风接收所述第一声音信号,通过所述第三麦克风接收所述第二声音信号。
在一些示例中,所述第二事件包括下述任一项:
所述第一设备与所述第二设备建立通信连接;
所述第一设备发现所述第二设备;
所述第一设备接收到所述第二设备发送的第一请求,所述第一请求用于触发所述第一设备检测所述第一设备和所述第二设备之间的相对位置关系。
在一些示例中,第二事件包括第二设备的姿态发生变化。
第一设备可以响应于检测到第二事件,通过特定的麦克风接收声音信号,因此在第一设备与第二设备建立通信连接、第一设备发现第二设备、第一设备接收到第二设备发送的第一请求或第二设备的姿态发生变化等第二事件发生时,第一设备能够及时根据接收的声音信号,准确地确定第二设备与第一设备之间的相对位置。
在一些示例中,所述方法还包括:
所述第二设备发出第三声音信号和第四声音信号;
所述第一设备响应于检测到第一事件,切换为通过所述第一麦克风和所述第三麦克风接收所述第三声音信号,通过所述第二麦克风接收第四声音信号;
所述第一设备根据所述第三声音信号的第三到达时刻,以及所述第四声音信号的第四到达时刻,确定所述第二设备与所述第一设备之间的第二相对位置。
在一些示例中,所述第一事件包括以下至少一项:
所述第一设备的姿态发生变化;
所述第一设备的显示模式发生变化;
所述第一设备接收到所述第二设备发送的第二请求,所述第二请求用于触发所述第一设备切换收音模式。
第一设备可以响应于检测到第一事件,切换接收声音信号的方式,因此在第一设备的姿态发生变化、第一设备的显示模式发生变化或第一设备接收到第二设备发送的第二请求等第一事件发生时,第一设备能够及时根据切换后的收音方式,准确地确定第一设备与其他设备之间的相对位置,提高了检测第一设备与其他设备之间相对位置的准确性。
在一些示例中,所述第一设备还包括第四麦克风,所述第四麦克风位于所述第一平面的所述第二侧,且位于所述第二平面的所述第四侧;
所述第一设备通过所述第一麦克风和所述第二麦克风接收第一声音信号,通过所述第三麦克风接收第二声音信号,包括:
所述第一设备通过所述第一麦克风和所述第二麦克风接收所述第一声音信号,通过所述第三麦克风和所述第四麦克风接收所述第二声音信号;
所述第一设备通过所述第一麦克风和所述第三麦克风接收第三声音信号,通过所述第二麦克风接收第四声音信号,包括:
所述第一设备通过所述第一麦克风和所述第三麦克风接收所述第三声音信号,通过所述第二麦克风和所述第四麦克风接收所述第四声音信号。
在一些示例中,第一设备可以通过第一麦克风和第二麦克风接收第二设备发出的第一声音信号,通过第三麦克风(和第四麦克风)接收第二设备发出的第二声音信号,且通过第一麦克风和第三麦克风接收第三设备发出的第三声音信号,通过第二麦克风(和第四麦克风)接收第三设备发出的第四声音信号,进而第一设备可以根据第一声音信号的第一到达时刻,以及第二声音信号的第二到达时刻,确定第二设备与第一设备之间的第一相对位置,根据第三声音信号的第三到达时刻,以及第四声音信号的第四到达时刻,确定第三设备与第一设备之间的第五相对位置。也即是,第一设备可以通过多组麦克风接收多组声音信号,从而能够确定多个设备与第一设备之间的相对位置,极大地提高了检测设备相对位置的效率。
第五方面,本申请实施例提供了一种系统,所述系统包括第一设备和第二设备,所述第一设备至少包括第一扬声器、第二扬声器和第三扬声器,所述第一扬声器和所述第二扬声器位于第一平面的第一侧,所述第三扬声器位于所述第一平面的第二侧,所述第一扬声器和所述第三扬声器位于第二平面的第三侧,所述第二扬声器位于所述第二平面的第四侧;所述第一平面和所述第二平面不平行;
所述第一设备用于,通过所述第一扬声器和所述第二扬声器发出第一声音信号,通过所述第三扬声器发出第二声音信号;
所述第二设备用于,接收所述第一声音信号和所述第二声音信号;所述第二设备根据所述第一声音信号的第一到达时刻,以及所述第二声音信号的第二到达时刻,确定所述第二设备与所述第一设备之间的第一相对位置。
第六方面,本申请实施例提供了一种系统,所述系统包括第一设备和第二设备,所述第一设备至少包括第一麦克风、第二麦克风和第三麦克风,所述第一麦克风和所述第二麦克风位于第一平面的第一侧,所述第三麦克风位于所述第一平面的第二侧,所述第一麦克风和所述第三麦克风位于第二平面的第三侧,所述第二麦克风位于所述第二平面的第四侧;所述第一平面和所述第二平面不平行;
所述第二设备用于,发出第一声音信号和第二声音信号;
所述第一设备用于,通过所述第一麦克风和所述第二麦克风接收所述第一声音信号,通过所述第三麦克风接收所述第二声音信号;所述第一设备根据所述第一声音信号的第一到达时刻,以及所述第二声音信号的第二到达时刻,确定所述第二设备与所述第一设备之间的第一相对位置。
第七方面,本申请实施例提供了一种装置,该装置具有实现上述各方面及上述各方面的可能实现方式中终端设备行为的功能。功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。硬件或软件包括一个或多个与上述功能相对应的模块或单元。例如,收发模块或单元、处理模块或单元、获取模块或单元等。
第八方面,本申请实施例提供一种终端设备,包括:存储器和处理器,存储器用于存储计算机程序;处理器用于在调用计算机程序时执行上述第一方面中任一项所述的方法或第二方面任一项所述的方法。
第九方面,本申请实施例提供一种芯片系统,所述芯片系统包括处理器,所述处理器与存储器耦合,所述处理器执行存储器中存储的计算机程序,以实现上述第一方面中任一项所述的方法 或第二方面任一项所述的方法。
其中,所述芯片系统可以为单个芯片,或者多个芯片组成的芯片模组。
第十方面,本申请实施例提供一种计算机可读存储介质,其上存储有计算机程序,计算机程序被处理器执行时实现上述第一方面中任一项所述的方法或第二方面任一项所述的方法。
第十一方面,本申请实施例提供一种计算机程序产品,当计算机程序产品在终端设备上运行时,使得终端设备执行上述第一方面中任一项所述的方法或第二方面任一项所述的方法。
可以理解的是,上述第五方面至第十一方面的有益效果可以参见上述第一方面、第二方面、第三方面或第四方面中的相关描述,在此不再赘述。
附图说明
图1为本申请实施例所提供的一种终端设备的结构示意图;
图2为本申请实施例所提供的另一种终端设备的结构示意图;
图3为本申请实施例所提供的另一种终端设备的结构示意图;
图4为本申请实施例所提供的一种系统架构图;
图5为本申请实施例所提供的另一种系统架构图;
图6为本申请实施例所提供的另一种系统架构图;
图7为本申请实施例所提供的另一种系统架构图;
图8为本申请实施例所提供的另一种系统架构图;
图9为本申请实施例所提供的另一种系统架构图;
图10为本申请实施例所提供的一种声音信号传播过程的示意图;
图11为本申请实施例所提供的一种声音信号传播过程的示意图;
图12为本申请实施例提供的一种检测设备间相对位置的方法的流程示意图;
图13为本申请实施例提供的另一种检测设备间相对位置的方法的流程示意图;
图14为本申请实施例提供的另一种检测设备间相对位置的方法的流程图;
图15为本申请实施例提供的一种对扬声器进行分组的方法的流程图;
图16为本申请实施例提供的另一种终端设备的结构示意图;
图17为本申请实施例提供的一种显示模式的示意图;
图18为本申请实施例提供的另一种显示模式的示意图;
图19为本申请实施例提供的另一种检测设备间相对位置的方法的流程示意图;
图20为本申请实施例提供的另一种检测设备间相对位置的方法的流程示意图;
图21为本申请实施例提供的另一种检测设备间相对位置的方法的流程图;
图22为本申请实施例提供的一种对麦克风进行分组的方法的流程图;
图23为本申请实施例提供的另一种终端设备的结构示意图;
图24为本申请实施例提供的一种多设备协同处理任务的方法的流程示意图;
图25为本申请实施例所提供的一种协同场景的示意图;
图26为本申请实施例所提供的另一种协同场景的示意图;
图27为本申请实施例所提供的另一种协同场景的示意图;
图28为本申请实施例所提供的另一种协同场景的示意图;
图29为本申请实施例所提供的另一种协同场景的示意图;
图30为本申请实施例所提供的另一种协同场景的示意图;
图31为本申请实施例所提供的另一种协同场景的示意图;
图32为本申请实施例提供的一种发出声音信号的方法的流程示意图;
图33为本申请实施例提供的一种接收声音信号的方法的流程示意图;
图34为本申请实施例提供的另一种检测设备间相对位置的方法的流程示意图;
图35为本申请实施例提供的另一种检测设备间相对位置的方法的流程示意图。
具体实施方式
本申请实施例提供的检测设备间相对位置的方法可以应用于手机、平板电脑、可穿戴设备、 车载设备、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)等终端设备上,本申请实施例对终端设备的具体类型不作任何限制。
图1是本申请实施例提供的一例终端设备100的结构示意图。终端设备100可以包括处理器110、存储器120、通信模块130、声电换能器140和传感器150等。
其中,处理器110可以包括一个或多个处理单元,存储器120用于存储程序代码和数据。在本申请实施例中,处理器110可执行存储器120存储的计算机执行指令,用于对终端设备100的动作进行控制管理。
通信模块130可以用于终端设备100的各个内部模块之间的通信、或者终端设备100和其他外部终端设备之间的通信等。示例性的,如果终端设备100通过有线连接的方式和其他终端设备通信,通信模块130可以包括接口等,例如USB接口,USB接口可以是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口可以用于连接充电器为终端设备100充电,也可以用于终端设备100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他终端设备,例如AR设备等。
或者,通信模块130可以包括音频器件、射频电路、蓝牙芯片、无线保真(wireless fidelity,Wi-Fi)芯片、近距离无线通讯技术(near-field communication,NFC)模块等,可以通过多种不同的方式实现终端设备100与其他终端设备之间的交互。
声电换能器140可以用于声音信号与电信号之间的相互转换。在一些示例中,声电换能器140可以用于将声音信号转换为电信号,和/或,将电信号转换为声音信号。其中,当声电换能器140用于将电信号转换为声音信号时,该声电换能器140也可以被称为扬声器141;当声电换能器用于将声音信号转换为电信号时,该声电换能器140也可以被称为麦克风142。终端设备100可以包括多个声电换能器,多个声电换能器可以分布在终端设备100的不同位置。在一些示例中,多个声电换能器140可以分布在终端设备100的边框上。在一些示例中,多个声电换能器140可以均匀地或者对称地分布在终端设备100上。在一些示例中,终端设备100的边框可以为矩形,且该边框包括两个相对的较长边和两个相对的较短边。
在一些示例中,终端设备100可以包括至少三个扬声器141。至少三个扬声器141位于终端设备100的不同位置,使得存在至少两个平面,对于每个平面,该平面的每一侧都存在至少一个扬声器141。其中,任意两个平面不平行。在一些示例中,至少两个平面包括平面a和平面b,终端设备100的显示屏160包括一组相对的较长边和一组相对的较短边,平面a和平面b互相垂直,且平面a和平面b与显示屏160所在的平面垂直。平面a与较长边平行,平面b与较短边平行;或者,平面a与较短边平行,平面b与较长边平行。
例如,如图2所示,终端设备100包括显示屏160、扬声器a、扬声器b、扬声器c和扬声器d。扬声器a、扬声器b、扬声器c和扬声器d设置在终端设备100的一组较短的边框上。平面a和平面b垂直,平面a和平面b又与显示屏160所在的平面垂直。对于平面a,扬声器a和扬声器c位于平面a的左侧,扬声器b和扬声器d位于平面a的右侧。对于平面b,扬声器a和扬声器b位于平面b的上侧,扬声器c和扬声器d位于平面b的下侧。
在一些示例中,终端设备100可以包括至少三个麦克风142。至少三个麦克风142位于终端设备100的不同位置,使得存在至少两个平面,对于每个平面,该平面的每一侧都存在至少一个麦克风142。其中,任意两个平面不平行。在一些示例中,至少两个平面包括平面c和平面d,终端设备100的显示屏160包括一组相对的较长边和一组相对的较短边,平面c和平面d互相垂直,且平面c和平面d与显示屏160所在的平面垂直。平面c与较长边平行,平面d与较短边平行;或者,平面c与较短边平行,平面d与较长边平行。
例如,如图3所示,终端设备100包括显示屏160、麦克风a、麦克风b、麦克风c和麦克风d。麦克风a、麦克风b、麦克风c和麦克风d设置在终端设备100的一组较短的边框上。平面c和平面d垂直,平面c和平面d又与显示屏160所在的平面垂直。对于平面c,麦克风a和麦克风c位于平面c的左侧,麦克风b和麦克风d位于平面c的右侧。对于平面d,麦克风a和麦克风b位于平面d的上侧,麦克风c和麦克风d位于平面d的下侧。
需要说明的是,若终端设备包括至少三个扬声器141和至少三个麦克风142,则至少三个扬声器141的数目与至少三个麦克风142的数目可以相同或不同,至少三个扬声器141所在的位置与至少三个麦克风142所在的位置可以相同或不同。
例如,结合图2和图3,终端设备包括扬声器a、扬声器b、扬声器c和扬声器d,还包括麦克风a、麦克风b、麦克风c和麦克风d。其中,扬声器a和麦克风a在同一位置,扬声器b和麦克风b在同一位置,扬声器c和麦克风c在同一位置,扬声器d和麦克风d在同一位置。平面a和平面c为同一平面,平面b和平面d为同一平面。对于平面a,扬声器a、扬声器c、麦克风a和麦克风c位于平面a的左侧,扬声器b、扬声器d、麦克风b和麦克风d位于平面a的右侧。对于平面b,扬声器a、扬声器b、麦克风a和麦克风b位于平面b的上侧,扬声器c、扬声器d、麦克风c和麦克风d位于平面b的下侧。
传感器150可以用于检测终端设备100的姿态。在一些示例中,传感器150可以包括陀螺仪传感器、加速度传感器和距离传感器等。
陀螺仪传感器可以用于确定终端设备100的运动姿态。在一些示例中,可以通过陀螺仪传感器确定终端设备100围绕三个轴(即,x,y和z轴)的角速度。
加速度传感器可检测终端设备100在各个方向上(一般为三轴)加速度的大小。当终端设备100静止时可检测出重力的大小及方向。在一些示例中,加速度传感器可以用于横竖屏切换。
距离传感器可以用于测量距离。在一些示例中,终端设备100可以根据距离传感器所测量的距离变化,确定该终端设备100是否发生移动,或者确定该终端设备100附近的其他终端设备是否发生移动。在一些示例中,距离传感器可以包括光传感器。
可选地,终端设备100还可以包括显示屏160,显示屏160可以显示人机交互界面中的图像或视频等。
应理解,除了图1中列举的各种部件或者模块之外,本申请实施例对终端设备100的结构不做具体限定。在本申请另一些实施例中,终端设备100还可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
终端设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构,本申请实施例不对该软件系统的类型进行限定。
请参照图4,为本申请实施例所提供的一种系统架构图。该系统包括设备A 210和设备B 220。设备A 210和设备B 220均可以为前述中的终端设备100。其中,图2可以理解为设备A 210和设备B 220的俯视图,设备A 210和设备B 220之间的位置关系类型可以为上下关系。
设备A 210可以为如图2所示的终端设备100。
设备B 220可以包括麦克风e。麦克风e可以分布在设备B 220的任意位置。在一些示例中,麦克风e可以分布在设备B 220的边框上。比如,如图4所示,麦克风e分布在设备B 220左侧较短的边上。
由图4可知,设备B 220位于设备A 210的上方,扬声器a到设备B 220的麦克风e的距离d1以及扬声器b到麦克风e的距离d2,小于扬声器c到麦克风e的距离d3以及扬声器d到麦克风e的距离d4。可以理解的是,当设备B 220位于设备A 210的下方时(图4未示出),扬声器a到麦克风e的距离d1以及扬声器b到麦克风e的距离d2,大于扬声器c到麦克风e的距离d3以及扬声器d到麦克风e的距离d4。且当设备A 210与设备B 220之间的距离越远时,d1和d2之间的差值越小,d3和d4之间差值越小;当麦克风e的位置越接近与设备A 210的较短边平行的对称轴时,d1和d2之间的差值越小,d3和d4之间差值越小。
由于扬声器a、扬声器b、扬声器c和扬声器d到麦克风e之间的传播介质可以近似认为是相同的,因此,当扬声器a、扬声器b、扬声器c和扬声器d发出声音信号时,该声音信号传递到麦克风e的传播速度v也是相同的,该声音信号从扬声器a传播至麦克风e之间的传播时长T1=d1/v,从扬声器b传播至麦克风e之间的传播时长为T2=d2/v,从扬声器c传播至麦克风e之间的传播时长为T3=d3/v,从扬声器d传播至麦克风e之间的传播时长为T4=d4/v。由于v=d1/T1=d2/T2=d3/T3=d4/T4,因此,T1、T2、T3和T4之间的大小关系,与d1、d2、d3和d4之间的大小关系一致,也即是,可以基于该声音信号从扬声器a、扬声器b、扬声器c和扬声器d传递至麦克风e之间的传播时长,确定扬声器a、扬声器b、扬声器c和扬声器d至麦克风e之间的距离,进而确定设备B 220是在设备A 210的上方或下方。
那么,如果需要检测设备B 220在设备A 210的上方或下方,设备A 210可以通过扬声器组A发出声音信号A,通过扬声器组B发出声音信号B,其中,扬声器组A可以包括扬声器a和扬声器b中的至少一个,扬声器组B可以包括扬声器c和扬声器d中的至少一个,设备B 220基于麦克风e接收声音信号A和声音信号B,确定声音信号A从扬声器组A传播到麦克风e的传播时长TA,确定声音信号B从扬声器组B传播到麦克风e的传播时长TB,再基于TA和TB之间的差异,确定麦克风e距离扬声器组A更近或者距离扬声器组B更近。如果麦克风e更靠近扬声器组A,那么设备B 220更靠近设备A 210中扬声器组A所在的位置,再结合扬声器组A在设备A 210的上方或下方,即可确定设备B 220在设备A 210的上方或下方。相似的,如果麦克风e更靠近扬声器组B,那么设备B 220更靠近设备A 210中扬声器组B所在的位置,再结合扬声器组B在设备A 210的上方或下方,即可确定设备B 220在设备A 210的上方或下方。以图4为例,由于扬声器组A在设备A 210的上方,扬声器组B在设备A 210的下方,因此,当设备B 220更靠近设备A 210中扬声器组A所在的位置时,设备B 220位于设备A 210的上方。或者,可以理解的是,当设备B 220更靠近设备A 210中扬声器组B所在的位置时,设备B 220位于设备A 210的下方。
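上述比较传播时长从而判断上下关系的逻辑,可以用下面的示意性Python片段表示(本段代码为补充示例而非专利原文,其中函数名、声速取340米/秒等均为假设):

```python
# 示意性示例:根据声音信号A、B的传播时长判断设备B相对设备A的上下位置。
# 假设扬声器组A位于设备A的上方一侧,扬声器组B位于下方一侧。

def propagation_time(distance, speed=340.0):
    """传播时长 T = d / v,声速v近似取340米/秒(假设值)。"""
    return distance / speed

def relative_position_up_down(t_a, t_b):
    """t_a、t_b分别为声音信号A、B的传播时长TA、TB。
    麦克风e距离哪个扬声器组更近,设备B就位于设备A的对应一侧。"""
    if t_a < t_b:
        return "上方"  # 更靠近上方的扬声器组A
    if t_a > t_b:
        return "下方"  # 更靠近下方的扬声器组B
    return "无法判断"   # 两侧传播时长近似相等

# 用法示例:麦克风e距扬声器组A为0.3米、距扬声器组B为0.5米
position = relative_position_up_down(propagation_time(0.3), propagation_time(0.5))
```

该片段仅体现"传播时长越短、距离越近"这一比较关系,实际实现中TA、TB由到达时刻与发音时刻计算得到。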
在一些示例中,扬声器组A包括扬声器a或扬声器b,那么TA=T1或TA=T2,相似的,扬声器组B包括扬声器c或扬声器d,TB=T3或TB=T4。
在一些示例中,扬声器组A包括扬声器a和扬声器b,扬声器组B包括扬声器c和扬声器d,那么TA可以与T1和T2正相关,TB可以与T3和T4正相关,且TA与T1和T2的正相关关系,与TB与T3和T4的正相关关系相同。在一些示例中,TA=(T1+T2)/2,TB=(T3+T4)/2。
需要说明的是,上述TA=T1、TA=T2、TB=T3、TB=T4、TA=(T1+T2)/2、TB=(T3+T4)/2,可以表示TA与T1以及T2之间的数学关系、TB与T3以及T4之间的数学关系,在实际应用中,设备B 220可以直接确定TA,而不是确定T1和T2再基于T1和T2确定TA,相似的,设备B 220可以直接确定TB,而不是确定T3和T4再基于T3和T4确定TB。确定TA和TB的方式可以参照下述图10和图11所示的方法。
还需要说明的是,上述声音信号A和声音信号B的区别可以包括发音时刻的不同和/或声音特征(比如频率等)的不同,从而使得接收的设备能够区分声音信号A的和声音信号B传播时长。在一些示例中,声音信号A和声音信号B的发音时刻相同,声音信号A和声音信号B的声音特征可以不同。在另一些示例中,声音信号A和声音信号B的发音时刻不相同,声音信号A和声音信号B的声音特征相同。
还需要说明的是,尽管图2中未示出,但可以理解的是,设备A 210也可以包括麦克风和/或更多的扬声器,相似的,设备B 220也可以包括扬声器和/或更多的麦克风。
请参照图5,为本申请实施例提供的一种系统架构图。其中,图5中设备A 210和设备B 220的位置关系类型可以为左右关系。
由图5可知,设备B 220位于设备A 210右方,扬声器a到设备B 220的麦克风e的距离d1以及扬声器c到麦克风e的距离d3,大于扬声器b到麦克风e的距离d2以及扬声器d到麦克风e的距离d4。可以理解的是,当设备B 220位于设备A 210的左方时(图5未示出),扬声器a到麦克风e的距离d1以及扬声器c到麦克风e的距离d3,小于扬声器b到麦克风e的距离d2以及扬声器d到麦克风e的距离d4。且当设备A 210与设备B 220之间的距离越远时,d1和d3之间的差值越小,d2和d4之间差值越小;当麦克风e的位置越接近与设备A 210的较长边平行的对称轴时,d1和d3之间的差值越小,d2和d4之间差值越小。
那么,如果需要检测设备B 220在设备A 210的左方或右方,设备A 210可以通过扬声器组A发出声音信号A,通过扬声器组B发出声音信号B,其中,扬声器组A可以包括扬声器a和扬声器c中的至少一个,扬声器组B可以包括扬声器b和扬声器d中的至少一个,设备B 220基于麦克风e接收声音信号A和声音信号B,确定声音信号A从扬声器组A传播到麦克风e的传播时长TA,确定声音信号B从扬声器组B传播到麦克风e的传播时长TB,再基于TA和TB之间的差异,确定麦克风e距离扬声器组A更近或者距离扬声器组B更近。如果麦克风e更靠近扬声器组A,那么设备B 220更靠近设备A 210中扬声器组A所在的位置,再结合扬声器组A在设备A 210的左方或右方,即可确定设备B 220在设备A 210的左方或右方。相似的,如果麦克风e更靠近扬声器组B,那么设备B 220更靠近设备A 210中扬声器组B所在的位置,再结合扬声器组B在设备A 210的左方或右方,即可确定设备B 220在设备A 210的左方或右方。以图5为例,由于扬声器组A在设备A 210的左方,扬声器组B在设备A 210的右方,因此,当设备B 220更靠近设备A 210中扬声器组B所在的位置时,设备B 220位于设备A 210的右方,或者可以理解的是,当设备B 220更靠近设备A 210中扬声器组A所在的位置时,设备B 220位于设备A 210的左方。
在一些示例中,扬声器组A包括扬声器a或扬声器c,那么TA=T1或TA=T3,相似的,扬声器组B包括扬声器b或扬声器d,TB=T2或TB=T4。
在一些示例中,扬声器组A包括扬声器a和扬声器c,扬声器组B包括扬声器b和扬声器d,那么TA可以与T1和T3正相关,TB可以与T2和T4正相关,且TA与T1和T3的正相关关系,与TB与T2和T4的正相关关系相同。在一些示例中,TA=(T1+T3)/2,TB=(T2+T4)/2。
需要说明的是,上述TA=T1、TA=T3、TB=T2、TB=T4、TA=(T1+T3)/2、TB=(T2+T4)/2,可以表示TA与T1以及T3之间的数学关系、TB与T2以及T4之间的数学关系,在实际应用中,设备B 220可以直接确定TA,而不是确定T1和T3再基于T1和T3确定TA,相似的,设备B 220可以直接确定TB,而不是确定T2和T4再基于T2和T4确定TB。确定TA和TB的方式可以参照下述图10和图11所示的方法。
请参照图6,为本申请实施例提供的一种系统架构图。其中,图6所示的系统可以理解为图4和图5所示的系统的结合。图6中系统还包括设备C 230,设备C 230可以为前述中的终端设备100,设备C 230包括麦克风f,设备A 210和设备B 220的位置关系类型可以为上下关系,设备A 210和设备C 230的位置关系类型可以为左右关系。
设备A 210通过扬声器组A发出声音信号A,通过扬声器组B发出声音信号B。设备B 220通过麦克风e接收声音信号A和声音信号B,并确定声音信号A从设备A 210传播到设备B 220的传播时长TA,以及声音信号B从设备A 210传播到设备B 220的传播时长TB,根据TA和TB的大小关系,确定设备B 220在设备A 210的上方或下方。
设备A 210还通过扬声器组C发出声音信号C,通过扬声器组D发出声音信号D。设备C 230通过麦克风f接收声音信号C和声音信号D,并确定声音信号C从设备A 210传播到设备C 230的传播时长TC,以及声音信号D从设备A 210传播到设备C 230的传播时长TD,根据TC和TD的大小关系,确定设备C 230在设备A 210的左方或右方。
其中,扬声器组A可以包括扬声器a和扬声器b中的至少一个,扬声器组B可以包括扬声器c和扬声器d中的至少一个,扬声器组C可以包括扬声器a和扬声器c中的至少一个,扬声器组D可以包括扬声器b和扬声器d中的至少一个。
也即是,设备A 210可以包括多对扬声器组,每个扬声器组可以包括至少一个扬声器,每对扬声器组分别位于某一个平面的两侧,该平面使得该对扬声器组之间的位置符合一种位置关系类型,比如上下关系或左右关系等。每对扬声器组可以发出一组声音信号,一组声音信号包括两种声音信号,每种声音信号分别由不同扬声器组的扬声器发出,这两种声音信号的发音时刻和/或声音特征不同,从而使得其他设备,根据该两个声音信号,确定该其他设备与设备A 210之间具体的相对位置,提高了检测设备间相对位置的效率。
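多对扬声器组的判断结果可以组合出更具体的方位,下面的示意性Python片段演示这一组合方式(代码为补充示例,其中的标签与函数名均为假设):

```python
# 示意性示例:上下一对扬声器组(A/B)与左右一对扬声器组(C/D)的结果组合。

def side_from_times(t_first, t_second, first_label, second_label):
    """比较一对扬声器组对应的传播时长,返回更靠近的一侧的标签。"""
    return first_label if t_first < t_second else second_label

def combined_relative_position(t_a, t_b, t_c, t_d):
    """t_a/t_b对应上/下扬声器组,t_c/t_d对应左/右扬声器组。"""
    vertical = side_from_times(t_a, t_b, "上", "下")
    horizontal = side_from_times(t_c, t_d, "左", "右")
    return vertical + horizontal  # 例如"上左"、"下右"

result = combined_relative_position(0.001, 0.002, 0.004, 0.003)  # "上右"
```

实际系统中,两对扬声器组的结果也可以分别由不同设备(如设备B 220与设备C 230)各自确定。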
需要说明的是,声音信号C和声音信号D的区别,可以与声音信号A和声音信号B的区别相同。
请参照图7,为本申请实施例所提供的一种系统架构图。其中,设备A 210和设备B 220的位置关系类型可以为上下关系。
设备A 210可以为前述图3所示的终端设备100。
设备B 220可以包括扬声器e。扬声器e可以分布在设备B 220的任意位置。在一些示例中,扬声器e可以分布在设备B 220的边框上。比如,如图7所示,扬声器e分布在设备B 220左侧较短的边上。
由图7可知,设备B 220位于设备A 210的上方,麦克风a到设备B 220的扬声器e的距离d1以及麦克风b到扬声器e的距离d2,小于麦克风c到扬声器e的距离d3以及麦克风d到扬声器e的距离d4。可以理解的是,当设备B 220位于设备A 210的下方时(图7未示出),麦克风a到扬声器e的距离d1以及麦克风b到扬声器e的距离d2,大于麦克风c到扬声器e的距离d3以及麦克风d到扬声器e的距离d4。且当设备A 210与设备B 220之间的距离越远时,d1和d2之间的差值越小,d3和d4之间差值越小;当扬声器e的位置越接近与设备A 210的较短边平行的对称轴时,d1和d2之间的差值越小,d3和d4之间差值越小。
由于麦克风a、麦克风b、麦克风c和麦克风d到扬声器e之间的传播介质是相同的,因此,当扬声器e发出声音信号时,该声音信号传递到麦克风a、麦克风b、麦克风c和麦克风d的传播速度v也是相同的,该声音信号从扬声器e传播至麦克风a之间的传播时长T1=d1/v,从扬声器e传播至麦克风b之间的传播时长为T2=d2/v,从扬声器e传播至麦克风c之间的传播时长为T3=d3/v,从扬声器e传播至麦克风d之间的传播时长为T4=d4/v。由于v=d1/T1=d2/T2=d3/T3=d4/T4,因此,T1、T2、T3和T4之间的大小关系,与d1、d2、d3和d4之间的大小关系一致,也即是,可以基于该声音信号从扬声器e传递至麦克风a、麦克风b、麦克风c和麦克风d之间的传播时长,确定麦克风a、麦克风b、麦克风c和麦克风d至扬声器e之间的距离,进而确定设备B 220是在设备A 210的上方或下方。
那么,如果需要检测设备B 220在设备A 210的上方或下方,设备B 220可以通过扬声器e发出声音信号A和声音信号B,设备A 210基于麦克风组A接收声音信号A,基于麦克风组B接收声音信号B,其中,麦克风组A可以包括麦克风a和麦克风b中的至少一个,麦克风组B可以包括麦克风c和麦克风d中的至少一个,确定声音信号A从扬声器e传播到麦克风组A的传播时长TA,确定声音信号B从扬声器e传播到麦克风组B的传播时长TB,再基于TA和TB之间的差异,确定扬声器e距离麦克风组A更近或者距离麦克风组B更近。如果扬声器e更靠近麦克风组A,那么设备B 220更靠近设备A 210中麦克风组A所在的位置,再结合麦克风组A在设备A 210的上方或下方,即可确定设备B 220在设备A 210的上方或下方。相似的,如果扬声器e更靠近麦克风组B,那么设备B 220更靠近设备A 210中麦克风组B所在的位置,再结合麦克风组B在设备A 210的上方或下方,即可确定设备B 220在设备A 210的上方或下方。以图7为例,由于麦克风组A在设备A 210的上方,麦克风组B在设备A 210的下方,因此,当设备B 220更靠近设备A 210中麦克风组A所在的位置时,设备B 220位于设备A 210的上方。或者,可以理解的是,当设备B 220更靠近设备A 210中麦克风组B所在的位置时,设备B 220位于设备A 210的下方。
在一些示例中,麦克风组A包括麦克风a或麦克风b,那么TA=T1或TA=T2,相似的,麦克风组B包括麦克风c或麦克风d,TB=T3或TB=T4。
在一些示例中,麦克风组A包括麦克风a和麦克风b,麦克风组B包括麦克风c和麦克风d,那么TA可以与T1和T2正相关,TB可以与T3和T4正相关,且TA与T1和T2的正相关关系,与TB与T3和T4的正相关关系相同。在一些示例中,TA=(T1+T2)/2,TB=(T3+T4)/2。
还需要说明的是,尽管图7中未示出,但可以理解的是,设备A 210也可以包括扬声器和/或更多的麦克风,相似的,设备B 220也可以包括麦克风和/或更多的扬声器。
请参照图8,为本申请实施例提供的一种系统架构图。其中,图8中设备A 210和设备B 220的位置关系类型可以为左右关系。
由图8可知,设备B 220位于设备A 210的右方,麦克风a到设备B 220的扬声器e的距离d1以及麦克风c到扬声器e的距离d3,大于麦克风b到扬声器e的距离d2以及麦克风d到扬声器e的距离d4。且可以理解的是,当设备B 220位于设备A 210的左方时,麦克风a到扬声器e的距离d1以及麦克风c到扬声器e的距离d3,小于麦克风b到扬声器e的距离d2以及麦克风d到扬声器e的距离d4。且当设备A 210与设备B 220之间的距离越远时,d1和d3之间的差值越小,d2和d4之间差值越小;当扬声器e的位置越接近与设备A 210的较长边平行的对称轴时,d1和d3之间的差值越小,d2和d4之间差值越小。
那么,如果需要检测设备B 220在设备A 210的左方或右方,设备B 220通过扬声器e发出 声音信号A和声音信号B,设备A 210基于麦克风组A接收声音信号A,基于麦克风组B接收声音信号B,其中,麦克风组A可以包括麦克风a和麦克风c中的至少一个,麦克风组B可以包括麦克风b和麦克风d中的至少一个,确定声音信号A从扬声器e传播到麦克风组A传播到的传播时长TA,确定声音信号B从扬声器e传播到麦克风组B的传播时长TB,再基于TA和TB之间的差异,确定扬声器e距离麦克风组A更近或者距离麦克风组B更近。如果扬声器e更靠近麦克风组A,那么设备B 220更靠近设备A 210中麦克风组A所在的位置,再结合麦克风组A在设备A 210的左方或右方,即可确定设备B 220在设备A 210的左方或右方。相似的,如果扬声器e更靠近麦克风组B,那么设备B 220更靠近设备A 210中麦克风组B所在的位置,再结合麦克风组B在设备A 210的左方或右方,即可确定设备B 220在设备A 210的左方或右方。以图8为例,由于麦克风组A在设备A 210的左方,麦克风组B在设备A 210的右方,因此,当设备B 220更靠近设备A 210中麦克风组B所在的位置时,设备B 220位于设备A 210的右方;或者,可以理解的,当设备B 220更靠近设备A 210中麦克风组A所在的位置时,设备B 220位于设备A 210的左方。
在一些示例中,麦克风组A包括麦克风a或麦克风c,那么TA=T1或TA=T3,相似的,麦克风组B包括麦克风b或麦克风d,TB=T2或TB=T4。
在一些示例中,麦克风组A包括麦克风a和麦克风c,麦克风组B包括麦克风b和麦克风d,那么TA可以与T1和T3正相关,TB可以与T2和T4正相关,且TA与T1和T3的正相关关系,与TB与T2和T4的正相关关系相同。在一些示例中,TA=(T1+T3)/2,TB=(T2+T4)/2。
请参照图9,为本申请实施例提供的一种系统架构图。其中,图9所示的系统可以理解为图7和图8所示的系统的结合。图9中系统还包括设备C 230,设备C 230可以为前述中的终端设备100,设备C 230包括扬声器f,设备A 210和设备B 220的位置关系类型可以为上下关系,设备A 210和设备C 230的位置关系类型可以为左右关系。
设备B 220通过扬声器e发出声音信号A和声音信号B。设备C 230通过扬声器f发出声音信号C和声音信号D。
设备A 210通过麦克风组A接收声音信号A,通过麦克风组B接收声音信号B,并确定声音信号A从设备B 220传播到设备A 210的传播时长TA,以及声音信号B从设备B 220传播到设备A 210的传播时长TB,根据TA和TB的大小关系确定设备B 220在设备A 210的上方或下方。
设备A 210还通过麦克风组C接收声音信号C,通过麦克风组D接收声音信号D,并确定声音信号C从设备C 230传播到设备A 210的传播时长TC,以及声音信号D从设备C 230传播到设备A 210的传播时长TD,根据TC和TD的大小关系确定设备C 230在设备A 210的左方或右方。
其中,麦克风组A可以包括麦克风a和麦克风b中的至少一个,麦克风组B可以包括麦克风c和麦克风d中的至少一个,麦克风组C可以包括麦克风a和麦克风c中的至少一个,麦克风组D可以包括麦克风b和麦克风d中的至少一个。
也即是,设备A 210可以包括多对麦克风组,每个麦克风组可以包括至少一个麦克风,每对麦克风组分别位于某一个平面的两侧,该平面使得该对麦克风组之间的位置符合一种位置关系类型,比如上下关系或左右关系等。每对麦克风组可以接收一组声音信号,一组声音信号包括两种声音信号,每种声音信号分别由不同麦克风组的麦克风接收,这两种声音信号的发音时刻和/或声音特征不同,从而可以根据该两个声音信号,确定该其他设备与设备A 210之间具体的相对位置,提高了检测设备间相对位置的效率。
结合上述图4-图9所示的系统可知,确定设备A 210与设备B 220之间相对位置的步骤,可以由接收声音信号的设备来执行,但可以理解的是,在实际应用中,也可以由接收声音信号的设备将确定设备A 210与设备B 220之间相对位置所需的数据发送至另一设备,再由另一设备基于所接收到的数据来确定设备A 210与设备B 220之间的相对位置。
例如,在如图4-图6所示的系统中,设备B 220可以将接收到声音信号A的到达时刻和声音信号B的到达时刻,发送给设备A 210,设备A 210根据声音信号A的到达时刻和声音信号B的到达时刻,确定声音信号A的传播时长TA和声音信号B的传播时长TB,再基于TA和TB的大小关系,确定设备B 220与设备A 210的相对位置。
又例如,在如图7-图9所示的系统中,设备A 210将接收到声音信号A的到达时刻和声音信号B的到达时刻,发送给设备B 220,设备B 220根据声音信号A的到达时刻和声音信号B的到达时刻,确定声音信号A的传播时长TA和声音信号B的传播时长TB,再基于TA和TB的大小关系,确定设备B 220与设备A 210的相对位置。
在一些示例中,上述声音信号A、声音信号B、声音信号C和声音信号D可以为超声波或次声波,从而减少用户听到声音信号A、声音信号B、声音信号C和声音信号D的可能,降低或避免检测设备间相对位置的过程对用户的打扰,提高用户体验。
通过上述图4-图9所示的系统,介绍了在确定设备A 210和设备B 220之间的相对位置的过程中,设备A 210和设备B 220的功能和角色。接下来请继续参照图10-图11,将说明如何确定声音信号A的传播时长TA。
请参照图10,设备A 210包括扬声器组A,扬声器组A包括扬声器a,即扬声器组A只包括一个扬声器。设备A 210在发音时刻1,通过扬声器a发出声音信号A,设备B 220在到达时刻1通过麦克风e接收到声音信号A,那么声音信号A的传播时长TA=T1=到达时刻1-发音时刻1。
请参照图11,设备A 210包括扬声器组A,扬声器组A包括扬声器a和扬声器c,也即是,扬声器组A包括多个扬声器。设备A 210在发音时刻1,通过扬声器a和扬声器c发出声音信号A,设备B 220在不同时刻接收两次声音信号A。
在一些示例中,设备B 220分别确定两次接收到声音信号A的到达时刻1a和到达时刻1b,基于到达时刻1a和到达时刻1b确定到达时刻1,比如,设备B 220可以确定到达时刻1=(到达时刻1a+到达时刻1b)/2,又或者,设备B 220可以将到达时刻1a和到达时刻1b中的任一个确定为到达时刻1。当确定到达时刻1时,再确定TA=到达时刻1-发音时刻1。
在另一些示例中,设备B 220将两次接收到的声音信号A进行合并,得到新的声音信号A,基于新的声音信号A确定到达时刻1,再确定TA=到达时刻1-发音时刻1。
其中,设备B 220在接收到任一声音信号时,可以将该声音信号中幅值最大的时刻确定为该声音信号的到达时刻,当然在实际应用中,设备B 220也可以通过其他方式来确定该声音信号的到达时刻。
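将幅值最大的时刻作为到达时刻,以及对两次到达时刻取平均,可以用如下示意性Python片段表示(代码为补充示例,采样率、采样序列等参数均为假设):

```python
# 示意性示例:在麦克风采样序列中取幅值绝对值最大的采样点作为到达时刻。

def arrival_time(samples, sample_rate, start_time=0.0):
    """samples为采样序列,sample_rate为采样率(Hz),start_time为首个采样点的时刻(秒)。"""
    peak_index = max(range(len(samples)), key=lambda i: abs(samples[i]))
    return start_time + peak_index / sample_rate

def merged_arrival_time(t1a, t1b):
    """同一声音信号被两次接收时,可取两次到达时刻的平均值作为到达时刻1。"""
    return (t1a + t1b) / 2

samples = [0.0, 0.1, -0.9, 0.3, 0.2]                       # 峰值位于索引2
t = arrival_time(samples, sample_rate=4, start_time=1.0)   # 1.0 + 2/4 = 1.5
```

实际实现中,到达时刻也可以通过与已知波形做相关运算等更鲁棒的方式确定。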
在一些示例中,设备A 210可以将发音时刻1通知给设备B 220,且本申请实施例不对设备A 210向设备B 220通知发音时刻1的方式进行限定。在一些示例中,设备A 210可以通过蓝牙、WIFI等近距离通信,将发音时刻1发送给设备B 220。在另一些实施例中,设备A 210可以通过对声音信号A进行调制,从而将发音时刻1携带在声音信号A中,在到达发音时刻1时发出声音信号A,设备B 220通过对声音信号A进行解调,从而得到发音时刻1。
另外,在图7-图9所示的系统中,设备A 210确定声音信号A的传播时长TA的方式,与前述图4-图6所示的系统中设备B 220确定声音信号A的传播时长TA的方式相似或相同。
在一些示例中,设备A 210包括麦克风组A,麦克风组A包括麦克风a,即麦克风组A只包括一个麦克风。设备B 220在发音时刻1,通过扬声器e发出声音信号A,设备A 210在到达时刻1通过麦克风a接收到声音信号A,那么声音信号A的传播时长TA=T1=到达时刻1-发音时刻1。
在一些示例中,设备A 210包括麦克风组A,麦克风组A包括麦克风a和麦克风c,即麦克风组A包括多个麦克风。设备B 220在发音时刻1,通过扬声器e发出声音信号A,设备A 210先后通过麦克风a和麦克风c接收到声音信号A。那么设备A 210可以分别确定两次接收到声音信号A的到达时刻1a和到达时刻1b,基于到达时刻1a和到达时刻1b确定到达时刻1,比如,设备A 210可以确定到达时刻1=(到达时刻1a+到达时刻1b)/2,又或者,设备A 210可以将到达时刻1a和到达时刻1b中的任一个确定为到达时刻1。当确定到达时刻1时,再确定TA=到达时刻1-发音时刻1。或者,设备A 210将两次接收到的声音信号A进行合并,得到新的声音信号A,基于新的声音信号A确定到达时刻1,再确定TA=到达时刻1-发音时刻1。
需要说明的是,设备A 210或设备B 220确定声音信号B的传播时长TB的方式,可以与确定TA的方式相同或相似。
下面以具体地实施例对本申请的技术方案进行详细说明。下面这几个具体的实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例不再赘述。
请参照图12,为本申请实施例所提供的一种检测设备间相对位置的方法的流程图。其中,该方法可以用于图4-图6任一所示的系统。需要说明的是,该方法并不以图12以及以下所述的具体顺序为限制,应当理解,在其它实施例中,该方法其中部分步骤的顺序可以根据实际需要相互交换,或者其中的部分步骤也可以省略或删除。该方法包括如下步骤:
S1201,设备A 210检测到第二事件。
其中,第二事件用于触发检测设备A 210与设备B 220之间的相对位置关系。且需要说明的是,第二事件可以为事先设置的事件。
在一些示例中,第二事件可以包括设备A 210与设备B 220建立通信连接。当设备A 210与设备B 220建立通信连接时,设备A 210与设备B 220之间的相对位置关系,可能会影响设备A 210和/或设备B 220的运行,因此,设备A 210可以检测设备A 210与设备B 220之间的相对位置关系。
在一些示例中,第二事件可以包括设备A 210发现设备B 220。在一些示例中,设备A 210可以在接收到设备B 220发送的特定信号时,确定发现设备B 220,比如设备A 210可以在接收到设备B 220发送的信标帧时,确定发现设备B 220,其中该信标帧可以用于指示设备B 220的热点名称等信息。在一些示例中,设备A 210可以在设备B 220从离线状态变为在线状态时,确定发现设备B 220。当然,在实际应用中,设备A 210也可以通过其他方式来发现设备B 220,本申请实施例不对设备A 210发现设备B 220的方式进行限定。
在一些示例中,第二事件可以包括设备A 210创建第一任务,其中,第一任务可以为依赖于设备A 210与设备B 220之间的相对位置关系而执行的任务。在一些示例中,第一任务可以为设备A 210与设备B 220协同处理的任务。
在一些示例中,第二事件可以包括设备A 210接收到设备B 220发送的第一请求,其中,第一请求可以用于触发检测设备A 210与设备B 220之间的相对位置关系。
在一些示例中,第二事件可以包括设备A 210的姿态发生变化。其中,该姿态发生变化可以包括设备A 210发生移动、晃动和旋转等。由于当设备A 210的姿态发生变化时,可能会导致设备A 210与设备B 220之间的相对位置发生变化,因此在设备A 210的姿态发生变化时,可以检测设备A 210与设备B 220之间的相对位置关系。在一些示例中,第二事件可以包括设备A 210的姿态变化幅度大于预设的幅度阈值的事件,其中,该幅度阈值可以用于说明设备A 210检测设备A 210与设备B 220之间的相对位置这一操作,对设备A 210的姿态变化的敏感程度。当该幅度阈值较小时,设备A 210可以在检测到设备A 210的姿态发生较小幅度变化时,检测设备A 210与设备B 220之间的相对位置关系,即检测设备A 210与设备B 220之间的相对位置关系的频率较大;当该幅度阈值较大时,设备A 210可以在检测到设备A 210的姿态发生较大幅度变化时,检测设备A 210与设备B 220之间的相对位置关系,即检测设备A 210与设备B 220之间的相对位置关系的频率较小。在一些示例中,第二事件可以为设备A 210旋转的角度大于或等于90度。需要说明的是,本申请实施例不对设备A 210的姿态变化方式以及幅度阈值的大小进行限定。
在一些示例中,第二事件可以包括设备B 220的姿态发生变化。由于当设备B 220的姿态发生变化时,也可能会导致设备A 210与设备B 220之间的相对位置发生变化,因此设备A 210可以在检测到设备B 220的姿态发生变化时,检测设备A 210与设备B 220之间的相对位置关系。在一些示例中,第二事件可以为设备B 220的姿态变化幅度大于预设的幅度阈值的事件。
在一些示例中,第二事件可以包括设备A 210的显示模式发生变化。在一些示例中,显示模式可以包括横屏显示和竖屏显示,该显示模式发生变化可以包括从横屏显示变为竖屏显示,或者,从竖屏显示变为横屏显示。在一些示例中,显示模式可以包括主屏显示和副屏显示,该显示模式发生变化可以包括从主屏显示变为副屏显示,或者,从副屏显示变为主屏显示。在一些示例中,显示模式可以包括分屏显示和全屏显示,该显示模式发生变化可以包括从分屏显示变为全屏显示,或者,从全屏显示变为分屏显示。
需要说明的是,设备A 210也可以在其他情况下执行下述至少部分步骤,从而检测设备A 210与设备B 220之间的相对位置,因此S1201为可选的步骤。
S1202,设备A 210确定扬声器组A和扬声器组B。
其中,扬声器组A和扬声器组B均可以包括至少一个扬声器,且扬声器组A和扬声器组B所包括的扬声器的数目,可以小于或等于设备A 210所包括的扬声器的总数目。扬声器组A和扬声器组B可以分别位于某一个平面的两侧,扬声器组A和扬声器组B之间的相对位置为第三相对位置,第三相对位置所匹配的位置关系类型为第一位置关系类型。
第一位置关系类型可以为设备A 210包括的至少三个扬声器可以划分成的多种位置关系类型中的任一种。在一些示例中,多种位置关系类型可以包括上下关系和左右关系中的至少一种,当然,多种位置关系类型还可以包括其他更多的位置关系类型,比如可以包括左上与右下关系以及左下与右上关系中的至少一种。
第三相对位置可以为与第一位置关系类型匹配的相对位置。在一些示例中,第一位置关系类型包括两种相对位置,第三相对位置可以为该两种相对位置中的任一种。例如,在图4中,扬声器组A和扬声器组B之间的第三相对位置可以为扬声器组A在扬声器组B上方,与第三相对位置关系所匹配的第一位置关系类型可以为上下关系。在图5中,扬声器组A和扬声器组B之间的第三相对位置可以为扬声器组A在扬声器组B左方,与第三相对位置关系所匹配的第一位置关系类型可以为左右关系。
在一些示例中,设备A 210可以先确定扬声器组A和扬声器组B,所确定的扬声器组A和扬声器组B之间的相对位置即为第三相对位置,第三相对位置所匹配的位置关系类型即为第一位置关系类型。或者,在另一些示例中,设备A 210也可以先确定第一位置关系类型,再基于第一位置关系类型确定扬声器组A和扬声器组B,扬声器组A和扬声器组B之间的相对位置即为第三相对位置。
在一些示例中,设备A 210存储有多对扬声器组,每对扬声器组包括分别位于一个平面两侧的两个扬声器组,设备A 210可以在多对扬声器组中确定扬声器组A和扬声器组B。
在一些示例中,设备A 210对设备A 210包括的至少三个扬声器中的全部或者部分扬声器分组,从而确定扬声器组A和扬声器组B。
需要说明的是,设备A 210确定扬声器组A和扬声器组B的方式,也可以参照下述图15所示的方法。
还需要说明的是,S1202为可选的步骤。
S1203,设备A 210通过扬声器组A发出声音信号A,通过扬声器组B发出声音信号B。
在一些示例中,设备A 210发出声音信号A的发音时刻1和发出声音信号B的发音时刻2可以相同,那么为了便于设备B 220区别声音信号A和声音信号B,声音信号A的声音特征和声音信号B的声音特征可以不同。在一些示例中,声音信号A的频率和声音信号B的频率可以不同。
在一些示例中,设备A 210发出声音信号A的发音时刻1和发出声音信号B的发音时刻2可以不同。由于发音时刻1和发音时刻2不同,设备B 220已经可以基于发音时刻1和发音时刻2区分声音信号A和声音信号B,因此,声音信号A的频率和声音信号B的频率可以相同,也可以不同。
在一些示例中,为了便于设备B 220更准确地接收声音信号A和声音信号B、提高后续检测设备A 210与设备B 220之间相对位置的准确性,设备A 210还可以向设备B 220发送第一配置信息。其中,第一配置信息可以用于指示发出声音信号A和声音信号B的方式;和/或,第一配置信息可以用于指示声音信号A的声音特征和声音信号B的声音特征;和/或,第一配置信息可以用于指示发出声音信号A的发音时刻1和发出声音信号B的发音时刻2。当然,在实际应用中,第一配置信息还可以用于指示更多或更少的与检测设备间相对位置相关的信息,比如第一配置信息可以用于指示第一位置关系类型和/或扬声器组A和扬声器组B之间的第三相对位置。在一些示例中,设备A 210可以通过调制将第一配置信息携带在声音信号A和/或声音信号B中。
需要说明的是,第一配置信息包括的至少部分信息,也可以是由设备A 210事先发送给设备B 220,或者设备B 220通过其他方式获取得到该至少部分信息,因此,设备A 210也可以不必在每次发出声音信号A和声音信号B时,都向设备B 220发送该至少部分信息。例如,设备A 210可以事先将声音信号A的声音特征和声音信号B的声音特征,发出声音信号A和声音信号B的方式等信息发送给设备B 220,或者,由相关技术人员在设备B 220出厂之前,就在设备B 220中预先设置声音信号A的声音特征和声音信号B的声音特征,发出声音信号A和声音信号B的方式。
S1204,设备B 220基于接收到声音信号A的到达时刻1和接收到声音信号B的到达时刻2,确定设备A 210与设备B 220之间的第一相对位置。
由于设备A 210是通过处于第三相对位置的扬声器组A和扬声器组B发出声音信号A和声音信号B,那么设备B 220基于接收到声音信号A的到达时刻1和接收到声音信号B的到达时刻2,即可以确定声音信号A的传播时长TA和声音信号B的传播时长TB之间的大小关系,进而比较设备B 220到扬声器组A的距离与设备B 220到扬声器组B的距离,再基于扬声器组A和扬声器组B的第三相对位置以及第一位置关系类型,准确地确定设备A 210与设备B 220之间的第一相对位置。其中,第一相对位置与第一位置关系类型匹配。
在一些示例中,设备B 220可以通过同一个麦克风(比如麦克风e)接收声音信号A和声音信号B,到达时刻1和到达时刻2分别为该麦克风接收到声音信号A和声音信号B的到达时刻。
在一些示例中,设备B 220可以通过自相关算法识别声音信号A和声音信号B。在一些示例中,设备B 220可以获取设备A 210发送的声音信号A的声音特征和声音信号B的声音特征,当接收到的某个声音信号的声音特征与声音信号A的声音特征的相似度大于预设的相似度阈值时,确定接收到声音信号A,当接收到的某个声音信号的声音特征与声音信号B的声音特征的相似度大于预设的相似度阈值时,确定接收到声音信号B。
在一些示例中,设备B 220可以将声音信号A的最大幅值对应的时刻确定为到达时刻1,将声音信号B的最大幅值对应的时刻确定为到达时刻2。当然,在实际应用中,设备B 220也可以通过其他方式来确定到达时刻1和到达时刻2,只要确定到达时刻1的方式和确定到达时刻2的方式相同即可。
在一些示例中,设备B 220可以接收设备A 210发送的第一配置信息。在一些示例中,设备B 220可以对声音信号A和/或声音信号B进行解调,从而得到携带在声音信号A或声音信号B中的第一配置信息。
在一些示例中,若设备A 210是同时发出声音信号A和声音信号B,而设备B 220是通过同一个麦克风接收声音信号A和声音信号B,则设备B 220可以对比到达时刻1和到达时刻2。如果到达时刻1小于到达时刻2,也即是TA小于TB,则设备B 220可以确定设备B 220到扬声器组A比设备B 220到扬声器组B更近。如果到达时刻1大于到达时刻2,也即是TA大于TB,则设备B 220可以确定设备B 220到扬声器组B比设备B 220到扬声器组A更近。在一些示例中,若设备A 210是在不同时刻发出声音信号A和声音信号B,而设备B 220是通过同一麦克风接收声音信号A和声音信号B,则设备B 220可以比较声音信号A的传播时长TA和声音信号B的传播时长TB,其中,TA=到达时刻1-发音时刻1,TB=到达时刻2-发音时刻2。如果TA小于TB,则设备B 220可以确定设备B 220到扬声器组A比设备B 220到扬声器组B更近。如果TA大于TB,则设备B 220可以确定设备B 220到扬声器组B比设备B 220到扬声器组A更近。
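上述两种情形(同时发音时直接比较到达时刻、不同时发音时比较传播时长)可以统一为如下示意性Python片段(代码为补充示例,函数名与返回值均为假设):

```python
# 示意性示例:判断接收端的麦克风更靠近扬声器组A还是扬声器组B。

def closer_group(arrival_a, arrival_b, emit_a=None, emit_b=None):
    """若两信号发音时刻相同(或未提供),直接比较到达时刻;
    否则比较传播时长 TA=到达时刻1-发音时刻1 与 TB=到达时刻2-发音时刻2。"""
    if emit_a is None or emit_b is None or emit_a == emit_b:
        t_a, t_b = arrival_a, arrival_b
    else:
        t_a, t_b = arrival_a - emit_a, arrival_b - emit_b
    if t_a < t_b:
        return "A"
    if t_a > t_b:
        return "B"
    return "equal"

g1 = closer_group(0.010, 0.012)                            # 同时发音:A更近
g2 = closer_group(0.110, 0.061, emit_a=0.1, emit_b=0.05)   # TA≈0.010 < TB≈0.011:A更近
```

返回"A"或"B"后,再结合扬声器组A、B的第三相对位置即可得到第一相对位置。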
再结合扬声器组A和扬声器组B的第三相对位置以及第一位置关系类型,当设备B 220确定设备B 220距离扬声器组A更近或者距离扬声器组B更近时,也就确定了设备B 220与设备A 210之间的与该第一位置关系类型匹配的第一相对位置。例如,如图4所示,第一位置关系类型为上下关系,扬声器组A位于扬声器组B上方,那么设备B 220到扬声器组A比设备B 220到扬声器组B更近,也即是设备B 220处于设备A 210的上方;或者,可以理解的,设备B 220到扬声器组B比设备B 220到扬声器组A更近,也即是设备B 220处于设备A 210的下方。又例如,如图5所示,第一位置关系类型为左右关系,扬声器组A位于扬声器组B的左方,那么设备B 220到扬声器组B比设备B 220到扬声器组A更近,也即是设备B 220处于设备A 210的右方;或者,可以理解的,设备B 220到扬声器组A比设备B 220到扬声器组B更近,也即是设备B 220处于设备A 210的左方。
在一些示例中,设备B 220可以通过多个麦克风来接收声音信号A和声音信号B。在一些示例中,该多个麦克风之间的距离可以忽略不计,比如该多个麦克风为同一个麦克风阵列中不同的麦克风单元,同一声音信号传递到该多个麦克风的时长几乎相同,因此仍可以按照通过同一麦克风接收到声音信号A和声音信号B来处理。在另一些示例中,该多个麦克风之间的距离比较大,同一声音信号传递到不同麦克风所需的时长差异较大,那么设备B 220可以分别通过多个麦克风接收声音信号A和声音信号B,并得到多个第一相对位置,当该多个第一相对位置相同时,确定第一相对位置有效,当该多个第一相对位置不同时,确定该多个第一相对位置无效并重新确定新的第一相对位置,也即是,可以通过比较基于多个麦克风接收声音信号A和声音信号B并确定第一相对位置,来确保第一相对位置的可靠性。或者,在另一些示例中,该多个麦克风之间的距离比较大,同一声音信号传递到不同麦克风所需的时长差异较大,那么设备B 220可以通过其中两个麦克风分别接收声音信号A和声音信号B,基于这两个麦克风之间的距离,对到达时刻1和/或到达时刻2进行校正,得到新的到达时刻1和新的到达时刻2,从而减少或避免由于这两个麦克风的位置对接收声音信号A和/或声音信号B的影响,之后,可以基于新的到达时刻1和新的到达时刻2,确定设备B 220与设备A 210之间的相对位置。
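通过多个相距较远的麦克风分别确定第一相对位置并比较结果一致性的做法,可以用如下示意性Python片段表示(代码为补充示例,函数名为假设):

```python
# 示意性示例:多个麦克风各自得到的第一相对位置全部一致时才认为有效。

def validate_positions(positions):
    """positions为各麦克风分别确定的相对位置列表;
    全部一致时返回该位置,否则返回None表示无效、需重新确定。"""
    if positions and all(p == positions[0] for p in positions):
        return positions[0]
    return None

valid = validate_positions(["上方", "上方", "上方"])  # "上方":有效
invalid = validate_positions(["上方", "下方"])        # None:无效,需重新检测
```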
需要说明的是,在实际应用中,可以参照到达时间(time of arrival,TOA),基于声音信号A的到达时刻1和声音信号B的到达时刻2,确定设备A 210与设备B 220之间的第一相对位置。
S1205,设备B 220向设备A 210发送第一相对位置。
其中,S1205为可选的步骤。
在本申请实施例中,设备A 210可以响应于检测到第二事件,确定扬声器组A和扬声器组B,扬声器组A和扬声器组B之间的第三相对位置与第一位置关系类型匹配。当设备A 210通过扬声器组A发出声音信号A,通过扬声器组B发出声音信号B时,设备B 220可以基于接收到声音信号A的到达时刻1和接收到声音信号B的到达时刻2,确定设备A 210和设备B 220之间的、与第一位置关系类型匹配的第一相对位置,即实现了通过扬声器和麦克风准确地检测设备间的相对位置,不需要依赖雷达等组件,降低了检测设备间相对位置的成本。
在一些示例中,设备A 210在执行S1203播放声音信号A和声音信号B时,可以持续播放直至下一次检测到第二事件。在一些示例中,设备A 210可以在执行S1203开始播放声音信号A和声音信号B之后的第三预设时长时,停止播放声音信号A和声音信号B。当然,在实际应用中,设备A 210还可以通过其他方式确定停止播放声音信号A和声音信号B的时机,例如,设备A 210可以在接收到设备B 220发送的设备A 210与设备B 220之间的第一相对位置时,停止播放声音信号A和声音信号B。
请参照图13,为本申请实施例所提供的一种检测设备间相对位置的方法的流程图。其中,该方法可以用于图4-图6任一所示的系统,设备A 210可以通过多对扬声器组发出多组声音信号,使得能够确定设备B 220和设备C 230等多个不同位置的设备与设备A 210之间的相对位置。需要说明的是,该方法并不以图13以及以下所述的具体顺序为限制,应当理解,在其它实施例中,该方法其中部分步骤的顺序可以根据实际需要相互交换,或者其中的部分步骤也可以省略或删除。该方法包括如下步骤:
S1301,设备A 210确定扬声器组A、扬声器组B、扬声器组C和扬声器组D。
其中,扬声器组C和扬声器组D均可以包括至少一个扬声器,且扬声器组C和扬声器组D所包括的扬声器的数目,可以小于或等于设备A 210所包括的扬声器的总数目。扬声器组C和扬声器组D可以分别位于某一个平面的两侧,扬声器组C和扬声器组D之间的相对位置为第四相对位置,第四相对位置所匹配的位置关系类型为第二位置关系类型。
需要说明的是,第三相对位置和第四相对位置不同,第二位置关系类型和第一位置关系类型不同。例如,在如图6所示的系统中,扬声器组A和扬声器组B之间的第一位置关系类型为上下关系,第三相对位置为扬声器组A在扬声器组B的上方,扬声器组C和扬声器组D之间的第二位置关系类型为左右关系,第四相对位置为扬声器组C在扬声器组D的左方。
需要说明的是,扬声器组A与扬声器组C或扬声器组D可以包括最多部分相同的扬声器,扬声器组B与扬声器组C或扬声器组D可以包括最多部分相同的扬声器。例如,在如图6所示的系统中,扬声器组A和扬声器组C都包括扬声器a,扬声器组A和扬声器组D都包括扬声器b,扬声器组B和扬声器组C都包括扬声器c,扬声器组B和扬声器组D都包括扬声器d。
需要说明的是,设备A 210在S1301确定扬声器组A和扬声器组B的方式,可以与S1202确 定扬声器组A和扬声器组B的方式相同,设备A 210在S1301确定扬声器组C和扬声器组D的方式,可以与S1202确定扬声器组A和扬声器组B的方式相似或相同,此处不再一一赘述。
S1302,设备A 210通过扬声器组A发出声音信号A,通过扬声器组B发出声音信号B,通过扬声器组C发出声音信号C,通过扬声器组D发出声音信号D。
在一些示例中,为了便于设备B 220准确地识别声音信号A和声音信号B,便于设备C 230准确地识别声音信号C和声音信号D,从而提高检测多设备之间相对位置的准确性和效率,声音信号A的声音特征、声音信号B的声音特征、声音信号C的声音特征和声音信号D的声音特征两两不同。
在一些示例中,设备A 210可以同时通过扬声器组A发出声音信号A,通过扬声器组B发出声音信号B,通过扬声器组C发出声音信号C,通过扬声器组D发出声音信号D。
在一些示例中,设备A 210可以先通过扬声器组A发出声音信号A,通过扬声器组B发出声音信号B,在间隔第一预设时长之后,再通过扬声器组C发出声音信号C,通过扬声器组D发出声音信号D,在又一次间隔第一预设时长之后,再次通过扬声器组A发出声音信号A,通过扬声器组B发出声音信号B,……,依次类推,从而循环发出声音信号A和声音信号B以及声音信号C和声音信号D。且需要说明的是,本申请实施例不对确定第一预设时长的方式以及第一预设时长的时长大小进行限定。
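上述“先发出声音信号A/B、间隔第一预设时长后发出声音信号C/D并依次循环”的发声时间安排,可以用如下示意性片段描述(起始时刻、间隔时长等取值均为假设示例):

```python
def emission_schedule(t0, interval, rounds):
    """生成循环发声的时间表: 先发声音信号A/B, 间隔第一预设时长后发声音信号C/D, 依次循环。

    t0: 起始发声时刻(秒); interval: 第一预设时长(秒); rounds: 生成的发声次数。
    返回 [(发声时刻, ("A", "B") 或 ("C", "D")), ...]。
    """
    schedule = []
    for i in range(rounds):
        # 偶数轮发出声音信号A和声音信号B, 奇数轮发出声音信号C和声音信号D
        signals = ("A", "B") if i % 2 == 0 else ("C", "D")
        schedule.append((t0 + i * interval, signals))
    return schedule
```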
S1303a,设备B 220基于接收到声音信号A的到达时刻1和接收到声音信号B的到达时刻2,确定设备A 210与设备B 220之间的第一相对位置。
需要说明的是,设备B 220执行S1303a基于接收到声音信号A的到达时刻1和接收到声音信号B的到达时刻2,确定设备A 210与设备B 220之间的第一相对位置的方式,可以与执行S1203基于接收到声音信号A的到达时刻1和接收到声音信号B的到达时刻2,确定设备A 210与设备B 220之间的第一相对位置的方式相同,此处不再一一赘述。
在一些示例中,设备A 210向设备B 220发送第一配置信息,第一配置信息可以用于指示声音信号A的声音特征和声音信号B的声音特征,那么设备B 220可以基于声音信号A的声音特征识别声音信号A,基于声音信号B的声音特征识别声音信号B,并忽略声音信号C和声音信号D。当然,在实际应用中,第一配置信息也可以用于指示其他信息,比如第一配置信息可以用于指示发出声音信号A和声音信号B的方式;和/或,第一配置信息可以用于指示发出声音信号A的发音时刻1和发出声音信号B的发音时刻2。
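基于第一配置信息识别并过滤声音信号的处理可以示意如下(此处假设“声音特征”为各声音信号的主频,配置取值与容差均为说明性假设):

```python
def classify_signal(dominant_freq_hz, config, tolerance_hz=50.0):
    """根据配置信息中指示的声音特征(此处假设为主频), 识别接收到的声音信号。

    config: 形如 {"A": 18000.0, "B": 19000.0} 的映射, 来自第一配置信息。
    接收信号的主频与配置中某一主频足够接近时返回对应信号名;
    否则返回 None, 表示忽略该信号(例如忽略声音信号C和声音信号D)。
    """
    best, best_diff = None, tolerance_hz
    for name, freq in config.items():
        diff = abs(dominant_freq_hz - freq)
        if diff <= best_diff:
            best, best_diff = name, diff
    return best
```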
S1303b,设备C 230基于接收到声音信号C的到达时刻3和接收到声音信号D的到达时刻4,确定设备A 210与设备C 230之间的第五相对位置。
其中,第五相对位置与第二位置关系类型匹配。
需要说明的是,设备C 230执行S1303b基于接收到声音信号C的到达时刻3和接收到声音信号D的到达时刻4,确定设备A 210与设备C 230之间的第五相对位置的方式,可以与设备B 220执行S1303a基于接收到声音信号A的到达时刻1和接收到声音信号B的到达时刻2,确定设备A 210与设备B 220之间的第一相对位置的方式相同或相似,此处不再一一赘述。
在一些示例中,设备A 210向设备C 230发送第二配置信息,第二配置信息可以用于指示声音信号C的声音特征和声音信号D的声音特征,那么设备C 230可以基于声音信号C的声音特征识别声音信号C,基于声音信号D的声音特征识别声音信号D,并忽略声音信号A和声音信号B。当然,在实际应用中,第二配置信息也可以用于指示其他信息,比如,第二配置信息可以用于指示发出声音信号C和声音信号D的方式;和/或,第二配置信息可以用于指示发出声音信号C的发音时刻3和发出声音信号D的发音时刻4。
需要说明的是,设备A 210可以事先向设备C 230发送第二配置信息中的至少部分信息,或者,设备C 230可以通过其他方式获取得到该至少部分信息,因此,设备A 210也可以不必在每次发出声音信号C和声音信号D时,都向设备C 230发送该至少部分信息。
在一些示例中,设备A 210可以向设备B 220和设备C 230发送第四配置信息,第四配置信息指示发出声音信号A、声音信号B、声音信号C和声音信号D的方式;和/或,第四配置信息可以用于指示声音信号A的声音特征、声音信号B的声音特征、声音信号C的声音特征和声音信号D的声音特征;和/或,第四配置信息可以用于指示发出声音信号A的发音时刻1、发出声音信号B的发音时刻2、发出声音信号C的发音时刻3和发出声音信号D的发音时刻4。设备B 220可以接收声音信号C和声音信号D,进而确定设备B 220与设备A 210之间的第二相对位置,第二相对位置与第二位置关系类型匹配。设备C 230可以接收声音信号C和声音信号D,进而确定设备C 230与设备A 210之间的第五相对位置,第五相对位置与第二位置关系类型匹配。
需要说明的是,设备A 210可以事先向设备B 220和设备C 230发送第四配置信息中的至少部分信息,或者,设备B 220和设备C 230可以通过其他方式获取得到该至少部分信息,因此,设备A 210也可以不必在每次发出声音信号A、声音信号B、声音信号C和声音信号D时,都向设备B 220和设备C 230发送该至少部分信息。
在本申请实施例中,设备A 210可以通过多组扬声器发出多组声音信号,不同组扬声器之间的相对位置对应不同的位置关系类型,从而使得与设备A 210处于多种位置关系类型的多个设备,都能够确定其与设备A 210之间的相对位置,极大地提高了检测设备相对位置的效率。
请参照图14,为本申请实施例所提供的一种检测设备间相对位置的方法的流程图。其中,该方法可以用于图4-图6任一所示的系统中的设备A 210,设备A 210可以在图12所示的方法之后,继续执行如图14所示的方法,从而改变发出声音信号的方式,使得能够对设备A 210与设备B 220之间的相对位置进行更新。需要说明的是,该方法并不以图14以及以下所述的具体顺序为限制,应当理解,在其它实施例中,该方法其中部分步骤的顺序可以根据实际需要相互交换,或者其中的部分步骤也可以省略或删除。该方法包括如下步骤:
S1206,设备A 210检测到第一事件。
其中,第一事件可以用于触发设备A 210切换发声模式。且需要说明的是,第一事件可以为事先设置的事件。
在一些示例中,第一事件可以包括设备A 210的姿态发生变化。
在一些示例中,第一事件可以包括设备B 220的姿态发生变化。
在一些示例中,第一事件可以包括设备A 210的显示模式发生变化。
在一些示例中,第一事件可以包括设备A 210接收到设备B 220发送的第二请求,第二请求用于请求设备A 210切换发声模式。
需要说明的是,S1206为可选的步骤。比如在一些示例中,可以在设备A 210执行S1203之后的第二预设时长时,继续执行下述至少部分步骤。还需要说明的是,本申请实施例不对确定第二预设时长的方式以及第二预设时长的时长大小进行限定。
S1207,设备A 210确定扬声器组C和扬声器组D。
其中,扬声器组C和扬声器组D均可以包括至少一个扬声器,且扬声器组C和扬声器组D所包括的扬声器的数目,可以小于或等于设备A 210所包括的扬声器的总数目。扬声器组C和扬声器组D可以分别位于某一个平面的两侧,扬声器组C和扬声器组D之间的相对位置为第四相对位置,第四相对位置所匹配的位置关系类型为第二位置关系类型。
需要说明的是,第三相对位置和第四相对位置不同,第二位置关系类型和第一位置关系类型不同。
还需要说明的是,设备A 210执行S1207确定扬声器组C和扬声器组D的方式,可以与S1202确定扬声器组A和扬声器组B的方式相似或相同,此处不再一一赘述。
还需要说明的是,S1207为可选的步骤。
S1208,设备A 210通过扬声器组C发出声音信号C,通过扬声器组D发出声音信号D。
需要说明的是,S1208中设备A 210通过扬声器组C发出声音信号C,通过扬声器组D发出声音信号D的方式,也可以与S1203中设备A 210通过扬声器组A发出声音信号A,通过扬声器组B发出声音信号B的方式相同或相似,此处不再一一赘述。
S1209,设备B 220基于接收到声音信号C的到达时刻3和接收到声音信号D的到达时刻4,确定设备A 210与设备B 220之间的第二相对位置。
其中,第二相对位置可以与第二位置关系类型匹配。
需要说明的是,设备B 220执行S1209基于接收到声音信号C的到达时刻3和接收到声音信号D的到达时刻4,确定设备A 210与设备B 220之间的第二相对位置的方式,可以与S1303b中设备C 230基于接收到声音信号C的到达时刻3和接收到声音信号D的到达时刻4,确定设备A 210与设备C 230之间的第五相对位置的方式相同或相似。
S1210,设备B 220向设备A 210发送第二相对位置。
需要说明的是,S1210为可选的步骤。
在本申请实施例中,设备A 210可以响应于检测到第一事件,切换发出声音信号的方式,从而使得设备B 220和设备A 210可以对设备B 220和设备A 210之间的相对位置进行更新,提高了检测设备B 220和设备A 210的相对位置的准确性。
例如,设备A 210在检测到与设备B 220建立通信连接时,通过扬声器组A播放声音信号A,通过扬声器组B播放声音信号B,其中,扬声器组A和扬声器组B的第三相对位置为扬声器组A在扬声器组B的左方,第三相对位置匹配的第一位置关系类型为左右关系。设备B 220基于声音信号A和声音信号B确定设备B 220与设备A 210之间的第一相对位置。但可能此时设备B 220在设备A 210的上方或下方,则设备B 220可能难以根据声音信号A和声音信号B确定第一相对位置,因此设备B 220向设备A 210发送第二请求,第二请求用于说明设备B 220确定与设备A 210之间的相对位置失败,或者,第二请求用于请求设备A 210切换发声模式。设备A 210在接收到第二请求时,通过扬声器组C播放声音信号C,通过扬声器组D播放声音信号D,其中,扬声器组C和扬声器组D的第四相对位置为扬声器组C在扬声器组D的上方。设备B 220基于声音信号C和声音信号D确定第二相对位置。
请参照图15,为本申请实施例所提供的一种对扬声器进行分组的方法的流程图。其中,该方法可以用于图4-图6任一所示的系统中的设备A 210。需要说明的是,该方法并不以图15以及以下所述的具体顺序为限制,应当理解,在其它实施例中,该方法其中部分步骤的顺序可以根据实际需要相互交换,或者其中的部分步骤也可以省略或删除。该方法包括如下步骤:
S1501,设备A 210确定设备A 210的第一状态。
当设备A 210所处的状态不同时,设备A 210上的扬声器在设备A 210中所处的相对位置可能也不同,比如以图4中的设备A 210为例,扬声器a和扬声器c处于设备A 210的左方,扬声器b和扬声器d处于设备A 210的右方,而当设备A 210顺时针旋转90度时,扬声器c和扬声器d处于设备A 210的左方,扬声器a和扬声器b处于设备A 210的右方,如图16所示。因此,为了便于设备A 210确定当前各扬声器在设备A 210中的相对位置,设备A 210可以确定设备A 210的第一状态。
第一状态可以用于指示设备A 210所处的状态。在一些示例中,设备A 210的第一状态可以包括设备A 210的第一姿态或者设备A 210的第一显示模式。其中,设备A 210可以通过设备A 210中的传感器150确定设备A 210的第一姿态。
需要说明的是,设备A 210的第一姿态和第一显示模式可以对应,也可以不对应。在一些示例中,当设备A 210的屏幕旋转开关打开,且设备A 210的显示屏所在的平面与水平面垂直(即设备A 210竖直放置)或接近垂直时,设备A 210的第一显示模式和第一姿态可以是对应的,当设备A 210处于第一显示模式时,设备A 210的显示屏中所显示内容的正方向,与设备A 210的重力方向平行或接近平行。其中,如图17所示,坐标轴y和坐标轴z所在的平面为水平面,坐标轴x的方向与重力方向平行且相反,当显示屏所在的平面与水平面垂直时,该显示屏所显示内容的正方向与该重力方向相反。设备A 210可以基于第一姿态,确定对应的第一显示模式。在另一些示例中,当设备A 210的屏幕旋转开关关闭,或者,设备A 210的显示屏所在的平面与水平面平行(即设备A 210水平放置)或接近平行时,设备A 210的第一显示模式和第一姿态可以不对应,设备A 210当前的第一显示模式可以为预设的显示模式或者用户指定的显示模式。例如,如图18所示,坐标轴y和坐标轴z所在的平面为水平面,坐标轴x的方向与重力方向平行且相反,设备A 210的显示屏所在的平面与水平面平行,此时设备A 210的显示屏所显示内容的正方向,与设备A 210的重力方向垂直。
S1502,设备A 210确定第一位置关系类型。
第一位置关系类型可以用于指示设备A 210与设备B 220之间的相对位置所匹配的位置关系类型。设备A 210通过与第一位置关系类型匹配的一对扬声器组发出声音信号时,与设备A 210处于第一位置关系类型的其他设备,可以根据该声音信号确定该设备与设备A 210之间的、与第一位置关系类型匹配的具体相对位置。
在一些示例中,设备A 210可以获取设备B 220发送的第一位置关系类型。在一些示例中,设备A 210可以从设备B 220发送的第一请求或第二请求中获取第一位置关系类型。
在一些示例中,设备A 210可以基于第一任务确定第一位置关系类型。在一些示例中,设备A 210可以基于第一任务对应的应用程序,确定第一位置关系类型。例如,当第一任务为需要基于设备A 210和设备B 220处于左右关系而处理的任务时,第一位置关系类型可以为左右关系;当第一任务为需要基于设备A 210和设备B 220处于上下关系而处理的任务时,第一位置关系类型可以为上下关系。
需要说明的是,设备A 210也可以先执行S1502来确定第一位置关系类型,再执行S1501来确定第一状态,或者,设备A 210也可以同时执行S1501和S1502,来确定第一状态和第一位置关系类型,本申请实施例不对设备A 210确定第一位置关系类型和第一状态的次序进行限定。
S1503,设备A 210基于第一状态和第一位置关系类型,确定扬声器组A和扬声器组B。
设备A 210可以基于第一位置关系类型所指示的、为检测设备A 210与设备B 220之间相对位置实际所需的一对扬声器组之间的相对位置,以及设备A 210处于第一状态时各扬声器在设备A 210上的相对位置,准确地确定扬声器组A和扬声器组B,扬声器组A和扬声器组B之间的第三相对位置与第一位置关系类型匹配。
仍以图4中的设备A 210为例。假如设备A 210接收到设备B 220发送的第一请求,第一请求中携带的第一位置关系类型为上下关系,那么设备A 210可以确定需要划分处于上下关系的两个扬声器组。设备A 210为横屏模式,那么基于该横屏模式可以确定扬声器a和扬声器b当前处于上方,扬声器c和扬声器d当前处于下方。因此确定扬声器组A包括扬声器a和扬声器b,扬声器组B包括扬声器c和扬声器d。
又以图16所示的设备A 210为例,假如第一位置关系类型仍为上下关系。设备A 210为竖屏模式,那么基于该竖屏模式可以确定扬声器a和扬声器c当前处于上方,扬声器b和扬声器d当前处于下方。因此确定扬声器组A包括扬声器a和扬声器c,扬声器组B包括扬声器b和扬声器d。
在一些示例中,设备A 210中可以存储有显示模式和位置关系类型与扬声器组的对应关系,该对应关系中包括第一显示模式和第一位置关系类型对应的扬声器组A和扬声器组B,那么设备A 210在确定第一显示模式和第一位置关系类型时,可以从该对应关系中确定扬声器组A和扬声器组B。
例如,设备A 210存储的显示模式和位置关系类型与扬声器的对应关系可以如下表1所示。
表1
需要说明的是,本申请实施例仅以表1为例,对显示模式和位置关系类型与扬声器组的对应关系进行说明,而不对显示模式和位置关系类型与扬声器组的对应关系构成限定。
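作为显示模式和位置关系类型与扬声器组的对应关系的一种示意性存储与查询方式,可以参考如下片段(分组内容取自前文横屏/竖屏下上下关系的示例,键名与数据结构均为假设):

```python
# (显示模式, 位置关系类型) -> (扬声器组A, 扬声器组B) 的对应关系(内容为示例)
SPEAKER_GROUP_TABLE = {
    ("landscape", "up_down"): (("a", "b"), ("c", "d")),  # 横屏 + 上下关系
    ("portrait",  "up_down"): (("a", "c"), ("b", "d")),  # 竖屏 + 上下关系
}

def select_speaker_groups(display_mode, relation_type):
    """基于第一显示模式和第一位置关系类型, 从对应关系中确定扬声器组A和扬声器组B。

    对应关系中不存在相应表项时返回 None。
    """
    return SPEAKER_GROUP_TABLE.get((display_mode, relation_type))
```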
在一些示例中,设备A 210中可以存储有姿态和位置关系类型与扬声器组的对应关系,该对应关系中包括第一姿态和第一位置关系类型对应的扬声器组A和扬声器组B,那么设备A 210在确定第一姿态和第一位置关系类型时,可以从该对应关系中确定扬声器组A和扬声器组B。
例如,设备A 210存储的姿态和位置关系类型与扬声器的对应关系可以如下表2所示。
表2
需要说明的是,本申请实施例仅以表2为例,对姿态和位置关系类型与扬声器组的对应关系进行说明,而不对姿态和位置关系类型与扬声器组的对应关系构成限定。
在一些示例中,设备A 210中可以存储有各扬声器在设备A 210上的第一坐标,第一坐标可以为设备A 210处于预设的第二姿态时,该扬声器在设备A 210上的坐标,第一坐标可以理解为扬声器在设备A 210上的绝对坐标。当设备A 210确定第一姿态时,可以将与第二姿态对应的第一坐标变换为与第一姿态对应的第二坐标,第二坐标可以理解为与第一姿态对应的相对坐标。当设备A 210确定各扬声器的第二坐标时,即可以基于各扬声器的第二坐标和第一位置关系类型,确定扬声器组A和扬声器组B。
仍以图4所示的设备A 210为例。假设图4中设备A 210所处的姿态为第二姿态,设备A 210的厂商在设备A 210出厂之前,对第二姿态以及各扬声器的第一坐标进行标定,其中,扬声器a的第一坐标可以为(-1,1),扬声器b的第一坐标可以为(1,1),扬声器c的第一坐标可以为(-1,-1),扬声器d的第一坐标可以为(1,-1),设备A 210基于各扬声器的第一坐标可以确定扬声器a处于左上方、扬声器b处于右上方,扬声器c处于左下方,扬声器d处于右下方。若设备A 210顺时针旋转了90度,处于如图16所示的第一姿态,那么设备A 210基于第一姿态,将各扬声器的第一坐标进行变换,得到扬声器a的第二坐标可以为(1,1),扬声器b的第二坐标可以为(1,-1),扬声器c的第二坐标可以为(-1,1),扬声器d的第二坐标可以为(-1,-1),设备A 210基于各扬声器的第二坐标可以确定扬声器a处于右上方、扬声器b处于右下方,扬声器c处于左上方,扬声器d处于左下方。
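设备顺时针旋转90度时第一坐标到第二坐标的变换,以及基于第二坐标判断扬声器方位的处理,可以示意如下(坐标约定参照前文标定示例,假设原点位于设备中心,函数名为假设):

```python
def rotate_clockwise_90(coord):
    """设备顺时针旋转90度时, 把第二姿态下的第一坐标 (x, y) 变换为第一姿态下的第二坐标。

    顺时针旋转90度对应的坐标变换为 (x, y) -> (y, -x)。
    """
    x, y = coord
    return (y, -x)

def quadrant(coord):
    """根据第二坐标判断扬声器当前处于设备的哪个方位。"""
    x, y = coord
    horiz = "right" if x > 0 else "left"
    vert = "upper" if y > 0 else "lower"
    return vert + "_" + horiz
```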
在本申请实施例中,设备A 210可以基于第一位置关系类型所指示的、为检测设备A 210与设备B 220之间相对位置实际所需的一对扬声器组之间的相对位置,以及设备A 210处于第一状态时各扬声器在设备A 210上的相对位置,准确地确定扬声器组A和扬声器组B,提高了对扬声器进行分组的准确性,使得基于通过该对扬声器组所发出的声音信号,能够准确地确定设备A 210与设备B 220之间的、与第一位置关系类型匹配的相对位置,即提高了检测设备A 210与设备B 220之间的相对位置的准确性。
请参照图19,为本申请实施例所提供的一种检测设备间相对位置的方法的流程图。其中,该方法可以用于图7-图9任一所示的系统。需要说明的是,该方法并不以图19以及以下所述的具体顺序为限制,应当理解,在其它实施例中,该方法其中部分步骤的顺序可以根据实际需要相互交换,或者其中的部分步骤也可以省略或删除。该方法包括如下步骤:
S1901,设备A 210检测到第二事件。
需要说明的是,设备A 210执行S1901检测到第二事件的方式,可以参见前述S1201中的相关描述,此处不再一一赘述。
还需要说明的是,设备A 210也可以在其他情况下执行下述至少部分步骤,从而检测设备A 210与设备B 220之间的相对位置,因此S1901为可选的步骤。
S1902,设备A 210确定麦克风组A和麦克风组B。
其中,麦克风组A和麦克风组B均可以包括至少一个麦克风,且麦克风组A和麦克风组B所包括的麦克风的数目,可以小于或等于设备A 210所包括的麦克风的总数目。麦克风组A和麦克风组B可以分别位于某一个平面的两侧,麦克风组A和麦克风组B之间的相对位置为第三相对位置,第三相对位置所匹配的位置关系类型为第一位置关系类型。
第一位置关系类型可以为设备A 210包括的至少三个麦克风可以划分成的至少一种位置关系类型中的任一种。
例如,在图7中,麦克风组A和麦克风组B之间的第三相对位置,可以为麦克风组A在麦克风组B上方,第三相对位置匹配的第一位置关系类型可以为上下关系。在图8中,麦克风组A和麦克风组B之间的第三相对位置,可以为麦克风组A在麦克风组B左方,第三相对位置匹配的第一位置关系类型可以为左右关系。
在一些示例中,设备A 210可以先确定麦克风组A和麦克风组B,所确定的麦克风组A和麦克风组B之间的相对位置即为第三相对位置,第三相对位置所匹配的位置关系类型即为第一位置关系类型。或者,在另一些示例中,设备A 210也可以先确定第一位置关系类型,再基于第一位置关系类型确定麦克风组A和麦克风组B,麦克风组A和麦克风组B之间的相对位置即为第三相对位置。
在一些示例中,设备A 210存储有多对麦克风组,每对麦克风组包括分别位于一个平面两侧的两个麦克风组,设备A 210可以在多对麦克风组中确定麦克风组A和麦克风组B。
在一些示例中,设备A 210对设备A 210包括的至少三个麦克风中的全部或者部分麦克风分组,从而确定麦克风组A和麦克风组B。
需要说明的是,设备A 210确定麦克风组A和麦克风组B的方式,可以参照下述图22所示的方法。
还需要说明的是,设备A 210执行S1902确定麦克风组A和麦克风组B的方式,可以与设备A 210执行S1202确定扬声器组A和扬声器组B的方式相似。
还需要说明的是,S1902为可选的步骤。
S1903,设备B 220发出声音信号A和声音信号B。
在一些示例中,设备A 210可以在确定麦克风组A和麦克风组B时,向设备B 220发送第三请求,第三请求用于请求发出声音信号A和声音信号B,设备B 220在接收到第三请求时发出声音信号A和声音信号B。或者,在另一些示例中,可以由设备B 220先发出声音信号A和声音信号B,并向设备A 210发送第一请求,设备A 210在接收到第一请求时执行S1901。
在一些示例中,设备B 220可以通过一个扬声器(比如扬声器e)发出声音信号A和声音信号B。或者,在另一些实施例中,设备B 220包括多个扬声器,且该多个扬声器之间的距离可以忽略不计,比如该多个扬声器为同一个扬声器阵列中不同的扬声器单元,同一声音信号从每个扬声器单元传递到同一麦克风所需的时长几乎相同,那么设备B 220可以通过该多个扬声器中一个以上的扬声器发出声音信号A和声音信号B。
在一些示例中,设备B 220可以向设备A 210发送第一配置信息。其中,第一配置信息以及发送第一配置信息的方式也可以参见前述S1202中的相关描述。
S1904,设备A 210基于通过麦克风组A接收声音信号A的到达时刻1以及通过麦克风组B接收声音信号B的到达时刻2,确定设备A 210与设备B 220之间的第一相对位置。
由于设备A 210是通过处于第三相对位置的麦克风组A和麦克风组B接收声音信号A和声音信号B,那么设备A 210基于接收到声音信号A的到达时刻1和接收到声音信号B的到达时刻2,即可以确定声音信号A的传播时长TA和声音信号B的传播时长TB之间的大小关系,进而比较设备B 220到麦克风组A的距离与设备B 220到麦克风组B的距离,再基于麦克风组A和麦克风组B的第三相对位置以及第一位置关系类型,准确地确定设备A 210与设备B 220之间的第一相对位置。其中,第一相对位置与第一位置关系类型匹配。
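基于到达时刻1和到达时刻2估算设备B 220到两个麦克风组的距离差的计算可以示意如下(假设声音信号A和声音信号B同时发出,发音时刻可由配置信息指示;声速取常温近似值,均为说明性假设):

```python
SPEED_OF_SOUND = 343.0  # 常温空气中的声速(米/秒), 取值为近似假设

def distance_difference(t_emit, t_arrival_a, t_arrival_b):
    """基于到达时刻1和到达时刻2, 估算设备B到麦克风组A与到麦克风组B的距离差(米)。

    假设声音信号A和声音信号B同时于 t_emit 发出。
    返回值为正表示到麦克风组A的距离更大, 即设备B更靠近麦克风组B一侧。
    """
    t_a = t_arrival_a - t_emit  # 声音信号A的传播时长TA
    t_b = t_arrival_b - t_emit  # 声音信号B的传播时长TB
    return (t_a - t_b) * SPEED_OF_SOUND
```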
需要说明的是,设备A 210识别声音信号A和声音信号B的方式,以及,设备A 210基于到达时刻1和到达时刻2确定设备A 210与设备B 220之间的第一相对位置的方式,可以与前述S1204中设备B 220识别声音信号A和声音信号B的方式,以及,设备B 220基于到达时刻1和到达时刻2确定设备A 210与设备B 220之间的第一相对位置的方式相同或相似,此处不再一一赘述。
S1905,设备A 210向设备B 220发送第一相对位置。
其中,S1905为可选的步骤。
在本申请实施例中,设备A 210可以响应于检测到第二事件,确定麦克风组A和麦克风组B,麦克风组A和麦克风组B之间的第三相对位置与第一位置关系类型匹配。当设备B 220发出声音信号A和声音信号B时,设备A 210通过麦克风组A接收声音信号A,通过麦克风组B接收声音信号B,基于接收到声音信号A的到达时刻1和接收到声音信号B的到达时刻2,确定设备A 210和设备B 220之间的、与第一位置关系类型匹配的第一相对位置,即实现了通过声电换能器检测设备间的相对位置,不需要依赖雷达等组件,降低了检测设备间相对位置的成本。
在一些示例中,设备A 210在执行S1904接收声音信号A和声音信号B时,可以持续接收直至下一次检测到第二事件。在一些示例中,设备A 210可以在执行S1904开始接收声音信号A和声音信号B之后的第三预设时长时,停止接收声音信号A和声音信号B。当然,在实际应用中,设备A 210还可以通过其他方式确定停止接收声音信号A和声音信号B的时机,例如,设备A 210可以在确定设备A 210与设备B 220之间的第一相对位置时,停止接收声音信号A和声音信号B。
请参照图20,为本申请实施例所提供的一种检测设备间相对位置的方法的流程图。其中,该方法可以用于图7-图9任一所示的系统,设备A 210可以通过多对麦克风组接收多组声音信号,使得能够确定设备B 220和设备C 230等多个不同位置的设备与设备A 210之间的相对位置。需要说明的是,该方法并不以图20以及以下所述的具体顺序为限制,应当理解,在其它实施例中,该方法其中部分步骤的顺序可以根据实际需要相互交换,或者其中的部分步骤也可以省略或删除。该方法包括如下步骤:
S2001,设备A 210确定麦克风组A、麦克风组B、麦克风组C和麦克风组D。
其中,麦克风组C和麦克风组D均可以包括至少一个麦克风,且麦克风组C和麦克风组D所包括的麦克风的数目,可以小于或等于设备A 210所包括的麦克风的总数目。麦克风组C和麦克风组D可以分别位于某一个平面的两侧,麦克风组C和麦克风组D之间的相对位置为第四相对位置,第四相对位置所匹配的位置关系类型为第二位置关系类型。
需要说明的是,第三相对位置和第四相对位置不同,第二位置关系类型和第一位置关系类型不同。例如,在如图9所示的系统中,麦克风组A和麦克风组B之间的第一位置关系类型为上下关系,第三相对位置为麦克风组A在麦克风组B的上方,麦克风组C和麦克风组D之间的第二位置关系类型为左右关系,第四相对位置为麦克风组C在麦克风组D的左方。
需要说明的是,设备A 210在S2001中确定麦克风组A和麦克风组B的方式,可以与S1902中在多个麦克风中确定麦克风组A和麦克风组B的方式相同,设备A 210在S2001中确定麦克风组C和麦克风组D的方式,可以与S1902中在多个麦克风中确定麦克风组A和麦克风组B的方式相似或相同,此处不再一一赘述。
还需要说明的是,麦克风组A与麦克风组C或麦克风组D可以包括最多部分相同的麦克风,麦克风组B与麦克风组C或麦克风组D可以包括最多部分相同的麦克风。例如,在如图9所示的系统中,麦克风组A和麦克风组C都包括麦克风a,麦克风组A和麦克风组D都包括麦克风b,麦克风组B和麦克风组C都包括麦克风c,麦克风组B和麦克风组D都包括麦克风d。
S2002a,设备B 220发出声音信号A和声音信号B。
在一些示例中,设备B 220向设备A 210发送第一配置信息,第一配置信息可以用于指示声音信号A的声音特征和声音信号B的声音特征,那么设备A 210可以基于声音信号A的声音特征识别声音信号A,基于声音信号B的声音特征识别声音信号B,并忽略声音信号C和声音信号D。当然,在实际应用中,第一配置信息也可以用于指示其他信息,比如第一配置信息可以用于指示发出声音信号A和声音信号B的方式;和/或,第一配置信息可以用于指示发出声音信号A的发音时刻1和发出声音信号B的发音时刻2。
需要说明的是,S2002a中设备B 220发出声音信号A和声音信号B的方式,可以与前述S1903设备B 220发出声音信号A和声音信号B的方式相同。
S2002b,设备C 230发出声音信号C和声音信号D。
在一些示例中,设备C 230可以向设备A 210发送第二配置信息,第二配置信息可以用于指示声音信号C的声音特征和声音信号D的声音特征,那么设备A 210可以基于声音信号C的声音特征识别声音信号C,基于声音信号D的声音特征识别声音信号D,并忽略声音信号A和声音信号B。当然,在实际应用中,第二配置信息也可以用于指示其他信息,比如,第二配置信息可以用于指示发出声音信号C和声音信号D的方式;和/或,第二配置信息可以用于指示发出声音信号C的发音时刻3和发出声音信号D的发音时刻4。
在一些示例中,为了便于设备A 210准确地识别声音信号A、声音信号B、声音信号C和声音信号D,从而提高检测多设备之间相对位置的准确性和效率,声音信号A的声音特征、声音信号B的声音特征、声音信号C的声音特征和声音信号D的声音特征两两不同。
需要说明的是,S2002b中设备C 230发出声音信号C和声音信号D的方式,可以与S2002a中设备B 220发出声音信号A和声音信号B的方式相同。
S2003,设备A 210基于通过麦克风组A接收声音信号A的到达时刻1以及通过麦克风组B接收声音信号B的到达时刻2,确定设备A 210与设备B 220之间的第一相对位置,基于通过麦克风组C接收到声音信号C的到达时刻3和通过麦克风组D接收到声音信号D的到达时刻4,确定设备A 210与设备C 230之间的第五相对位置。
其中,第一相对位置与第一位置关系类型匹配,第五相对位置与第二位置关系类型匹配。
需要说明的是,设备A 210基于通过麦克风组A接收声音信号A的到达时刻1以及通过麦克风组B接收声音信号B的到达时刻2,确定设备A 210与设备B 220之间的第一相对位置的方式,可以与S1904中设备A 210基于通过麦克风组A接收声音信号A的到达时刻1以及通过麦克风组B接收声音信号B的到达时刻2,确定设备A 210与设备B 220之间的第一相对位置的方式相同;设备A 210基于通过麦克风组C接收到声音信号C的到达时刻3和通过麦克风组D接收到声音信号D的到达时刻4,确定设备A 210与设备C 230之间的第五相对位置的方式,可以与S1904中设备A 210基于通过麦克风组A接收声音信号A的到达时刻1以及通过麦克风组B接收声音信号B的到达时刻2,确定设备A 210与设备B 220之间的第一相对位置的方式相似,此处不再一一赘述。
在本申请实施例中,设备A 210可以通过多组麦克风接收多组声音信号,不同组麦克风之间的相对位置对应不同的位置关系类型,从而使得设备A 210能够确定处于多种位置关系类型的多个设备与设备A 210之间的相对位置,极大地提高了检测设备相对位置的效率。
请参照图21,为本申请实施例所提供的一种检测设备间相对位置的方法的流程图。其中,该方法可以用于图7-图9任一所示的系统中的设备A 210,设备A 210可以在图19所示的方法之后,继续执行如图21所示的方法,从而改变接收声音信号的方式,使得能够对设备A 210与设备B 220之间的相对位置进行更新。需要说明的是,该方法并不以图21以及以下所述的具体顺序为限制,应当理解,在其它实施例中,该方法其中部分步骤的顺序可以根据实际需要相互交换,或者其中的部分步骤也可以省略或删除。该方法包括如下步骤:
S1906,设备A 210检测到第一事件。
其中,第一事件可以用于触发设备A 210切换收音模式。且需要说明的是,第一事件可以为事先设置的事件。
在一些示例中,第一事件可以包括设备A 210的姿态发生变化。
在一些示例中,第一事件可以包括设备B 220的姿态发生变化。
在一些示例中,第一事件可以包括设备A 210的显示模式发生变化。
在一些示例中,第一事件可以包括设备A 210接收到设备B 220发送的第二请求,第二请求用于请求设备A 210切换收音模式。
需要说明的是,S1906为可选的步骤。比如在一些示例中,可以在设备A 210执行S1904之后的第二预设时长时,继续执行下述至少部分步骤。还需要说明的是,本申请实施例不对确定第二预设时长的方式以及第二预设时长的时长大小进行限定。
S1907,设备A 210确定麦克风组C和麦克风组D。
其中,麦克风组C和麦克风组D均可以包括至少一个麦克风,且麦克风组C和麦克风组D所包括的麦克风的数目,可以小于或等于设备A 210所包括的麦克风的总数目。麦克风组C和麦克风组D可以分别位于某一个平面的两侧,麦克风组C和麦克风组D之间的相对位置为第四相对位置,第四相对位置所匹配的位置关系类型为第二位置关系类型。
需要说明的是,第三相对位置和第四相对位置不同,第二位置关系类型和第一位置关系类型不同。
还需要说明的是,S1907中设备A 210确定麦克风组C和麦克风组D的方式,可以与S1902确定麦克风组A和麦克风组B的方式相似或相同,此处不再一一赘述。
S1908,设备B 220发出声音信号C和声音信号D。
需要说明的是,S1908中设备B 220发出声音信号C和声音信号D的方式,可以与S1903中设备B 220发出声音信号A和声音信号B的方式相同或相似,此处不再一一赘述。
S1909,设备A 210基于通过麦克风组C接收到声音信号C的到达时刻3和通过麦克风组D接收到声音信号D的到达时刻4,确定设备A 210与设备B 220之间的第二相对位置。
需要说明的是,S1909中设备A 210基于通过麦克风组C接收到声音信号C的到达时刻3和通过麦克风组D接收到声音信号D的到达时刻4,确定设备A 210与设备B 220之间的第二相对位置的方式,可以与S1904中设备A 210基于通过麦克风组A接收声音信号A的到达时刻1以及通过麦克风组B接收声音信号B的到达时刻2,确定设备A 210与设备B 220之间的第一相对位置的方式相似,此处不再一一赘述。
S1910,设备A 210向设备B 220发送第二相对位置。
需要说明的是,S1910为可选的步骤。
在本申请实施例中,设备A 210可以响应于检测到第一事件,切换接收声音信号的方式,从而使得设备B 220和设备A 210可以对设备B 220和设备A 210之间的相对位置进行更新,提高了检测设备B 220和设备A 210的相对位置的准确性。
请参照图22,为本申请实施例所提供的一种对麦克风进行分组的方法的流程图。其中,该方法可以用于图7-图9任一所示的系统中的设备A 210。需要说明的是,该方法并不以图22以及以下所述的具体顺序为限制,应当理解,在其它实施例中,该方法其中部分步骤的顺序可以根据实际需要相互交换,或者其中的部分步骤也可以省略或删除。该方法包括如下步骤:
S2201,设备A 210确定设备A 210的第一状态。
当设备A 210所处的状态不同时,设备A 210上的麦克风在设备A 210中所处的相对位置可能也不同,比如以图7中的设备A 210为例,麦克风a和麦克风c处于设备A 210的左方,麦克风b和麦克风d处于设备A 210的右方,而当设备A 210顺时针旋转90度时,麦克风c和麦克风d处于设备A 210的左方,麦克风a和麦克风b处于设备A 210的右方,如图23所示。因此,为了便于设备A 210确定当前各麦克风在设备A 210中的相对位置,设备A 210可以确定第一状态。
第一状态可以用于指示设备A 210所处的状态。在一些示例中,设备A 210的第一状态可以包括设备A 210的第一姿态或者设备A 210的第一显示模式。
S2202,设备A 210确定第一位置关系类型。
第一位置关系类型可以用于指示设备A 210与设备B 220之间的相对位置所匹配的位置关系类型。与设备A 210处于第一位置关系类型的其他设备发出声音信号时,设备A 210可以通过与第一位置关系类型匹配的一对麦克风组接收声音信号,根据接收到的声音信号确定该设备与设备A 210之间的、与第一位置关系类型匹配的具体相对位置。
需要说明的是,设备A 210执行S2201-S2202确定第一状态和第一位置关系类型的方式,可以参见前述设备A 210执行S1501-S1502确定第一状态和第一位置关系类型中的相关描述。
S2203,设备A 210基于第一状态和第一位置关系类型,确定麦克风组A和麦克风组B。
设备A 210可以基于第一位置关系类型所指示的、为检测设备A 210与设备B 220之间相对位置实际所需的一对麦克风组之间的相对位置,以及设备A 210处于第一状态时各麦克风在设备A 210上的相对位置,准确地确定麦克风组A和麦克风组B,麦克风组A和麦克风组B之间的第三相对位置与第一位置关系类型匹配。
仍以图7中的设备A 210为例。假如设备A 210接收到设备B 220发送的第一请求,第一请求中携带的第一位置关系类型为上下关系,那么设备A 210可以确定需要划分处于上下关系的两个麦克风组。设备A 210为横屏模式,那么基于该横屏模式可以确定麦克风a和麦克风b当前处于上方,麦克风c和麦克风d当前处于下方。因此确定麦克风组A包括麦克风a和麦克风b,麦克风组B包括麦克风c和麦克风d。
又以图23所示的设备A 210为例,假如第一位置关系类型仍为上下关系。设备A 210为竖屏模式,那么基于该竖屏模式可以确定麦克风a和麦克风c当前处于上方,麦克风b和麦克风d当前处于下方。因此确定麦克风组A包括麦克风a和麦克风c,麦克风组B包括麦克风b和麦克风d。
在一些示例中,设备A 210中可以存储有显示模式和位置关系类型与麦克风组的对应关系,该对应关系中包括第一显示模式和第一位置关系类型对应的麦克风组A和麦克风组B,那么设备A 210在确定第一显示模式和第一位置关系类型时,可以从该对应关系中确定麦克风组A和麦克风组B。
在一些示例中,设备A 210中可以存储有姿态和位置关系类型与麦克风组的对应关系,该对应关系中包括第一姿态和第一位置关系类型对应的麦克风组A和麦克风组B,那么设备A 210在确定第一姿态和第一位置关系类型时,可以从该对应关系中确定麦克风组A和麦克风组B。
在一些示例中,设备A 210中可以存储有各麦克风在设备A 210上的第一坐标,第一坐标可以为设备A 210处于预设的第二姿态时,该麦克风在设备A 210上的坐标,第一坐标可以理解为麦克风在设备A 210上的绝对坐标。当设备A 210确定第一姿态时,可以将与第二姿态对应的第一坐标变换为与第一姿态对应的第二坐标,第二坐标可以理解为与第一姿态对应的相对坐标。当设备A 210确定各麦克风的第二坐标时,即可以基于各麦克风的第二坐标和第一位置关系类型,确定麦克风组A和麦克风组B。
在本申请实施例中,由于第一位置关系类型可以指示即将划分的一对麦克风组之间的相对位置,第一状态可以用于确定设备A 210当前包括的各麦克风在设备A 210上的相对位置,因此,设备A 210可以基于第一状态和第一位置关系类型,准确地在多个麦克风中确定处于第三相对位置的麦克风组A和麦克风组B。
请参照图24,为本申请实施例所提供的一种多设备协同处理任务的方法的流程图。其中,多设备可以包括设备A 210和设备B 220,设备A 210可以为平板电脑,设备B 220可以为手机。需要说明的是,本申请实施例仅以设备A 210和设备B 220为例,对多设备协同处理任务的方式进行说明,而并不对该任务的具体内容以及协同处理该任务的设备数目和设备类型进行限定。还需要说明的是,该方法并不以图24以及以下所述的具体顺序为限制,应当理解,在其它实施例中,该方法其中部分步骤的顺序可以根据实际需要相互交换,或者其中的部分步骤也可以省略或删除。该方法包括如下步骤:
S2401,设备A 210和设备B 220互相发现。
设备A 210和设备B 220可以通过蓝牙、WIFI等近距离通信互相发现对方,并建立通信连接,且需要说明的是,本申请实施例不对设备A 210和设备B 220互相发现对方的方式进行限定。
S2402,设备A 210和设备B 220确定设备A 210与设备B 220之间的第一相对位置。
其中,设备A 210和设备B 220,可以按照前述图12-图15所示方法及相关描述或者按照图19-图22所示的方法及相关描述,来确定设备A 210与设备B 220之间的第一相对位置。
S2403,设备A 210向设备B 220通知第一协同模式。
设备A 210可以根据第一相对位置确定与第一相对位置对应的第一协同模式,并向设备B 220通知第一协同模式,第一协同模式可以用于指示当设备A 210和设备B 220处于第一相对位置时,设备A 210和设备B 220协同运行的方式。
在一些示例中,设备A 210存储有相对位置与协同模式之间的对应关系,并根据第一相对位置从该对应关系中获取相应的第一协同模式。其中,相对位置与协同模式之间的对应关系可以由设备A 210事先确定,且该对应关系中包括第一相对位置与第一协同模式的对应关系。当然,在实际应用中,设备A 210也可以通过其他方式来确定与第一相对位置对应的第一协同模式,本申请实施例不对确定与第一相对位置对应的第一协同模式的方式进行限定。
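相对位置与协同模式之间的对应关系的一种示意性实现如下(模式名称等均为便于说明而假设的,并非对该对应关系的限定):

```python
# 相对位置 -> 协同模式 的对应关系(模式名称为假设示例)
COOPERATION_MODES = {
    "left":  "tool_mode",     # 设备B在设备A左侧时, 设备B显示目录/工具等界面
    "right": "preview_mode",  # 设备B在设备A右侧时, 设备B显示备注编辑等下级界面
}

def get_cooperation_mode(relative_position):
    """设备A根据第一相对位置, 从对应关系中确定第一协同模式; 无对应表项时返回 None。"""
    return COOPERATION_MODES.get(relative_position)
```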
S2404,设备A 210和设备B 220基于第一协同模式,协同处理第一任务。
其中,第一任务可以为任一种任务,本申请实施例不对第一任务的种类进行限定。
在一些示例中,当第一相对位置为设备B 220在设备A 210左侧时,设备A 210显示第一界面,设备B 220显示第二界面;当第一相对位置为设备B 220在设备A 210右侧时,设备A 210显示第一界面,设备B 220显示第三界面,其中,第二界面与第一界面相关联,第三界面与第一界面相关联。在一些示例中,第二界面包括的第一显示内容可以为第一界面包括的第二显示内容中的至少部分显示内容,从而用于展示该至少部分显示内容的细节,和/或,用于用户对该至少部分显示内容进行编辑。在一些示例中,第三界面包括的第三显示内容可以为与第一界面包括的第二显示内容关联的内容,比如,第三显示内容可以为第二显示内容的批注和解释等内容。
在一些示例中,第一界面可以为第二界面的下级界面(或者说第二界面为第一界面的上级界面),第三界面可以为第一界面的下级界面(或者说第一界面为第三界面的上级界面)。其中,对于任意两个界面,如果该两个界面包括上级界面和下级界面(在一些示例中,上级界面也可以称为父界面,下级界面也可以称为子界面),那么该下级界面可以是基于该上级界面生成的或者依赖于该上级界面而存在的。
在一些示例中,当第一相对位置为设备B 220在设备A 210上方时,设备A 210显示第一界面,设备B 220显示第二界面;当第一相对位置为设备B 220在设备A 210下方时,设备A 210显示第一界面,设备B 220显示第三界面。
需要说明的是,当设备A 210和设备B 220之间的相对位置不同时,设备A 210和设备B 220处理第一任务的协同模式可以相同也可以不同,且在某一相对位置下,设备A 210和设备B 220协同处理第一任务的协同模式也可以不限于本申请实施例所提到的几种模式。
S2405,设备A 210和设备B 220确定设备A 210与设备B 220之间的第二相对位置。
在一些示例中,设备A 210和设备B 220可以在设备A 210的状态发生变化,或者设备B 220移动时,执行S2405确定设备A 210与设备B 220之间的第二相对位置。当然,在实际应用中,设备A 210和设备B 220也可以在其他时机确定设备A 210与设备B 220之间的第二相对位置。
S2406,设备A 210向设备B 220通知第二协同模式。
S2407,设备A 210和设备B 220基于第二协同模式,协同处理第一任务。
需要说明的是,S2405-S2407中设备A 210和设备B 220所执行的操作,可以参见S2402-S2404中的相关描述,此处不再一一赘述。
还需要说明的是,S2405-S2407为可选的步骤。
在本申请实施例中,设备A 210和设备B 220可以确定设备A 210与设备B 220之间的第一相对位置,从而基于与第一相对位置对应的第一协同模式,协同处理第一任务,即协同处理第一任务的方式与第一相对位置对应,提高了协同处理第一任务的可靠性和用户体验。且在设备A 210与设备B 220之间的相对位置变化为第二相对位置时,设备A 210和设备B 220也可以基于与第二相对位置对应的第二协同模式,协同处理第一任务。也即是,设备A 210和设备B 220协同处理第一任务的方式,可以与设备A 210与设备B 220之间的相对位置相匹配,提高了处理第一任务的准确性和灵活性,进而也提高了用户体验。
以下将以设备A 210和设备B 220协同处理演示文稿为例,对本申请实施例所提供的检测设备间相对位置的方法以及多设备协同处理任务的方法进行说明。请依次参照图25-图30,为本申请实施例所提供的显示界面的示意图。
如图25所示,设备A 210当前正在单独处理演示文稿,当设备B 220移动至与设备A 210一定距离范围内时,设备B 220发现设备A 210,并在控制中心界面包括的超级终端选项卡1400中显示设备A 210的图标和名称“我的平板”。
当设备B 220基于设备A 210的图标接收到用户的点击操作时,设备B 220向设备A 210发送第一请求,从而请求检测设备A 210与设备B 220之间的相对位置。请继续参照图26,设备A 210在接收到第一请求时,显示提示信息1500,该提示信息1500包括:“手机P30请求加入”等文字信息、同意按钮和忽略按钮。若设备A 210基于忽略按钮接收到用户的点击操作则忽略该提示信息。若设备A 210基于同意按钮接收到用户的点击操作,则确定设备A 210当前的第一显示模式为横屏模式,并根据横屏模式确定第一位置关系类型为左右关系,进而根据横屏模式和左右关系,控制设备A 210当前左边的扬声器(即扬声器b和扬声器d)发出声音信号A,控制设备A 210当前右边的扬声器(即扬声器a和扬声器c)发出声音信号B,如图27所示。其中,声音信号A可以为左声道信号,声音信号B可以为右声道信号。同时设备B 220的超级终端选项卡1400中设备A 210的图标下方显示“加入中…”,以提示用户正在加入设备A 210处理的第一任务。设备B 220通过接收声音信号A和声音信号B,确定第一相对位置为设备B 220在设备A 210的左侧,并将第一相对位置发送给设备A 210。
设备A 210根据设备B 220在设备A 210的左侧,确定相应的第一协同模式为在设备B 220显示目录界面,在设备A 210显示具体的某一页文稿内容。其中,设备A 210的显示模式可以称为预览模式,设备B 220的显示模式可以称为工具模式。设备A 210向设备B 220发送演示文稿的目录,设备B 220接收并显示该演示文稿的目录如图28所示,由图28可知,设备A 210正在显示该演示文稿中的第2页1710。另外,设备B 220所显示的界面中还可以包括新增按钮1720,当设备B 220基于新增按钮1720接收到用户的点击操作时,可以将用户指定的内容(比如图片等)插入至该演示文稿中,从而新增一页。
接着,用户将设备B 220移动至设备A 210右侧,设备B 220检测到设备B 220的姿态发生了变化,因此再次向设备A 210发送第二请求。当设备A 210接收到第二请求时,再次通过当前左边的扬声器发出声音信号A,控制设备A 210当前右边的扬声器发出声音信号B,设备B 220再次基于声音信号A和声音信号B,确定第二相对位置为设备B 220在设备A 210右侧,并将第二相对位置发送给设备A 210。
设备A 210根据设备B 220在设备A 210右侧,确定第二协同模式为在设备A 210显示上级界面,在设备B 220显示下级界面,因此,在设备B 220显示与当前显示的演示文稿的页面对应的备注编辑界面,如图29所示。其中,设备A 210的显示模式可以称为编辑模式,设备B 220的显示模式可以为称为预览模式。设备B 220可以在该备注编辑界面接收用户提交的与该页面对应的备注信息。
接着用户将设备A 210翻转了90度,如图30所示,设备A 210根据该姿态变化,切换为竖屏模式显示,并再次通过左边的扬声器(即扬声器a和扬声器b)播放声音信号A,通过右边的扬声器(即扬声器c和扬声器d)播放声音信号B,其中,声音信号A可以为左声道信号,声音信号B可以为右声道信号。设备B 220再次根据声音信号A和声音信号B确定第二相对位置为设备B在设备A的左方,并将第二相对位置发送给设备A 210。设备A 210基于第二相对位置,确定第二协同模式为在设备B 220显示与当前显示的演示文稿的页面对应的备注编辑界面,在设备A 210显示某一页演示文稿并同步显示用户在设备B 220所添加的备注内容,如图30所示。
又或者,如图31所示,第一任务为文档编辑任务。设备A 210可以显示文档编辑页界面。
设备C 230处于设备A 210左侧,设备C 230可以协同显示与设备A 210所编辑的文档对应的批注。当设备C 230基于任一批注接收到用户的点击操作时,设备A 210可以跳转至该批注所在的位置。当设备A 210接收到用户新提交的批注时,可以将该批注发送给设备C 230进行显示。
设备B 220处于设备A 210右侧,设备B 220可以协同显示插入工具界面,设备B 220可以通过该插入工具界面接收用户提交的内容,并将该内容插入至设备A 210所编辑的文档。比如设备B 220可以在基于任一图像接收到用户的点击操作时,将该图像插入至设备A 210所编辑的文档中。
请参照图32,为本申请实施例所提供的一种发出声音信号的方法的流程图。其中,该方法可以用于第一设备,第一设备至少包括第一扬声器、第二扬声器和第三扬声器,第一扬声器和第二扬声器位于第一平面的第一侧,第三扬声器位于第一平面的第二侧,第一扬声器和第三扬声器位于第二平面的第三侧,第二扬声器位于第二平面的第四侧;第一平面和第二平面不平行。在一些示例中,第一设备可以为如图4-图6中的设备A 210,第一扬声器、第二扬声器和第三扬声器可以为扬声器a、扬声器b、扬声器c和扬声器d中的任意三个。在一些示例中,图12-图15和图24所示方法中的设备A 210可以基于本申请实施例所提供的方法发出声音信号,第一设备可以实现设备A 210所执行的至少部分操作。需要说明的是,该方法并不以图32以及以下所述的具体顺序为限制,应当理解,在其它实施例中,该方法其中部分步骤的顺序可以根据实际需要相互交换,或者其中的部分步骤也可以省略或删除。该方法包括如下步骤:
S3201,第一设备通过第一扬声器和第二扬声器发出第一声音信号,通过第三扬声器发出第二声音信号。
在一些示例中,第一设备还包括第四扬声器,第四扬声器位于第一平面的第二侧,且位于第二平面的第四侧。第一设备可以通过第一扬声器和所述第二扬声器发出第一声音信号,通过第三扬声器和第四扬声器发出第二声音信号。在一些示例中,第四扬声器可以为扬声器a、扬声器b、扬声器c和扬声器d中另一个。
在一些示例中,第一扬声器和第二扬声器可以为前述扬声器组A中的扬声器,第三扬声器和第四扬声器可以为前述扬声器组B中的扬声器,第一声音信号可以为前述中的声音信号A,第二声音信号可以为前述中的声音信号B。其中,第一设备通过第一扬声器和第二扬声器发出第一声音信号,通过第三扬声器(和第四扬声器)发出第二声音信号的方式,可以参见前述S1201-S1203中的相关描述。
S3202,第一设备响应于检测到第一事件,切换为通过第一扬声器和第三扬声器发出第三声音信号,通过第二扬声器发出第四声音信号。
在一些示例中,第一设备还包括第四扬声器,第四扬声器位于第一平面的第二侧,且位于第二平面的第四侧。第一设备响应于检测到第一事件,切换为通过第一扬声器和第三扬声器发出第三声音信号,通过第二扬声器和第四扬声器发出所述第四声音信号。
在一些示例中,第一扬声器和第三扬声器可以为前述扬声器组C中的扬声器,第二扬声器和第四扬声器可以为前述扬声器组D中的扬声器,第三声音信号可以为前述中的声音信号C,第四声音信号可以为前述中的声音信号D。其中,第一设备响应于检测到第一事件,切换为通过第一扬声器和第三扬声器发出第三声音信号,通过第二扬声器(和第四扬声器)发出第四声音信号的方式,可以参见前述S1206-S1208中的相关描述。
在一些示例中,第一声音信号和第二声音信号中的至少一个,与第三声音信号和第四声音信号中的至少一个相同。
在一些示例中,第一事件包括以下至少一项:第一设备的姿态发生变化;第一设备的显示模式发生变化;第一设备与第二设备建立通信连接;第一设备发现第二设备;第一设备接收到第二设备发送的第一请求,第一请求用于触发第一设备检测第一设备和第二设备之间的相对位置关系;第一设备接收到第二设备发送的第二请求,第二请求用于请求触发第一设备切换发声模式。
其中,第二设备可以为前述图4-图6中的设备B 220,第二设备可以实现图12-图15和图24所示方法中设备B 220所执行的至少部分操作。
在一些示例中,第一设备的显示屏包括一组相对的较长边和一组相对的较短边,第一平面和第二平面互相垂直,且第一平面和第二平面与显示屏所在的平面垂直,第一平面与较长边平行,第二平面与较短边平行;或者,第一平面与较短边平行,第二平面与较长边平行。例如,第一平面和第二平面可以为前述图2中的平面a和平面b。
在本申请实施例中,第一设备至少包括第一扬声器、第二扬声器和第三扬声器,第一扬声器和第二扬声器位于第一平面的第一侧,第三扬声器位于第一平面的第二侧,第一扬声器和第三扬声器位于第二平面的第三侧,第二扬声器位于第二平面的第四侧;第一平面和第二平面不平行。第一设备可以先通过第一扬声器和第二扬声器发出第一声音信号,通过第三扬声器发出第二声音信号,并响应于检测到第一事件,切换为通过第一扬声器和第三扬声器发出第三声音信号,通过第二扬声器发出第四声音信号。也即是可以响应于第一事件,切换发出声音信号的扬声器之间的相对位置,从而使得可以对第一设备与其他设备之间的相对位置进行更新,提高了检测第一设备与其他设备之间的相对位置的准确性。
请参照图33,为本申请实施例所提供的一种接收声音信号的方法的流程图。其中,该方法可以用于第一设备,第一设备至少包括第一麦克风、第二麦克风和第三麦克风,第一麦克风和第二麦克风位于第一平面的第一侧,第三麦克风位于第一平面的第二侧,第一麦克风和第三麦克风位于第二平面的第三侧,第二麦克风位于第二平面的第四侧;第一平面和第二平面不平行。在一些示例中,第一设备可以为如图7-图9中的设备A 210,第一麦克风、第二麦克风和第三麦克风可以为麦克风a、麦克风b、麦克风c和麦克风d中的任意三个。在一些示例中,图19-图22以及图24所示方法中的设备A 210可以基于本申请实施例所提供的方法接收声音信号,第一设备可以实现设备A 210所执行的至少部分操作。需要说明的是,该方法并不以图33以及以下所述的具体顺序为限制,应当理解,在其它实施例中,该方法其中部分步骤的顺序可以根据实际需要相互交换,或者其中的部分步骤也可以省略或删除。该方法包括如下步骤:
S3301,第一设备通过第一麦克风和第二麦克风接收第一声音信号,通过第三麦克风接收第二声音信号。
在一些示例中,第一设备还包括第四麦克风,第四麦克风位于第一平面的第二侧,且位于第二平面的第四侧。第一设备可以通过第一麦克风和第二麦克风接收第一声音信号,通过第三麦克风和第四麦克风接收第二声音信号。在一些示例中,第四麦克风可以为麦克风a、麦克风b、麦克风c和麦克风d中另一个。
在一些示例中,第一麦克风和第二麦克风可以为前述麦克风组A中的麦克风,第三麦克风和第四麦克风可以为前述麦克风组B中的麦克风,第一声音信号可以为前述中的声音信号A,第二声音信号可以为前述中的声音信号B。其中,第一设备通过第一麦克风和第二麦克风接收第一声音信号,通过第三麦克风(和第四麦克风)接收第二声音信号的方式,可以参见前述S1901-S1903中的相关描述。
S3302,第一设备响应于检测到第一事件,切换为通过第一麦克风和第三麦克风接收第三声音信号,通过第二麦克风接收第四声音信号。
在一些示例中,第一设备还包括第四麦克风,第四麦克风位于第一平面的第二侧,且位于第二平面的第四侧。第一设备可以响应于检测到第一事件,切换为通过第一麦克风和第三麦克风接收第三声音信号,通过第二麦克风和第四麦克风接收第四声音信号。
在一些示例中,第一麦克风和第三麦克风可以为前述麦克风组C中的麦克风,第二麦克风和第四麦克风可以为前述麦克风组D中的麦克风,第三声音信号可以为前述中的声音信号C,第四声音信号可以为前述中的声音信号D。其中,第一设备通过第一麦克风和第三麦克风接收第三声音信号,通过第二麦克风(和第四麦克风)接收第四声音信号的方式,可以参见前述S1906-S1909中的相关描述。
在一些示例中,第一事件包括以下至少一项:第一设备的姿态发生变化;第一设备的显示模式发生变化;第一设备与第二设备建立通信连接;第一设备发现第二设备;第一设备接收到第二设备发送的第一请求,第一请求用于触发第一设备检测第一设备和第二设备之间的相对位置关系;第一设备接收到第二设备发送的第二请求,第二请求用于请求触发第一设备切换收音模式。
其中,第二设备可以为前述图7-图9中的设备B 220,第二设备可以实现图19-图22和图24所示方法中设备B 220所执行的至少部分操作。
在一些示例中,第一设备的显示屏包括一组相对的较长边和一组相对的较短边,第一平面和第二平面互相垂直,且第一平面和第二平面与显示屏所在的平面垂直,第一平面与较长边平行,第二平面与较短边平行;或者,第一平面与较短边平行,第二平面与较长边平行。例如,第一平面和第二平面可以为前述图3中的平面c和平面d。
在本申请实施例中,第一设备至少包括第一麦克风、第二麦克风和第三麦克风,第一麦克风和第二麦克风位于第一平面的第一侧,第三麦克风位于第一平面的第二侧,第一麦克风和第三麦克风位于第二平面的第三侧,第二麦克风位于第二平面的第四侧;第一平面和第二平面不平行。第一设备可以先通过第一麦克风和第二麦克风接收第一声音信号,通过第三麦克风接收第二声音信号,并响应于检测到第一事件,切换为通过第一麦克风和第三麦克风接收第三声音信号,通过第二麦克风接收第四声音信号。也即是,可以响应于第一事件,切换接收声音信号的麦克风之间的相对位置,从而使得可以对第一设备与其他设备之间的相对位置进行更新,提高了检测第一设备与其他设备之间的相对位置的准确性。
请参照图34,为本申请实施例所提供的一种检测设备间相对位置的方法的流程图。其中,该方法可以用于包括第一设备和第二设备的系统,第一设备至少包括第一扬声器、第二扬声器和第三扬声器,第一扬声器和第二扬声器位于第一平面的第一侧,第三扬声器位于第一平面的第二侧,第一扬声器和第三扬声器位于第二平面的第三侧,第二扬声器位于第二平面的第四侧;第一平面和第二平面不平行。在一些示例中,第一设备可以为如图4-图6中的设备A 210,第一扬声器、第二扬声器和第三扬声器可以为扬声器a、扬声器b、扬声器c和扬声器d中的任意三个,第二设备可以为如图4-图6中的设备B 220。第一设备可以实现如图12-图15以及图24所示方法中设备A 210所执行的至少部分操作,第二设备可以实现如图12-图15以及图24所示方法中设备B 220所执行的至少部分操作。需要说明的是,该方法并不以图34以及以下所述的具体顺序为限制,应当理解,在其它实施例中,该方法其中部分步骤的顺序可以根据实际需要相互交换,或者其中的部分步骤也可以省略或删除。该方法包括如下步骤:
S3401,第一设备通过第一扬声器和第二扬声器发出第一声音信号,通过第三扬声器发出第二声音信号。
在一些示例中,第一设备响应于检测到第二事件,通过第一扬声器和第二扬声器发出第一声音信号,通过第三扬声器发出第二声音信号。
在一些示例中,第二事件包括下述任一项:第一设备与第二设备建立通信连接;第一设备发现第二设备;第一设备接收到第二设备发送的第一请求,第一请求用于触发第一设备检测第一设备和第二设备之间的相对位置关系。
在一些示例中,第一设备可以通过第三扬声器和第四扬声器发出第二声音信号。
S3402,第二设备接收第一声音信号和第二声音信号。
S3403,第二设备根据第一声音信号的第一到达时刻,以及第二声音信号的第二到达时刻,确定第二设备与第一设备之间的第一相对位置。
在一些示例中,第一扬声器和第二扬声器可以为前述扬声器组A中的扬声器,第三扬声器和第四扬声器可以为前述扬声器组B中的扬声器,第一声音信号可以为前述中的声音信号A,第二声音信号可以为前述中的声音信号B,第一到达时刻可以为前述中声音信号A的到达时刻1,第二到达时刻可以为前述中声音信号B的到达时刻2。
在一些示例中,第一设备响应于检测到第一事件,切换为通过第一扬声器和第三扬声器发出第三声音信号,通过第二扬声器发出第四声音信号,第二设备接收第三声音信号和第四声音信号,第二设备根据第三声音信号的第三到达时刻,以及第四声音信号的第四到达时刻,确定第二设备与第一设备之间的第二相对位置。
在一些示例中,第一事件包括以下至少一项:第一设备的姿态发生变化;第一设备的显示模式发生变化;第一设备接收到第二设备发送的第二请求,第二请求用于触发第一设备切换发音模式。
在一些示例中,第一扬声器和第三扬声器可以为前述扬声器组C中的扬声器,第二扬声器和第四扬声器可以为前述扬声器组D中的扬声器,第三声音信号可以为前述中的声音信号C,第四声音信号可以为前述中的声音信号D,第三到达时刻可以为前述中声音信号C的到达时刻3,第四到达时刻可以为前述中声音信号D的到达时刻4。
在一些示例中,第一设备通过第一扬声器和第三扬声器发出第三声音信号,通过第二扬声器发出第四声音信号,第二设备接收第三声音信号和第四声音信号,第三设备根据第三声音信号的第三到达时刻,以及第四声音信号的第四到达时刻,确定第三设备与第一设备之间的第五相对位置。在一些示例中,第三设备可以为前述中的设备C 230,第三设备可以实现图13-图15以及图24所示方法中设备C 230所执行的至少部分操作。
在本申请实施例中,第一设备至少包括第一扬声器、第二扬声器和第三扬声器,第一扬声器和第二扬声器位于第一平面的第一侧,第三扬声器位于第一平面的第二侧,第一扬声器和第三扬声器位于第二平面的第三侧,第二扬声器位于第二平面的第四侧;第一平面和第二平面不平行。 第一设备可以通过第一扬声器和第二扬声器发出第一声音信号,通过第三扬声器发出第二声音信号,第二设备可以接收第一声音信号和第二声音信号,并根据第一声音信号的第一到达时刻,以及第二声音信号的第二到达时刻,确定第二设备与第一设备之间的第一相对位置。即实现了通过扬声器和麦克风准确地检测设备间的相对位置,不需要依赖雷达等组件,降低了检测设备间相对位置的成本。
请参照图35,为本申请实施例所提供的一种检测设备间相对位置的方法的流程图。其中,该方法可以用于包括第一设备和第二设备的系统,第一设备至少包括第一麦克风、第二麦克风和第三麦克风,第一麦克风和第二麦克风位于第一平面的第一侧,第三麦克风位于第一平面的第二侧,第一麦克风和第三麦克风位于第二平面的第三侧,第二麦克风位于第二平面的第四侧;第一平面和第二平面不平行。在一些示例中,第一设备可以为如图7-图9中的设备A 210,第一麦克风、第二麦克风和第三麦克风可以为麦克风a、麦克风b、麦克风c和麦克风d中的任意三个,第二设备可以为如图7-图9中的设备B 220。第一设备可以实现如图19-图22以及图24所示方法中设备A 210所执行的至少部分操作,第二设备可以实现如图19-图22以及图24所示方法中设备B 220所执行的至少部分操作。需要说明的是,该方法并不以图35以及以下所述的具体顺序为限制,应当理解,在其它实施例中,该方法其中部分步骤的顺序可以根据实际需要相互交换,或者其中的部分步骤也可以省略或删除。该方法包括如下步骤:
S3501,第二设备发出第一声音信号和第二声音信号。
在一些示例中,第一设备响应于检测到第二事件,通过第一麦克风和第二麦克风接收第一声音信号,通过第三麦克风接收第二声音信号。
在一些示例中,第二事件包括下述任一项:第一设备与第二设备建立通信连接;第一设备发现第二设备;第一设备接收到第二设备发送的第一请求,第一请求用于触发第一设备检测第一设备和第二设备之间的相对位置关系。
S3502,第一设备通过第一麦克风和第二麦克风接收第一声音信号,通过第三麦克风接收第二声音信号。
S3503,第一设备根据第一声音信号的第一到达时刻,以及第二声音信号的第二到达时刻,确定第二设备与第一设备之间的第一相对位置。
在一些示例中,第一麦克风和第二麦克风可以为前述麦克风组A中的麦克风,第三麦克风和第四麦克风可以为前述麦克风组B中的麦克风,第一声音信号可以为前述中的声音信号A,第二声音信号可以为前述中的声音信号B,第一到达时刻可以为前述中声音信号A的到达时刻1,第二到达时刻可以为前述中声音信号B的到达时刻2。
在一些示例中,第二设备发出第三声音信号和第四声音信号,第一设备响应于检测到第一事件,切换为通过第一麦克风和第三麦克风接收第三声音信号,通过第二麦克风接收第四声音信号,第一设备根据第三声音信号的第三到达时刻,以及第四声音信号的第四到达时刻,确定第二设备与第一设备之间的第二相对位置。
在一些示例中,第一事件包括以下至少一项:第一设备的姿态发生变化;第一设备的显示模式发生变化;第一设备接收到第二设备发送的第二请求,第二请求用于触发第一设备切换收音模式。
在一些示例中,第一麦克风和第三麦克风可以为前述麦克风组C中的麦克风,第二麦克风和第四麦克风可以为前述麦克风组D中的麦克风,第三声音信号可以为前述中的声音信号C,第四声音信号可以为前述中的声音信号D,第三到达时刻可以为前述中声音信号C的到达时刻3,第四到达时刻可以为前述中声音信号D的到达时刻4。
在一些示例中,第三设备发出第三声音信号和第四声音信号,第一设备通过第一麦克风和第三麦克风接收第三声音信号,通过第二麦克风接收第四声音信号,第一设备根据第三声音信号的第三到达时刻,以及第四声音信号的第四到达时刻,确定第三设备与第一设备之间的第五相对位置。在一些示例中,第三设备可以为前述中的设备C 230,第三设备可以实现图19-图22以及图24所示方法中设备C 230所执行的至少部分操作。
在本申请实施例中,第一设备至少包括第一麦克风、第二麦克风和第三麦克风,第一麦克风和第二麦克风位于第一平面的第一侧,第三麦克风位于第一平面的第二侧,第一麦克风和第三麦克风位于第二平面的第三侧,第二麦克风位于第二平面的第四侧;第一平面和第二平面不平行。第二设备可以发出第一声音信号和第二声音信号。第一设备可以通过第一麦克风和第二麦克风接收第一声音信号,通过第三麦克风接收第二声音信号,并根据第一声音信号的第一到达时刻,以及第二声音信号的第二到达时刻,确定第二设备与第一设备之间的第一相对位置。即实现了通过扬声器和麦克风准确地检测设备间的相对位置,不需要依赖雷达等组件,降低了检测设备间相对位置的成本。
基于同一发明构思,本申请实施例还提供了一种终端设备,终端设备包括:存储器和处理器,存储器用于存储计算机程序;处理器用于在调用计算机程序时执行上述方法实施例所述的方法中设备A 210和/或设备B 220所执行的操作。
本实施例提供的终端设备可以执行上述方法实施例中设备A 210和/或设备B 220所执行的操作,其实现原理与技术效果类似,此处不再赘述。
基于同一发明构思,本申请实施例还提供了一种芯片系统。该芯片系统设置于终端设备,该芯片系统包括处理器,所述处理器与存储器耦合,所述处理器执行存储器中存储的计算机程序,以实现上述方法实施例所述的方法中设备A 210和/或设备B 220所执行的操作。
其中,该芯片系统可以为单个芯片,或者多个芯片组成的芯片模组。
本申请实施例还提供一种计算机可读存储介质,其上存储有计算机程序,计算机程序被处理器执行时实现上述方法实施例所述的方法中设备A 210和/或设备B 220所执行的操作。
本申请实施例还提供一种计算机程序产品,当计算机程序产品在终端设备上运行时,使得终端设备执行上述方法实施例所述的方法中设备A 210和/或设备B 220所执行的操作。
上述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实现上述实施例方法中的全部或部分流程,可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一计算机可读存储介质中,该计算机程序在被处理器执行时,可实现上述各个方法实施例的步骤。其中,所述计算机程序包括计算机程序代码,所述计算机程序代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。所述计算机可读存储介质至少可以包括:能够将计算机程序代码携带到拍照装置/终端设备的任何实体或装置、记录介质、计算机存储器、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、电载波信号、电信信号以及软件分发介质。例如U盘、移动硬盘、磁碟或者光盘等。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述或记载的部分,可以参见其它实施例的相关描述。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
在本申请所提供的实施例中,应该理解到,所揭露的装置/设备和方法,可以通过其它的方式实现。例如,以上所描述的装置/设备实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通讯连接可以是通过一些接口,装置或单元的间接耦合或通讯连接,可以是电性,机械或其它的形式。
应当理解,当在本申请说明书和所附权利要求书中使用时,术语“包括”指示所描述特征、整体、步骤、操作、元素和/或组件的存在,但并不排除一个或多个其它特征、整体、步骤、操作、元素、组件和/或其集合的存在或添加。
还应当理解,在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。
如在本申请说明书和所附权利要求书中所使用的那样,术语“如果”可以依据上下文被解释为“当...时”或“一旦”或“响应于确定”或“响应于检测到”。类似地,短语“如果确定”或“如果检测到[所描述条件或事件]”可以依据上下文被解释为意指“一旦确定”或“响应于确定”或“一旦检测到[所描述条件或事件]”或“响应于检测到[所描述条件或事件]”。
另外,在本申请说明书和所附权利要求书的描述中,术语“第一”、“第二”、“第三”等仅用于区分描述,而不能理解为指示或暗示相对重要性。
在本申请说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
最后应说明的是:以上各实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述各实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (31)

  1. 一种发出声音信号的方法,其特征在于,应用于第一设备,所述第一设备至少包括第一扬声器、第二扬声器和第三扬声器;
    所述第一扬声器和所述第二扬声器位于第一平面的第一侧,所述第三扬声器位于所述第一平面的第二侧,所述第一扬声器和所述第三扬声器位于第二平面的第三侧,所述第二扬声器位于所述第二平面的第四侧;所述第一平面和所述第二平面不平行;
    所述方法包括:
    所述第一设备通过所述第一扬声器和所述第二扬声器发出第一声音信号,通过所述第三扬声器发出第二声音信号;
    所述第一设备响应于检测到第一事件,切换为通过所述第一扬声器和所述第三扬声器发出第三声音信号,通过所述第二扬声器发出第四声音信号。
  2. 根据权利要求1所述的方法,其特征在于,所述第一设备还包括第四扬声器,所述第四扬声器位于所述第一平面的所述第二侧,且位于所述第二平面的所述第四侧;
    所述第一设备通过所述第一扬声器和所述第二扬声器发出第一声音信号,通过所述第三扬声器发出第二声音信号,包括:
    所述第一设备通过所述第一扬声器和所述第二扬声器发出所述第一声音信号,通过所述第三扬声器和所述第四扬声器发出所述第二声音信号;
    所述第一设备响应于检测到第一事件,切换为通过所述第一扬声器和所述第三扬声器发出第三声音信号,通过所述第二扬声器发出第四声音信号,包括:
    所述第一设备响应于检测到所述第一事件,切换为通过所述第一扬声器和所述第三扬声器发出所述第三声音信号,通过所述第二扬声器和所述第四扬声器发出所述第四声音信号。
  3. 根据权利要求1或2所述的方法,其特征在于,所述第一声音信号和所述第二声音信号中的至少一个,与所述第三声音信号和所述第四声音信号中的至少一个相同。
  4. 根据权利要求1-3任一所述的方法,其特征在于,所述第一事件包括以下至少一项:
    所述第一设备的姿态发生变化;
    所述第一设备的显示模式发生变化;
    所述第一设备与第二设备建立通信连接;
    所述第一设备发现所述第二设备;
    所述第一设备接收到所述第二设备发送的第一请求,所述第一请求用于触发所述第一设备检测所述第一设备和所述第二设备之间的相对位置关系;
    所述第一设备接收到所述第二设备发送的第二请求,所述第二请求用于请求触发所述第一设备切换发声模式。
  5. 根据权利要求1-4任一所述的方法,其特征在于,所述第一设备的显示屏包括一组相对的较长边和一组相对的较短边;
    所述第一平面和所述第二平面互相垂直,且所述第一平面和所述第二平面与所述显示屏所在的平面垂直;
    所述第一平面与所述较长边平行,所述第二平面与所述较短边平行;或者,所述第一平面与所述较短边平行,所述第二平面与所述较长边平行。
  6. 一种接收声音信号的方法,其特征在于,应用于第一设备,所述第一设备至少包括第一麦克风、第二麦克风和第三麦克风;
    所述第一麦克风和所述第二麦克风位于第一平面的第一侧,所述第三麦克风位于所述第一平面的第二侧,所述第一麦克风和所述第三麦克风位于第二平面的第三侧,所述第二麦克风位于所述第二平面的第四侧;所述第一平面和所述第二平面不平行;
    所述方法包括:
    所述第一设备通过所述第一麦克风和所述第二麦克风接收第一声音信号,通过所述第三麦克风接收第二声音信号;
    所述第一设备响应于检测到第一事件,切换为通过所述第一麦克风和所述第三麦克风接收第三声音信号,通过所述第二麦克风接收第四声音信号。
  7. 根据权利要求6所述的方法,其特征在于,所述第一设备还包括第四麦克风,所述第四麦克风位于所述第一平面的所述第二侧,且位于所述第二平面的所述第四侧;
    所述第一设备通过所述第一麦克风和所述第二麦克风接收第一声音信号,通过所述第三麦克风接收第二声音信号,包括:
    所述第一设备通过所述第一麦克风和所述第二麦克风接收所述第一声音信号,通过所述第三麦克风和所述第四麦克风接收所述第二声音信号;
    所述第一设备响应于检测到第一事件,切换为通过所述第一麦克风和所述第三麦克风接收第三声音信号,通过所述第二麦克风接收第四声音信号,包括:
    所述第一设备响应于检测到第一事件,切换为通过所述第一麦克风和所述第三麦克风接收所述第三声音信号,通过所述第二麦克风和所述第四麦克风接收所述第四声音信号。
  8. The method according to claim 6 or 7, wherein the first event comprises at least one of the following:
    a posture of the first device changes;
    a display mode of the first device changes;
    the first device establishes a communication connection with a second device;
    the first device discovers the second device;
    the first device receives a first request sent by the second device, the first request being used to trigger the first device to detect a relative position relationship between the first device and the second device;
    the first device receives a second request sent by the second device, the second request being used to request triggering the first device to switch its sound reception mode.
  9. The method according to any one of claims 6-8, wherein a display screen of the first device comprises a pair of opposite longer edges and a pair of opposite shorter edges;
    the first plane and the second plane are perpendicular to each other, and both the first plane and the second plane are perpendicular to the plane in which the display screen lies;
    the first plane is parallel to the longer edges and the second plane is parallel to the shorter edges; or, the first plane is parallel to the shorter edges and the second plane is parallel to the longer edges.
  10. A method for detecting the relative position between devices, applied to a system comprising a first device and a second device, wherein the first device comprises at least a first speaker, a second speaker, and a third speaker;
    the first speaker and the second speaker are located on a first side of a first plane, and the third speaker is located on a second side of the first plane; the first speaker and the third speaker are located on a third side of a second plane, and the second speaker is located on a fourth side of the second plane; the first plane and the second plane are not parallel;
    the method comprises:
    emitting, by the first device, a first sound signal through the first speaker and the second speaker, and emitting a second sound signal through the third speaker;
    receiving, by the second device, the first sound signal and the second sound signal;
    determining, by the second device, a first relative position between the second device and the first device according to a first arrival time of the first sound signal and a second arrival time of the second sound signal.
  11. The method according to claim 10, wherein the emitting, by the first device, a first sound signal through the first speaker and the second speaker and a second sound signal through the third speaker comprises:
    in response to detecting a second event, emitting, by the first device, the first sound signal through the first speaker and the second speaker, and emitting the second sound signal through the third speaker.
  12. The method according to claim 11, wherein the second event comprises any one of the following:
    the first device establishes a communication connection with the second device;
    the first device discovers the second device;
    the first device receives a first request sent by the second device, the first request being used to trigger the first device to detect a relative position relationship between the first device and the second device.
  13. The method according to any one of claims 10-12, further comprising:
    notifying, by the second device, the first device of the first relative position.
  14. The method according to claim 13, further comprising:
    cooperatively processing, by the first device and the second device, a first task in a first cooperation mode corresponding to the first relative position.
  15. The method according to claim 14, wherein the cooperatively processing, by the first device and the second device, a first task in a first cooperation mode corresponding to the first relative position comprises:
    when the first relative position is that the second device is on the left side of the first device, displaying, by the first device, a first interface, and displaying, by the second device, a second interface;
    when the first relative position is that the second device is on the right side of the first device, displaying, by the first device, the first interface, and displaying, by the second device, a third interface;
    wherein the second interface is associated with the first interface, and the third interface is associated with the first interface.
  16. The method according to any one of claims 10-15, further comprising:
    in response to detecting a first event, switching, by the first device, to emitting a third sound signal through the first speaker and the third speaker, and emitting a fourth sound signal through the second speaker;
    receiving, by the second device, the third sound signal and the fourth sound signal;
    determining, by the second device, a second relative position between the second device and the first device according to a third arrival time of the third sound signal and a fourth arrival time of the fourth sound signal.
  17. The method according to claim 16, wherein the first event comprises at least one of the following:
    a posture of the first device changes;
    a display mode of the first device changes;
    the first device receives a second request sent by the second device, the second request being used to trigger the first device to switch its sound emission mode.
  18. The method according to claim 16 or 17, wherein at least one of the first sound signal and the second sound signal is identical to at least one of the third sound signal and the fourth sound signal.
  19. The method according to any one of claims 10-18, wherein a first emission time and a second emission time are the same, and a sound feature of the first sound signal is different from a sound feature of the second sound signal; or,
    the first emission time and the second emission time are different, and the sound feature of the first sound signal is the same as the sound feature of the second sound signal;
    wherein the first emission time is the time at which the first device emits the first sound signal, and the second emission time is the time at which the first device emits the second sound signal.
  20. The method according to any one of claims 10-19, wherein a display screen of the first device comprises a pair of opposite longer edges and a pair of opposite shorter edges;
    the first plane and the second plane are perpendicular to each other, and both the first plane and the second plane are perpendicular to the plane in which the display screen lies;
    the first plane is parallel to the longer edges and the second plane is parallel to the shorter edges; or, the first plane is parallel to the shorter edges and the second plane is parallel to the longer edges.
  21. The method according to any one of claims 10-20, wherein the first sound signal and the second sound signal are ultrasonic signals.
  22. A method for detecting the relative position between devices, applied to a system comprising a first device and a second device, wherein the first device comprises at least a first microphone, a second microphone, and a third microphone;
    the first microphone and the second microphone are located on a first side of a first plane, and the third microphone is located on a second side of the first plane; the first microphone and the third microphone are located on a third side of a second plane, and the second microphone is located on a fourth side of the second plane; the first plane and the second plane are not parallel;
    the method comprises:
    emitting, by the second device, a first sound signal and a second sound signal;
    receiving, by the first device, the first sound signal through the first microphone and the second microphone, and receiving the second sound signal through the third microphone;
    determining, by the first device, a first relative position between the second device and the first device according to a first arrival time of the first sound signal and a second arrival time of the second sound signal.
  23. The method according to claim 22, wherein the receiving, by the first device, the first sound signal through the first microphone and the second microphone and the second sound signal through the third microphone comprises:
    in response to detecting a second event, receiving, by the first device, the first sound signal through the first microphone and the second microphone, and receiving the second sound signal through the third microphone.
  24. The method according to claim 23, wherein the second event comprises any one of the following:
    the first device establishes a communication connection with the second device;
    the first device discovers the second device;
    the first device receives a first request sent by the second device, the first request being used to trigger the first device to detect a relative position relationship between the first device and the second device.
  25. The method according to any one of claims 22-24, further comprising:
    emitting, by the second device, a third sound signal and a fourth sound signal;
    in response to detecting a first event, switching, by the first device, to receiving the third sound signal through the first microphone and the third microphone, and receiving the fourth sound signal through the second microphone;
    determining, by the first device, a second relative position between the second device and the first device according to a third arrival time of the third sound signal and a fourth arrival time of the fourth sound signal.
  26. The method according to claim 25, wherein the first event comprises at least one of the following:
    a posture of the first device changes;
    a display mode of the first device changes;
    the first device receives a second request sent by the second device, the second request being used to trigger the first device to switch its sound reception mode.
  27. A system, comprising a first device and a second device, wherein the first device comprises at least a first speaker, a second speaker, and a third speaker; the first speaker and the second speaker are located on a first side of a first plane, and the third speaker is located on a second side of the first plane; the first speaker and the third speaker are located on a third side of a second plane, and the second speaker is located on a fourth side of the second plane; the first plane and the second plane are not parallel;
    the first device is configured to emit a first sound signal through the first speaker and the second speaker, and emit a second sound signal through the third speaker;
    the second device is configured to receive the first sound signal and the second sound signal, and determine a first relative position between the second device and the first device according to a first arrival time of the first sound signal and a second arrival time of the second sound signal.
  28. A system, comprising a first device and a second device, wherein the first device comprises at least a first microphone, a second microphone, and a third microphone; the first microphone and the second microphone are located on a first side of a first plane, and the third microphone is located on a second side of the first plane; the first microphone and the third microphone are located on a third side of a second plane, and the second microphone is located on a fourth side of the second plane; the first plane and the second plane are not parallel;
    the second device is configured to emit a first sound signal and a second sound signal;
    the first device is configured to receive the first sound signal through the first microphone and the second microphone, receive the second sound signal through the third microphone, and determine a first relative position between the second device and the first device according to a first arrival time of the first sound signal and a second arrival time of the second sound signal.
  29. A terminal device, comprising: a memory and a processor, wherein the memory is configured to store a computer program, and the processor is configured to, when invoking the computer program, perform the method according to any one of claims 1-5 or the method according to any one of claims 6-9.
  30. A computer-readable storage medium, having a computer program stored thereon, wherein when the computer program is executed by a processor, the method according to any one of claims 1-5 or the method according to any one of claims 6-9 is implemented.
  31. A computer program product, wherein when the computer program product runs on a terminal device, the terminal device is caused to perform the method according to any one of claims 1-5 or the method according to any one of claims 6-9.
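The determination step recited in claims 10, 16, 22, and 25 is, in essence, a time-difference-of-arrival comparison: the sound signal emitted by the speaker group nearer the receiving device arrives first. The sketch below illustrates only that comparison; the function name, the string labels, and the dead-band threshold are illustrative assumptions and are not drawn from the claims, which do not prescribe any particular implementation.

```python
# Illustrative sketch (assumption): the first sound signal is emitted on one
# side of the first plane and the second sound signal on the other side, at
# the same emission time (one of the two options in claim 19), so the
# receiver can infer which side it is on from the sign of the arrival-time
# difference alone.

def first_relative_position(t_first: float, t_second: float,
                            dead_band: float = 1e-4) -> str:
    """Classify the receiver's side from two arrival times (in seconds).

    t_first   -- first arrival time (first sound signal)
    t_second  -- second arrival time (second sound signal)
    dead_band -- differences with magnitude below this are inconclusive
                 (receiver is close to the dividing plane)
    """
    dt = t_first - t_second
    if abs(dt) <= dead_band:
        return "inconclusive"
    # The earlier-arriving signal was emitted by the nearer speaker group.
    return "first-signal side" if dt < 0 else "second-signal side"

# Example: the first signal arrives 0.5 ms earlier, so the receiver is on
# the side of the speakers that emitted the first signal.
print(first_relative_position(0.0100, 0.0105))
```

Switching the speaker-to-signal assignment after the first event (claims 1 and 16) would resolve the orthogonal axis in the same way, using the third and fourth arrival times.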
PCT/CN2023/110850 2022-08-26 2023-08-02 Method for emitting and receiving sound signals and detecting the relative position between devices WO2024041341A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211035615.9A 2022-08-26 2022-08-26 Method for emitting and receiving sound signals and detecting the relative position between devices
CN202211035615.9 2022-08-26

Publications (1)

Publication Number Publication Date
WO2024041341A1 (zh)

Family

ID=90012475

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/110850 Method for emitting and receiving sound signals and detecting the relative position between devices 2022-08-26 2023-08-02

Country Status (2)

Country Link
CN (1) CN117665705A (zh)
WO (1) WO2024041341A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106842131A * 2017-03-17 2017-06-13 Zhejiang Uniview Technologies Co., Ltd. Microphone array sound source localization method and apparatus
CN107124540A * 2016-02-25 2017-09-01 ZTE Corporation Acquisition and processing method, apparatus and system
CN111480133A * 2018-11-22 2020-07-31 Huawei Technologies Co., Ltd. Method and apparatus for determining the relative position of two terminal devices
CN113056925A * 2018-08-06 2021-06-29 Alibaba Group Holding Limited Method and apparatus for sound source position detection
CN113905302A * 2021-10-11 2022-01-07 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and apparatus for triggering prompt information, and earphone
US20220091244A1 * 2019-01-18 2022-03-24 University Of Washington Systems, apparatuses, and methods for acoustic motion tracking
WO2022156566A1 * 2021-01-25 2022-07-28 Huawei Technologies Co., Ltd. Device interaction method, position determination method, electronic device and chip system
CN114910867A * 2021-02-10 2022-08-16 Huawei Technologies Co., Ltd. Method and apparatus for detecting the relative position between devices


Also Published As

Publication number Publication date
CN117665705A (zh) 2024-03-08


Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23856431

Country of ref document: EP

Kind code of ref document: A1