CN112099754A - Method for obtaining introduction information and intelligent equipment - Google Patents

Method for obtaining introduction information and intelligent equipment

Info

Publication number
CN112099754A
Authority
CN
China
Prior art keywords
sound
detection module
sound signal
determining
smart device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010402833.6A
Other languages
Chinese (zh)
Inventor
刘广松
杨青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Touchair Technology Co ltd
Original Assignee
Suzhou Touchair Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Touchair Technology Co., Ltd.
Priority to CN202010402833.6A
Publication of CN112099754A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S 5/18 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves

Abstract

The invention provides a method for acquiring introduction information and a smart device. The smart device detects a first sound signal that directly reaches a first sound detection module and a second sound signal that directly reaches a second sound detection module, where the two signals are emitted simultaneously by the same sound generating device and each contains the identification of that device. The smart device determines the time difference between the reception times of the first sound signal and the second sound signal, and determines the relative angle between the smart device and the sound generating device based on that time difference and the distance between the first sound detection module and the second sound detection module. When the relative angle is within a predetermined angle range, the introduction information corresponding to the identification is acquired from a server. The introduction information can thus be obtained based on angular positioning by the smart device.

Description

Method for obtaining introduction information and intelligent equipment
Technical Field
The embodiment of the invention relates to the technical field of positioning, in particular to a method for acquiring introduction information and intelligent equipment.
Background
Various social activities, such as exhibitions, meetings, and team-building activities, are common in daily work and life. In activities involving many people, and especially many strangers, participants often cannot remember introduction information about their counterparts, such as names, genders, ages, companies, and positions, which leads to awkwardness and even unnecessary misunderstanding during communication. In addition, in team-building and multi-person collaborative activities, each person has a current role or task within a given period; when the number of participants is large, the roles are easily confused, which affects the smooth progress and expected effect of the activity.
Disclosure of Invention
The embodiment of the invention provides a method for acquiring introduction information and intelligent equipment.
The technical scheme of the embodiment of the invention is as follows:
a smart device, comprising: a first sound detection module for detecting a first sound signal that reaches the first sound detection module directly; a second sound detection module for detecting a second sound signal that reaches the second sound detection module; the first sound signal and the second sound signal are simultaneously sent by the same sound generating device, and the first sound signal and the second sound signal respectively contain the identification of the sound generating device; an angle determining module, configured to determine a time difference between a receiving time of the first sound signal and a receiving time of the second sound signal; determining a relative angle between the smart device and the sound generating device based on a distance between the first sound detection module and the second sound detection module and the time difference;
and the introduction information acquisition module is used for acquiring the introduction information corresponding to the identifier from the server when the relative angle is within a preset angle range.
A smart device, comprising: a first sound detection module for detecting a first sound signal that directly reaches the first sound detection module; a second sound detection module for detecting a second sound signal that directly reaches the second sound detection module; the first sound signal and the second sound signal being emitted simultaneously by the same sound generating device and each containing the identification of the sound generating device; an angle determining module, configured to determine the time difference between the reception time of the first sound signal and the reception time of the second sound signal, and to determine the relative angle between the smart device and the sound generating device based on the distance between the first sound detection module and the second sound detection module and the time difference; a distance determining module for determining the distance between the smart device and the sound generating device; and an introduction information acquisition module for acquiring, from the server, the introduction information corresponding to the identification when the relative angle is within a predetermined angle range and the distance is less than a predetermined threshold value.
A method for obtaining introduction information is suitable for intelligent equipment comprising a first sound detection module and a second sound detection module, and comprises the following steps: detecting a first sound signal which directly reaches a first sound detection module; detecting a second sound signal which directly reaches a second sound detection module; the first sound signal and the second sound signal are simultaneously sent by the same sound generating device, and the first sound signal and the second sound signal respectively contain the identification of the sound generating device; determining a time difference between a reception time of the first sound signal and a reception time of the second sound signal; determining a relative angle between the smart device and the sound generating device based on a distance between the first sound detection module and the second sound detection module and the time difference; when the relative angle is within a predetermined angle range, the introduction information corresponding to the identification is acquired from the server.
A method for obtaining introduction information, suitable for a smart device comprising a first sound detection module and a second sound detection module, the method comprising: detecting a first sound signal that directly reaches the first sound detection module; detecting a second sound signal that directly reaches the second sound detection module; the first sound signal and the second sound signal being emitted simultaneously by the same sound generating device and each containing the identification of the sound generating device; determining a time difference between the reception time of the first sound signal and the reception time of the second sound signal; determining a relative angle between the smart device and the sound generating device based on the distance between the first sound detection module and the second sound detection module and the time difference; determining the distance between the smart device and the sound generating device; and, when the relative angle is within a predetermined angle range and the distance is smaller than a predetermined threshold value, obtaining the introduction information corresponding to the identification from the server.
A computer readable storage medium having stored therein computer readable instructions for performing the method of obtaining introductory information as set forth in any one of the preceding claims.
As can be seen from the above technical solutions, in the embodiments of the present invention, a first sound signal that directly reaches the first sound detection module is detected; a second sound signal that directly reaches the second sound detection module is detected; the first sound signal and the second sound signal are emitted simultaneously by the same sound generating device and each contains the identification of the sound generating device; the time difference between the reception time of the first sound signal and the reception time of the second sound signal is determined; the relative angle between the smart device and the sound generating device is determined based on the distance between the first sound detection module and the second sound detection module and the time difference; and, when the relative angle is within a predetermined angle range, the introduction information corresponding to the identification is acquired from the server. Therefore, the introduction information corresponding to the identification of the sound generating device can be acquired quickly based on the angular positioning by the smart device. Moreover, the introduction information can be acquired quickly and reliably by further taking the distance factor into account.
Drawings
Fig. 1 is an exemplary flowchart of a method for determining a relative angle between smart devices according to the present invention.
Fig. 2 is a schematic diagram illustrating the principle of relative angle determination between smart devices according to the present invention.
FIG. 3 is a schematic diagram of the calculation of relative angles between smart devices according to the present invention.
Fig. 4 is a first exemplary diagram of determining a pair of direct signals according to the present invention.
Fig. 5 is a second exemplary diagram illustrating the determination of a pair of direct signals according to the present invention.
Fig. 6 is a schematic diagram of a first exemplary arrangement of a first sound detection module and a second sound detection module in a smart device according to the present invention.
Fig. 7 is a schematic diagram of a second exemplary arrangement of a first sound detection module and a second sound detection module in a smart device according to the present invention.
Fig. 8 is a schematic diagram of the relative positioning of a first smart device and a second smart device in accordance with the present invention.
FIG. 9 is a schematic diagram showing relative angles in a smart device interface according to the present invention.
FIG. 10 is a flowchart illustrating an exemplary process for relative positioning between smart devices according to the present invention.
FIG. 11 is a flowchart illustrating a first method of obtaining introductory information according to the present invention.
FIG. 12 is a flowchart illustrating a second method for obtaining introductory information according to the present invention.
Fig. 13 is a first structural diagram of the smart device of the present invention.
Fig. 14 is a second structural diagram of the smart device of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the accompanying drawings.
In order to realize relative direction positioning between smart devices purely in software, without adding extra hardware, the relative positioning can be made universal: devices from different manufacturers can interoperate and remain mutually compatible, and innovative applications of smart devices can be explored on that basis. To this end, the embodiment of the invention provides a sound-based (preferably ultrasonic) scheme for identifying the relative direction between smart devices. No additional hardware is required, the relative direction between two smart devices can be identified in software, and the positioning result is accurate and reliable.
First, a smart device refers to any device, apparatus, or machine with computing and processing capabilities. Fig. 1 is an exemplary flowchart of a method for determining the relative angle between smart devices according to the present invention. The method is applicable to a first smart device that comprises a first sound detection module and a second sound detection module. The first sound detection module and the second sound detection module are fixedly installed in the first smart device. For example, the first sound detection module may be implemented as one microphone or one group of microphones forming a microphone array arranged in the first smart device. Likewise, the second sound detection module may be implemented as one microphone or one group of microphones forming a microphone array arranged in the first smart device, different from the first sound detection module.
As shown in fig. 1, the method includes:
step 101: enabling the first sound detection module to detect a first sound signal sent by the second intelligent device and directly reaching the first sound detection module, and enabling the second sound detection module to detect a second sound signal sent by the second intelligent device and directly reaching the second sound detection module, wherein the first sound signal and the second sound signal are sent by the second intelligent device at the same time.
Here, the second smart device may emit one sound signal or emit a plurality of sound signals at the same time.
For example, when the second smart device emits one sound signal, the first sound detection module and the second sound detection module in the first smart device each detect that sound signal. The detection signal obtained by the first sound detection module for the sound signal that directly reaches it is determined to be the first sound signal; the detection signal obtained by the second sound detection module for the sound signal that directly reaches it is determined to be the second sound signal. For another example, the second smart device may emit multiple sound signals simultaneously, such as an ultrasonic signal and an audible sound signal. In that case, the first sound detection module in the first smart device is adapted to detect ultrasonic signals and the second sound detection module is adapted to detect audible sound signals, so the first sound detection module detects the ultrasonic signal and the second sound detection module detects the audible sound signal. The detection signal obtained by the first sound detection module for the ultrasonic signal that directly reaches it is determined to be the first sound signal; the detection signal obtained by the second sound detection module for the audible sound signal that directly reaches it is determined to be the second sound signal.
In other words, the first sound signal and the second sound signal may be the detection signals obtained by the first sound detection module and the second sound detection module, respectively, for the same sound signal emitted by the second smart device. Alternatively, the first sound signal and the second sound signal may be the detection signals obtained by the first sound detection module and the second sound detection module, respectively, for different sound signals emitted simultaneously by the second smart device.
Step 102: a time difference between the moment of reception of the first sound signal and the moment of reception of the second sound signal is determined.
Here, the first smart device (e.g., a CPU in the first smart device) may record the reception timing of the first sound signal and the reception timing of the second sound signal, and calculate a time difference between the two.
Step 103: and determining a relative angle between the first intelligent device and the second intelligent device based on the distance between the first sound detection module and the second sound detection module and the time difference.
For example, step 103 may be performed by the CPU of the first smart device. In one embodiment, determining the relative angle between the first smart device and the second smart device in step 103 includes: determining θ based on θ = arcsin(d/D), where arcsin is the arcsine function, d = t × c, t is the time difference, c is the propagation speed of sound, and D is the distance between the first sound detection module and the second sound detection module; and then determining the relative angle φ between the first smart device and the second smart device based on θ, where φ = 90° - θ.
The value of the time difference determined in step 102 may be positive or negative. When the time difference is positive, the reception time of the second sound signal is earlier than that of the first sound signal, so the relative angle φ between the first smart device and the second smart device is generally an acute angle; when the time difference is negative, the reception time of the first sound signal is earlier than that of the second sound signal, so the relative angle φ between the first smart device and the second smart device is generally an obtuse angle.
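As a concrete illustration of the calculation above, the following sketch turns a measured time difference into the relative angle. It is only an illustration under the definitions given in this description; the helper name and the 343 m/s value for c are assumptions, not part of the patent.

    import math

    def relative_angle_deg(t: float, D: float, c: float = 343.0) -> float:
        """Relative angle (degrees) from the arrival-time difference of the two direct signals.

        t: time difference between the reception times of the two direct signals (s),
           positive or negative as discussed above.
        D: distance between the first and second sound detection modules (m).
        c: propagation speed of sound (m/s); 343 m/s in air is an assumed value.
        """
        d = t * c                               # path difference between the two direct signals
        ratio = max(-1.0, min(1.0, d / D))      # clamp so arcsin stays defined under noise
        theta = math.degrees(math.asin(ratio))  # auxiliary angle
        return 90.0 - theta                     # acute when t > 0, obtuse when t < 0

    # d = 0.014 m and D = 0.145 m give roughly 84.5 degrees, matching the worked
    # example later in this description (which rounds to 84.4 degrees).
    print(round(relative_angle_deg(0.014 / 343.0, 0.145), 1))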
In the embodiment of the present invention, the first sound signal is a signal that travels directly from the second smart device to the first sound detection module, and the second sound signal is a signal that travels directly from the second smart device to the second sound detection module. In practice, either the first sound detection module or the second sound detection module may also receive signals emitted by the second smart device that are not direct (for example, signals reflected one or more times by obstacles). Therefore, how to pick out the direct signals from the multiple received signals is of practical significance.
The applicant found that, typically, the received signal stream of each sound detection module comprises a direct channel and one or more reflected channels. The direct channel can be determined simply according to the following principle: among all the signals detected by a sound detection module, the signal strength of the direct channel is typically the strongest. Thus, in one embodiment, the method further comprises: determining the sound signal, in the sound signal stream of the second smart device received by the first sound detection module, whose strength is greater than a predetermined threshold within a predetermined time window, to be the first sound signal; and determining the sound signal, in the sound signal stream of the second smart device received by the second sound detection module, whose strength is greater than the predetermined threshold within the predetermined time window, to be the second sound signal.
Fig. 4 is a first exemplary diagram of determining a pair of direct signals according to the present invention. In fig. 4, the sound signal stream detected by the first sound detection module is stream1; stream1 contains a plurality of pulse signals varying over time t, and the predetermined signal strength threshold is T. It can be seen that the signal strength of pulse signal 50 in stream1 is greater than the threshold T within the range of time window 90. The sound signal stream detected by the second sound detection module is stream2; stream2 also contains a plurality of pulse signals varying over time t, and the predetermined signal strength threshold is likewise T. It can be seen that the signal strength of pulse signal 60 in stream2 is greater than the threshold T within the range of time window 90. Thus, pulse signal 50 is determined to be the first sound signal and pulse signal 60 is determined to be the second sound signal.
In addition, the applicant also found that the direct channel can be determined accurately by considering the following two principles together. Principle (1): among all the signals detected by a sound detection module, the signal strength of the direct channel is generally the strongest. Principle (2), joint discrimination: the distance difference d converted from the arrival-time difference of the two direct-channel signals (the first sound signal and the second sound signal) should not be larger than the distance between the first sound detection module and the second sound detection module. Thus, in one embodiment, the method further comprises: determining the sound signals, in the sound signal stream of the second smart device detected by the first sound detection module, whose strength is greater than a predetermined threshold, to form a first candidate signal set; determining the sound signals, in the sound signal stream of the second smart device detected by the second sound detection module, whose strength is greater than the predetermined threshold, to form a second candidate signal set; determining the respective time difference between the reception time of each sound signal in the first candidate signal set and the reception time of each sound signal in the second candidate signal set; and determining a pair of sound signals whose time difference is smaller than M to be the first sound signal and the second sound signal, where M = D/c, D is the distance between the first sound detection module and the second sound detection module, and c is the propagation speed of sound.
Fig. 5 is a second exemplary diagram illustrating the determination of a pair of direct signals according to the present invention. In fig. 5, the sound signal stream detected by the first sound detection module is stream1; stream1 contains a plurality of pulse signals varying over time t, and the predetermined signal strength threshold is T. It can be seen that in stream1 the signal strength of pulse signal 50 is greater than the threshold T, so the first candidate signal set contains pulse signal 50. The sound signal stream detected by the second sound detection module is stream2; stream2 contains a plurality of pulse signals varying over time t, and the predetermined signal strength threshold is likewise T. It can be seen that in stream2 the signal strengths of both pulse signal 60 and pulse signal 70 are greater than the threshold T, so the second candidate signal set contains pulse signal 60 and pulse signal 70.
Furthermore, the time difference d1 between the reception times of pulse signal 50 in the first candidate signal set and pulse signal 60 in the second candidate signal set is determined, and the time difference d2 between the reception times of pulse signal 50 in the first candidate signal set and pulse signal 70 in the second candidate signal set is determined. Assume that d1 is smaller than M and d2 is larger than M, where M = D/c, D is the distance between the first and second sound detection modules, and c is the propagation speed of sound. Therefore, pulse signal 50 in the pair of sound signals associated with d1 is determined to be the first sound signal, and pulse signal 60 in that pair is determined to be the second sound signal.
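The two principles can be combined into a small selection routine. The sketch below is only an illustration under assumed inputs (lists of detected pulses with reception times and peak strengths); the function name and field layout are not taken from the patent.

    def pick_direct_pair(pulses1, pulses2, threshold, D, c=343.0):
        """Pick the pair of direct signals from the two detection streams.

        pulses1 / pulses2: lists of (reception_time_s, peak_strength) detected by the
        first and second sound detection modules within the detection window.
        threshold: predetermined signal-strength threshold (principle 1).
        D: distance between the two sound detection modules (m); c: speed of sound (m/s).
        Returns the chosen (pulse_from_stream1, pulse_from_stream2), or None.
        """
        M = D / c                                        # admissible arrival-time difference (principle 2)
        set1 = [p for p in pulses1 if p[1] > threshold]  # first candidate signal set
        set2 = [p for p in pulses2 if p[1] > threshold]  # second candidate signal set
        best = None
        for p1 in set1:
            for p2 in set2:
                if abs(p1[0] - p2[0]) < M:
                    score = p1[1] + p2[1]                # prefer the strongest admissible pair
                    if best is None or score > best[0]:
                        best = (score, p1, p2)
        return None if best is None else (best[1], best[2])

In the situation of fig. 5 this would keep the pair (pulse 50, pulse 60) and discard pulse 70, since only that pair satisfies the D/c bound.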
Preferably, the first sound signal and the second sound signal are ultrasonic waves in a code division multiple access (CDMA) format and contain the media access control (MAC) address of the second smart device. Accordingly, the first smart device can accurately identify the source of a sound signal based on the MAC address of the second smart device contained in that signal. When multiple sound sources emitting sound signals exist in the environment, the first smart device can, by extracting the MAC address carried in the sound signals, accurately determine the relative angle to a given sound source using the two direct signals from that same source, without interference from the other sound sources.
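The patent does not spell out the modulation details, so the following is only a rough sketch of the code-division idea: each source is assigned a known spreading code, and the receiver correlates the recorded (band-pass filtered) signal against every known code to decide which source, and hence which MAC address, it is hearing and when the pulse arrived. All signal parameters and names here are assumptions.

    import numpy as np

    def identify_source(baseband: np.ndarray, codes: dict):
        """Return (identifier, sample_index, peak) of the best-matching spreading code."""
        best = None
        for ident, code in codes.items():
            corr = np.abs(np.correlate(baseband, code, mode="valid"))
            idx = int(np.argmax(corr))
            if best is None or corr[idx] > best[2]:
                best = (ident, idx, float(corr[idx]))
        return best

    # Toy usage: two devices with random +/-1 codes, one of them buried in noise.
    rng = np.random.default_rng(0)
    codes = {"AA:BB:CC:DD:EE:01": rng.choice([-1.0, 1.0], 127),
             "AA:BB:CC:DD:EE:02": rng.choice([-1.0, 1.0], 127)}
    received = rng.normal(0.0, 0.3, 1000)
    received[400:527] += codes["AA:BB:CC:DD:EE:02"]
    print(identify_source(received, codes)[0])   # expected: AA:BB:CC:DD:EE:02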
The embodiment of the invention also provides a method for determining the relative angle between smart devices. The method is applicable to a first smart device comprising a first sound detection module and a second sound detection module, and comprises: determining a first time at which an ultrasonic signal emitted by a second smart device directly reaches the first sound detection module; determining a second time at which the ultrasonic signal directly reaches the second sound detection module; determining the time difference between the first time and the second time; and determining the relative angle between the first smart device and the second smart device based on the distance between the first sound detection module and the second sound detection module and the time difference. In one embodiment, determining the relative angle between the first smart device and the second smart device comprises: determining θ based on θ = arcsin(d/D), where arcsin is the arcsine function, d = t × c, t is the time difference, c is the propagation speed of sound, and D is the distance between the first sound detection module and the second sound detection module; and determining the relative angle φ between the first smart device and the second smart device based on θ, where φ = 90° - θ.
In one embodiment, the method further comprises at least one of the following processes:
(1) Determining the ultrasonic signal, in the ultrasonic signal stream of the second smart device received by the first sound detection module, whose strength is greater than a predetermined threshold within a predetermined time window, to be the ultrasonic signal that directly reaches the first sound detection module, and determining the time at which that ultrasonic signal is received to be the first time; and determining the ultrasonic signal, in the ultrasonic signal stream of the second smart device received by the second sound detection module, whose strength is greater than the predetermined threshold within the predetermined time window, to be the ultrasonic signal that directly reaches the second sound detection module, and determining the time at which that ultrasonic signal is received to be the second time.
(2) Determining the ultrasonic signals, in the ultrasonic signal stream of the second smart device detected by the first sound detection module, whose strength is greater than a predetermined threshold, to form a first candidate signal set; determining the ultrasonic signals, in the ultrasonic signal stream of the second smart device detected by the second sound detection module, whose strength is greater than the predetermined threshold, to form a second candidate signal set; determining the respective time difference between the reception time of each ultrasonic signal in the first candidate signal set and the reception time of each ultrasonic signal in the second candidate signal set; and determining the reception times of a pair of ultrasonic signals whose time difference is smaller than M to be the first time and the second time, where M = D/c, D is the distance between the first sound detection module and the second sound detection module, and c is the propagation speed of sound.
The principle and calculation process of the relative positioning of the present invention are explained by way of example as follows. Fig. 2 is a schematic diagram illustrating the principle of relative angle determination between smart devices according to the present invention. Fig. 3 is a schematic diagram of the calculation of the relative angle between smart devices according to the present invention. As shown in fig. 2, a speaker a1 disposed at the bottom of smart device A emits an ultrasonic signal containing the MAC address of smart device A, and smart device B (not shown in fig. 2) has two microphones, microphone b1 and microphone b2, disposed a certain distance apart. Microphone b1 receives the direct signal L1 of the ultrasonic signal, and microphone b2 receives the direct signal L2 of the ultrasonic signal. Indirect signals, which reach microphone b1 and microphone b2 only after being reflected by obstacles, do not take part in the subsequent relative angle calculation. Because a smart device is small, especially when the two smart devices are far apart, the direct signals L1 and L2 can be regarded as parallel lines.
As shown in fig. 3, L1 and L2 are the direct signals (not signals reflected by obstacles) received by microphone b1 and microphone b2 of smart device B, respectively, and D is the distance between microphone b1 and microphone b2. For example, if microphone b1 and microphone b2 are disposed at the upper and lower ends of smart device B, D may be the length of smart device B. A perpendicular is drawn from microphone b2 to the direct signal L1; the distance between the foot of the perpendicular and microphone b1 is d, which is the path difference between L1 and L2. Using a correlation algorithm on the signals, the delay time difference t of the direct signal L1 relative to the direct signal L2 can be determined, and d can then be calculated as d = t × c, where c is the propagation speed of sound in the medium (such as air). θ is an auxiliary angle, where θ = arcsin(d/D). Therefore, the relative angle of smart device A and smart device B can be calculated as φ = 90° - θ.
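For readability, the right-triangle relation behind the auxiliary angle can be written out; this merely restates the construction of fig. 3 described above and is not additional text from the patent:

    \sin\theta = \frac{d}{D}, \qquad d = c\,t, \qquad
    \theta = \arcsin\frac{d}{D}, \qquad \varphi = 90^{\circ} - \theta

In the right triangle formed by microphone b1, microphone b2 and the foot F of the perpendicular dropped from b2 onto the direct path L1, the hypotenuse is the microphone baseline D, the leg b1F is the path difference d, the angle at b2 is the auxiliary angle θ, and the angle at b1, between the baseline and the incoming direction, is the relative angle φ.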
Preferably, smart device a and smart device B may be implemented as at least one of: a smart phone; a tablet computer; a smart watch; a smart bracelet; an intelligent sound box; a smart television; an intelligent earphone; smart robots, and the like.
The first sound detection module and the second sound detection module may be arranged at a plurality of locations of the smart device. Fig. 6 is a schematic diagram of a first exemplary arrangement of a first sound detection module and a second sound detection module in a smart device according to the present invention. In fig. 6, the first sound detection module 18 and the second sound detection module 19 are respectively disposed at both ends of the smart device in the length direction, and thus the length D of the smart device can be directly determined as the distance between the first sound detection module 18 and the second sound detection module 19. Fig. 7 is a schematic diagram of a second exemplary arrangement of a first sound detection module and a second sound detection module in a smart device according to the present invention. In fig. 7, the first sound detection module 18 and the second sound detection module 19 are respectively disposed at both ends of the smart device in the width direction, and thus the width D of the smart device can be directly determined as the distance between the first sound detection module 18 and the second sound detection module 19.
The above exemplary descriptions have been provided for the arrangement of the first sound detection module and the second sound detection module in the smart device, and those skilled in the art will appreciate that such descriptions are merely exemplary and are not intended to limit the scope of the embodiments of the present invention. In fact, currently, a smart device usually has two sets of microphones, and the two sets of microphones can be applied to the embodiment of the present invention as the first sound detection module and the second sound detection module without changing the smart device in terms of hardware.
The following describes a typical example of calculating the relative angle between smart devices using ultrasound, based on an embodiment of the present invention. Fig. 8 is a schematic diagram of the relative positioning of a first smart device and a second smart device in accordance with the present invention. Fig. 10 is a flowchart illustrating an exemplary process for relative positioning between smart devices according to the present invention. Fig. 10 illustrates the respective processing paths along which the two groups of microphones detect the sound signals, in which an Analog-to-Digital Converter (ADC) is a device that converts a continuously varying analog signal into a discrete digital signal, and a band-pass filter (BPF) is a device that allows waves of a particular frequency band to pass while attenuating other frequency bands. The ultrasound-based relative direction identification between the two smart devices comprises the following steps:
the first step is as follows: the first smart device transmits a location signal in ultrasound format containing the Mac address of the smart device 1.
The second step: the two groups of microphones of the second smart device each detect the positioning signal, resolve the MAC address from their respective detected signals, and confirm, based on the MAC address, that the signals they detected originate from the same sound source.
The third step: the second smart device calculates the distance difference d between the two direct signals of the positioning signal, based on the time difference between the two direct signals detected by its two groups of microphones.
The fourth step: the second smart device computes θ = arcsin(d/D) and the incidence angle of the signal φ = 90° - θ, i.e. the relative angle between the first smart device and the second smart device, where D is the distance between the two groups of microphones in the second smart device.
The fifth step: the second smart device displays the relative angle φ on its display interface, thereby indicating to the user the relative orientation of the first smart device. For example, fig. 9 is a schematic diagram showing the relative angle in the interface of a smart device according to the present invention.
For example, assume that in the environment shown in fig. 8 the first smart device is embodied as a smart speaker and the second smart device is embodied as a smartphone.
Step one: the smart speaker transmits an ultrasonic signal, which contains the MAC address of the smart speaker and is constructed on a CDMA (code division multiple access) technical framework.
Step two: the two groups of microphone arrays of the smartphone receive the ultrasonic signal and resolve the MAC address of the smart speaker; at the same time, the smartphone resolves the distance difference d between the two direct signals at the two microphone arrays. Assume that, in the respective received signal streams stream1 and stream2 of the two microphone arrays, there is in each a direct signal whose signal strength peak is greater than the threshold T, so principle 1 is satisfied. Further assume an arrival time difference Δt between the two direct signals; the corresponding d is calculated as d = c × Δt. The distance D between the two microphone arrays is known (i.e. the handset length), assumed to be 0.145 m, and it can be seen that d < D, so principle 2 is also satisfied. Therefore, these two direct signals can be selected to calculate the relative angle, where d = 0.014 m.
Step three: the smartphone computes θ = arcsin(d/D) = arcsin(0.014/0.145) ≈ 5.6°, and then the incidence angle of the signal φ = 90° - θ ≈ 84.4°. The smartphone displays the angle of 84.4 degrees on its display screen, i.e. the smart speaker lies in the 84.4-degree direction relative to the smartphone.
Based on the above way of calculating the relative angle, the embodiment of the invention also provides a method for quickly acquiring the introduction information of an object of interest by using a smart device (such as a smartphone or a smart headset). Fig. 11 is a flowchart illustrating a first method of obtaining introduction information according to the present invention. The method shown in fig. 11 is applicable to a smart device provided with a first sound detection module and a second sound detection module. The first sound detection module and the second sound detection module are a fixed distance apart. The first and second sound detection modules may each be implemented as a microphone or a microphone array.
As shown in fig. 11, the method includes:
step 1101: the intelligent equipment detects a first sound signal which directly reaches the first sound detection module; detecting a second sound signal which directly reaches a second sound detection module; the first sound signal and the second sound signal are simultaneously sent by the same sound generating device, and the first sound signal and the second sound signal respectively contain the identification of the sound generating device.
The sound generating device may be implemented as a smart device held by another user, or as a sound-emitting unit fixed near an exhibit. The sound generating device may continuously transmit a sound signal containing the identification of the sound generating device; preferably, this sound signal is in ultrasonic format. For example, the sound generating device has previously transmitted the introduction information corresponding to its identification to a server on the network side via a wireless communication means such as 3G, 4G, 5G, Wi-Fi, Bluetooth, or infrared communication. For example, the identification of the sound generating device may be embodied as the MAC address of the sound generating device or a user identification associated with it (e.g., a mobile phone number or an instant messaging account). The introduction information may include the user's electronic business card (e.g., name, gender, age, company, position, etc.), a textual introduction of an exhibit, a picture introduction of an exhibit, an audio introduction of an exhibit, or a video introduction of an exhibit, and so on.
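The server-side association between the identification and the introduction information could look roughly like the following. This in-memory stand-in (class name, method names, and sample data are all invented for illustration) only mirrors the register-then-query flow described in the text, not an actual interface defined by the patent.

    from typing import Optional

    class IntroductionServer:
        """Minimal stand-in for the network-side server that stores introduction info."""

        def __init__(self):
            self._store = {}   # identifier (e.g. MAC address or phone number) -> info

        def register(self, identifier: str, introduction: dict) -> None:
            # Done in advance by the sound generating device over 3G/4G/5G/Wi-Fi/etc.
            self._store[identifier] = introduction

        def query(self, identifier: str) -> Optional[dict]:
            # Called by the smart device once its angle (and distance) conditions are met.
            return self._store.get(identifier)

    server = IntroductionServer()
    server.register("AA:BB:CC:DD:EE:01",
                    {"name": "Zhang San", "company": "Example Co.", "position": "Engineer"})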
Step 1102: the intelligent equipment determines the time difference between the receiving time of the first sound signal and the receiving time of the second sound signal; and determining a relative angle between the intelligent device and the sound generating device based on the distance between the first sound detection module and the second sound detection module and the time difference.
The way in which the smart device calculates this relative angle can follow the φ determination described with respect to fig. 1, where the smart device in the method of fig. 11 corresponds to the first smart device in the method of fig. 1 and the sound generating device in the method of fig. 11 corresponds to the second smart device in the method of fig. 1; the detailed calculation is not repeated here. Specifically, θ is first determined based on θ = arcsin(d/D), where arcsin is the arcsine function, d = t × c, t is the time difference, c is the propagation speed of sound, and D is the distance between the first sound detection module and the second sound detection module; the relative angle φ between the smart device and the sound generating device is then determined based on θ, where φ = 90° - θ.
Step 1103: when the relative angle is within a preset angle range, the intelligent device acquires introduction information corresponding to the identification from the server.
Preferably, the span of the predetermined angle range (i.e., the difference between the maximum and minimum angles of the range) is kept at or below a predetermined value (e.g., 15 degrees), so as to prevent false triggering caused by an excessively large angle range. For example, the angle range may be set to 0 to 15 degrees, and so on. Alternatively, the angle range may be a single value, such as zero degrees or 90 degrees. When the relative angle is within the predetermined angle range, the smart device determines that the user wishes to obtain the introduction information corresponding to the identification. At this time, the smart device acquires the introduction information corresponding to the identification from a server on the network side via 3G, 4G, 5G, Wi-Fi, Bluetooth, infrared communication, or another wireless communication means. For example, the smart device sends a query instruction containing the identification to the server, and the server retrieves the introduction information corresponding to the identification and returns it to the smart device. The smart device may then present the introduction information, for example on its display screen or by playing it back through its speaker. For instance, when the introduction information is the user's electronic business card, the business card is displayed on the display interface of the smart device; when the introduction information is an audio introduction of an exhibit, it is played using the voice playback capability of the smart device.
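A minimal sketch of this gating step (and of step 1204 below, which adds the distance check): the smart device checks whether the relative angle falls in the configured range and only then fetches the introduction information by identifier. Here `server` refers to the hypothetical stand-in sketched earlier; the default range and threshold values are assumptions.

    def maybe_fetch_introduction(server, identifier, relative_angle,
                                 angle_range=(0.0, 15.0),
                                 distance_m=None, distance_threshold_m=5.0):
        """Fetch introduction info only when the positioning conditions are met.

        relative_angle: angle (degrees) computed from the two direct sound signals.
        angle_range: predetermined angle range, kept narrow (span of 15 degrees here).
        distance_m: optional distance estimate; when given, it must also be below
                    distance_threshold_m (the variant of fig. 12).
        """
        lo, hi = angle_range
        if not (lo <= relative_angle <= hi):
            return None
        if distance_m is not None and distance_m >= distance_threshold_m:
            return None
        return server.query(identifier)   # e.g. an electronic business card or exhibit intro

    # Using the `server` instance from the sketch above.
    info = maybe_fetch_introduction(server, "AA:BB:CC:DD:EE:01", relative_angle=7.5)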
FIG. 12 is a flowchart illustrating a second method for obtaining introductory information according to the present invention. The method shown in fig. 12 is applicable to a smart device arranged with a first sound detection module and a second sound detection module. The first sound detection module and the second sound detection module have a fixed distance. The first and second sound detection modules may be implemented as microphones or microphone arrays, respectively. As shown in fig. 12, the method includes:
step 1201: the intelligent equipment detects a first sound signal which directly reaches the first sound detection module; detecting a second sound signal which directly reaches a second sound detection module; the first sound signal and the second sound signal are simultaneously sent by the same sound generating device, and the first sound signal and the second sound signal respectively contain the identification of the sound generating device.
The sound generating device may be implemented as a smart device held by another user, or as a sound-emitting unit fixed near an exhibit. The sound generating device may continuously transmit a sound signal containing the identification of the sound generating device; preferably, this sound signal is in ultrasonic format. The sound generating device transmits the introduction information corresponding to its identification to a server on the network side in advance, via a wireless communication means such as 3G, 4G, 5G, Wi-Fi, Bluetooth, or infrared communication. For example, the identification of the sound generating device may be embodied as the MAC address of the sound generating device or a user identification associated with it (e.g., a mobile phone number or an instant messaging account). The introduction information may include the user's electronic business card (e.g., name, gender, age, company, position, etc.), a textual introduction of an exhibit, a picture introduction of an exhibit, an audio introduction of an exhibit, or a video introduction of an exhibit, and so on.
Step 1202: the intelligent equipment determines the time difference between the receiving time of the first sound signal and the receiving time of the second sound signal; and determining a relative angle between the intelligent device and the sound generating device based on the distance between the first sound detection module and the second sound detection module and the time difference.
The way in which the smart device calculates this relative angle can follow the φ determination described with respect to fig. 1, where the smart device in the method of fig. 12 corresponds to the first smart device in the method of fig. 1 and the sound generating device in the method of fig. 12 corresponds to the second smart device in the method of fig. 1; the detailed calculation is not repeated here.
Step 1203: determining the distance between the smart device and the sound generating device. Here, the smart device may determine the distance to the sound generating device in a variety of ways, for example based on sound ranging (preferably ultrasonic ranging), and so on.
Example 1: the smart device maintains time synchronization with the sound generating device, and the first sound signal further contains the transmission time T1 of the first sound signal. The smart device determining the distance between the smart device and the sound generating device comprises: a controller in the smart device calculating the distance L between the smart device and the sound generating device as L = (T2 - T1) × c, where c is the propagation speed of sound in air and T2 is the reception time of the first sound signal.
Example 2: the smart device maintains time synchronization with the sound generating device, and the second sound signal further contains the transmission time T3 of the second sound signal. The smart device determining the distance between the smart device and the sound generating device comprises: a controller in the smart device calculating the distance L between the smart device and the sound generating device as L = (T4 - T3) × c, where c is the propagation speed of sound in air and T4 is the reception time of the second sound signal.
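Examples 1 and 2 reduce to one-way time-of-flight ranging under clock synchronization; a small sketch, with an assumed value for the speed of sound:

    SPEED_OF_SOUND_AIR = 343.0   # m/s, assumed value for c

    def one_way_distance(t_sent: float, t_received: float,
                         c: float = SPEED_OF_SOUND_AIR) -> float:
        """Distance L = (reception time - transmission time) x c, as in examples 1 and 2."""
        return (t_received - t_sent) * c

    # A signal stamped T1 = 10.0000 s and received at T2 = 10.0100 s is about 3.43 m away.
    print(one_way_distance(10.0000, 10.0100))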
Example 3: the distance between the smart device and the sound generating device at the point where rotation stops is determined based on the rotation angle of the smart device and the relative angle between the smart device and the sound generating device at that point. Specifically, the smart device rotates about a fixed point A from a first position point T1 to a second position point T2, and the rotation angle of the smart device is determined. While the smart device turns toward the second position point T2, it determines, based on the difference in reception time of the direct sound signals at the first sound detection module and the second sound detection module with respect to the sound generating device located at position point B, either that the relative angle between the smart device and the sound generating device has changed to zero, or that the relative angle passes through zero and then continues to change to an angle α, where α is not more than 180 degrees. The distance between the smart device and the sound generating device is then determined based on the relative angle and the rotation angle. For example, when the relative angle at the second position point T2 is zero, determining the distance between the smart device and the sound generating device based on the relative angle and the rotation angle comprises: determining, from R2, φ1 and ψ1, the distance R1 between the smart device and the sound generating device when the smart device is at the first position point T1, where R2 is the distance between the fixed point A and the smart device, φ1 is the relative angle between the smart device and the sound generating device when the smart device is at the first position point T1, and ψ1 is the rotation angle, equal to the angle T1AB. For another example, when the relative angle at the second position point T2 is α, determining the distance between the smart device and the sound generating device based on the relative angle and the rotation angle comprises: determining, from R2, φ1 and ψ1, the distance R1 between the smart device and the sound generating device when the smart device is at the second position point T2, where R2 is the distance between the fixed point A and the smart device, φ1 is the relative angle between the smart device and the sound generating device when the smart device is at the second position point T2, and ψ1 is the rotation angle, which is less than the angle T2AB.
Example 4: when the smart device moves from a first position point to a second position point without rotating, the distance between the smart device and the sound generating device at the second position point is determined based on the relative angle between the smart device and the sound generating device at the first position point and the relative angle at the second position point, the smart device at the second position point having the same orientation as at the first position point. Specifically, when the smart device is at the first position point, relative angle 1 between the smart device and the sound generating device is determined based on the difference in reception time of the direct sound signals at the first sound detection module and the second sound detection module with respect to the sound generating device; when the smart device moves to the second position point, relative angle 2 between the smart device and the sound generating device is determined in the same way, the smart device at the second position point having the same orientation as at the first position point; and the position of the smart device relative to the sound generating device is determined based on relative angle 1 and relative angle 2. Preferably, relative angle 1 is φ1 and relative angle 2 is φ2. Determining the relative position of the smart device with respect to the sound generating device based on relative angle 1 and relative angle 2 comprises determining R2 from φ1, φ2, c and ΔT, where R2 is the distance between the second position point and the sound generating device; c is the propagation speed of sound; and ΔT is the difference between the detection time, within a detection time window, of the sound signal directly reaching the first sound detection module when the device is at the first position and the detection time, within a detection time window, of the sound signal directly reaching the first sound detection module when the device is at the second position, or the corresponding difference for the second sound detection module at the first and second positions.
The above describes exemplary ways in which the smart device may calculate the distance to the sound generating device. Those skilled in the art will appreciate that this description is merely exemplary and is not intended to limit the scope of embodiments of the present invention; for example, the smart device may also determine the distance to the sound generating device by infrared ranging, Bluetooth ranging, non-time-synchronized ultrasonic ranging, and the like.
Step 1204: and when the relative angle is within a preset angle range and the distance is smaller than a preset threshold value, obtaining the introduction information corresponding to the identification from the server.
Preferably, the span of the predetermined angle range (i.e., the difference between the maximum and minimum angles of the range) is kept at or below a predetermined value (e.g., 15 degrees), so as to prevent false triggering caused by an excessively large angle range. For example, the angle range may be set to 0 to 15 degrees, and so on. Alternatively, the angle range may be a single value, such as zero degrees or 90 degrees. When the relative angle is within the predetermined angle range and the distance is less than a predetermined threshold value (e.g., 5 meters), the smart device determines that the user wishes to obtain the introduction information corresponding to the identification. At this time, the smart device acquires the introduction information corresponding to the identification from a server on the network side via 3G, 4G, 5G, Wi-Fi, Bluetooth, infrared communication, or another wireless communication means. For example, the smart device sends a query instruction containing the identification to the server, and the server retrieves the introduction information corresponding to the identification and returns it to the smart device. The smart device may then present the introduction information, for example on its display screen or by playing it back through its speaker. For instance, when the introduction information is the user's electronic business card, the business card is displayed on the display interface of the smart device; when the introduction information is an audio introduction of an exhibit, it is played using the voice playback capability of the smart device.
Typical application examples of embodiments of the present invention are described below. 1. At an exhibition, an exhibitor can associate their own mobile phone identification (such as a mobile phone number) with their own electronic business card and store the electronic business card information and the mobile phone identification correspondingly on the server. When a visitor is interested in the exhibitor, the visitor's phone can quickly acquire the exhibitor's electronic business card from the server by determining the distance and relative angle between the visitor's phone and the exhibitor's phone. 2. At a conference, participant A can associate their own mobile phone identification (such as a mobile phone number) with their own electronic business card information and store both correspondingly on the server. When participant B is interested in participant A, B's phone can quickly acquire A's electronic business card from the server by determining the distance and relative angle between the two phones, so that participants can quickly get to know each other and socialize efficiently. 3. In a scenic spot or a museum, a sound generating device is fixed near an exhibit, its identification is associated with the exhibit introduction information, and the identification and the introduction information are stored correspondingly on the server. When the relative angle between the visitor and the exhibit reaches the predetermined range and the distance is smaller than the threshold, the exhibit introduction information is automatically acquired from the server and played, creating the effect of a private tour guide and improving the degree of intelligence.
Fig. 13 is a first structural diagram of the smart device of the present invention. The smart device includes: a first sound detection module for detecting a first sound signal that directly reaches the first sound detection module; a second sound detection module for detecting a second sound signal that directly reaches the second sound detection module, the first sound signal and the second sound signal being sent simultaneously by the same sound generating device and each containing the identification of the sound generating device; an angle determining module for determining a time difference between the reception time of the first sound signal and the reception time of the second sound signal, and determining a relative angle between the smart device and the sound generating device based on the time difference and the distance between the first sound detection module and the second sound detection module; and an introduction information acquisition module for acquiring, from the server, the introduction information corresponding to the identification when the relative angle is within a predetermined angle range. In one embodiment, the angle determination module is configured to determine θ according to
θ = arcsin(d / D),
wherein arcsin is the arcsine function, d = t × c, t is the time difference, c is the propagation speed of sound, and D is the distance between the first sound detection module and the second sound detection module; and to determine the relative angle between the smart device and the sound generating device from θ (the explicit θ-to-relative-angle expression appears only as equation images in the original publication).
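A minimal sketch of this angle computation, assuming θ = arcsin(d / D) as above; the final mapping from θ to the relative angle (taken here as 90° − θ) is an assumption, since the original expression is given only as an equation image.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def relative_angle(time_diff_s: float, module_spacing_m: float) -> float:
    """Angle of arrival estimated from the time difference of arrival (TDOA)
    between the first and second sound detection modules."""
    d = time_diff_s * SPEED_OF_SOUND                    # path-length difference, d = t * c
    ratio = max(-1.0, min(1.0, d / module_spacing_m))   # keep arcsin's argument in [-1, 1]
    theta = math.degrees(math.asin(ratio))              # theta = arcsin(d / D), in [-90, 90] degrees
    # Mapping theta to the relative angle is an assumption (90 - theta puts a
    # broadside source at 90 degrees); the filing defines it via equation images.
    return 90.0 - theta

# Zero time difference means the source is broadside to the two modules.
print(relative_angle(time_diff_s=0.0, module_spacing_m=0.15))  # -> 90.0
```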
In one embodiment, the introduction information includes a user's electronic business card, a textual introduction to an exhibit, an image introduction to an exhibit, an audio introduction to an exhibit, or a video introduction to an exhibit, among others.
Fig. 14 is a second structural diagram of the smart device of the present invention. The smart device includes: a first sound detection module for detecting a first sound signal that directly reaches the first sound detection module; a second sound detection module for detecting a second sound signal that directly reaches the second sound detection module, the first sound signal and the second sound signal being sent simultaneously by the same sound generating device and each containing the identification of the sound generating device; an angle determining module for determining a time difference between the reception time of the first sound signal and the reception time of the second sound signal, and determining a relative angle between the smart device and the sound generating device based on the time difference and the distance between the first sound detection module and the second sound detection module; a distance determining module for determining the distance between the smart device and the sound generating device; and an introduction information acquisition module for acquiring, from the server, the introduction information corresponding to the identification when the relative angle is within a predetermined angle range and the distance is less than a predetermined threshold. In one embodiment, the smart device is time-synchronized with the sound generating device and the first sound signal further contains its transmission time T1, and the distance determining module is configured to determine the distance L between the smart device and the sound generating device as L = (T2 - T1) × c, where c is the propagation speed of sound in air and T2 is the reception time of the first sound signal. In one embodiment, the smart device is time-synchronized with the sound generating device and the second sound signal further contains its transmission time T3, and the distance determining module is configured to determine the distance L between the smart device and the sound generating device as L = (T4 - T3) × c, where c is the propagation speed of sound in air and T4 is the reception time of the second sound signal. In one embodiment, the distance determining module is configured to determine the distance between the smart device and the sound generating device at a rotation stop point based on the rotation angle of the smart device and the relative angle between the smart device and the sound generating device at the rotation stop point. In one embodiment, when the smart device moves from a first location point to a second location point without rotating, the distance determining module is configured to determine the distance between the smart device and the sound generating device at the second location point based on the relative angle between the smart device and the sound generating device at the first location point and the relative angle at the second location point, the smart device at the second location point facing the same direction as at the first location point. Preferably, the smart device may include a smart phone, a tablet computer, a smart watch, a smart bracelet, a smart headset, or the like.
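A minimal sketch of the time-of-flight distance computation L = (T2 - T1) × c under the stated time-synchronization assumption; the sample times below are illustrative.

```python
SPEED_OF_SOUND = 343.0  # m/s in air; the description denotes this by c

def tof_distance(send_time_s: float, receive_time_s: float) -> float:
    """Distance L = (T2 - T1) * c, assuming the smart device and the
    sound generating device share a synchronized clock."""
    return (receive_time_s - send_time_s) * SPEED_OF_SOUND

# A signal stamped T1 = 0.000 s and received at T2 = 0.010 s is about 3.43 m away.
print(tof_distance(0.000, 0.010))
```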
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process implemented in the above embodiments of the present invention, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A smart device, comprising:
a first sound detection module for detecting a first sound signal that reaches the first sound detection module directly;
a second sound detection module for detecting a second sound signal that directly reaches the second sound detection module; the first sound signal and the second sound signal are simultaneously sent by the same sound generating device, and the first sound signal and the second sound signal respectively contain the identification of the sound generating device;
an angle determining module, configured to determine a time difference between a receiving time of the first sound signal and a receiving time of the second sound signal; determining a relative angle between the smart device and the sound generating device based on a distance between the first sound detection module and the second sound detection module and the time difference;
and the introduction information acquisition module is used for acquiring the introduction information corresponding to the identifier from the server when the relative angle is within a preset angle range.
2. The smart device of claim 1,
the angle determining module is configured to determine θ according to
θ = arcsin(d / D),
wherein arcsin is the arcsine function, d = t × c, t is the time difference, c is the propagation speed of sound, and D is the distance between the first sound detection module and the second sound detection module; and to determine the relative angle between the smart device and the sound generating device from θ (expression given as equation images in the original).
3. The smart device of claim 1,
the introduction information comprises a user's electronic business card, a text introduction to an exhibit, an image introduction to an exhibit, an audio introduction to an exhibit, or a video introduction to an exhibit.
4. A smart device, comprising:
a first sound detection module for detecting a first sound signal that reaches the first sound detection module directly;
a second sound detection module for detecting a second sound signal that directly reaches the second sound detection module; the first sound signal and the second sound signal are simultaneously sent by the same sound generating device, and the first sound signal and the second sound signal respectively contain the identification of the sound generating device;
an angle determining module, configured to determine a time difference between a receiving time of the first sound signal and a receiving time of the second sound signal; determining a relative angle between the smart device and the sound generating device based on a distance between the first sound detection module and the second sound detection module and the time difference;
the distance determining module is used for determining the distance between the intelligent device and the sound generating device;
and the introduction information acquisition module is used for acquiring the introduction information corresponding to the identifier from the server when the relative angle is within a preset angle range and the distance is less than a preset threshold value.
5. The smart device of claim 4,
the intelligent device and the sound generating device keep time synchronization, the first sound signal further comprises a sending time T1 of the first sound signal, and the distance determining module is used for determining the distance L between the intelligent device and the sound generating device; wherein L ═ (T2-T1) xc; c is the speed of sound propagation in air; t2 is the reception time of the first sound signal; or
The smart device and the sound generating device keep time synchronization, the second sound signal further comprises a sending time T3 of the second sound signal, and the distance determining module is used for determining the distance L between the smart device and the sound generating device, wherein L = (T4 - T3) × c, c is the propagation speed of sound in air, and T4 is the reception time of the second sound signal;
or
The distance determining module is used for determining the distance between the intelligent device and the sound generating device at the rotation stopping point based on the rotation angle of the intelligent device and the relative angle between the intelligent device and the sound generating device at the rotation stopping point; or
A distance determining module for determining a distance between the smart device and the sound emitting device at a second location point based on a relative angle between the smart device and the sound emitting device at the first location point and a relative angle between the smart device and the sound emitting device at the second location point when the smart device non-rotatably moves from the first location point to the second location point, wherein the smart device at the second location point is in the same direction as the smart device at the first location point.
6. A method for obtaining introduction information, the method being applied to an intelligent device including a first sound detection module and a second sound detection module, the method comprising:
detecting a first sound signal which directly reaches a first sound detection module; detecting a second sound signal which directly reaches a second sound detection module; the first sound signal and the second sound signal are simultaneously sent by the same sound generating device, and the first sound signal and the second sound signal respectively contain the identification of the sound generating device;
determining a time difference between a reception time of the first sound signal and a reception time of the second sound signal; determining a relative angle between the smart device and the sound generating device based on a distance between the first sound detection module and the second sound detection module and the time difference;
when the relative angle is within a predetermined angle range, the introduction information corresponding to the identification is acquired from the server.
7. A method of obtaining introductory information according to claim 6,
determining the relative angle between the smart device and the sound emitting device includes:
θ is determined according to
θ = arcsin(d / D),
wherein arcsin is the arcsine function, d = t × c, t is the time difference, c is the propagation speed of sound, and D is the distance between the first sound detection module and the second sound detection module; and the relative angle between the smart device and the sound generating device is determined from θ (expression given as equation images in the original).
8. A method for obtaining introduction information, the method being applied to an intelligent device including a first sound detection module and a second sound detection module, the method comprising:
detecting a first sound signal that directly reaches the first sound detection module; detecting a second sound signal that directly reaches the second sound detection module; the first sound signal and the second sound signal are simultaneously sent by the same sound generating device, and the first sound signal and the second sound signal respectively contain the identification of the sound generating device;
determining a time difference between a reception time of the first sound signal and a reception time of the second sound signal; determining a relative angle between the smart device and the sound generating device based on a distance between the first sound detection module and the second sound detection module and the time difference;
determining a distance between a smart device and the sound generating device;
and when the relative angle is within a preset angle range and the distance is smaller than a preset threshold value, obtaining the introduction information corresponding to the identification from the server.
9. A method of obtaining introductory information according to claim 8,
the smart device is time-synchronized with the sound generating device, the first sound signal further comprises a transmission time T1 of the first sound signal, wherein determining the distance between the smart device and the sound generating device comprises: determining a distance L between a smart device and the sound generating device; wherein L ═ (T2-T1) xc; c is the speed of sound propagation in air; t2 is the reception time of the first sound signal; or
The smart device is time-synchronized with the sound generating device, the second sound signal further comprises a transmission time T3 of the second sound signal, wherein determining the distance between the smart device and the sound generating device comprises: determining a distance L between the smart device and the sound generating device, wherein L = (T4 - T3) × c, c is the propagation speed of sound in air, and T4 is the reception time of the second sound signal; or
Determining a distance between a smart device and the sound emitting device comprises: determining the distance between the intelligent device and the sound generating device at the rotation stop point based on the rotation angle of the intelligent device and the relative angle between the intelligent device and the sound generating device at the rotation stop point; or
Determining a distance between a smart device and the sound emitting device comprises: when the smart device is non-rotatably moved from a first location point to a second location point, a distance between the smart device and the sound emitting device at the second location point is determined based on a relative angle between the smart device and the sound emitting device at the first location point and a relative angle between the smart device and the sound emitting device at the second location point, wherein the smart device at the second location point is in the same direction as the smart device at the first location point.
10. A computer-readable storage medium having computer-readable instructions stored thereon for performing the method of obtaining introductory information according to any one of claims 6-9.
CN202010402833.6A 2020-05-13 2020-05-13 Method for obtaining introduction information and intelligent equipment Withdrawn CN112099754A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010402833.6A CN112099754A (en) 2020-05-13 2020-05-13 Method for obtaining introduction information and intelligent equipment

Publications (1)

Publication Number Publication Date
CN112099754A (en) 2020-12-18

Family

ID=73750088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010402833.6A Withdrawn CN112099754A (en) 2020-05-13 2020-05-13 Method for obtaining introduction information and intelligent equipment

Country Status (1)

Country Link
CN (1) CN112099754A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022160986A1 (en) * 2021-01-30 2022-08-04 华为技术有限公司 Angle determination method, electronic device, and chip system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060277571A1 (en) * 2002-07-27 2006-12-07 Sony Computer Entertainment Inc. Computer image and audio processing of intensity and input devices for interfacing with a computer program
CN101201399A (en) * 2007-12-18 2008-06-18 北京中星微电子有限公司 Sound localization method and system
CN107977907A (en) * 2017-11-20 2018-05-01 珠海市魅族科技有限公司 Sight spot detail information inspection method, device, computer installation, storage medium
CN108446015A (en) * 2018-01-30 2018-08-24 浙江凡聚科技有限公司 Exhibition exhibiting method based on mixed reality and exhibition system

Similar Documents

Publication Publication Date Title
KR100974044B1 (en) Distance measurement system, distance measurement method, information processing device, program, and recording medium
US9554091B1 (en) Identifying conference participants and active talkers at a video conference endpoint using user devices
US8130978B2 (en) Dynamic switching of microphone inputs for identification of a direction of a source of speech sounds
US9319532B2 (en) Acoustic echo cancellation for audio system with bring your own devices (BYOD)
US8717402B2 (en) Satellite microphone array for video conferencing
CN106375902A (en) Audio enhancement via opportunistic use of microphones
US11706348B2 (en) Systems and methods for providing headset voice control to employees in quick-service restaurants
Uddin et al. RF-Beep: A light ranging scheme for smart devices
CN112099754A (en) Method for obtaining introduction information and intelligent equipment
CN112098943A (en) Positioning method of wearable device and intelligent device
CN112098936A (en) Method for positioning intelligent equipment and intelligent equipment
US20160275960A1 (en) Voice enhancement method
US10362397B2 (en) Voice enhancement method for distributed system
CN112098935A (en) Method for searching intelligent equipment and intelligent equipment
CN112098948A (en) Indoor positioning method and intelligent equipment
CN104185131A (en) communication system and transmission method thereof
CN112098930A (en) Method for searching vehicle and intelligent equipment
CN112596028A (en) Voting device, voting method and computer readable storage medium
CN112098949B (en) Method and device for positioning intelligent equipment
CN112098950B (en) Method and device for positioning intelligent equipment
CN112098937B (en) Positioning method of intelligent equipment and intelligent equipment
CN112098942B (en) Positioning method of intelligent equipment and intelligent equipment
CN112100527B (en) Method and device for displaying intelligent equipment
CN112100526B (en) Method and device for identifying intelligent equipment
CN112098944A (en) Intelligent device positioning method and intelligent device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20201218