WO2022088435A1 - Method and apparatus for generating a control instruction - Google Patents
Method and apparatus for generating a control instruction
- Publication number
- WO2022088435A1 (PCT/CN2020/137438)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/30003—Arrangements for executing specific machine instructions
- G06F9/30076—Arrangements for executing specific machine instructions to perform miscellaneous control operations, e.g. NOP
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S3/00—Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
- G01S3/80—Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using ultrasonic, sonic or infrasonic waves
- G01S3/802—Systems for determining direction or deviation from predetermined direction
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S3/00—Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
- G01S3/80—Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using ultrasonic, sonic or infrasonic waves
- G01S3/802—Systems for determining direction or deviation from predetermined direction
- G01S3/808—Systems for determining direction or deviation from predetermined direction using transducers spaced apart and measuring phase or time difference between signals therefrom, i.e. path-difference systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/163—Wearable computers, e.g. on a belt
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/014—Hand-worn input/output arrangements, e.g. data gloves
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
Definitions
- Embodiments of the present invention relate to the technical field of information equipment, and more particularly, to a method and apparatus for generating a control instruction.
- The human-computer interaction process is essentially a process of input and output: people input instructions to the computer in various ways, and the computer presents the output results to the user after processing.
- The forms of input and output between people and computers are diverse, so the forms of interaction are also diverse.
- These interaction methods usually rely on the user's trigger actions on the display interface.
- the embodiments of the present invention provide a method and an apparatus for generating a control command.
- A method for generating a control instruction, comprising: generating a control instruction when it is determined that the relative angle between a wearable device worn by a user and a smart device has changed, wherein the relative angle is determined based on the respective detection operations, by a first sound detection module and a second sound detection module included in the wearable device, of the sound signal sent from the smart device; and sending the control instruction to the smart device, so that the smart device executes the control instruction.
- The wearable device is a head-wearable device adapted to be worn on the head, and the head-wearable device includes a smart earphone or smart glasses.
- The control instruction includes at least one of the following: an instruction for switching pictures; an instruction for switching articles; an instruction for switching video; an instruction for switching audio; an instruction for switching perspectives; an instruction for switching interfaces.
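The claims above describe generating an instruction whenever the relative angle changes, and sending it to the smart device for execution. A minimal sketch of that decision logic might look as follows; the threshold value and the instruction names (`switch_next`, `switch_previous`) are hypothetical illustrations, not terms taken from the patent:

```python
ANGLE_CHANGE_THRESHOLD_DEG = 10.0  # hypothetical tuning value


def maybe_generate_instruction(prev_angle_deg, curr_angle_deg):
    """Generate a switching instruction only when the relative angle between
    the wearable device and the smart device has changed noticeably; the
    direction of the change selects which instruction to send."""
    delta = curr_angle_deg - prev_angle_deg
    if abs(delta) < ANGLE_CHANGE_THRESHOLD_DEG:
        return None  # no meaningful change -> no instruction
    return "switch_next" if delta > 0 else "switch_previous"
```

An instruction returned here would then be sent over the communication link so that the smart device can execute the corresponding operation.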
- An apparatus for generating a control instruction, comprising: a generating module configured to generate a control instruction when it is determined that the relative angle between a wearable device worn by a user and a smart device has changed, wherein the relative angle is determined based on the respective detection operations, by the first sound detection module and the second sound detection module included in the wearable device, of the sound signal sent from the smart device;
- the sending module is configured to send the control instruction to the smart device, so that the smart device executes the control instruction.
- A method for generating a control instruction, comprising: generating a control instruction when it is determined that the relative angle between a smart device and a wearable device worn by a user has changed, wherein the relative angle is determined based on the respective detection operations, by a first sound detection module and a second sound detection module included in the smart device, of the sound signal sent from the wearable device; and performing, in the smart device, the operation corresponding to the control instruction in response to the control instruction.
- the wearable device is a head wearable device adapted to be worn on the head, and the head wearable device includes a smart earphone or smart glasses.
- The control instruction includes at least one of the following: an instruction for switching pictures; an instruction for switching articles; an instruction for switching video; an instruction for switching audio; an instruction for switching perspectives; an instruction for switching interfaces.
- An apparatus for generating a control instruction, comprising: a generating module configured to generate a control instruction when it is determined that the relative angle between a smart device and a wearable device worn by a user has changed, wherein the relative angle is determined based on the respective detection operations, by the first sound detection module and the second sound detection module included in the smart device, of the sound signal sent from the wearable device; and an execution module configured to perform, in response to the control instruction, the operation corresponding to the control instruction in the smart device.
- A method for switching songs, comprising: generating a song switching instruction when it is determined that the relative angle between a wearable device worn by a user and a smart device has changed, wherein the relative angle is determined based on the respective detection operations, by a first sound detection module and a second sound detection module included in the wearable device, of the sound signal sent from the smart device; and sending the song switching instruction to the smart device, so that the smart device responds to the song switching instruction by performing a song switching operation.
- A device for switching songs, comprising: an instruction generation module configured to generate a song switching instruction when it is determined that the relative angle between a wearable device worn by a user and a smart device has changed, wherein the relative angle is determined based on the respective detection operations, by the first sound detection module and the second sound detection module included in the wearable device, of the sound signal sent from the smart device; and an instruction sending module configured to send the song switching instruction to the smart device, so that the smart device responds to the song switching instruction by performing a song switching operation.
- A wearable device, comprising: a first sound detection module; a second sound detection module; and a control module configured to generate a song switching instruction when it is determined that the relative angle between the wearable device worn by a user and a smart device has changed, wherein the relative angle is determined based on the respective detection operations, by the first sound detection module and the second sound detection module included in the wearable device, of the sound signal sent from the smart device; the song switching instruction is sent to the smart device, so that the smart device responds to the song switching instruction by performing a song switching operation.
- A method for switching songs, comprising: generating a song switching instruction when it is determined that the relative angle between a smart device and a wearable device worn by a user has changed, wherein the relative angle is determined based on the respective detection operations, by a first sound detection module and a second sound detection module included in the smart device, of the sound signal sent from the wearable device; and performing the song switching operation in the smart device in response to the song switching instruction.
- A device for switching songs, comprising: an instruction generation module configured to generate a song switching instruction when it is determined that the relative angle between a smart device and a wearable device worn by a user has changed, wherein the relative angle is determined based on the respective detection operations, by the first sound detection module and the second sound detection module included in the smart device, of the sound signal sent from the wearable device; and a song switching module configured to perform the song switching operation in the smart device in response to the song switching instruction.
- A smart device, comprising: a first sound detection module; a second sound detection module; a control module configured to generate a song switching instruction when it is determined that the relative angle between the smart device and a wearable device worn by a user has changed, wherein the relative angle is determined based on the respective detection operations, by the first sound detection module and the second sound detection module included in the smart device, of the sound signal sent from the wearable device; and a song switching module configured to perform the song switching operation in the smart device in response to the song switching instruction.
- A virtual house viewing method, comprising: generating a viewing angle change instruction when it is determined that the relative angle between a wearable device worn by a user and a smart device has changed, wherein the relative angle is determined based on the respective detection operations, by a first sound detection module and a second sound detection module included in the wearable device, of the sound signal sent from the smart device; and sending the viewing angle change instruction to the smart device, so that the smart device adjusts, based on the viewing angle change instruction, the viewing angle for displaying the panoramic image of the house in its display interface.
- A virtual house viewing device, comprising: an instruction generation module configured to generate a viewing angle change instruction when it is determined that the relative angle between a wearable device worn by a user and a smart device has changed, wherein the relative angle is determined based on the respective detection operations, by the first sound detection module and the second sound detection module included in the wearable device, of the sound signal sent from the smart device; and an instruction sending module configured to send the viewing angle change instruction to the smart device, so that the smart device adjusts, based on the viewing angle change instruction, the viewing angle for displaying the panoramic image of the house in the display interface of the smart device.
- A wearable device, comprising: a first sound detection module; a second sound detection module; a control module configured to generate a viewing angle change instruction when it is determined that the relative angle between the wearable device and the smart device has changed, wherein the relative angle is determined based on the respective detection operations, by the first sound detection module and the second sound detection module, of the sound signal sent from the smart device; and a communication module configured to send the viewing angle change instruction to the smart device, so that the smart device can adjust, based on the viewing angle change instruction, the viewing angle for displaying the panoramic image of the house in its display interface.
- A virtual house viewing method, comprising: generating a viewing angle change instruction when it is determined that the relative angle between a smart device and a wearable device worn by a user has changed, wherein the relative angle is determined based on the respective detection operations, by a first sound detection module and a second sound detection module included in the smart device, of the sound signal sent from the wearable device; and adjusting, based on the viewing angle change instruction, the viewing angle for displaying the panoramic image of the house in the display interface of the smart device.
- A virtual house viewing device, comprising: an instruction generation module configured to generate a viewing angle change instruction when it is determined that the relative angle between a smart device and a wearable device worn by a user has changed, wherein the relative angle is determined based on the respective detection operations, by the first sound detection module and the second sound detection module included in the smart device, of the sound signal sent from the wearable device; and a viewing angle adjustment module configured to adjust, based on the viewing angle change instruction, the viewing angle for displaying the panoramic image of the house in the display interface of the smart device.
- A smart device, comprising: a first sound detection module; a second sound detection module; a control module configured to generate a viewing angle change instruction when it is determined that the relative angle between the smart device and a wearable device worn by a user has changed, wherein the relative angle is determined based on the respective detection operations, by the first sound detection module and the second sound detection module included in the smart device, of the sound signal sent from the wearable device; and a viewing angle adjustment module configured to adjust, based on the viewing angle change instruction, the viewing angle for displaying the panoramic image of the house in the display interface of the smart device.
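For the virtual house viewing case, the adjustment step reduces to panning the panorama's viewing angle by the measured change in the relative angle. A minimal sketch under the assumption of a simple proportional mapping follows; the `gain` factor and the degree-based wrap-around are illustrative choices, not details from the patent:

```python
def adjust_view_angle(view_angle_deg, relative_angle_change_deg, gain=1.0):
    """Pan the viewing angle of the house panorama in proportion to the
    change in the relative angle, wrapping the result into [0, 360)."""
    return (view_angle_deg + gain * relative_angle_change_deg) % 360.0
```

Tilting or panning the head left or right then changes the relative angle, which in turn pans the displayed panorama in the corresponding direction.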
- A computer-readable storage medium in which computer-readable instructions are stored, the computer-readable instructions being used to execute the above-mentioned method for generating a control instruction, the above-mentioned method for switching songs, or the above-mentioned virtual house viewing method.
- The present invention can conveniently generate control instructions without requiring the user to perform trigger operations on the display interface, thereby providing a brand-new interactive control mode.
- FIG. 1 is an exemplary flowchart of a method for determining a relative angle between smart devices according to the present invention.
- FIG. 2 is a schematic diagram of the principle of determining relative angles between smart devices according to the present invention.
- FIG. 3 is a schematic diagram of the calculation principle of the relative angle between the smart devices of the present invention.
- FIG. 4 is a first exemplary schematic diagram of determining a pair of direct signals according to the present invention.
- FIG. 5 is a second exemplary schematic diagram of determining a pair of direct signals according to the present invention.
- FIG. 6 is a schematic diagram of a first exemplary arrangement of the first sound detection module and the second sound detection module of the present invention in a smart device.
- FIG. 7 is a schematic diagram of a second exemplary arrangement of the first sound detection module and the second sound detection module in the smart device of the present invention.
- FIG. 8 is a schematic diagram of relative positioning of the first smart device and the second smart device according to the present invention.
- FIG. 9 is a schematic diagram showing relative angles in a smart device interface according to the present invention.
- FIG. 10 is an exemplary process flow chart of relative positioning between smart devices according to the present invention.
- FIG. 11 is an exemplary flowchart of a method for generating a control instruction according to the present invention.
- FIG. 12 is a flowchart of a first exemplary method of switching songs of the present invention.
- FIG. 13 is a schematic diagram of the user wearing the smart earphone in the initial position according to the present invention.
- FIG. 14 is a schematic diagram of a user wearing a smart headset tilting his head to the left according to the present invention.
- FIG. 15 is a schematic diagram of a user wearing a smart headset tilting his head to the right according to the present invention.
- FIG. 16 is a schematic diagram of the user wearing the smart bracelet in the initial position according to the present invention.
- FIG. 17 is a schematic diagram of the user wearing the smart bracelet and turning his arm to the left according to the present invention.
- FIG. 18 is a schematic diagram of the user wearing the smart bracelet and turning his arm to the right according to the present invention.
- FIG. 19 is a structural diagram of a first exemplary apparatus for switching songs according to the present invention.
- FIG. 20 is a flowchart of a second exemplary method of switching songs of the present invention.
- FIG. 21 is a schematic diagram of the user wearing the smart earphone in the initial position according to the present invention.
- FIG. 22 is a schematic diagram of the present invention when the user wears the smart earphone and pans to the left.
- FIG. 23 is a schematic diagram of the present invention when the user wears the smart earphone and pans to the right.
- FIG. 24 is a schematic diagram of the user wearing the bracelet in the initial position according to the present invention.
- FIG. 25 is a schematic diagram of the present invention when the user wears the wristband and pans to the left.
- FIG. 26 is a schematic diagram of the present invention when the user wears the bracelet and translates to the right.
- FIG. 27 is a structural diagram of a second exemplary apparatus for switching songs according to the present invention.
- FIG. 28 is a first exemplary flowchart of the virtual house viewing method of the present invention.
- FIG. 29 is a schematic diagram of the user wearing the smart earphone in the initial position according to the present invention.
- FIG. 30 is a schematic diagram of a user wearing a smart headset tilting his head to the left according to the present invention.
- FIG. 31 is a schematic diagram of a user wearing a smart headset tilting his head to the right according to the present invention.
- FIG. 32 is a first exemplary structural diagram of the virtual house viewing device of the present invention.
- FIG. 33 is a second exemplary flowchart of the virtual house viewing method of the present invention.
- FIG. 34 is a schematic diagram of the user wearing the smart earphone in the initial position according to the present invention.
- FIG. 35 is a schematic diagram of the present invention when the user wears the smart headset and pans to the left.
- FIG. 36 is a schematic diagram of the present invention when the user wears the smart headset and pans to the right.
- FIG. 37 is a second exemplary structural diagram of the virtual house viewing device of the present invention.
- The embodiments of the invention propose a sound-based (preferably ultrasound-based) scheme for recognizing the relative direction between smart devices: without additional hardware, the relative direction between two smart devices can be recognized in software, and the positioning result is accurate and reliable.
- FIG. 1 is an exemplary flowchart of a method for determining a relative angle between smart devices according to the present invention.
- the method is applicable to a first smart device, and the first smart device includes a first sound detection module and a second sound detection module.
- the first sound detection module and the second sound detection module are fixedly installed in the first smart device.
- The first sound detection module may be implemented as a microphone, or as a microphone array, arranged in the first smart device.
- The second sound detection module may be implemented as a microphone or a microphone array arranged in the first smart device, distinct from the first sound detection module.
- the method includes:
- Step 101: Enable the first sound detection module to detect the first sound signal that is sent by the second smart device and reaches the first sound detection module, and enable the second sound detection module to detect the second sound signal that is sent by the second smart device and reaches the second sound detection module, wherein the first sound signal and the second sound signal are sent out simultaneously by the second smart device.
- the second smart device can send out one sound signal or a plurality of sound signals at the same time.
- The first sound detection module and the second sound detection module in the first smart device each detect the sound signal.
- The detection signal, detected by the first sound detection module, of the sound signal that directly reaches the first sound detection module is determined as the first sound signal, and the detection signal, detected by the second sound detection module, of the sound signal that directly reaches the second sound detection module is determined as the second sound signal.
- the second smart device sends out multiple sound signals at the same time, for example, sends out an ultrasonic signal and an audible sound signal.
- The first sound detection module in the first smart device is adapted to detect ultrasonic signals, and the second sound detection module is adapted to detect audible sound signals.
- Accordingly, the first sound detection module detects the ultrasonic signal, and the second sound detection module detects the audible sound signal.
- The detection signal, detected by the first sound detection module, of the ultrasonic signal that directly reaches the first sound detection module is determined as the first sound signal, and the detection signal, detected by the second sound detection module, of the audible sound signal that directly reaches the second sound detection module is determined as the second sound signal.
- the first sound signal and the second sound signal may be the respective detection signals of the first sound detection module and the second sound detection module for the same sound signal sent by the second smart device.
- the first sound signal and the second sound signal may be the respective detection signals of the first sound detection module and the second sound detection module for different sound signals simultaneously emitted by the second smart device.
- Step 102: Determine the time difference between the time when the first sound signal is received and the time when the second sound signal is received.
- the first smart device (for example, the CPU in the first smart device) may record the reception time of the first sound signal and the reception time of the second sound signal, and calculate the time difference between the two.
- Step 103: Based on the distance between the first sound detection module and the second sound detection module and on the time difference, determine the relative angle between the first smart device and the second smart device.
- step 103 may be performed by the CPU of the first smart device.
- The value of the time difference determined in step 102 may be a positive number or a negative number.
- When the value of the time difference is positive, the reception time of the first sound signal is later than that of the second sound signal, so the relative angle θ between the first smart device and the second smart device is usually an acute angle; when the value of the time difference is negative, the reception time of the first sound signal is earlier than that of the second sound signal, so the relative angle θ is usually an obtuse angle.
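The relation behind step 103 is the standard far-field time-difference-of-arrival geometry: the path difference equals the speed of sound times the time difference, and its ratio to the module spacing gives the cosine of the relative angle. The sketch below assumes that textbook formulation (cos θ = c·Δt/d) rather than quoting a formula from the patent:

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 °C (assumed)


def relative_angle(time_diff_s, module_spacing_m):
    """Relative angle (radians) from the arrival-time difference of the two
    direct signals, with time_diff_s = t_first_module - t_second_module.
    A positive difference yields an acute angle, a negative one an obtuse
    angle, matching the discussion above."""
    path_diff_m = SPEED_OF_SOUND_M_S * time_diff_s
    # Clamp against measurement noise pushing the ratio outside [-1, 1].
    cos_theta = max(-1.0, min(1.0, path_diff_m / module_spacing_m))
    return math.acos(cos_theta)
```

A zero time difference corresponds to a right angle: the sound source lies on the perpendicular bisector of the two modules.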
- The first sound signal is the direct signal from the second smart device to the first sound detection module, and the second sound signal is the direct signal from the second smart device to the second sound detection module.
- However, both the first sound detection module and the second sound detection module may also receive non-direct signals from the second smart device (for example, signals that have undergone one or more reflections from obstacles). Therefore, how to determine the direct signal among the multiple received signals is an important problem.
- The received signal stream of each sound detection module includes a direct channel and reflected channels.
- the direct channel can be simply and conveniently determined according to the following principle: among all the signals detected by the sound detection module, the signal strength of the direct channel is generally the strongest.
- The method further includes: determining, as the first sound signal, a sound signal whose intensity is greater than a predetermined threshold within a predetermined time window in the sound signal stream that the first sound detection module receives from the second smart device; and determining, as the second sound signal, a sound signal whose intensity is greater than the predetermined threshold within the predetermined time window in the sound signal stream that the second sound detection module receives from the second smart device.
- FIG. 4 is a first exemplary schematic diagram of determining a pair of direct signals according to the present invention.
- The sound signal stream detected by the first sound detection module is stream1; stream1 includes a plurality of pulse signals that vary along time (t), and the predetermined signal strength threshold is T. It can be seen that within the range of the time window 90, the signal strength of the pulse signal 50 in stream1 is greater than the threshold T.
- The sound signal stream detected by the second sound detection module is stream2; stream2 also includes a plurality of pulse signals that vary along time (t), with the same threshold T. Within the range of the time window 90, the signal strength of the pulse signal 60 in stream2 is greater than the threshold T. Therefore, the pulse signal 50 is determined to be the first sound signal, and the pulse signal 60 is determined to be the second sound signal.
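The first discrimination approach — pick the strongest pulse above a threshold inside a predetermined time window — can be sketched as follows; representing a detected pulse as a `(time_s, strength)` tuple is an assumption made for illustration:

```python
def pick_direct_pulse(pulses, threshold, window):
    """Select the direct signal from a detected stream: among the pulses whose
    arrival time lies inside `window` (a (start_s, end_s) pair) and whose
    strength exceeds `threshold`, return the strongest, or None if there is
    no such pulse."""
    start_s, end_s = window
    candidates = [
        (t, s) for (t, s) in pulses if start_s <= t <= end_s and s > threshold
    ]
    return max(candidates, key=lambda p: p[1]) if candidates else None
```

Applying this function once to stream1 and once to stream2 yields the first sound signal and the second sound signal, respectively.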
- the direct channel can be accurately determined by comprehensively considering the following two principles: principle (1): among all the signals detected by the sound detection module, the signal strength of the direct channel is generally the strongest; principle (2), the joint discrimination method: the distance difference d converted from the arrival time difference of the two direct channel signals (the first sound signal and the second sound signal) should not be greater than the distance between the first sound detection module and the second sound detection module.
- the method further includes: determining the sound signals whose strength is greater than a predetermined threshold in the sound signal stream of the second smart device detected by the first sound detection module, so as to form a first candidate signal set; and determining the sound signals whose strength is greater than the predetermined threshold in the sound signal stream of the second smart device detected by the second sound detection module, so as to form a second candidate signal set;
- FIG. 5 is a second exemplary schematic diagram of determining a pair of direct signals according to the present invention.
- the sound signal stream detected by the first sound detection module is stream1
- stream1 includes a plurality of pulse signals varying along time (t)
- the threshold value of the predetermined signal strength is T. It can be seen that in stream1, the signal strength of the pulse signal 50 is greater than the threshold value T, so the first candidate signal set includes the pulse signal 50.
- the sound signal stream detected by the second sound detection module is stream2; stream2 includes a plurality of pulse signals that vary along time (t), and the threshold value of the predetermined signal strength is also T.
- the second candidate signal set includes the pulse signal 60 and the pulse signal 70.
- the time difference d1 between the reception instants of the pulse signal 50 in the first candidate signal set and the pulse signal 60 in the second candidate signal set is determined, and the time difference d2 between the reception instants of the pulse signal 50 in the first candidate signal set and the pulse signal 70 in the second candidate signal set is determined; the pair whose time difference, converted into a distance difference, is not greater than the distance between the first sound detection module and the second sound detection module is determined to be the pair of direct signals.
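The joint discrimination of principle (2) can be sketched as follows. The data shapes, function name, and the 343 m/s sound speed are illustrative assumptions.

```python
# Sketch: from the two candidate signal sets, keep only pairs whose
# arrival-time difference, converted to a distance, does not exceed the
# spacing D between the two sound detection modules.
C = 343.0  # assumed propagation speed of sound in air, m/s

def pick_direct_pair(cand1, cand2, mic_distance):
    """cand1/cand2: lists of (arrival_time, strength). Returns the
    strongest-combined valid pair, or None if no pair satisfies |c*dt| <= D."""
    valid = [(p1, p2) for p1 in cand1 for p2 in cand2
             if abs(C * (p1[0] - p2[0])) <= mic_distance]
    if not valid:
        return None
    return max(valid, key=lambda pair: pair[0][1] + pair[1][1])

cand1 = [(0.0100, 0.9)]                      # pulse 50
cand2 = [(0.01002, 0.8), (0.0150, 0.7)]      # pulses 60 and 70
print(pick_direct_pair(cand1, cand2, 0.15))  # pairs pulse 50 with pulse 60
```

Pulse 70 is rejected because its 5 ms offset converts to about 1.7 m, far more than the 0.15 m module spacing.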
- the first sound signal and the second sound signal are ultrasonic waves in a code division multiple access format and include a media access control (MAC) address of the second smart device.
- the first smart device can accurately identify the source of the sound signal based on the MAC address of the second smart device included in the sound signal.
- based on the MAC address extracted from the sound signal, the first smart device can accurately use the two direct signals from the same sound source to determine the relative angle to that sound source, without interference from other sound sources.
- the embodiment of the present invention also provides a method for determining a relative angle between smart devices.
- the method is applicable to a first smart device, where the first smart device includes a first sound detection module and a second sound detection module, and the method includes: determining the first moment when the ultrasonic signal sent by the second smart device reaches the first sound detection module; determining the second moment when the ultrasonic signal reaches the second sound detection module; determining the time difference between the first moment and the second moment; and determining the relative angle between the first smart device and the second smart device based on the distance between the first sound detection module and the second sound detection module and the time difference.
- the method further includes at least one of the following processes: (1) in the ultrasonic signal stream of the second smart device received by the first sound detection module, the ultrasonic signal whose intensity is greater than a predetermined threshold within a predetermined time window is determined to be the ultrasonic signal that directly reaches the first sound detection module, and the moment when that direct ultrasonic signal is received is determined as the first moment; (2) in the ultrasonic signal stream of the second smart device received by the second sound detection module, the ultrasonic signal whose intensity is greater than the predetermined threshold within the predetermined time window is determined to be the ultrasonic signal that directly reaches the second sound detection module, and the moment when that direct ultrasonic signal is received is determined as the second moment.
- the relative angle may be determined as θ = arccos(cΔt/D), where Δt is the time difference, D is the distance between the first sound detection module and the second sound detection module, and c is the propagation speed of the sound.
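The relation between the arrival-time difference and the relative angle can be sketched in a few lines, assuming the two direct paths are parallel. The function name and the sound speed c = 343 m/s are illustrative assumptions.

```python
import math

# Sketch: distance difference d = c * dt, relative angle theta = arccos(d / D),
# where D is the spacing between the two sound detection modules.
def relative_angle_deg(dt, mic_distance, c=343.0):
    ratio = (c * dt) / mic_distance
    ratio = max(-1.0, min(1.0, ratio))  # guard against rounding overflow
    return math.degrees(math.acos(ratio))

# Equal arrival times (dt = 0) mean the source sits broadside, at 90 degrees.
print(relative_angle_deg(0.0, 0.15))  # -> 90.0
```

A source exactly on the line through the two modules gives dt = D/c, hence arccos(1) = 0 degrees.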
- FIG. 2 is a schematic diagram of the principle of determining relative angles between smart devices according to the present invention.
- FIG. 3 is a schematic diagram of the calculation principle of the relative angle between the smart devices of the present invention.
- the microphone a1 arranged at the bottom of the smart device A transmits an ultrasonic signal
- the ultrasonic signal contains the MAC address of the smart device A
- the smart device B (not shown in FIG. 2) has two microphones arranged spaced apart, namely microphone b1 and microphone b2.
- the microphone b1 receives the direct signal L1 of the ultrasonic signal
- the microphone b2 receives the direct signal L2 of the ultrasonic signal.
- the indirect signals of the ultrasonic signal, which pass through or are reflected by obstacles before reaching the microphone b1 and the microphone b2, do not participate in the subsequent relative angle calculation. Since the smart devices are small, especially when the two smart devices are far apart, the direct signals L1 and L2 can be regarded as parallel lines.
- L1 and L2 represent the direct signals received by the microphone b1 and the microphone b2 of the smart device B respectively (not signals reflected by obstacles); D is the distance between the microphone b1 and the microphone b2.
- D can be the length of the smart device B;
- θ' is the auxiliary angle, where sin θ' = d/D. Therefore, the relative angle of smart device A and smart device B can be calculated as θ = 90° − θ' = arccos(d/D). Preferably, the smart device A and the smart device B can each be implemented as at least one of the following: a smart phone; a tablet computer.
- FIG. 6 is a schematic diagram of a first exemplary arrangement of the first sound detection module and the second sound detection module of the present invention in a smart device.
- the first sound detection module 18 and the second sound detection module 19 are respectively arranged at the two ends of the smart device in the length direction, so the length D of the smart device can be directly used as the distance between the first sound detection module 18 and the second sound detection module 19.
- FIG. 7 is a schematic diagram of a second exemplary arrangement of the first sound detection module and the second sound detection module in the smart device of the present invention. In FIG. 7,
- the first sound detection module 18 and the second sound detection module 19 are respectively arranged at the two ends of the smart device in the width direction, so the width D of the smart device can be directly used as the distance between the first sound detection module 18 and the second sound detection module 19.
- the above exemplarily describes the schematic diagram of the arrangement of the first sound detection module and the second sound detection module in the smart device.
- Those skilled in the art can realize that this description is only exemplary and is not intended to limit the protection scope of the present invention.
- current smart devices usually have two sets of microphones, which can be used as the first sound detection module and the second sound detection module in the embodiments of the present invention without changing the hardware of the smart device.
- the following describes a typical example of calculating the relative angle between smart devices using ultrasound based on an embodiment of the present invention.
- FIG. 8 is a schematic diagram of relative positioning of the first smart device and the second smart device according to the present invention.
- Fig. 10 is an exemplary process flow chart of relative positioning between smart devices of the present invention.
- the respective processing paths along which the two groups of microphones detect sound signals are shown, wherein the analog-to-digital converter (Analog-to-Digital Converter, ADC) converts the continuously variable analog signal into a discrete digital signal
- a band-pass filter (BPF) is a device that allows waves in a specific frequency band to pass while shielding other frequency bands.
- the steps of identifying the relative direction between two smart devices based on ultrasound include:
- Step 1: The first smart device transmits a positioning signal in an ultrasonic format, where the positioning signal includes the MAC address of the first smart device.
- Step 2: The two groups of microphones of the second smart device detect the positioning signal respectively, parse out the MAC address from the respective detected positioning signals, and confirm, based on the MAC address, that the respective detected positioning signals originate from the same sound source.
- Step 3: The second smart device calculates the distance difference d between the two direct signals of the positioning signal based on the time difference between the two direct signals detected by the two groups of microphones included in the second smart device.
- Step 4: The second smart device calculates cos θ = d/D, and then the signal incident angle θ = arccos(d/D), that is, the relative angle between the first smart device and the second smart device, where D is the distance between the two groups of microphones in the second smart device.
- Step 5: The second smart device displays the relative angle θ on its own display interface, thereby prompting the user with the relative direction of the first smart device.
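The receive-side steps 2 to 4 can be sketched together. The `(mac, arrival_time)` tuple shape, function name, and sound speed are assumptions for illustration only.

```python
import math

C = 343.0  # assumed speed of sound, m/s

# Sketch of Steps 2-4 on the second smart device: each microphone group
# reports (mac, arrival_time); the signals are used only when the parsed
# MAC addresses confirm a single sound source.
def incident_angle(sig1, sig2, mic_distance):
    mac1, t1 = sig1
    mac2, t2 = sig2
    if mac1 != mac2:            # Step 2: same-source confirmation via MAC
        return None
    d = C * (t1 - t2)           # Step 3: distance difference of direct paths
    ratio = max(-1.0, min(1.0, d / mic_distance))
    return math.degrees(math.acos(ratio))  # Step 4: theta = arccos(d/D)

angle = incident_angle(("AA:BB", 0.020), ("AA:BB", 0.020), 0.15)
print(angle)  # -> 90.0 (equal arrival times: source is broadside)
```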
- FIG. 9 is a schematic diagram showing relative angles in a smart device interface according to the present invention.
- the first smart device is embodied as a smart speaker
- the second smart device is embodied as a smart phone.
- Step 1: The smart speaker transmits an ultrasonic signal, where the ultrasonic signal includes the MAC address of the smart speaker and is a signal based on the CDMA (code division multiple access) technology architecture.
- Step 2: The two groups of microphone arrays of the smartphone receive the ultrasonic signal and parse out the MAC address of the smart speaker.
- the smartphone calculates the distance difference d between the two direct signals of the two sets of microphone arrays.
- Step 3: The smartphone calculates cos θ = d/D and then the angle of incidence of the signal θ = arccos(d/D). The smartphone displays the angle 84.4° on its own display screen, that is, the smart speaker is in the 84.4° direction of the smartphone.
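The displayed 84.4° can be reproduced from θ = arccos(d/D). The spacing D = 0.15 m below is an illustrative value, chosen only so that d/D ≈ 0.0976; neither number comes from the text.

```python
import math

# Illustrative numbers (not from the text): D = 0.15 m microphone spacing,
# d chosen so that d / D = 0.0976, which matches the displayed 84.4 degrees.
D = 0.15
d = 0.0976 * D
theta = math.degrees(math.acos(d / D))
print(round(theta, 1))  # -> 84.4
```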
- the relative distance between the two smart devices can be further obtained by using the method for identifying the relative direction between the two smart devices.
- there are at least two smart devices: at least one smart device a is used to transmit an ultrasonic positioning signal, where the ultrasonic positioning signal contains the MAC address of the smart device a; and at least one smart device b is used to receive the ultrasonic positioning signal, solve the signal incident angle, and, after further movement, calculate the relative distance to the smart device a.
- the embodiment of the present invention also proposes a technical solution in which a control instruction can be conveniently generated based on the above-mentioned relative angle calculation method without a user triggering operation on the display interface.
- FIG. 11 is an exemplary flowchart of a method for generating a control instruction according to the present invention.
- the method includes: Step 80: when it is determined that the relative angle between the wearable device worn by the user and the smart device changes, a control instruction is generated, wherein the relative angle is determined based on the respective detection operations performed, by a first sound detection module and a second sound detection module included in the wearable device, on the sound signal sent from the smart device.
- Step 82: Send the control instruction to the smart device, so that the smart device executes the control instruction.
- the control instruction is adapted to control the smart device to perform any predetermined operation. Therefore, after the relative motion between the wearable device and the smart device is sensed based on the change of the relative angle, a control instruction can be generated to control the smart device.
- the control instructions may include at least one of the following: instructions for switching pictures; instructions for switching articles; instructions for switching videos; instructions for switching audio (such as songs); instructions for switching emails ; commands for switching perspectives; commands for switching interfaces; etc.
- the wearable device is a head wearable device adapted to be worn on the head, and the head wearable device includes a smart earphone or smart glasses. Therefore, when the relative motion between the head and the smart device is sensed based on the change of the relative angle, a control instruction can be generated.
- the present invention applies when, for example, a user browsing pictures, texts, web pages, videos, etc. wants to free his hands and not have to hold the smart device all the time, or when a user giving a speech or holding a meeting needs to easily switch slides (PPT) and meet similar daily work requirements.
- by generating control instructions according to the embodiment, the present invention allows graphics and videos to be easily switched and flipped through without the user performing a triggering operation on the display interface, providing a brand-new, virtual and interactive browsing experience.
- when the change corresponds to the movement of the wearable device to the left relative to the smart device, an instruction for switching to the previous sequential content of the currently browsed content is generated; when the change corresponds to the movement of the wearable device to the right relative to the smart device, an instruction for switching to the next sequential content of the currently browsed content is generated; the reverse mapping may also be used, and so on.
- a user wearing a head wearable device turns his head to the left to switch to the next picture/next page of a novel/next PPT page/next Douyin short video; when the user turns his head to the right, he can switch to the previous picture/previous page of a novel/previous PPT page/previous Douyin short video, etc.
- the embodiments of the present invention can also be applied to realize the control of the mobile phone interface sliding. For example, when the head is turned to the left, the main interface (APP interface) of the mobile phone slides to the left; when the head is turned to the right, the main interface (APP interface) of the mobile phone slides to the right, and so on.
- the control instruction may be implemented as a song switching instruction, so that the smart device executes the song switching instruction to implement the song switching operation.
- the control instruction may be implemented as a view angle change instruction, so that the smart device adjusts, based on the view angle change instruction, the view angle for displaying the panoramic image of a house in the display interface of the smart device.
- determining the relative angle includes:
- the first sound detection module in the wearable device detects the first sound signal that travels directly from the smart device to the first sound detection module;
- the second sound detection module in the wearable device detects the second sound signal that travels directly from the smart device
- to the second sound detection module, wherein the first sound signal and the second sound signal are simultaneously transmitted by the smart device; the time difference between the receiving moment of the first sound signal and the receiving moment of the second sound signal is determined; and the relative angle is determined based on the distance between the first sound detection module and the second sound detection module and the time difference.
- An embodiment of the present invention also provides an apparatus for generating a control instruction, including: a generating module configured to generate a control instruction when it is determined that the relative angle between the wearable device worn by the user and the smart device changes, wherein the relative angle is determined based on the respective detection operations performed, by the first sound detection module and the second sound detection module included in the wearable device, on the sound signal sent from the smart device; and a sending module configured to send the control instruction to the smart device, so that the control instruction is executed by the smart device.
- An embodiment of the present invention also provides a method for generating a control instruction, including: when it is determined that the relative angle between the smart device and the wearable device worn by the user changes, generating a control instruction, wherein the relative angle is determined based on the respective detection operations performed, by the first sound detection module and the second sound detection module included in the smart device, on the sound signal sent from the wearable device; and, in response to the control instruction, executing the operation corresponding to the control instruction.
- the wearable device is a head wearable device adapted to be worn on the head, and the head wearable device includes a smart earphone or smart glasses.
- determining the relative angle includes: the first sound detection module in the smart device detects the first sound signal that travels directly from the wearable device to the first sound detection module, and the second sound detection module in the smart device detects
- the second sound signal that travels directly from the wearable device to the second sound detection module, wherein the first sound signal and the second sound signal are simultaneously transmitted by the wearable device; the time difference between the receiving moment of the first sound signal and the receiving moment of the second sound signal is determined; and the relative angle is determined based on the distance between the first sound detection module and the second sound detection module and the time difference.
- Embodiments of the present invention also provide an apparatus for generating a control instruction, including: a generating module configured to generate a control instruction when it is determined that the relative angle between the smart device and the wearable device worn by the user changes, wherein the relative angle is determined based on the respective detection operations performed, by the first sound detection module and the second sound detection module in the smart device, on the sound signal sent from the wearable device; and an execution module configured to perform, in response to the control instruction, an operation corresponding to the control instruction in the smart device.
- control instructions can be generated in various application environments.
- Switching songs means adjusting the playback order of songs; for example, before the current song finishes playing, playback of the current song is forcibly stopped and the next song starts playing.
- when a user desires to switch songs, it is usually necessary to click a predetermined trigger control in the song playing interface to switch songs.
- the present invention can easily switch songs without the user performing a triggering operation on the display interface, and also provides a brand-new, virtual and interactive song-listening experience.
- an embodiment of the present invention also proposes a technical solution for switching songs based on the above-mentioned relative angle calculation method.
- FIG. 12 is a first exemplary flowchart of the method for switching songs according to the present invention.
- the method can be performed by a wearable device.
- the method includes: Step 1201: when it is determined that the relative angle between the wearable device worn by the user and the smart device changes, a song switching instruction is generated, wherein the relative angle is determined based on the respective detection operations performed, by the first sound detection module and the second sound detection module included in the wearable device, on a sound signal (preferably an ultrasonic signal) emitted from the smart device.
- Step 1202: Send the song switching instruction to the smart device, so that the smart device performs a song switching operation in response to the song switching instruction, so that the songs played by the smart device are switched.
- the wearable device sends the song switching instruction to the smart device based on communication methods such as Bluetooth, infrared, ZigBee, and 4G/5G.
- the wearable device in FIG. 12 is equivalent to the first smart device in the method shown in FIG. 1; the smart device containing a sound source (preferably an ultrasonic source) in FIG. 12 is equivalent to the second smart device in the method shown in FIG. 1.
- determining the relative angle includes: the first sound detection module in the wearable device detects the first sound signal that travels directly from the smart device to the first sound detection module, and the second sound detection module in the wearable device detects the second sound signal that travels directly from the smart device to the second sound detection module, wherein the first sound signal and the second sound signal are simultaneously transmitted by the smart device; the time difference between the receiving moment of the first sound signal and the receiving moment of the second sound signal is determined; and the relative angle is determined based on the distance between the first sound detection module and the second sound detection module and the time difference.
- generating a song switching instruction includes at least one of the following: when the change corresponds to a leftward movement of the wearable device relative to the smart device, generating an instruction for switching to the previous sequential song of the currently playing song; when the change corresponds to a rightward movement of the wearable device relative to the smart device, generating an instruction for switching to the next sequential song of the currently playing song; when the change corresponds to a leftward movement of the wearable device relative to the smart device, generating an instruction for switching to the next sequential song of the currently playing song; when the change corresponds to a rightward movement of the wearable device relative to the smart device, generating an instruction for switching to the previous sequential song of the currently playing song; and so on.
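One possible direction-to-instruction mapping can be sketched as follows. The text allows either left→previous/right→next or the reverse; this sketch shows the former, and the function name and instruction strings are assumptions.

```python
# Sketch: map the direction of the relative-angle change to a song-switching
# instruction (left -> previous song, right -> next song in this variant).
def song_instruction(direction):
    mapping = {"left": "SWITCH_TO_PREVIOUS_SONG",
               "right": "SWITCH_TO_NEXT_SONG"}
    return mapping.get(direction)  # None for an unrecognized direction

print(song_instruction("left"))   # -> SWITCH_TO_PREVIOUS_SONG
print(song_instruction("right"))  # -> SWITCH_TO_NEXT_SONG
```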
- FIG. 13 is a schematic diagram of the user wearing the smart earphone in the initial position according to the present invention.
- the signal arrival time difference is assumed to be the signal arrival time of the first sound detection module minus the signal arrival time of the second sound detection module.
- the first sound detection module (for example, a microphone or a microphone array) located in the left earphone of the smart earphone and the second sound detection module (for example, a microphone or a microphone array) located in the right earphone of the smart earphone receive the sound signals sent from the smart device 1200 (for example, a smart phone), respectively.
- Based on the relative angle determination method shown in FIG. 1, the smart headset determines that the relative angle at this time is the initial value θ0.
- FIG. 14 is a schematic diagram of a user wearing a smart headset tilting his head to the left according to the present invention. It can be seen from FIG. 14 that when the user wearing the smart earphone tilts his head to the left, the first sound detection module (such as a microphone or a microphone array) located in the left earphone of the smart earphone and the second sound detection module (such as a microphone or a microphone array) located in the right earphone of the smart earphone continue to receive the sound signals sent from the smart device 1200 (for example, a smart phone), respectively. Based on the relative angle determination method shown in FIG. 1, the smart headset determines the relative angle θ1 at this time; it can be seen that θ1 is less than θ0. Similarly, the relative angle when the smart earphone is panned to the left will also become smaller.
- FIG. 15 is a schematic diagram of a user wearing a smart headset tilting his head to the right according to the present invention.
- the first sound detection module (such as a microphone or a microphone array) located in the left earphone of the smart headset and the second sound detection module (such as a microphone or a microphone array) located in the right earphone of the smart headset continue to receive the sound signals sent from the smart device 1200, respectively.
- Based on the relative angle determination method shown in FIG. 1, the smart headset determines the relative angle θ2 at this time; it can be seen that θ2 is greater than θ0. Similarly, the relative angle when the smart headset is panned to the right also increases.
- changes in the relative angle compared to the initial value θ0 can determine the direction of movement of the user wearing the smart headset. Specifically, when it is determined that the current relative angle is smaller than the initial value θ0, it is determined that the user wearing the smart headset has moved to the left (for example, by tilting his head); when it is determined that the current relative angle is greater than the initial value θ0, it is determined that the user wearing the smart headset has moved to the right (for example, by tilting his head).
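The comparison against the initial angle can be sketched as follows. The tolerance parameter is an assumed guard against measurement jitter, not part of the text.

```python
# Sketch: judge the movement direction by comparing the current relative
# angle (degrees) with the initial value theta0. A smaller angle means the
# head tilted/panned left; a larger angle means it tilted/panned right.
def movement_direction(theta, theta0, tolerance=2.0):
    if theta < theta0 - tolerance:
        return "left"
    if theta > theta0 + tolerance:
        return "right"
    return "none"  # within the jitter band: no movement detected

print(movement_direction(75.0, 90.0))  # -> left
print(movement_direction(98.0, 90.0))  # -> right
```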
- Example (1): When the smart earphone determines that the user wearing it has panned to the left, the smart earphone can generate an instruction for switching to the previous sequential song of the currently playing song in the playlist, and send this instruction to the smart device 1200 over the communication connection. In response to the instruction, the song playing resource in the smart device 1200 stops playing the current song and starts playing the previous sequential song in the playlist.
- Example (2): When the smart earphone determines that the user wearing it has panned to the right, the smart earphone can generate an instruction for switching to the next sequential song of the currently playing song in the playlist, and send this instruction to the smart device 1200 over the communication connection. In response to the instruction, the song playing resource in the smart device 1200 stops playing the current song and starts playing the next sequential song in the playlist.
- Example (3): When the smart earphone determines that the user wearing it has panned to the left, the smart earphone can generate an instruction for switching to the next sequential song of the currently playing song in the playlist, and send this instruction to the smart device 1200 over the communication connection. In response to the instruction, the song playing resource in the smart device 1200 stops playing the current song and starts playing the next sequential song in the playlist.
- Example (4): When the smart earphone determines that the user wearing it has panned to the right, the smart earphone can generate an instruction for switching to the previous sequential song of the currently playing song in the playlist, and send this instruction to the smart device 1200 over the communication connection. In response to the instruction, the song playing resource in the smart device 1200 stops playing the current song and starts playing the previous sequential song in the playlist.
- the songs are switched one position at a time according to the order of the playlist, that is, each switch skips one position in the play sequence.
- the switch can also be adjusted to skip multiple positions in the play sequence each time, which is not limited in the embodiments of the present invention.
- FIG. 16 is a schematic diagram of the user wearing the smart bracelet in the initial position according to the present invention.
- the signal arrival time difference is assumed to be the signal arrival time of the first sound detection module minus the signal arrival time of the second sound detection module. As can be seen from FIG. 16,
- the first sound detection module 1501 (for example, a microphone or a microphone array) located in the left area of the smart bracelet 1500 and
- the second sound detection module 1502 located in the right area of the smart bracelet 1500 receive sound signals sent from the smart device 1200 (for example, a smart phone), respectively.
- Based on the relative angle determination method shown in FIG. 1, the smart bracelet 1500 determines that the relative angle at this time is the initial value θ0.
- FIG. 17 is a schematic diagram of the user wearing the smart bracelet and turning his arm to the left according to the present invention. It can be seen from FIG. 17 that
- the first sound detection module 1501 (for example, a microphone or a microphone array) located in the left half area of the smart bracelet 1500 and the second sound detection module 1502 (for example, a microphone or a microphone array) located in the right half area of the smart bracelet 1500 continue to receive sound signals from the smart device 1200 (for example, a smart phone), respectively.
- Based on the relative angle determination method shown in FIG. 1, the smart bracelet determines the relative angle θ1 at this time; it can be seen that θ1 is less than θ0.
- FIG. 18 is a schematic diagram of the user wearing the smart bracelet and turning his arm to the right according to the present invention. It can be seen from FIG. 18 that
- the first sound detection module 1501 (for example, a microphone or a microphone array) located in the left half area of the smart bracelet 1500 and the second sound detection module 1502 (for example, a microphone or a microphone array) located in the right half area of the smart bracelet 1500 continue to receive sound signals sent from the smart device 1200 (for example, a smart phone), respectively.
- Based on the relative angle determination method shown in FIG. 1, the smart bracelet 1500 determines the relative angle θ2 at this time; it can be seen that θ2 is greater than θ0.
- changes in the relative angle can determine the moving direction of the user's hand. Specifically, when it is determined that the current relative angle is smaller than the initial value θ0, it is judged that the user wearing the smart bracelet has turned his arm to the left; when it is determined that the current relative angle is greater than the initial value θ0, it is judged that the user wearing the smart bracelet has turned his arm to the right.
- a song switching instruction corresponding to the moving direction can then be generated. For example, when the smart bracelet moves to the left relative to the smart device, an instruction for switching to the previous sequential song of the currently playing song is generated; when the smart bracelet moves to the right relative to the smart device, an instruction for switching to the next sequential song of the currently playing song is generated. Alternatively, when the smart bracelet moves to the left relative to the smart device, an instruction for switching to the next sequential song of the currently playing song is generated, and when the smart bracelet moves to the right relative to the smart device, an instruction for switching to the previous sequential song of the currently playing song is generated, and so on.
- FIG. 19 is a structural diagram of an apparatus for switching songs according to the present invention.
- the device includes: an instruction generation module, configured to generate a song switching instruction when it is determined that the relative angle between the wearable device worn by the user and the smart device changes, wherein the relative angle is determined based on the respective detection operations, by the first sound detection module and the second sound detection module included in the wearable device, on the sound signal sent from the smart device; and an instruction sending module, configured to send the song switching instruction to the smart device, so that the smart device performs a song switching operation in response to the song switching instruction.
- the instruction generation module is configured to: enable the first sound detection module in the wearable device to detect the first sound signal that travels directly from the smart device to the first sound detection module, and enable the second sound detection module in the wearable device to detect the second sound signal that travels directly from the smart device to the second sound detection module, wherein the first sound signal and the second sound signal are transmitted simultaneously by the smart device; determine the time difference between the receiving moment of the first sound signal and the receiving moment of the second sound signal; and determine the relative angle based on the distance between the first sound detection module and the second sound detection module and the time difference.
- the instruction generation module is configured to perform at least one of the following: when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for switching to the previous sequential song of the currently playing song; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for switching to the next sequential song of the currently playing song; when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for switching to the next sequential song of the currently playing song; and when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for switching to the previous sequential song of the currently playing song.
- the present invention also provides a wearable device, comprising: a first sound detection module; a second sound detection module; a control module, configured to generate a song switching instruction when it is determined that the relative angle between the wearable device worn by the user and the smart device changes, wherein the relative angle is determined based on the respective detection operations of the first sound detection module and the second sound detection module included in the wearable device on the sound signal sent from the smart device; and a communication module, configured to send the song switching instruction to the smart device, so that the smart device performs a song switching operation in response to the song switching instruction.
- the wearable device may include: smart earphones, smart glasses, smart watches, smart bracelets, and smart foot rings, and so on.
- FIG. 20 is a second exemplary flowchart of the method for switching songs according to the present invention.
- the method may be performed by a smart device.
- the method includes: Step 2001: when it is determined that the relative angle between the smart device and the wearable device worn by the user changes, generate a song switching instruction, wherein the relative angle is determined based on the respective detection operations, by the first sound detection module and the second sound detection module included in the smart device, on the sound signal sent from the wearable device;
- Step 2002: In response to the song switching instruction, perform a song switching operation in the smart device.
- the smart device in FIG. 20 is equivalent to the first smart device in the method shown in FIG. 1; the wearable device containing the sound source in FIG. 20 is equivalent to the second smart device in the method shown in FIG. 1. Therefore, the embodiments of the present invention can switch songs conveniently without requiring the user to trigger an operation on the display interface, and also provide a brand-new, virtual and interactive song listening experience.
- determining the relative angle includes: a first sound detection module in the smart device detects a first sound signal that travels directly from the wearable device to the first sound detection module, and a second sound detection module in the smart device detects a second sound signal that travels directly from the wearable device to the second sound detection module, wherein the first sound signal and the second sound signal are transmitted simultaneously by the wearable device; the time difference between the receiving moment of the first sound signal and the receiving moment of the second sound signal is determined; and the relative angle is determined based on the distance between the first sound detection module and the second sound detection module and the time difference.
- generating a song switching instruction includes at least one of the following: when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for switching to the previous sequential song of the currently playing song; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for switching to the next sequential song of the currently playing song; when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for switching to the next sequential song of the currently playing song; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for switching to the previous sequential song of the currently playing song, and so on.
- the wearable device includes a smart earphone, a smart watch, a smart bracelet, smart glasses or a smart foot ring, and the like.
- FIG. 21 is a schematic diagram of the user wearing the smart earphone in the initial position according to the present invention. It is assumed that the signal arrival time difference is always the signal arrival time of the first sound detection module minus the signal arrival time of the second sound detection module. As can be seen from FIG. 21, the first sound detection module (for example, a microphone or a microphone array) located in the smart device 1200 and the second sound detection module (for example, a microphone or a microphone array) located in the smart device 1200 respectively receive sound signals (for example, ultrasonic signals) from the same smart earphone (for example, the smart earphone worn on the user's right ear as shown in FIG. 21). Based on the relative angle determination method shown in FIG. 1, the smart device 1200 determines the initial relative angle at this time.
- FIG. 22 is a schematic diagram of the present invention when the user wears the smart earphone and pans to the left. It can be seen from FIG. 22 that when the user wears the smart earphone and pans to the left, the first sound detection module (for example, a microphone or a microphone array) located in the smart device 1200 and the second sound detection module (for example, a microphone or a microphone array) located in the smart device 1200 continue to receive sound signals from the same smart earphone (for example, the smart earphone worn on the user's right ear as shown in FIG. 22). Based on the relative angle determination method shown in FIG. 1, the smart device determines the relative angle at this time; it can be seen that it is greater than the initial relative angle.
- FIG. 23 is a schematic diagram of the present invention when the user wears the smart earphone and pans to the right. When the user wears the smart earphone and pans to the right, the first sound detection module (for example, a microphone or a microphone array) located in the smart device 1200 and the second sound detection module (for example, a microphone or a microphone array) located in the smart device 1200 continue to receive sound signals from the same smart earphone. Based on the relative angle determination method shown in FIG. 1, the smart device determines the relative angle at this time; it can be seen that it is smaller than the initial relative angle.
- Based on the change of the relative angle, the smart device 1200 can determine the panning direction of the user wearing the smart earphone. Specifically, when it is determined that the current relative angle is smaller than the initial relative angle, it is determined that the user wearing the smart earphone pans to the right; when it is determined that the current relative angle is greater than the initial relative angle, it is determined that the user wearing the smart earphone pans to the left.
- Example (1) When it is determined that the user wears the smart earphone to pan to the left, the smart device 1200 may generate an instruction for switching to the previous song in the playlist of the currently playing song. In response to the instruction, the song playing resource in the smart device 1200 stops playing the current song and starts playing the previous song in the playlist.
- Example (2) When it is determined that the user is wearing the smart headset to pan to the right, the smart device 1200 may generate an instruction for switching to the next sequential song of the currently playing song in the playlist. In response to the instruction, the song playing resource in the smart device 1200 stops playing the current song and starts playing the next sequential song in the playlist.
- Example (3) When it is determined that the user wears the smart earphone to pan to the left, the smart device 1200 may generate an instruction for switching to the next sequential song of the currently playing song in the playlist. In response to the instruction, the song playing resource in the smart device 1200 stops playing the current song and starts playing the next sequential song in the playlist.
- Example (4) When it is determined that the user is wearing the smart earphone to pan to the right, the smart device 1200 may generate an instruction for switching to the previous song of the currently playing song in the playlist. In response to the instruction, the song playing resource in the smart device 1200 stops playing the current song and starts playing the previous song in the playlist.
- In the above examples (1) to (4), the songs are switched one at a time according to the order of the playlist, that is, one position in the play sequence is skipped each time. In practice, the number of positions skipped each time can be adjusted to more than one, which is not limited in the embodiments of the present invention.
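- This adjustable skip behaviour can be sketched as below. The function name, the `step` parameter, and the wrap-around at the ends of the playlist are all assumptions for illustration; the disclosure does not specify what happens at the first or last song.

```python
def switch_song(playlist: list, current_index: int, direction: str, step: int = 1) -> int:
    """Return the index of the song to play after one switch gesture.
    step controls how many positions in the play order are skipped per
    gesture (1 by default, as in the examples above). The playlist is
    treated as circular, which is an assumption."""
    if direction not in ("previous", "next"):
        raise ValueError("direction must be 'previous' or 'next'")
    offset = -step if direction == "previous" else step
    return (current_index + offset) % len(playlist)
```

For example, with a five-song playlist, a "previous" gesture on the first song wraps to the last song under this circular assumption.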
- FIG. 24 is a schematic diagram of the user wearing the smart bracelet in the initial position according to the present invention. It is assumed that the signal arrival time difference is always the signal arrival time of the first sound detection module minus the signal arrival time of the second sound detection module. As can be seen from FIG. 24, the smart bracelet 1500 containing the sound source 1501 (preferably an ultrasonic sound source) faces the smart device 1200, and the first sound detection module (for example, a microphone or a microphone array) located in the smart device 1200 and the second sound detection module (for example, a microphone or a microphone array) located in the smart device 1200 respectively receive sound signals (for example, ultrasonic signals) from the sound source 1501. Based on the relative angle determination method shown in FIG. 1, the smart device 1200 determines the initial relative angle at this time.
- FIG. 25 is a schematic diagram of the present invention when the user wears the smart bracelet and pans to the left. When the user wears the smart bracelet 1500 and pans to the left, the first sound detection module (for example, a microphone or a microphone array) located in the smart device 1200 and the second sound detection module (for example, a microphone or a microphone array) located in the smart device 1200 continue to receive sound signals from the sound source 1501, respectively. Based on the relative angle determination method shown in FIG. 1, the smart device 1200 determines the relative angle at this time; it can be seen that it is greater than the initial relative angle.
- FIG. 26 is a schematic diagram of the user wearing the smart bracelet and panning to the right according to the present invention. It can be seen from FIG. 26 that when the user wears the smart bracelet 1500 and pans to the right, the first sound detection module (for example, a microphone or a microphone array) located in the smart device 1200 and the second sound detection module (for example, a microphone or a microphone array) located in the smart device 1200 continue to receive sound signals from the sound source 1501, respectively. Based on the relative angle determination method shown in FIG. 1, the smart device 1200 determines the relative angle at this time; it can be seen that it is smaller than the initial relative angle.
- Based on the change of the relative angle, the panning direction of the user wearing the smart bracelet can be determined. Specifically, when it is determined that the current relative angle is smaller than the initial relative angle, it is determined that the user wearing the smart bracelet pans to the right; when it is determined that the current relative angle is greater than the initial relative angle, it is determined that the user wearing the smart bracelet pans to the left.
- Example (1) When it is determined that the user wears the smart bracelet to pan to the left, the smart device 1200 may generate an instruction for switching to the previous song in the song list of the currently playing song. In response to the instruction, the song playing resource in the smart device 1200 stops playing the current song and starts playing the previous song in the playlist.
- Example (2) When it is determined that the user wears the smart bracelet to pan to the right, the smart device 1200 may generate an instruction for switching to the next sequential song of the currently playing song in the playlist. In response to the instruction, the song playing resource in the smart device 1200 stops playing the current song and starts playing the next sequential song in the playlist.
- Example (3) When it is determined that the user wears the smart bracelet to pan to the left, the smart device 1200 may generate an instruction for switching to the next sequential song of the currently playing song in the playlist. In response to the instruction, the song playing resource in the smart device 1200 stops playing the current song and starts playing the next sequential song in the playlist.
- Example (4) When it is determined that the user wears the smart bracelet and pans to the right, the smart device 1200 may generate an instruction for switching to the previous song of the currently playing song in the playlist. In response to the instruction, the song playing resource in the smart device 1200 stops playing the current song and starts playing the previous song in the playlist.
- FIG. 27 is a structural diagram of an apparatus for switching songs according to the present invention.
- the device includes: an instruction generation module, configured to generate a song switching instruction when it is determined that the relative angle between the smart device and the wearable device worn by the user changes, wherein the relative angle is determined based on the respective detection operations of the first sound detection module and the second sound detection module included in the smart device on the sound signal sent from the wearable device; and a song switching module, configured to perform the song switching operation in the smart device in response to the song switching instruction.
- the instruction generation module is configured to: enable the first sound detection module in the smart device to detect the first sound signal that travels directly from the wearable device to the first sound detection module, and enable the second sound detection module in the smart device to detect the second sound signal that travels directly from the wearable device to the second sound detection module, wherein the first sound signal and the second sound signal are transmitted simultaneously by the wearable device; determine the time difference between the receiving moment of the first sound signal and the receiving moment of the second sound signal; and determine the relative angle based on the distance between the first sound detection module and the second sound detection module and the time difference.
- the instruction generation module is configured to perform at least one of the following: when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for switching to the previous sequential song of the currently playing song; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for switching to the next sequential song of the currently playing song; when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for switching to the next sequential song of the currently playing song; and when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for switching to the previous sequential song of the currently playing song.
- the present invention also provides a smart device, comprising: a first sound detection module; a second sound detection module; a control module, configured to generate a song switching instruction when it is determined that the relative angle between the smart device and the wearable device worn by the user changes, wherein the relative angle is determined based on the respective detection operations of the first sound detection module and the second sound detection module included in the smart device on the sound signal sent from the wearable device; and a song switching module, configured to perform a song switching operation in the smart device in response to the song switching instruction.
- the smart device includes: a smart phone; a tablet computer; and the like.
- Embodiments of the present invention may also be applied in virtual reality (VR) viewings.
- VR house viewing means viewing the house through the virtual three-dimensional space of the constructed house.
- When viewing houses through VR, the user often can only interact with the virtual three-dimensional space in limited ways, such as clicking to walk and clicking to open or close doors. If a user wishes to change the viewing angle of the house, he usually needs to click on a preset point in the house to move to that point, and the house is then displayed from the viewing angle at that point.
- the embodiment of the present invention also proposes a technical solution for realizing virtual house viewing based on the above-mentioned relative angle calculation method.
- With VR house viewing, a house can be toured, presented, or viewed with a guide in a virtual three-dimensional space.
- VR house viewing refers to the use of VR technology to faithfully restore the three-dimensional scene of a house, providing consumers with an immersive viewing experience in a free mode, so that users can experience a realistic viewing scene without leaving home. For example, by opening a VR house listing in an APP and touching anywhere on the screen, the user can obtain depth information of the real space of the house, including its size, orientation, and distance.
- FIG. 28 is a first exemplary flowchart of the virtual house viewing method of the present invention.
- the method can be performed by a wearable device.
- the method includes: Step 2801: when it is determined that the relative angle between the wearable device worn by the user and the smart device changes, generate a viewing angle change instruction, wherein the relative angle is determined based on the respective detection operations of the first sound detection module and the second sound detection module included in the wearable device on the sound signal sent from the smart device.
- Step 2802: Send the viewing angle change instruction to the smart device, so that the smart device adjusts, based on the viewing angle change instruction, the viewing angle for displaying the panoramic image of the house in the display interface of the smart device.
- the wearable device sends the viewing angle change instruction to the smart device based on communication methods such as Bluetooth, infrared, ZigBee, and 4G/5G.
- the wearable device in FIG. 28 is equivalent to the first smart device in the method shown in FIG. 1; the smart device containing the sound source in FIG. 28 is equivalent to the second smart device in the method shown in FIG. 1. Therefore, the embodiments of the present invention can switch the viewing angle conveniently without requiring the user to trigger an operation on the display interface, and also provide a brand-new, virtual and interactive viewing experience.
- determining the relative angle includes: a first sound detection module in the wearable device detects a first sound signal that travels directly from the smart device to the first sound detection module, and a second sound detection module in the wearable device detects a second sound signal that travels directly from the smart device to the second sound detection module, wherein the first sound signal and the second sound signal are transmitted simultaneously by the smart device; the time difference between the receiving moment of the first sound signal and the receiving moment of the second sound signal is determined; and the relative angle is determined based on the distance between the first sound detection module and the second sound detection module and the time difference.
- generating the viewing angle change instruction includes at least one of the following: when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move to the left by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move to the left to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move to the right by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move to the right to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move to the right by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move to the right to a predetermined point in the panoramic image; and when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move to the left by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move to the left to a predetermined point in the panoramic image.
- FIG. 29 is a schematic diagram of the user wearing the smart earphone in the initial position according to the present invention.
- the signal arrival time difference is assumed to be the signal arrival time of the first sound detection module minus the signal arrival time of the second sound detection module.
- As can be seen from FIG. 29, the first sound detection module (for example, a microphone or a microphone array) located in the left earphone of the smart earphone and the second sound detection module (for example, a microphone or a microphone array) located in the right earphone of the smart earphone respectively receive sound signals from the smart device. Based on the relative angle determination method shown in FIG. 1, the smart earphone determines the initial relative angle at this time.
- FIG. 30 is a schematic diagram of a user wearing a smart headset tilting his head to the left according to the present invention.
- It can be seen from FIG. 30 that when the user wears the smart earphone and tilts his head to the left, the first sound detection module (for example, a microphone or a microphone array) located in the left earphone of the smart earphone and the second sound detection module (for example, a microphone or a microphone array) located in the right earphone of the smart earphone continue to receive sound signals from the smart device. Based on the relative angle determination method shown in FIG. 1, the smart earphone determines the relative angle at this time; it can be seen that it is smaller than the initial relative angle. Similarly, the relative angle also decreases when the user wearing the smart earphone pans to the left.
- FIG. 31 is a schematic diagram of a user wearing a smart headset tilting his head to the right according to the present invention.
- It can be seen from FIG. 31 that when the user wears the smart earphone and tilts his head to the right, the first sound detection module (for example, a microphone or a microphone array) located in the left earphone of the smart earphone and the second sound detection module (for example, a microphone or a microphone array) located in the right earphone of the smart earphone continue to receive sound signals from the smart device. Based on the relative angle determination method shown in FIG. 1, the smart earphone determines the relative angle at this time; it can be seen that it is greater than the initial relative angle. Similarly, the relative angle also increases when the user wearing the smart earphone pans to the right.
- Based on the change of the relative angle, the tilting direction of the user wearing the smart earphone can be determined. Specifically, when it is determined that the current relative angle is smaller than the initial relative angle, it is determined that the user wearing the smart earphone has moved to the left (for example, tilted his head to the left); when it is determined that the current relative angle is greater than the initial relative angle, it is determined that the user wearing the smart earphone has moved to the right (for example, tilted his head to the right).
- Example (1) When it is determined that the user wears the smart headset and moves to the left, the smart headset can generate a viewing angle change instruction for adjusting the viewing angle of the panoramic image to move to the left by a fixed angle (eg, 30 degrees). Based on the viewing angle change instruction, the smart device adjusts the viewing angle for displaying the panoramic image of the house in the display interface of the smart device and moves the fixed angle to the left. Preferably, the distance change between the smart earphone and the smart device is further detected (such as ultrasonic ranging), and a backward/forward instruction in the room is generated. The smart device executes the back/forward command to display what the user will see when back/forward in the scene.
- Example (2) When it is determined that the user wearing the smart earphone moves to the left, the smart earphone can generate a viewing angle change instruction for adjusting the viewing angle of the panoramic image to move to the left by an angle related to the change amount (that is, the difference between the current relative angle and the relative angle at the initial position).
- the correlation may be a proportional relationship.
- For example, a viewing angle change instruction for adjusting the viewing angle of the panoramic image to move to the left by an angle of K*A1 can be generated, where K is a predetermined coefficient and A1 is the change amount.
- Based on the viewing angle change instruction, the smart device adjusts the viewing angle for displaying the panoramic image of the house in the display interface of the smart device to move to the left by an angle of K*A1.
- the distance change between the smart earphone and the smart device is further detected, and a backward/forward instruction in the room is generated.
- the smart device executes the back/forward command to show what the user will see when back/forward in the scene.
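- The fixed-angle move of Example (1) and the proportional K*A1 move of Example (2) can be sketched together. The function name, the example value of K, and the sign convention (negative = left, positive = right) are assumptions for illustration, not values from the disclosure.

```python
from typing import Optional

K = 1.5  # predetermined proportional coefficient K (example value, assumed)

def viewing_angle_change(current_rel_angle: float, initial_rel_angle: float,
                         fixed_step: Optional[float] = None) -> float:
    """Return the signed angle (degrees) by which the panoramic viewing angle
    should move; negative means left, positive means right (assumed convention).
    With fixed_step set, every gesture moves the view by that fixed angle in
    the direction of the change, as in Example (1); otherwise the move is
    proportional to the change amount A1, i.e. K * A1, as in Example (2)."""
    a1 = current_rel_angle - initial_rel_angle  # change amount A1
    if fixed_step is not None:
        return -fixed_step if a1 < 0 else fixed_step
    return K * a1
```

A leftward tilt (current relative angle smaller than the initial one) yields a negative result, which the smart device would interpret as moving the viewing angle to the left.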
- Example (3) When it is determined that the user wears the smart headset and moves to the left, the smart headset generates an instruction to move to the left to a predetermined point in the panoramic image.
- the smart device moves to the left to a predetermined point in the panoramic image based on the viewing angle change instruction, so that the position in the panoramic image and the viewing angle of the panoramic image can be changed by switching the preset point.
- In the above examples, a viewing angle change instruction in the same direction as the user tilts his head is generated. Alternatively, a viewing angle change instruction in the direction opposite to that in which the user tilts his head may also be generated, so as to achieve a reversed-direction user experience.
- the implementation process includes: S01: a user's smart device transmits a positioning signal in an ultrasound format, the positioning signal includes a unique identifier (Mac address/ID, etc.) of the smart device, and is a signal based on a CDMA code division multiple access technology architecture.
- S03: The wearable device uses the above positioning method to calculate the relative angle between itself and the smart device, and uses ultrasonic ranging to calculate the relative distance between itself and the smart device.
- S04: The smart device remains stationary, and the user's head or body moves. When the relative angle becomes larger, it means that the user's head or body moves to the right, and the screen rotates to the right; when the relative angle becomes smaller, it means that the user's head or body moves to the left, and the screen rotates to the left. When the relative distance becomes smaller, it means that the user's head or body moves forward, and the screen shows what the user would see when moving forward in the scene; when the relative distance becomes larger, it means that the user's head or body moves backward, and the screen shows what the user would see when stepping back in the scene.
- S05: If the user needs to maintain the viewing angle of a scene, the user sends a viewing-angle locking instruction. The smart device performing the display obtains the locking instruction and, according to it, locks the viewing screen at the current viewing angle. If the user's head or body then moves, it does not drive a change of the viewing angle of the scene, so that the content on the viewing screen stays at the same viewing angle and the user can move while viewing the screen without affecting it. After the smart device obtains an unlocking instruction, the viewing angle is no longer locked, and the smart device performing the display changes the viewing angle of the image according to the user's movement.
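- The behaviour of steps S04 and S05 can be sketched as a small controller. The class name, field names, and sign conventions below are illustrative assumptions (a larger relative angle is taken to mean rightward motion, a smaller relative distance to mean forward motion):

```python
class ViewController:
    """Sketch of steps S04/S05: user motion drives the displayed viewing
    angle and position unless the viewing angle is locked."""

    def __init__(self) -> None:
        self.locked = False
        self.yaw = 0.0       # horizontal viewing angle in degrees; positive = right
        self.position = 0.0  # forward distance into the scene, arbitrary units

    def lock(self) -> None:
        # S05: lock the viewing screen at the current viewing angle.
        self.locked = True

    def unlock(self) -> None:
        # S05: stop locking; motion drives the view again.
        self.locked = False

    def on_motion(self, angle_change: float, distance_change: float) -> None:
        # S04: a larger relative angle (positive change) rotates the screen
        # right, a smaller one rotates it left; a smaller relative distance
        # (negative change) moves the user forward in the scene.
        if self.locked:
            return
        self.yaw += angle_change
        self.position -= distance_change
```

While locked, motion events are ignored entirely, matching the description that the user's movement does not drive the viewing angle of the scene.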
- Xiao Ming is using VR house viewing software to visit a certain house, and his current location is at the entrance door.
- Xiao Ming wears smart headphones and uses his smartphone to view the room.
- the specific process includes: S01: The smart phone transmits a positioning signal in an ultrasound format, the positioning signal includes a unique identifier (Mac address/ID, etc.) of the smart phone, and is a signal based on a CDMA code division multiple access technology architecture.
- S02: The smart earphone detects a positioning signal, parses an identifier from the detected positioning signal, and confirms, based on the identifier, that the detected positioning signal originates from the same sound source.
- S03: Calculate the relative angle between the smart earphone and the smartphone, and use ultrasonic ranging to calculate the relative distance between the smart earphone and the smartphone.
- S04: Keep the smartphone still. If Xiao Ming's upper body tilts to the right (that is, the relative angle between the smart earphone and the smartphone becomes larger), the screen turns to the right to display the scene on the left side of the entrance hall; if Xiao Ming's upper body tilts to the left (that is, the relative angle between the smart earphone and the smartphone becomes smaller), the screen turns to the left to display the scene on the right side of the entrance hall.
- the balcony screen viewed from the current viewing angle is locked, so that Xiao Ming can move while viewing the balcony without affecting the viewing screen.
- Xiao Ming wants to watch other places, he can say "unlock the screen", and the smartphone receives Xiao Ming's voice information, and generates an unlock instruction after judging that the voice information is the voice information of the unlocking perspective.
- Based on the unlock instruction, the view of the balcony at the current viewing angle is no longer locked, and the viewing angle of the house changes with the movement of Xiao Ming's head or body.
- FIG. 32 is a first exemplary structural diagram of the virtual house viewing device of the present invention.
- The device includes: an instruction generation module, configured to generate a viewing angle change instruction when it is determined that the relative angle between the wearable device worn by the user and the smart device has changed, wherein the relative angle is determined based on the respective detection operations, performed by the first sound detection module and the second sound detection module included in the wearable device, on a sound signal emitted by the smart device; and an instruction sending module, configured to send the viewing angle change instruction to the smart device, so that the smart device, based on the viewing angle change instruction, adjusts the viewing angle at which the panoramic image of the house is displayed in the display interface of the smart device.
- The instruction generation module is configured to: enable the first sound detection module in the wearable device to detect a first sound signal that travels directly from the smart device to the first sound detection module, and enable the second sound detection module in the wearable device to detect a second sound signal that travels directly from the smart device to the second sound detection module, wherein the first sound signal and the second sound signal are emitted simultaneously by the smart device; determine the time difference between the reception time of the first sound signal and the reception time of the second sound signal; and determine the relative angle based on the distance between the first sound detection module and the second sound detection module and the time difference.
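Under a far-field assumption (the source much farther away than the spacing between the two modules), the time difference described above maps to the relative angle via sin(theta) = c*dt/d. A minimal sketch — the speed-of-sound constant and the clamping are assumptions of this illustration, not details given in the text:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s (assumed constant)

def relative_angle_deg(time_diff_s, module_spacing_m):
    # Far-field TDOA: the extra path to the farther module is c * dt,
    # so sin(theta) = c * dt / d.
    ratio = SPEED_OF_SOUND * time_diff_s / module_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp tiny numerical overshoot
    return math.degrees(math.asin(ratio))

# Modules 20 cm apart; the second module hears the signal 0.29 ms later:
angle = relative_angle_deg(0.00029, 0.20)  # roughly 30 degrees
```

The sign of the time difference then tells which side the sound source is on, which is what the left/right decisions below rely on.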
- The instruction generation module is configured to perform at least one of the following: when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move left by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move left to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move right by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move right to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move right by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move right to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move left by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move left to a predetermined point in the panoramic image.
- The present invention also provides a wearable device, comprising: a first sound detection module; a second sound detection module; a control module, configured to generate a viewing angle change instruction when it is determined that the relative angle between the wearable device and the smart device has changed, wherein the relative angle is determined based on the respective detection operations of the first sound detection module and the second sound detection module on a sound signal emitted by the smart device; and a communication module, configured to send the viewing angle change instruction to the smart device, so that the smart device, based on the viewing angle change instruction, adjusts the viewing angle at which the panoramic image of the house is displayed in the display interface of the smart device.
- The wearable device may include: smart earphones, smart glasses, a smart watch, a smart bracelet, and the like.
- FIG. 33 is a second exemplary flowchart of the virtual house viewing method of the present invention.
- the method may be performed by a smart device.
- The method includes: Step 3301: when it is determined that the relative angle between the smart device and the wearable device worn by the user has changed, a viewing angle change instruction is generated, wherein the relative angle is determined based on the respective detection operations, performed by a first sound detection module and a second sound detection module included in the smart device, on a sound signal emitted by the wearable device.
- Step 3302: based on the viewing angle change instruction, adjust the viewing angle at which the panoramic image of the house is displayed in the display interface of the smart device.
- The smart device in FIG. 33 is equivalent to the first smart device in the method shown in FIG. 1; the wearable device containing the sound source in FIG. 33 is equivalent to the second smart device in the method shown in FIG. 1. Therefore, the embodiments of the present invention can conveniently switch the viewing angle without requiring a triggering operation by the user on the display interface, and also provide a brand-new, virtual and interactive viewing experience.
- Determining the relative angle includes: the first sound detection module in the smart device detects a first sound signal that travels directly from the wearable device to the first sound detection module, and the second sound detection module in the smart device detects a second sound signal that travels directly from the wearable device to the second sound detection module, wherein the first sound signal and the second sound signal are emitted simultaneously by the wearable device; the time difference between the reception time of the first sound signal and the reception time of the second sound signal is determined; and the relative angle is determined based on the distance between the first sound detection module and the second sound detection module and the time difference.
- Generating the viewing angle change instruction includes at least one of the following: when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move left by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move left to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move right by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move right to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move right by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move right to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move left by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move left to a predetermined point in the panoramic image.
- FIG. 34 is a schematic diagram of the user wearing the smart earphone in the initial position according to the present invention. It is assumed that the signal arrival time difference is always the signal arrival time at the first sound detection module minus the signal arrival time at the second sound detection module. It can be seen from FIG. 34 that the first sound detection module (for example, a microphone or a microphone array) and the second sound detection module (for example, a microphone or a microphone array) located in the smart device 1200 respectively receive sound signals from the same smart earphone (for example, the smart earphone worn on the user's right ear as shown in FIG. 34). Based on the relative angle determination method shown in FIG. 1, the smart device 1200 determines the relative angle at this time as the initial value.
- FIG. 35 is a schematic diagram of the present invention when the user wears the smart headset and pans to the left. It can be seen from FIG. 35 that the first sound detection module (for example, a microphone or a microphone array) and the second sound detection module (for example, a microphone or a microphone array) located in the smart device 1200 continue to receive sound signals from the same smart earphone. Based on the relative angle determination method shown in FIG. 1, the smart device 1200 determines the relative angle at this time, which is greater than the initial value.
- FIG. 36 is a schematic diagram of the present invention when the user wears the smart headset and pans to the right. It can be seen from FIG. 36 that when the user wears the smart headset and pans to the right, the first sound detection module (for example, a microphone or a microphone array) and the second sound detection module (for example, a microphone or a microphone array) located in the smart device 1200 continue to receive sound signals from the same smart earphone (for example, the smart earphone worn on the user's right ear as shown in FIG. 36). Based on the relative angle determination method shown in FIG. 1, the smart device 1200 determines the relative angle at this time, which is smaller than the initial value.
- Based on the change of the relative angle, the panning direction of the user wearing the smart headset can be determined. Specifically, when it is determined that the current relative angle is smaller than the initial value, it is determined that the user wearing the smart headset has panned to the right; when it is determined that the current relative angle is greater than the initial value, it is determined that the user wearing the smart headset has panned to the left.
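The left/right decision above is a comparison of the current relative angle against the initial value; the dead-band tolerance in the sketch below is an illustrative addition to ignore jitter, not something the description specifies:

```python
def panning_direction(current_angle_deg, initial_angle_deg, tolerance_deg=1.0):
    # Per the description: smaller than the initial value -> panned right,
    # greater -> panned left; within the tolerance band -> treat as no move.
    if current_angle_deg < initial_angle_deg - tolerance_deg:
        return "right"
    if current_angle_deg > initial_angle_deg + tolerance_deg:
        return "left"
    return "none"
```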
- Example (1): When it is determined that the user wearing the smart headset has panned to the left, the smart device can generate a viewing angle change instruction for adjusting the viewing angle of the panoramic image to move left by a fixed angle (for example, 30 degrees). Based on the viewing angle change instruction, the smart device adjusts the viewing angle at which the panoramic image of the house is displayed in its display interface, moving it left by the fixed angle. Preferably, the distance change between the smart earphone and the smart device is further detected (for example, by ultrasonic ranging), and a backward/forward instruction in the room is generated. The smart device executes the backward/forward instruction to display what the user would see when stepping back or forward in the scene.
- Example (2): When it is determined that the user wearing the smart headset has panned to the left, the smart device can generate a viewing angle change instruction for adjusting the viewing angle of the panoramic image to move left by an angle related to the change amount (that is, the difference between the current relative angle and the relative angle at the initial position).
- the correlation may be a proportional relationship.
- For example, a viewing angle change instruction for adjusting the viewing angle of the panoramic image to move left by an angle of K*A1 (where K is a predetermined coefficient and A1 is the change amount) can be generated.
- Based on the viewing angle change instruction, the smart device adjusts the viewing angle at which the panoramic image of the house is displayed in its display interface, moving it left by an angle of K*A1.
- the distance change between the smart earphone and the smart device is further detected, and a backward/forward instruction in the room is generated.
- the smart device executes the back/forward command to display what the user will see when back/forward in the scene.
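Examples (1) and (2) differ only in whether the rotation step is a fixed angle or proportional (K*A1) to the change amount; a hedged sketch of both variants follows, where the command dictionary format and the coefficient value are hypothetical:

```python
def viewing_angle_command(current_angle, initial_angle, k=1.5, fixed_step=None):
    # A1 is the signed change amount: positive when the user panned left
    # (current relative angle greater than the initial one).
    a1 = current_angle - initial_angle
    if fixed_step is not None:           # Example (1): fixed-angle variant
        step = fixed_step if a1 > 0 else -fixed_step
    else:                                # Example (2): proportional K*A1 variant
        step = k * a1
    direction = "left" if step > 0 else "right"
    return {"action": "rotate_view", "direction": direction, "degrees": abs(step)}

cmd = viewing_angle_command(38.0, 30.0, k=1.5)  # user panned left by 8 degrees
```

The proportional variant makes small head movements produce small view rotations, while the fixed-step variant behaves like a discrete "turn left/turn right" gesture.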
- Example (3): When it is determined that the user wearing the smart headset has panned to the left, the smart device generates an instruction to move left to a predetermined point in the panoramic image.
- the smart device moves to the left to a predetermined point in the panoramic image based on the viewing angle change instruction, so that the position in the panoramic image and the viewing angle of the panoramic image can be changed by switching the preset point.
- In the above examples, a viewing angle change instruction in the same direction as the user's panning is generated. Alternatively, a viewing angle change instruction in the direction opposite to the user's panning can also be generated, so as to provide a reversed-direction user experience.
- The implementation process includes: S01: The user's wearable device transmits a positioning signal in ultrasonic form; the positioning signal contains a unique identifier (MAC address/ID, etc.) of the wearable device and is a signal based on a CDMA (code division multiple access) technology architecture.
- S03 The smart device calculates the relative angle between itself and the wearable device, and uses ultrasonic ranging to calculate the relative distance between itself and the wearable device.
- S04 The smart device remains stationary, and the user's head or body moves.
- When the relative angle becomes larger, it means that the user's head or body has moved to the left, and the screen rotates to the left; when the relative angle becomes smaller, it means that the user's head or body has moved to the right, and the screen rotates to the right. When the relative distance becomes smaller, it means that the user's head or body has moved forward, and the screen shows what the user would see when moving forward in the scene; when the relative distance becomes larger, it means that the user's head or body has moved backward, and the screen shows what the user would see when stepping back in the scene.
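The S04 rules can be collected into a small dispatcher that maps the signed angle and distance changes (current minus initial) to screen actions; the threshold values are illustrative assumptions:

```python
def screen_commands(angle_change_deg, distance_change_m,
                    angle_eps=1.0, dist_eps=0.05):
    # Angle larger -> head/body moved left -> rotate screen left; smaller ->
    # rotate right. Distance smaller -> moved forward; larger -> step back.
    commands = []
    if angle_change_deg > angle_eps:
        commands.append("rotate_left")
    elif angle_change_deg < -angle_eps:
        commands.append("rotate_right")
    if distance_change_m < -dist_eps:
        commands.append("step_forward")
    elif distance_change_m > dist_eps:
        commands.append("step_back")
    return commands
```

Angle and distance are handled independently, so a diagonal movement (for example, forward and to the left) naturally yields two commands in one update.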
- S05: If the user needs to maintain the viewing angle of a scene, the user issues a lock-viewing-angle instruction. The displaying smart device obtains the lock-viewing-angle instruction and, according to it, locks the viewing screen at the current viewing angle. Movement of the user's head or body then no longer drives a change of the scene's viewing angle, so the content at that viewing angle is maintained and the user can move while viewing without affecting the displayed screen. After the smart device obtains an unlock instruction, the viewing angle is no longer locked, and the displaying smart device again changes the viewing angle of the image according to the user's movement.
- Xiao Ming is using VR house viewing software to visit a certain house, and his current location is at the entrance door.
- Xiao Ming wears smart headphones and uses his smartphone to view the room.
- The specific process includes: S01: The smart headset transmits a positioning signal in ultrasonic form; the positioning signal contains a unique identifier (MAC address/ID, etc.) of the smart headset and is a signal based on a CDMA (code division multiple access) technology architecture.
- S02 The smartphone detects the positioning signal, parses the identifier from the detected positioning signal, and confirms that the detected positioning signal originates from the same sound source based on the identifier.
- S03 Calculate the relative angle between the smartphone and the smart headset, and use ultrasonic ranging to calculate the relative distance between the smartphone and the smart headset.
- S04: With the smartphone remaining stationary, if Xiao Ming pans to the right (that is, the relative angle between the smartphone and the smart headset becomes smaller), the screen rotates to the right to display the scene on the left side of the entrance hall; if Xiao Ming pans to the left (that is, the relative angle between the smart headset and the smartphone becomes larger), the screen turns to the left to display the scene on the right side of the entrance hall. If Xiao Ming leans forward and approaches the mobile phone (that is, the relative distance between the smart earphone and the smartphone becomes smaller), it means moving forward, entering the interior of the house from the entrance door; if Xiao Ming leans back (that is, the relative distance between the smart headset and the smartphone becomes larger), it means moving backward in the scene.
- S05 If Xiao Ming needs to maintain the viewing angle of the balcony, or wants a comfortable posture, he can say "lock the screen".
- The smartphone receives Xiao Ming's voice information, determines that the voice information corresponds to locking the viewing angle, and generates a lock-viewing-angle instruction.
- Based on the lock-viewing-angle instruction, the balcony image viewed from the current viewing angle is locked, so that Xiao Ming can move while viewing the balcony without affecting the displayed screen.
- When Xiao Ming wants to view other places, he can say "unlock the screen"; the smartphone receives Xiao Ming's voice information and, after determining that the voice information corresponds to unlocking the viewing angle, generates an unlock instruction.
- Based on the unlock instruction, the view of the balcony at the current viewing angle is no longer locked, and the viewing angle of the house changes with the movement of Xiao Ming's head or body.
- FIG. 37 is a structural diagram of a virtual house viewing device of the present invention.
- The device includes: an instruction generation module, configured to generate a viewing angle change instruction when it is determined that the relative angle between the smart device and the wearable device worn by the user has changed, wherein the relative angle is determined based on the respective detection operations, performed by the first sound detection module and the second sound detection module included in the smart device, on a sound signal emitted by the wearable device; and a viewing angle adjustment module, configured to adjust, based on the viewing angle change instruction, the viewing angle at which the panoramic image of the house is displayed in the display interface of the smart device.
- The instruction generation module is configured to: enable the first sound detection module in the smart device to detect a first sound signal that travels directly from the wearable device to the first sound detection module, and enable the second sound detection module in the smart device to detect a second sound signal that travels directly from the wearable device to the second sound detection module, wherein the first sound signal and the second sound signal are emitted simultaneously by the wearable device; determine the time difference between the reception time of the first sound signal and the reception time of the second sound signal; and determine the relative angle based on the distance between the first sound detection module and the second sound detection module and the time difference.
- The instruction generation module is configured to perform at least one of the following: when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move left by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move left to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move right by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move right to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move right by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move right to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move left by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move left to a predetermined point in the panoramic image.
- The present invention also provides a smart device, comprising: a first sound detection module; a second sound detection module; a control module, configured to generate a viewing angle change instruction when it is determined that the relative angle between the smart device and the wearable device worn by the user has changed, wherein the relative angle is determined based on the respective detection operations, performed by the first sound detection module and the second sound detection module included in the smart device, on a sound signal emitted by the wearable device; and a viewing angle adjustment module, configured to adjust, based on the viewing angle change instruction, the viewing angle at which the panoramic image of the house is displayed in the display interface of the smart device.
- The smart device includes: a smartphone, a tablet computer, and the like.
- Embodiments of the present invention further provide a computer-readable storage medium on which a computer program is stored.
Claims (47)
- A method for generating a control instruction, characterized by comprising: when it is determined that the relative angle between a wearable device worn by a user and a smart device has changed, generating a control instruction, wherein the relative angle is determined based on respective detection operations, performed by a first sound detection module and a second sound detection module included in the wearable device, on a sound signal emitted by the smart device; and sending the control instruction to the smart device, so that the control instruction is executed by the smart device.
- The method for generating a control instruction according to claim 1, characterized in that the wearable device is a head-worn wearable device adapted to be worn on the head, the head-worn wearable device comprising smart earphones or smart glasses.
- The method for generating a control instruction according to claim 1, characterized in that the control instruction comprises at least one of the following: an instruction for switching a picture; an instruction for switching an article; an instruction for switching a video; an instruction for switching audio; an instruction for switching mail; an instruction for switching a viewing angle; an instruction for switching an interface.
- An apparatus for generating a control instruction, characterized by comprising: a generation module, configured to generate a control instruction when it is determined that the relative angle between a wearable device worn by a user and a smart device has changed, wherein the relative angle is determined based on respective detection operations, performed by a first sound detection module and a second sound detection module included in the wearable device, on a sound signal emitted by the smart device; and a sending module, configured to send the control instruction to the smart device, so that the control instruction is executed by the smart device.
- A method for generating a control instruction, characterized by comprising: when it is determined that the relative angle between a smart device and a wearable device worn by a user has changed, generating a control instruction, wherein the relative angle is determined based on respective detection operations, performed by a first sound detection module and a second sound detection module included in the smart device, on a sound signal emitted by the wearable device; and in response to the control instruction, executing an operation corresponding to the control instruction in the smart device.
- The method for generating a control instruction according to claim 5, characterized in that the wearable device is a head-worn wearable device adapted to be worn on the head, the head-worn wearable device comprising smart earphones or smart glasses.
- The method for generating a control instruction according to claim 5, characterized in that the control instruction comprises at least one of the following: an instruction for switching a picture; an instruction for switching an article; an instruction for switching a video; an instruction for switching audio; an instruction for switching mail; an instruction for switching a viewing angle; an instruction for switching an interface.
- An apparatus for generating a control instruction, characterized by comprising: a generation module, configured to generate a control instruction when it is determined that the relative angle between a smart device and a wearable device worn by a user has changed, wherein the relative angle is determined based on respective detection operations, performed by a first sound detection module and a second sound detection module included in the smart device, on a sound signal emitted by the wearable device; and an execution module, configured to execute, in response to the control instruction, an operation corresponding to the control instruction in the smart device.
- A method for switching songs, characterized by comprising: when it is determined that the relative angle between a wearable device worn by a user and a smart device has changed, generating a song switching instruction, wherein the relative angle is determined based on respective detection operations, performed by a first sound detection module and a second sound detection module included in the wearable device, on a sound signal emitted by the smart device; and sending the song switching instruction to the smart device, so that the smart device performs a song switching operation in response to the song switching instruction.
- The method for switching songs according to claim 9, characterized in that determining the relative angle comprises: the first sound detection module in the wearable device detecting a first sound signal that travels directly from the smart device to the first sound detection module, and the second sound detection module in the wearable device detecting a second sound signal that travels directly from the smart device to the second sound detection module, wherein the first sound signal and the second sound signal are emitted simultaneously by the smart device; determining the time difference between the reception time of the first sound signal and the reception time of the second sound signal; and determining the relative angle based on the distance between the first sound detection module and the second sound detection module and the time difference.
- The method for switching songs according to claim 9, characterized in that generating the song switching instruction comprises at least one of the following: when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for switching to the song preceding the currently playing song; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for switching to the song following the currently playing song; when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for switching to the song following the currently playing song; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for switching to the song preceding the currently playing song.
- The method for switching songs according to any one of claims 9-12, characterized in that the wearable device comprises smart earphones, a smart watch, a smart bracelet, smart glasses, or a smart anklet.
- An apparatus for switching songs, characterized by comprising: an instruction generation module, configured to generate a song switching instruction when it is determined that the relative angle between a wearable device worn by a user and a smart device has changed, wherein the relative angle is determined based on respective detection operations, performed by a first sound detection module and a second sound detection module included in the wearable device, on a sound signal emitted by the smart device; and an instruction sending module, configured to send the song switching instruction to the smart device, so that the smart device performs a song switching operation in response to the song switching instruction.
- The apparatus for switching songs according to claim 14, characterized in that the instruction generation module is configured to: enable the first sound detection module in the wearable device to detect a first sound signal that travels directly from the smart device to the first sound detection module, and enable the second sound detection module in the wearable device to detect a second sound signal that travels directly from the smart device to the second sound detection module, wherein the first sound signal and the second sound signal are emitted simultaneously by the smart device; determine the time difference between the reception time of the first sound signal and the reception time of the second sound signal; and determine the relative angle based on the distance between the first sound detection module and the second sound detection module and the time difference.
- The apparatus for switching songs according to claim 14, characterized in that the instruction generation module is configured to perform at least one of the following: when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for switching to the song preceding the currently playing song; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for switching to the song following the currently playing song; when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for switching to the song following the currently playing song; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for switching to the song preceding the currently playing song.
- A wearable device, characterized by comprising: a first sound detection module; a second sound detection module; a control module, configured to generate a song switching instruction when it is determined that the relative angle between the wearable device worn by a user and a smart device has changed, wherein the relative angle is determined based on respective detection operations, performed by the first sound detection module and the second sound detection module included in the wearable device, on a sound signal emitted by the smart device; and a communication module, configured to send the song switching instruction to the smart device, so that the smart device performs a song switching operation in response to the song switching instruction.
- A method for switching songs, characterized by comprising: when it is determined that the relative angle between a smart device and a wearable device worn by a user has changed, generating a song switching instruction, wherein the relative angle is determined based on respective detection operations, performed by a first sound detection module and a second sound detection module included in the smart device, on a sound signal emitted by the wearable device; and in response to the song switching instruction, performing a song switching operation in the smart device.
- The method for switching songs according to claim 19, characterized in that determining the relative angle comprises: the first sound detection module in the smart device detecting a first sound signal that travels directly from the wearable device to the first sound detection module, and the second sound detection module in the smart device detecting a second sound signal that travels directly from the wearable device to the second sound detection module, wherein the first sound signal and the second sound signal are emitted simultaneously by the wearable device; determining the time difference between the reception time of the first sound signal and the reception time of the second sound signal; and determining the relative angle based on the distance between the first sound detection module and the second sound detection module and the time difference.
- The method for switching songs according to claim 19, characterized in that generating the song switching instruction comprises at least one of the following: when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for switching to the song preceding the currently playing song; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for switching to the song following the currently playing song; when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for switching to the song following the currently playing song; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for switching to the song preceding the currently playing song.
- The method for switching songs according to any one of claims 19-22, characterized in that the wearable device comprises smart earphones, a smart watch, a smart bracelet, smart glasses, or a smart anklet.
- An apparatus for switching songs, characterized by comprising: an instruction generation module, configured to generate a song switching instruction when it is determined that the relative angle between a smart device and a wearable device worn by a user has changed, wherein the relative angle is determined based on respective detection operations, performed by a first sound detection module and a second sound detection module included in the smart device, on a sound signal emitted by the wearable device; and a song switching module, configured to perform a song switching operation in the smart device in response to the song switching instruction.
- The apparatus for switching songs according to claim 24, characterized in that the instruction generation module is configured to: enable the first sound detection module in the smart device to detect a first sound signal that travels directly from the wearable device to the first sound detection module, and enable the second sound detection module in the smart device to detect a second sound signal that travels directly from the wearable device to the second sound detection module, wherein the first sound signal and the second sound signal are emitted simultaneously by the wearable device; determine the time difference between the reception time of the first sound signal and the reception time of the second sound signal; and determine the relative angle based on the distance between the first sound detection module and the second sound detection module and the time difference.
- The apparatus for switching songs according to claim 24, characterized in that the instruction generation module is configured to perform at least one of the following: when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for switching to the song preceding the currently playing song; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for switching to the song following the currently playing song; when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for switching to the song following the currently playing song; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for switching to the song preceding the currently playing song.
- A smart device, characterized by comprising: a first sound detection module; a second sound detection module; a control module, configured to generate a song switching instruction when it is determined that the relative angle between the smart device and a wearable device worn by a user has changed, wherein the relative angle is determined based on respective detection operations, performed by the first sound detection module and the second sound detection module included in the smart device, on a sound signal emitted by the wearable device; and a song switching module, configured to perform a song switching operation in the smart device in response to the song switching instruction.
- A virtual house viewing method, characterized by comprising: when it is determined that the relative angle between a wearable device worn by a user and a smart device has changed, generating a viewing angle change instruction, wherein the relative angle is determined based on respective detection operations, performed by a first sound detection module and a second sound detection module included in the wearable device, on a sound signal emitted by the smart device; and sending the viewing angle change instruction to the smart device, so that the smart device, based on the viewing angle change instruction, adjusts the viewing angle at which the panoramic image of the house is displayed in the display interface of the smart device.
- The virtual house viewing method according to claim 29, characterized in that determining the relative angle comprises: the first sound detection module in the wearable device detecting a first sound signal that travels directly from the smart device to the first sound detection module, and the second sound detection module in the wearable device detecting a second sound signal that travels directly from the smart device to the second sound detection module, wherein the first sound signal and the second sound signal are emitted simultaneously by the smart device; determining the time difference between the reception time of the first sound signal and the reception time of the second sound signal; and determining the relative angle based on the distance between the first sound detection module and the second sound detection module and the time difference.
- The virtual house viewing method according to claim 30, characterized in that generating the viewing angle change instruction comprises at least one of the following: when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move left by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move left to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move right by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move right to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move right by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move right to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move left by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move left to a predetermined point in the panoramic image.
- A virtual house viewing apparatus, characterized by comprising: an instruction generation module, configured to generate a viewing angle change instruction when it is determined that the relative angle between a wearable device worn by a user and a smart device has changed, wherein the relative angle is determined based on respective detection operations, performed by a first sound detection module and a second sound detection module included in the wearable device, on a sound signal emitted by the smart device; and an instruction sending module, configured to send the viewing angle change instruction to the smart device, so that the smart device, based on the viewing angle change instruction, adjusts the viewing angle at which the panoramic image of the house is displayed in the display interface of the smart device.
- The virtual house viewing apparatus according to claim 33, characterized in that the instruction generation module is configured to: enable the first sound detection module in the wearable device to detect a first sound signal that travels directly from the smart device to the first sound detection module, and enable the second sound detection module in the wearable device to detect a second sound signal that travels directly from the smart device to the second sound detection module, wherein the first sound signal and the second sound signal are emitted simultaneously by the smart device; determine the time difference between the reception time of the first sound signal and the reception time of the second sound signal; and determine the relative angle based on the distance between the first sound detection module and the second sound detection module and the time difference.
- The virtual house viewing apparatus according to claim 33, characterized in that the instruction generation module is configured to perform at least one of the following: when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move left by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move left to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move right by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move right to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move right by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move right to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move left by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move left to a predetermined point in the panoramic image.
- A wearable device, characterized by comprising: a first sound detection module; a second sound detection module; a control module, configured to generate a viewing angle change instruction when it is determined that the relative angle between the wearable device and a smart device has changed, wherein the relative angle is determined based on respective detection operations of the first sound detection module and the second sound detection module on a sound signal emitted by the smart device; and a communication module, configured to send the viewing angle change instruction to the smart device, so that the smart device, based on the viewing angle change instruction, adjusts the viewing angle at which the panoramic image of the house is displayed in the display interface of the smart device.
- A virtual house viewing method, characterized by comprising: when it is determined that the relative angle between a smart device and a wearable device worn by a user has changed, generating a viewing angle change instruction, wherein the relative angle is determined based on respective detection operations, performed by a first sound detection module and a second sound detection module included in the smart device, on a sound signal emitted by the wearable device; and based on the viewing angle change instruction, adjusting the viewing angle at which the panoramic image of the house is displayed in the display interface of the smart device.
- The virtual house viewing method according to claim 38, characterized in that determining the relative angle comprises: the first sound detection module in the smart device detecting a first sound signal that travels directly from the wearable device to the first sound detection module, and the second sound detection module in the smart device detecting a second sound signal that travels directly from the wearable device to the second sound detection module, wherein the first sound signal and the second sound signal are emitted simultaneously by the wearable device; determining the time difference between the reception time of the first sound signal and the reception time of the second sound signal; and determining the relative angle based on the distance between the first sound detection module and the second sound detection module and the time difference.
- The virtual house viewing method according to claim 38, characterized in that generating the viewing angle change instruction comprises at least one of the following: when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move left by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move left to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move right by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move right to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move right by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move right to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move left by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move left to a predetermined point in the panoramic image.
- A virtual house viewing apparatus, characterized by comprising: an instruction generation module, configured to generate a viewing angle change instruction when it is determined that the relative angle between a smart device and a wearable device worn by a user has changed, wherein the relative angle is determined based on respective detection operations, performed by a first sound detection module and a second sound detection module included in the smart device, on a sound signal emitted by the wearable device; and a viewing angle adjustment module, configured to adjust, based on the viewing angle change instruction, the viewing angle at which the panoramic image of the house is displayed in the display interface of the smart device.
- The virtual house viewing apparatus according to claim 42, characterized in that the instruction generation module is configured to: enable the first sound detection module in the smart device to detect a first sound signal that travels directly from the wearable device to the first sound detection module, and enable the second sound detection module in the smart device to detect a second sound signal that travels directly from the wearable device to the second sound detection module, wherein the first sound signal and the second sound signal are emitted simultaneously by the wearable device; determine the time difference between the reception time of the first sound signal and the reception time of the second sound signal; and determine the relative angle based on the distance between the first sound detection module and the second sound detection module and the time difference.
- The virtual house viewing apparatus according to claim 42, characterized in that the instruction generation module is configured to perform at least one of the following: when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move left by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move left to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move right by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move right to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the left relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move right by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move right to a predetermined point in the panoramic image; when the change corresponds to the wearable device moving to the right relative to the smart device, generating an instruction for adjusting the viewing angle of the panoramic image to move left by a fixed angle or by an angle related to the change amount of the change, or generating an instruction to move left to a predetermined point in the panoramic image.
- A smart device, characterized by comprising: a first sound detection module; a second sound detection module; a control module, configured to generate a viewing angle change instruction when it is determined that the relative angle between the smart device and a wearable device worn by a user has changed, wherein the relative angle is determined based on respective detection operations, performed by the first sound detection module and the second sound detection module included in the smart device, on a sound signal emitted by the wearable device; and a viewing angle adjustment module, configured to adjust, based on the viewing angle change instruction, the viewing angle at which the panoramic image of the house is displayed in the display interface of the smart device.
- A computer-readable storage medium, characterized in that computer-readable instructions are stored therein, the computer-readable instructions being used to execute the method for generating a control instruction according to any one of claims 1-3, or the method for generating a control instruction according to any one of claims 5-7, or the method for switching songs according to any one of claims 9-13, or the method for switching songs according to any one of claims 19-23, or the virtual house viewing method according to any one of claims 29-32, or the virtual house viewing method according to any one of claims 38-41.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| US18/139,909 US20230333850A1 (en) | 2020-08-25 | 2023-04-26 | Method and apparatus for generating control instruction |
Applications Claiming Priority (6)
| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202010863741 | 2020-08-25 | | |
| CN202010863742 | 2020-08-25 | | |
| CN202010961268 | 2020-09-14 | | |
| CN202010961270 | 2020-09-14 | | |
| CN202011155758.4 | 2020-10-26 | | |
| CN202011155758.4A CN112256130A (zh) | 2020-08-25 | 2020-10-26 | Method and apparatus for generating control instruction |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| US18/139,909 Continuation US20230333850A1 (en) | Method and apparatus for generating control instruction | 2020-08-25 | 2023-04-26 |
Publications (1)
| Publication Number | Publication Date |
| --- | --- |
| WO2022088435A1 (zh) | 2022-05-05 |
Family
ID=74262058
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| PCT/CN2020/137438 WO2022088435A1 (zh) | Method and apparatus for generating control instruction | 2020-08-25 | 2020-12-18 |
Country Status (3)
| Country | Link |
| --- | --- |
| US (1) | US20230333850A1 (en) |
| CN (1) | CN112256130A (zh) |
| WO (1) | WO2022088435A1 (zh) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| WO2024073297A1 (en) * | 2022-09-30 | 2024-04-04 | Sonos, Inc. | Generative audio playback via wearable playback devices |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104093094A (zh) * | 2014-06-16 | 2014-10-08 | 华南理工大学 | Indoor voice acquisition method and apparatus based on adaptive rotational alignment |
CN106937143A (zh) * | 2015-12-31 | 2017-07-07 | 幸福在线(北京)网络技术有限公司 | Playback control method, apparatus and device for virtual reality video |
CN109239667A (zh) * | 2018-10-26 | 2019-01-18 | 深圳市友杰智新科技有限公司 | Sound source localization method based on a dual-microphone array |
US20190196006A1 (en) * | 2017-12-01 | 2019-06-27 | Electromagnetic Systems, Inc. | Obstacle Position and Extent Measurement By Automotive Radar |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6845338B1 (en) * | 2003-02-25 | 2005-01-18 | Symbol Technologies, Inc. | Telemetric contextually based spatial audio system integrated into a mobile terminal wireless system |
CN101201399B (zh) * | 2007-12-18 | 2012-01-11 | 北京中星微电子有限公司 | Sound source localization method and system |
CN101776982A (zh) * | 2010-01-21 | 2010-07-14 | 中国传媒大学 | Method for controlling a portable device using a digital compass |
CN103763440A (zh) * | 2014-02-19 | 2014-04-30 | 联想(北京)有限公司 | Information processing method, electronic device accessory, and electronic device |
2020
- 2020-10-26 CN CN202011155758.4A patent/CN112256130A/zh not_active Withdrawn
- 2020-12-18 WO PCT/CN2020/137438 patent/WO2022088435A1/zh active Application Filing
2023
- 2023-04-26 US US18/139,909 patent/US20230333850A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104093094A (zh) * | 2014-06-16 | 2014-10-08 | 华南理工大学 | Indoor voice acquisition method and apparatus based on adaptive rotational alignment |
CN106937143A (zh) * | 2015-12-31 | 2017-07-07 | 幸福在线(北京)网络技术有限公司 | Playback control method, apparatus and device for virtual reality video |
US20190196006A1 (en) * | 2017-12-01 | 2019-06-27 | Electromagnetic Systems, Inc. | Obstacle Position and Extent Measurement By Automotive Radar |
CN109239667A (zh) * | 2018-10-26 | 2019-01-18 | 深圳市友杰智新科技有限公司 | Sound source localization method based on a dual-microphone array |
Also Published As
Publication number | Publication date |
---|---|
US20230333850A1 (en) | 2023-10-19 |
CN112256130A (zh) | 2021-01-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110337318B (zh) | Virtual and real object recording in a mixed reality device | |
US9632683B2 (en) | Methods, apparatuses and computer program products for manipulating characteristics of audio objects by using directional gestures | |
US11323837B2 (en) | Electronic device displays a graphical representation that plays binaural sound | |
CN109407822B (zh) | Anti-nausea and video streaming techniques for collaborative virtual reality | |
US20140328505A1 (en) | Sound field adaptation based upon user tracking | |
CN109407821B (zh) | Collaborative interaction with virtual reality video | |
JP2017509181A (ja) | Gesture-interactive wearable spatial audio system | |
CN112272817B (zh) | Method and apparatus for providing audio content in immersive reality | |
US20230333850A1 (en) | Method and apparatus for generating control instruction | |
WO2022062531A1 (zh) | Multi-channel audio signal acquisition method, apparatus, and system | |
Cohen et al. | Applications of Audio Augmented Reality: Wearware, Everyware, Anyware, and Awareware | |
Pugliese et al. | ATSI: augmented and tangible sonic interaction | |
Rosca et al. | Mobile interaction with remote worlds: The acoustic periscope |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20959592 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 20959592 Country of ref document: EP Kind code of ref document: A1 |
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.10.2023) |