CN105527862B - Information processing method and first electronic device - Google Patents
- Publication number
- CN105527862B CN105527862B CN201410509682.9A CN201410509682A CN105527862B CN 105527862 B CN105527862 B CN 105527862B CN 201410509682 A CN201410509682 A CN 201410509682A CN 105527862 B CN105527862 B CN 105527862B
- Authority
- CN
- China
- Prior art keywords
- information
- electronic equipment
- sensing
- path
- sound wave
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
- Circuit For Audible Band Transducer (AREA)
Abstract
The invention discloses an information processing method and a first electronic device. The method includes: identifying the paths between a first sensing module and a second sensing module of two sensing modules and the emission source at the user's location as a first localization detection path and a second localization detection path; detecting the sound wave on the first localization detection path to obtain first information; detecting the sound wave on the second localization detection path to obtain second information; locating the emission source at the user's location according to an operation result computed from the first information and the second information, to obtain third information; and parsing the voice command carried in the sound wave and, when a preset rule is met, using the third information to assist the voice command in executing a first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing.
Description
Technical field
The present invention relates to communication technology, and in particular to an information processing method and a first electronic device.
Background art
In the course of realizing the technical solution of the embodiments of the present application, the inventors found at least the following technical problem in the related art:
Existing speech-recognition interaction, because its mechanism is recognition of specific speech content, can only serve single-user, single-device speech-recognition scenarios. For example, suppose a controlled device is a desk lamp with a voice-control function, and the user issues the voice command "turn on the light" or "turn off the light" through the emission source at the user's location; when the recognized speech content is "turn on the light" or "turn off the light", the desk lamp is correspondingly switched on or off. Moreover, even for a single device, recognition can rely only on the speech content: it cannot be assisted by location information of the emission source itself, such as its position or orientation. For example, the emission source emitting toward the left could correspond to "turn on the light", and the emission source emitting toward the right to "turn off the light".
For speech-recognition scenarios with multiple controlled devices, this problem is even harder to solve, and the related art offers no effective solution to it.
Summary of the invention
In view of this, the embodiments of the present invention aim to provide an information processing method and a first electronic device that at least solve the above problems of the prior art.
The technical solution of the embodiments of the present invention is implemented as follows:
An embodiment of the invention discloses an information processing method applied to a first electronic device. The first electronic device includes at least two sensing module groups, the sensing module groups being used to detect a sound wave emitted by the emission source at the user's location; each sensing module group includes two sensing modules. The method includes:
identifying the paths between the first and second sensing modules of the two sensing modules and the emission source at the user's location as a first localization detection path and a second localization detection path;
detecting the sound wave on the first localization detection path to obtain first information;
detecting the sound wave on the second localization detection path to obtain second information;
locating the emission source at the user's location according to an operation result computed from the first information and the second information, to obtain third information;
parsing the voice command carried in the sound wave and, when a preset rule is met, using the third information to assist the voice command in executing a first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing.
Preferably, the first information is the first time taken for the sound wave to reach the first sensing module, as detected on the first localization detection path; the second information is the second time taken for the sound wave to reach the second sensing module, as detected on the second localization detection path.
Preferably, locating the emission source at the user's location according to the operation result computed from the first information and the second information, to obtain the third information, includes:
computing a time difference as the operation result from the first time and the second time;
converting the time difference into an angle value, the angle value characterizing the angle between the first localization detection path and a calibration path that satisfies a preset condition;
obtaining the calibration path from the line connecting the first sensing module and the second sensing module together with the angle value;
determining the first position demarcated by at least two calibration paths as the position of the emission source at the user's location.
Preferably, using the third information to assist the voice command in executing the first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing, includes:
obtaining the first position;
obtaining the position of the at least one second electronic device;
computing a distance difference from the first position and the position of the at least one second electronic device;
selecting, from the at least one second electronic device according to the distance difference, a second electronic device whose distance from the emission source at the user's location meets a threshold, and performing corresponding voice control on the selected second electronic device.
Preferably, the first information is the first intensity with which the sound wave reaches the first sensing module, as detected on the first localization detection path; the second information is the second intensity with which the sound wave reaches the second sensing module, as detected on the second localization detection path.
Preferably, locating the emission source at the user's location according to the operation result computed from the first information and the second information, to obtain the third information, includes:
when the operation result computed from the first intensity and the second intensity is that the first intensity is greater than the second intensity, determining the first direction corresponding to the first localization detection path as the direction of the emission source at the user's location.
Preferably, using the third information to assist the voice command in executing the first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing, includes:
obtaining the first direction;
obtaining the position of the at least one second electronic device;
selecting, from the at least one second electronic device, a second electronic device that lies in the first direction, and performing corresponding voice control on the selected second electronic device.
An embodiment of the invention provides a first electronic device. The first electronic device includes at least two sensing module groups, the sensing module groups being used to detect a sound wave emitted by the emission source at the user's location; each sensing module group includes two sensing modules. The first electronic device further includes:
a detection path determining unit, configured to identify the paths between the first and second sensing modules of the two sensing modules and the emission source at the user's location as a first localization detection path and a second localization detection path;
a first acquisition unit, configured to detect the sound wave on the first localization detection path to obtain first information;
a second acquisition unit, configured to detect the sound wave on the second localization detection path to obtain second information;
a positioning unit, configured to locate the emission source at the user's location according to an operation result computed from the first information and the second information, to obtain third information;
a control processing unit, configured to parse the voice command carried in the sound wave and, when a preset rule is met, use the third information to assist the voice command in executing a first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing.
Preferably, the first information is the first time taken for the sound wave to reach the first sensing module, as detected on the first localization detection path; the second information is the second time taken for the sound wave to reach the second sensing module, as detected on the second localization detection path.
Preferably, the positioning unit includes:
a first operation subunit, configured to compute a time difference as the operation result from the first time and the second time;
a second operation subunit, configured to convert the time difference into an angle value, the angle value characterizing the angle between the first localization detection path and a calibration path that satisfies a preset condition;
a third operation subunit, configured to obtain the calibration path from the line connecting the first sensing module and the second sensing module together with the angle value;
a position locating subunit, configured to determine the first position demarcated by at least two calibration paths as the position of the emission source at the user's location.
Preferably, the control processing unit includes:
a first acquisition subunit, configured to obtain the first position;
a second acquisition subunit, configured to obtain the position of the at least one second electronic device;
a first processing subunit, configured to compute a distance difference from the first position and the position of the at least one second electronic device;
a second processing subunit, configured to select, from the at least one second electronic device according to the distance difference, a second electronic device whose distance from the emission source at the user's location meets a threshold, and to perform corresponding voice control on the selected second electronic device.
Preferably, the first information is the first intensity with which the sound wave reaches the first sensing module, as detected on the first localization detection path; the second information is the second intensity with which the sound wave reaches the second sensing module, as detected on the second localization detection path.
Preferably, the positioning unit includes:
a direction locating subunit, configured to determine, when the operation result computed from the first intensity and the second intensity is that the first intensity is greater than the second intensity, the first direction corresponding to the first localization detection path as the direction of the emission source at the user's location.
Preferably, the control processing unit includes:
a third acquisition subunit, configured to obtain the first direction;
a fourth acquisition subunit, configured to obtain the position of the at least one second electronic device;
a third processing subunit, configured to select, from the at least one second electronic device, a second electronic device that lies in the first direction, and to perform corresponding voice control on the selected second electronic device.
The information processing method of the embodiments of the invention is applied to a first electronic device that includes at least two sensing module groups, the sensing module groups being used to detect a sound wave emitted by the emission source at the user's location; each sensing module group includes two sensing modules. The method includes: identifying the paths between the first and second sensing modules of the two sensing modules and the emission source at the user's location as a first localization detection path and a second localization detection path; detecting the sound wave on the first localization detection path to obtain first information; detecting the sound wave on the second localization detection path to obtain second information; locating the emission source at the user's location according to an operation result computed from the first information and the second information, to obtain third information; and parsing the voice command carried in the sound wave and, when a preset rule is met, using the third information to assist the voice command in executing a first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing. The embodiments of the present invention thereby solve the above problems of the prior art.
Brief description of the drawings
Fig. 1 is a schematic flowchart of method embodiment one of the present invention;
Fig. 2 is a schematic flowchart of method embodiment two of the present invention;
Fig. 3 is a schematic flowchart of method embodiment three of the present invention;
Fig. 4 is a schematic diagram of a sensor array according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of another sensor array according to an embodiment of the present invention;
Fig. 6 is a schematic localization diagram of an application scenario according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of electronic device embodiment one of the present invention.
Specific embodiment
The implementation of the technical solution is described in further detail below with reference to the accompanying drawings.
Embodiment of the method one:
An embodiment of the invention provides an information processing method applied to a first electronic device. The first electronic device includes at least two sensing module groups, the sensing module groups being used to detect a sound wave emitted by the emission source at the user's location; each sensing module group includes two sensing modules. As shown in Fig. 1, the method includes:
Step 101: identify the paths between the first and second sensing modules of the two sensing modules and the emission source at the user's location as a first localization detection path and a second localization detection path;
Step 102: detect the sound wave on the first localization detection path to obtain first information;
Step 103: detect the sound wave on the second localization detection path to obtain second information;
Here, depending on the application scenario, the first information and the second information are of one of two types, such as time or intensity, which are described in detail later; for example, position localization or direction localization is performed by comparing time differences or intensities. Intensity here includes phase intensity, audio intensity, and the like.
Step 104: locate the emission source at the user's location according to an operation result computed from the first information and the second information, to obtain third information;
Here, the third information may be the position and/or the sounding direction of the emission source at the user's location; the position can be obtained from the above time difference, and the direction can be obtained by comparing intensities.
Step 105: parse the voice command carried in the sound wave and, when a preset rule is met, use the third information to assist the voice command in executing a first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing.
Here, the processing may be speech recognition. The first electronic device includes the sensor module composed of at least two sensors, a sensor being one concrete implementation of the sensing module. Relative to the first electronic device, the second electronic device refers to the "controlled device" operated by the voice command issued from the emission source at the user's location, such as a "smart home" device like a television or a refrigerator.
With this embodiment of the invention, the first localization detection path and the second localization detection path are obtained in step 101; the first information and the second information are obtained in steps 102-103; the third information is obtained from the first information and the second information in step 104, so that the third information corresponding to the user's emission source, such as its position and/or direction, can be localized from the sound detected by the sensors; and in step 105 the third information itself, or auxiliary information based on it, assists the voice command in a first processing, thereby realizing voice control of the second electronic device.
In a preferred embodiment of the invention, the first information is the first time taken for the sound wave to reach the first sensing module, as detected on the first localization detection path; the second information is the second time taken for the sound wave to reach the second sensing module, as detected on the second localization detection path.
Embodiment of the method two:
An embodiment of the invention provides an information processing method applied to a first electronic device. The first electronic device includes at least two sensing module groups, the sensing module groups being used to detect a sound wave emitted by the emission source at the user's location; each sensing module group includes two sensing modules. As shown in Fig. 2, the method includes:
Step 201: identify the paths between the first and second sensing modules of the two sensing modules and the emission source at the user's location as a first localization detection path and a second localization detection path;
Step 202: detect the sound wave on the first localization detection path to obtain a first time;
Step 203: detect the sound wave on the second localization detection path to obtain a second time;
Step 204: compute a time difference as the operation result from the first time and the second time;
Step 205: convert the time difference into an angle value, the angle value characterizing the angle between the first localization detection path and a calibration path that satisfies a preset condition;
Here, the calibration path may be the midline (perpendicular bisector) of the line connecting the two sensors;
Step 206: obtain the calibration path from the line connecting the first sensing module and the second sensing module together with the angle value;
Step 207: determine the first position demarcated by at least two calibration paths as the position of the emission source at the user's location;
Here, the first position may be the intersection point of the line segments on which the at least two calibration paths lie.
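As an illustration only (not the patent's implementation), steps 205-207 can be read as: build each calibration path as a line through the midpoint of a sensor pair, rotated by the converted angle value away from the pair's perpendicular bisector, then intersect two such paths. The function names and 2-D coordinates below are hypothetical:

```python
import math

def calibration_line(p1, p2, angle_deg):
    # Line through the midpoint of sensors p1 and p2, rotated by angle_deg
    # away from the perpendicular bisector of the p1-p2 segment.
    mid = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)
    base = math.atan2(p2[1] - p1[1], p2[0] - p1[0])       # baseline angle
    theta = base + math.pi / 2 + math.radians(angle_deg)  # bisector + offset
    return mid, (math.cos(theta), math.sin(theta))

def first_position(line_a, line_b):
    # Intersection of two lines given as (point, direction): the "first
    # position" demarcated by two calibration paths.
    (x1, y1), (dx1, dy1) = line_a
    (x2, y2), (dx2, dy2) = line_b
    det = dx2 * dy1 - dx1 * dy2
    if abs(det) < 1e-12:
        return None  # paths are parallel: no single crosspoint
    t = (dx2 * (y2 - y1) - dy2 * (x2 - x1)) / det
    return (x1 + t * dx1, y1 + t * dy1)
```

With two sensor pairs mounted on perpendicular walls and zero angle offsets, the two midlines cross exactly at the source position.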
Step 208: parse the voice command carried in the sound wave and, when a preset rule is met, use the first position to assist the voice command in executing a first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing.
Here, the processing may be speech recognition. The first electronic device includes the sensor module composed of at least two sensors, a sensor being one concrete implementation of the sensing module. Relative to the first electronic device, the second electronic device refers to the "controlled device" operated by the voice command issued from the emission source at the user's location, such as a "smart home" device like a television or a refrigerator.
In a preferred embodiment of the invention, using the third information to assist the voice command in executing the first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing, includes:
obtaining the first position;
obtaining the position of the at least one second electronic device;
computing a distance difference from the first position and the position of the at least one second electronic device;
selecting, from the at least one second electronic device according to the distance difference, a second electronic device whose distance from the emission source at the user's location meets a threshold, and performing corresponding voice control on the selected second electronic device.
Embodiment of the method three:
An embodiment of the invention provides an information processing method applied to a first electronic device. The first electronic device includes at least two sensing module groups, the sensing module groups being used to detect a sound wave emitted by the emission source at the user's location; each sensing module group includes two sensing modules. As shown in Fig. 3, the method includes:
Step 301: identify the paths between the first and second sensing modules of the two sensing modules and the emission source at the user's location as a first localization detection path and a second localization detection path;
Step 302: detect the sound wave on the first localization detection path to obtain a first intensity;
Step 303: detect the sound wave on the second localization detection path to obtain a second intensity;
Step 304: when the operation result computed from the first intensity and the second intensity is that the first intensity is greater than the second intensity, determine the first direction corresponding to the first localization detection path as the direction of the emission source at the user's location;
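A minimal sketch of the comparison in step 304 (illustrative only; the labels are invented): with intensity readings grouped per detection path or per wall, the strongest reading is taken as the direction of the emission source:

```python
def facing_direction(readings):
    # readings: mapping from a direction label (e.g. a wall of the room or
    # a localization detection path) to the sound intensity received there.
    # The label with the greatest intensity approximates the direction of
    # the emission source at the user's location.
    return max(readings, key=readings.get)
```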
Step 305: parse the voice command carried in the sound wave and, when a preset rule is met, use the first direction to assist the voice command in executing a first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing.
Here, the processing may be speech recognition. The first electronic device includes the sensor module composed of at least two sensors, a sensor being one concrete implementation of the sensing module. Relative to the first electronic device, the second electronic device refers to the "controlled device" operated by the voice command issued from the emission source at the user's location, such as a "smart home" device like a television or a refrigerator.
In a preferred embodiment of the invention, using the third information to assist the voice command in executing the first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing, includes:
obtaining the first direction;
obtaining the position of the at least one second electronic device;
selecting, from the at least one second electronic device, a second electronic device that lies in the first direction, and performing corresponding voice control on the selected second electronic device.
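A sketch, under assumed names and a hypothetical tolerance, of choosing the second electronic device that lies in the first direction: compute each device's bearing from the located user and keep the one closest to the facing direction:

```python
import math

def select_by_direction(user_pos, facing_deg, devices, tolerance_deg=20.0):
    # Choose the controlled device whose bearing from the user's position
    # deviates least from the user's facing direction, within a tolerance.
    best, best_err = None, tolerance_deg
    for name, (x, y) in devices.items():
        bearing = math.degrees(math.atan2(y - user_pos[1], x - user_pos[0]))
        err = abs((bearing - facing_deg + 180.0) % 360.0 - 180.0)  # wrap
        if err <= best_err:
            best, best_err = name, err
    return best
```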
The embodiments of the present invention are described below by taking a practical application scenario as an example:
This scenario is a user voice-instruction recognition scheme based on stereo sound-sensing localization. It builds on microphone arrays and on a bionic model of human binaural localization: from the arrival-time differences and intensity differences of the sound received by different sound sensors, the user's position relative to the smart home devices (the controlled devices operated by sound instructions) and/or the user's sounding direction can be judged. This position and/or sounding direction can assist semantic recognition, and semantic recognition can in turn assist sound-source localization, providing enhanced recognition capability and effect.
For example, when the user issues the same voice toward different directions, it can be judged as an instruction to one of two devices lying in those directions, or interpreted as different instructions for the same device; this simplifies the user's voice operations. Specifically, from the same voice issued toward different directions, the system can judge which of two devices in different directions is being instructed: it determines which controlled device the user is close to, or toward which the signal is strong, and thereby decides automatically which controlled device the voice targets, so that the user no longer has to walk up to a particular device to give voice input. Likewise, different judgments of direction can define different voice commands for the same device, again without additional voice input from the user.
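The close-or-strong judgment above can be sketched as a distance test against the located user position; the room layout, device names, and threshold below are hypothetical, not the patent's implementation:

```python
import math

def select_by_distance(user_pos, devices, threshold):
    # Keep the controlled devices whose distance from the located emission
    # source meets the threshold, then return the nearest of them.
    def dist(pos):
        return math.hypot(pos[0] - user_pos[0], pos[1] - user_pos[1])
    candidates = {name: dist(pos) for name, pos in devices.items()
                  if dist(pos) <= threshold}
    return min(candidates, key=candidates.get) if candidates else None
```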
Here, the microphone array is a set of multiple sound sensors mounted on an indoor wall. They can be installed horizontally at fixed intervals in a row, as shown in Fig. 4; horizontal and vertical rows can also intersect, as shown in Fig. 5; and they can further be arranged at fixed intervals into a rectangular plane.
Here, judging the above sounding position and the user's pronunciation orientation requires several factors: a 3-dimensional view of the room; the coordinates of each sound sensor on that view; the coordinates of the controlled devices on that view; and the time difference or phase difference with which the sound of the user's instruction reaches each sensor.
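The factors listed above can be gathered into one configuration structure; a sketch with invented names and coordinates (a 2-D plan stands in for the 3-dimensional room view):

```python
# Hypothetical room map: sensor coordinates and controlled-device
# coordinates on a shared plan of the room.
ROOM_MAP = {
    "sensors": {"s1": (0.0, 0.0), "s2": (0.5, 0.0), "s3": (1.0, 0.0)},
    "devices": {"desk_lamp": (2.0, 3.0), "tv": (4.0, 1.0)},
}

def sensor_pairs(room):
    # Each pair of sensors yields one time-difference (or phase-difference)
    # measurement usable for localization.
    names = sorted(room["sensors"])
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
```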
When the sensor positions are fixed and known, the time difference between the sounds received by any two sensors, as shown in Fig. 6, determines the angular relationship between the sound source and the midline of the line connecting that sensor pair (the midlines are shown by the chain lines in Fig. 6); for example, the pair formed by sensor 1 and sensor 2 corresponds to angle a, and sensor 2 and sensor 3 form another pair. Combining the angular relationships between the sound source and two or more sensor pairs (i.e. two or more straight lines, shown by the chain lines in Fig. 6) yields the spatial position of the sound source (the intersection of the two chain lines in Fig. 6 marks the position of the sound source).
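Under a far-field assumption (source much farther away than the sensor spacing), the angle between the source direction and a pair's midline follows directly from the time difference; an illustrative sketch, with the speed of sound as an assumed constant:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

def angle_from_tdoa(delta_t, spacing):
    # delta_t: arrival-time difference between the two sensors, in seconds;
    # spacing: distance between the two sensors, in metres.
    # Returns the angle (in degrees) between the source direction and the
    # midline (perpendicular bisector) of the sensor pair.
    ratio = SPEED_OF_SOUND * delta_t / spacing
    ratio = max(-1.0, min(1.0, ratio))  # clamp numerical overshoot
    return math.degrees(math.asin(ratio))
```

A zero time difference means the source lies on the midline itself; larger differences swing the estimate toward the sensor baseline.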
As the number and density of the sensors increase, error and noise can be further eliminated, and the location information becomes more accurate.
Here, as to judging the pronunciation orientation: because of the sound-shielding effect of the human skull and facial muscles, the sound produced by the vibration of the vocal cords differs in intensity in different directions, i.e. sound is directive. From the differences in sound intensity received by the multiple sensors placed on the four walls of the room, the direction of the sound source can be judged.
In addition, as to the principle of stereo sound localization, positioning can also be realized by techniques such as the binaural effect and the binaural time difference, which are not repeated here.
It should be noted that the following description of the electronic device is similar to the method description above and shares the beneficial effects of the method, which are likewise not repeated. For technical details not disclosed in the electronic device embodiments of the present invention, please refer to the description of the method embodiments of the present invention.
Electronic equipment embodiment one:
An embodiment of the invention provides a first electronic device. The first electronic device includes at least two sensing module groups, the sensing module groups being used to detect a sound wave emitted by the emission source at the user's location; each sensing module group includes two sensing modules. As shown in Fig. 7, the first electronic device further includes:
a detection path determining unit, configured to identify the paths between the first and second sensing modules of the two sensing modules and the emission source at the user's location as a first localization detection path and a second localization detection path;
a first acquisition unit, configured to detect the sound wave on the first localization detection path to obtain first information;
a second acquisition unit, configured to detect the sound wave on the second localization detection path to obtain second information;
a positioning unit, configured to locate the emission source at the user's location according to an operation result computed from the first information and the second information, to obtain third information;
a control processing unit, configured to parse the voice command carried in the sound wave and, when a preset rule is met, use the third information to assist the voice command in executing a first processing, so that corresponding voice control can be performed on at least one second electronic device according to the result of the first processing.
In a preferred embodiment of the present invention, the first information is a first time taken by the sound wave to reach the first sensing module, as detected on the first sensing detection and localization path; the second information is a second time taken by the sound wave to reach the second sensing module, as detected on the second sensing detection and localization path.
In a preferred embodiment of the present invention, the positioning unit includes:
a first operation subelement, configured to obtain a time difference as the operation result of the first time and the second time;
a second operation subelement, configured to convert the time difference into an angle value, the angle value characterizing the angle between the first sensing detection and localization path and a calibration path that meets a preset condition;
a third operation subelement, configured to obtain the calibration path from the angle value and the line connecting the positions of the first sensing module and the second sensing module; and
a position locator subelement, configured to determine a first position demarcated by at least two calibration paths as the position of the emission source at the user's location.
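The time-difference positioning described above can be illustrated with a minimal sketch (not the patented implementation; the speed of sound, module spacing, and the 2D coordinate frame are assumptions for illustration): the time difference is converted to an angle value via the far-field relation cos(theta) = c * dt / d, and the first position is taken where two calibration paths intersect.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C (assumed)

def tdoa_angle(t1, t2, module_spacing):
    """Convert the arrival-time difference between the two sensing
    modules into the angle between the calibration path and the
    module baseline (far-field approximation)."""
    dt = t1 - t2
    cos_theta = SPEED_OF_SOUND * dt / module_spacing
    cos_theta = max(-1.0, min(1.0, cos_theta))  # clamp numeric noise
    return math.acos(cos_theta)

def intersect(p1, theta1, p2, theta2):
    """Intersect two calibration paths, each given as a ray origin and
    an angle in a shared 2D frame, to obtain the demarcated position.
    Returns None when the rays are (nearly) parallel."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None  # no unique fix from parallel paths
    s = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + s * d1[0], p1[1] + s * d1[1])
```

With two sensing module sets, each set yields one calibration path, and the intersection of the two paths gives the first position used by the position locator subelement.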
In a preferred embodiment of the present invention, the control processing unit includes:
a first obtaining subelement, configured to obtain the first position;
a second obtaining subelement, configured to obtain the position of the at least one second electronic equipment;
a first processing subelement, configured to compute a distance difference from the first position and the position of the at least one second electronic equipment; and
a second processing subelement, configured to select, from the at least one second electronic equipment according to the distance difference, a second electronic equipment whose distance from the emission source at the user's location meets a threshold, and to perform corresponding voice control on the selected second electronic equipment.
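The distance-based selection performed by these subelements can be sketched as follows (device names, coordinates, and the "nearest device within a threshold" rule are illustrative assumptions, not mandated by this disclosure):

```python
import math

def select_nearest_device(first_position, devices, threshold):
    """From candidate second electronic equipments, select the one whose
    distance to the located emission source meets the threshold.
    `devices` maps a device name to its (x, y) position."""
    best_name, best_dist = None, float("inf")
    for name, pos in devices.items():
        dist = math.hypot(pos[0] - first_position[0],
                          pos[1] - first_position[1])
        if dist <= threshold and dist < best_dist:
            best_name, best_dist = name, dist
    return best_name  # None when no device meets the threshold
```

For example, with the user located at the origin, a device one metre away would be selected over one seven metres away when the threshold is two metres.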
In a preferred embodiment of the present invention, the first information is a first intensity with which the sound wave reaches the first sensing module, as detected on the first sensing detection and localization path; the second information is a second intensity with which the sound wave reaches the second sensing module, as detected on the second sensing detection and localization path.
In a preferred embodiment of the present invention, the positioning unit includes:
a direction locator subelement, configured to, when the operation result obtained from the first intensity and the second intensity indicates that the first intensity is greater than the second intensity, determine a first direction corresponding to the first sensing detection and localization path as the direction of the emission source at the user's location.
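A minimal sketch of this intensity comparison (the direction labels are illustrative; a real implementation would compare averaged or filtered intensities):

```python
def locate_direction(first_intensity, second_intensity,
                     first_direction, second_direction):
    """Return the direction of the sensing detection and localization
    path on which the stronger signal was detected, as the direction
    of the emission source at the user's location."""
    if first_intensity > second_intensity:
        return first_direction
    return second_direction
```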
In a preferred embodiment of the present invention, the control processing unit includes:
a third obtaining subelement, configured to obtain the first direction;
a fourth obtaining subelement, configured to obtain the position of the at least one second electronic equipment; and
a third processing subelement, configured to select, from the at least one second electronic equipment, a second electronic equipment located in the first direction, and to perform corresponding voice control on the selected second electronic equipment.
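One way to sketch the direction-based selection (the unit-vector representation of the first direction and the angular tolerance are assumptions for illustration):

```python
import math

def select_device_in_direction(origin, first_direction, devices,
                               half_angle=math.radians(15)):
    """Choose the second electronic equipment that lies along the first
    direction (given as a unit vector from the first electronic
    equipment), within an angular tolerance; ties go to the device
    most closely aligned with the direction."""
    best_name, best_cos = None, math.cos(half_angle)
    for name, pos in devices.items():
        vx, vy = pos[0] - origin[0], pos[1] - origin[1]
        norm = math.hypot(vx, vy)
        if norm == 0:
            continue  # device co-located with the first equipment
        cos_a = (vx * first_direction[0] + vy * first_direction[1]) / norm
        if cos_a >= best_cos:
            best_name, best_cos = name, cos_a
    return best_name  # None when no device lies in the first direction
```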
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of the units is only a division by logical function, and other division manners are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, each unit may serve separately as a unit, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium, and when the program is executed, the steps of the above method embodiments are performed. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Alternatively, if the above integrated unit of the present invention is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
The above description is merely a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement readily conceivable by those familiar with the technical field within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (14)
1. An information processing method, applied to a first electronic equipment, the first electronic equipment including at least two sensing module sets configured to detect a sound wave emitted by an emission source at a user's location, each sensing module set including two sensing modules, the method comprising:
identifying the paths between the first sensing module and the second sensing module of the two sensing modules and the emission source at the user's location as a first sensing detection and localization path and a second sensing detection and localization path;
detecting the sound wave on the first sensing detection and localization path to obtain first information;
detecting the sound wave on the second sensing detection and localization path to obtain second information;
positioning the emission source at the user's location according to an operation result obtained by operating on the first information and the second information, to obtain third information, the third information being the position or the direction of the emission source at the user's location; and
parsing a voice command carried in the sound wave and, when a preset rule is met, using the third information to assist the voice command in executing first processing, so that corresponding voice control can be performed on at least one second electronic equipment according to a result of the first processing.
2. The method according to claim 1, wherein the first information is a first time taken by the sound wave to reach the first sensing module, as detected on the first sensing detection and localization path; and
the second information is a second time taken by the sound wave to reach the second sensing module, as detected on the second sensing detection and localization path.
3. The method according to claim 2, wherein positioning the emission source at the user's location according to the operation result obtained by operating on the first information and the second information, to obtain the third information, comprises:
obtaining a time difference as the operation result of the first time and the second time;
converting the time difference into an angle value, the angle value characterizing the angle between the first sensing detection and localization path and a calibration path that meets a preset condition;
obtaining the calibration path from the angle value and the line connecting the positions of the first sensing module and the second sensing module; and
determining a first position demarcated by at least two calibration paths as the position of the emission source at the user's location.
4. The method according to claim 3, wherein using the third information to assist the voice command in executing the first processing, so that corresponding voice control can be performed on the at least one second electronic equipment according to the result of the first processing, comprises:
obtaining the first position;
obtaining the position of the at least one second electronic equipment;
computing a distance difference from the first position and the position of the at least one second electronic equipment; and
selecting, from the at least one second electronic equipment according to the distance difference, a second electronic equipment whose distance from the emission source at the user's location meets a threshold, and performing corresponding voice control on the selected second electronic equipment.
5. The method according to claim 1, wherein the first information is a first intensity with which the sound wave reaches the first sensing module, as detected on the first sensing detection and localization path; and
the second information is a second intensity with which the sound wave reaches the second sensing module, as detected on the second sensing detection and localization path.
6. The method according to claim 5, wherein positioning the emission source at the user's location according to the operation result obtained by operating on the first information and the second information, to obtain the third information, comprises:
when the operation result obtained from the first intensity and the second intensity indicates that the first intensity is greater than the second intensity, determining a first direction corresponding to the first sensing detection and localization path as the direction of the emission source at the user's location.
7. The method according to claim 6, wherein using the third information to assist the voice command in executing the first processing, so that corresponding voice control can be performed on the at least one second electronic equipment according to the result of the first processing, comprises:
obtaining the first direction;
obtaining the position of the at least one second electronic equipment; and
selecting, from the at least one second electronic equipment, a second electronic equipment located in the first direction, and performing corresponding voice control on the selected second electronic equipment.
8. A first electronic equipment, including at least two sensing module sets configured to detect a sound wave emitted by an emission source at a user's location, each sensing module set including two sensing modules, the first electronic equipment further comprising:
a detection path determining unit, configured to identify the paths between the first sensing module and the second sensing module of the two sensing modules and the emission source at the user's location as a first sensing detection and localization path and a second sensing detection and localization path;
a first acquisition unit, configured to detect the sound wave on the first sensing detection and localization path to obtain first information;
a second acquisition unit, configured to detect the sound wave on the second sensing detection and localization path to obtain second information;
a positioning unit, configured to position the emission source at the user's location according to an operation result obtained by operating on the first information and the second information, to obtain third information, the third information being the position or the direction of the emission source at the user's location; and
a control processing unit, configured to parse a voice command carried in the sound wave and, when a preset rule is met, use the third information to assist the voice command in executing first processing, so that corresponding voice control can be performed on at least one second electronic equipment according to a result of the first processing.
9. The first electronic equipment according to claim 8, wherein the first information is a first time taken by the sound wave to reach the first sensing module, as detected on the first sensing detection and localization path; and
the second information is a second time taken by the sound wave to reach the second sensing module, as detected on the second sensing detection and localization path.
10. The first electronic equipment according to claim 9, wherein the positioning unit includes:
a first operation subelement, configured to obtain a time difference as the operation result of the first time and the second time;
a second operation subelement, configured to convert the time difference into an angle value, the angle value characterizing the angle between the first sensing detection and localization path and a calibration path that meets a preset condition;
a third operation subelement, configured to obtain the calibration path from the angle value and the line connecting the positions of the first sensing module and the second sensing module; and
a position locator subelement, configured to determine a first position demarcated by at least two calibration paths as the position of the emission source at the user's location.
11. The first electronic equipment according to claim 10, wherein the control processing unit includes:
a first obtaining subelement, configured to obtain the first position;
a second obtaining subelement, configured to obtain the position of the at least one second electronic equipment;
a first processing subelement, configured to compute a distance difference from the first position and the position of the at least one second electronic equipment; and
a second processing subelement, configured to select, from the at least one second electronic equipment according to the distance difference, a second electronic equipment whose distance from the emission source at the user's location meets a threshold, and to perform corresponding voice control on the selected second electronic equipment.
12. The first electronic equipment according to claim 8, wherein the first information is a first intensity with which the sound wave reaches the first sensing module, as detected on the first sensing detection and localization path; and
the second information is a second intensity with which the sound wave reaches the second sensing module, as detected on the second sensing detection and localization path.
13. The first electronic equipment according to claim 12, wherein the positioning unit includes:
a direction locator subelement, configured to, when the operation result obtained from the first intensity and the second intensity indicates that the first intensity is greater than the second intensity, determine a first direction corresponding to the first sensing detection and localization path as the direction of the emission source at the user's location.
14. The first electronic equipment according to claim 13, wherein the control processing unit includes:
a third obtaining subelement, configured to obtain the first direction;
a fourth obtaining subelement, configured to obtain the position of the at least one second electronic equipment; and
a third processing subelement, configured to select, from the at least one second electronic equipment, a second electronic equipment located in the first direction, and to perform corresponding voice control on the selected second electronic equipment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410509682.9A CN105527862B (en) | 2014-09-28 | 2014-09-28 | A kind of information processing method and the first electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105527862A CN105527862A (en) | 2016-04-27 |
CN105527862B true CN105527862B (en) | 2019-01-15 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5901232A (en) * | 1996-09-03 | 1999-05-04 | Gibbs; John Ho | Sound system that determines the position of an external sound source and points a directional microphone/speaker towards it |
CN101510425A (en) * | 2008-02-15 | 2009-08-19 | 株式会社东芝 | Voice recognition apparatus and method for performing voice recognition |
CN103529726A (en) * | 2013-09-16 | 2014-01-22 | 四川虹微技术有限公司 | Intelligent switch with voice recognition function |
CN103871229A (en) * | 2014-03-26 | 2014-06-18 | 珠海迈科电子科技有限公司 | Remote controller adopting acoustic locating and control method of remote controller |