CN106131754B - Grouping method and device for multiple devices - Google Patents

Grouping method and device for multiple devices

Info

Publication number
CN106131754B
CN106131754B CN201610515064.4A
Authority
CN
China
Prior art keywords
microphone
loudspeaker
sound wave data
volume intensity
grouping
Prior art date
Application number
CN201610515064.4A
Other languages
Chinese (zh)
Other versions
CN106131754A (en)
Inventor
霍伟明
Original Assignee
广东美的制冷设备有限公司
美的集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东美的制冷设备有限公司, 美的集团股份有限公司
Priority to CN201610515064.4A
Publication of CN106131754A
Application granted
Publication of CN106131754B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 Stereophonic arrangements
    • H04R5/02 Spatial or constructional arrangements of loudspeakers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 Stereophonic arrangements
    • H04R5/027 Spatial or constructional arrangements of microphones, e.g. in dummy heads

Abstract

The invention discloses a grouping method and device for multiple devices. The method comprises the following steps: obtaining first sound wave data played by a first loudspeaker at a first volume intensity; obtaining second sound wave data received by at least one first microphone; and, if the second sound wave data matches the first sound wave data, determining that the first loudspeaker and the at least one first microphone belong to a first region group. The relationship in which one loudspeaker corresponds to multiple microphones is thereby obtained, automatic networking among multiple microphones and multiple loudspeakers can be achieved, configuration difficulty for the user is reduced, and the user experience is improved.

Description

Grouping method and device for multiple devices

Technical field

The present invention relates to the field of intelligent control technology, and more particularly to a grouping method and device for multiple devices.

Background art

In a smart home, when multiple loudspeakers and multiple microphones need to be arranged, a wireless connection avoids wiring but also introduces a problem: because the loudspeakers and microphones are movable, configuration becomes very difficult for the user, which substantially degrades the user experience.

Summary of the invention

The present invention aims to solve at least one of the technical problems in the related art.

To this end, a first object of the present invention is to propose a grouping method for multiple devices. By sending and receiving sound wave data, the method obtains the relationship in which one loudspeaker corresponds to multiple microphones, so that automatic networking among multiple microphones and multiple loudspeakers can be achieved, configuration difficulty for the user is reduced, and the user experience is improved.

A second object of the present invention is to propose a grouping device for multiple devices.

To achieve the above objects, an embodiment of the first aspect of the present invention proposes a grouping method for multiple devices, comprising the following steps: obtaining first sound wave data played by a first loudspeaker at a first volume intensity; obtaining second sound wave data received by at least one first microphone; and, if the second sound wave data matches the first sound wave data, determining that the first loudspeaker and the at least one first microphone belong to a first region group.

According to the grouping method for multiple devices of the embodiments of the present invention, first sound wave data played by a first loudspeaker at a first volume intensity is obtained, and second sound wave data received by at least one first microphone is obtained. If the second sound wave data matches the first sound wave data, it is determined that the first loudspeaker and the at least one first microphone belong to a first region group. The relationship in which one loudspeaker corresponds to multiple microphones is thereby obtained, automatic networking among multiple microphones and multiple loudspeakers can be achieved, configuration difficulty for the user is reduced, and the user experience is improved.

According to an embodiment of the present invention, if the second sound wave data does not match the first sound wave data, it is determined that the first loudspeaker and the at least one microphone belong to different region groups.

According to an embodiment of the present invention, a second loudspeaker and at least one second microphone belong to a second region group, and when the overlap ratio between the at least one first microphone and the at least one second microphone reaches a predetermined threshold, the grouping method further comprises: obtaining third sound wave data played respectively by the first loudspeaker and the second loudspeaker at a second volume intensity, wherein the second volume intensity is greater than the first volume intensity; obtaining fourth sound wave data received by the overlapping microphones; and, if the fourth sound wave data matches the third sound wave data, determining that the first loudspeaker, the second loudspeaker and the overlapping microphones belong to a third region group.

According to an embodiment of the present invention, after it is determined that the first loudspeaker, the second loudspeaker and the overlapping microphones belong to the third region group, the method further comprises: obtaining fifth sound wave data played simultaneously by the first loudspeaker and the second loudspeaker at a third volume intensity, wherein the third volume intensity is lower than the first volume intensity; obtaining sixth sound wave data received by the overlapping microphones; and, if the sixth sound wave data matches the fifth sound wave data, determining that the third region group passes verification.

According to an embodiment of the present invention, after it is determined that the first loudspeaker, the second loudspeaker and the overlapping microphones belong to the third region group, the method further comprises: obtaining seventh sound wave data played in turn by the first loudspeaker and the second loudspeaker at a fourth volume intensity, wherein the fourth volume intensity is lower than the third volume intensity; and, when eighth sound wave data received by one microphone in the third region group is determined to match the seventh sound wave data, determining that the one microphone is the microphone closest to the first loudspeaker and the second loudspeaker.

According to an embodiment of the present invention, the grouping method further comprises: when any one microphone receives ninth sound wave data, determining the region group to which the microphone belongs; determining whether a human detection sensor that has detected a human body exists in that region group; and, if a human detection sensor that has detected a human body exists, determining that the human detection sensor and the microphone belong to the same region group.

To achieve the above objects, an embodiment of the second aspect of the present invention proposes a grouping device for multiple devices, comprising: a first acquisition module, configured to obtain first sound wave data played by a first loudspeaker at a first volume intensity; a second acquisition module, configured to obtain second sound wave data received by at least one first microphone; and a matching module, configured to determine, when the second sound wave data matches the first sound wave data, that the first loudspeaker and the at least one first microphone belong to a first region group.

According to the grouping device for multiple devices of the embodiments of the present invention, the first acquisition module obtains first sound wave data played by a first loudspeaker at a first volume intensity, the second acquisition module obtains second sound wave data received by at least one first microphone, and the matching module determines, when the second sound wave data matches the first sound wave data, that the first loudspeaker and the at least one first microphone belong to a first region group. The relationship in which one loudspeaker corresponds to multiple microphones is thereby obtained, automatic networking among multiple microphones and multiple loudspeakers can be achieved, configuration difficulty for the user is reduced, and the user experience is improved.

According to an embodiment of the present invention, the matching module is further configured to determine, when the second sound wave data does not match the first sound wave data, that the first loudspeaker and the at least one microphone belong to different region groups.

According to an embodiment of the present invention, a second loudspeaker and at least one second microphone belong to a second region group, and when the overlap ratio between the at least one first microphone and the at least one second microphone reaches a predetermined threshold, the first acquisition module is further configured to obtain third sound wave data played respectively by the first loudspeaker and the second loudspeaker at a second volume intensity, wherein the second volume intensity is greater than the first volume intensity; the second acquisition module is further configured to obtain fourth sound wave data received by the overlapping microphones; and the matching module is further configured to determine, when the fourth sound wave data matches the third sound wave data, that the first loudspeaker, the second loudspeaker and the overlapping microphones belong to a third region group.

According to an embodiment of the present invention, the first acquisition module is further configured to obtain fifth sound wave data played simultaneously by the first loudspeaker and the second loudspeaker at a third volume intensity, wherein the third volume intensity is lower than the first volume intensity; the second acquisition module is further configured to obtain sixth sound wave data received by the overlapping microphones; and the grouping device further comprises a verification module configured to determine, when the sixth sound wave data matches the fifth sound wave data, that the third region group passes verification.

According to an embodiment of the present invention, the first acquisition module is further configured to obtain seventh sound wave data played in turn by the first loudspeaker and the second loudspeaker at a fourth volume intensity, wherein the fourth volume intensity is lower than the third volume intensity; and the grouping device further comprises a first judgment module configured to determine, when eighth sound wave data received by one microphone in the third region group is determined to match the seventh sound wave data, that the one microphone is the microphone closest to the first loudspeaker and the second loudspeaker.

According to an embodiment of the present invention, the grouping device further comprises: a determination module, configured to determine, when any one microphone receives ninth sound wave data, the region group to which the microphone belongs; and a second judgment module, configured to determine whether a human detection sensor that has detected a human body exists in that region group and, when a human detection sensor that has detected a human body is determined to exist, to determine that the human detection sensor and the microphone belong to the same region group.

Description of the drawings

Fig. 1 is a flow chart of a grouping method for multiple devices according to a first embodiment of the present invention;

Fig. 2 is a layout diagram of microphones, loudspeakers and human detection sensors according to a specific example of the present invention;

Fig. 3 is a schematic diagram of the overlap between the microphones corresponding to different loudspeakers according to an embodiment of the present invention;

Fig. 4 is a flow chart of a grouping method for multiple devices according to a second embodiment of the present invention;

Fig. 5 is a flow chart of a grouping method for multiple devices according to a third embodiment of the present invention;

Fig. 6 is a flow chart of a grouping method for multiple devices according to a fourth embodiment of the present invention;

Fig. 7 is a flow chart of a grouping method for multiple devices according to a fifth embodiment of the present invention;

Fig. 8 is a flow chart of a grouping method for multiple devices according to a sixth embodiment of the present invention;

Fig. 9 is a block diagram of a grouping device for multiple devices according to an embodiment of the present invention; and

Fig. 10 is a block diagram of a grouping device for multiple devices according to another embodiment of the present invention.

Detailed description of the embodiments

Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the present invention and are not to be construed as limiting the present invention.

The grouping method and device for multiple devices proposed according to the embodiments of the present invention are described below with reference to the accompanying drawings.

Fig. 1 is a flow chart of a grouping method for multiple devices according to an embodiment of the present invention. As shown in Fig. 1, the grouping method for multiple devices includes the following steps:

S110: obtain first sound wave data played by a first loudspeaker at a first volume intensity.

S120: obtain second sound wave data received by at least one first microphone.

S130: if the second sound wave data matches the first sound wave data, determine that the first loudspeaker and the at least one first microphone belong to a first region group.

According to an embodiment of the present invention, if the second sound wave data does not match the first sound wave data, it is determined that the first loudspeaker and the at least one microphone belong to different region groups.

Specifically, as shown in Fig. 2, when region grouping is performed in the user's home, all room doors may first be closed, after which a one-key control puts the system into an automatic configuration mode. The control center first controls the first loudspeaker in the user's home to emit first sound wave data at a first volume intensity V1, then controls every microphone in the user's home to receive that sound wave data, and each microphone uploads the sound wave data it received (the second sound wave data) to the control center. The control center determines whether the sound wave data received by each microphone matches the first sound wave data emitted by the first loudspeaker, that is, whether the sound waveform received by the microphone is identical to the sound waveform emitted by the first loudspeaker. If so, the microphone is determined to belong to the same region as the first loudspeaker, that is, to the first region group; if not, the microphone is determined to belong to a region group different from that of the first loudspeaker. The relationship between the first loudspeaker and the multiple microphones is thereby obtained. The control center then turns off the first loudspeaker and determines the relationship between each remaining loudspeaker and each microphone one by one in the same manner, so that the relationship between every loudspeaker and every microphone in the room is obtained. Automatic networking among multiple microphones and multiple loudspeakers is thus achieved, configuration difficulty for the user is reduced, and the user experience is improved.
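
The probe-and-match pass described above can be illustrated with a minimal Python sketch. The embodiment does not prescribe any API or matching criterion, so the loudspeaker and microphone handles (spk.play, spk.stop, mic.record), the 16 kHz sample rate and the correlation threshold used here are assumptions for illustration only:

    import numpy as np

    def waveforms_match(reference, captured, threshold=0.6):
        """Peak of the normalized cross-correlation, used as an assumed stand-in
        for the embodiment's check that the waveforms are identical."""
        ref = (reference - reference.mean()) / (reference.std() + 1e-9)
        cap = (captured - captured.mean()) / (captured.std() + 1e-9)
        corr = np.correlate(cap, ref, mode="valid") / len(ref)
        return corr.max() >= threshold

    def group_by_probe_tone(speakers, microphones, probe, v1, rate=16000):
        """One pass of S110-S130: each loudspeaker plays the probe at volume V1
        and every microphone whose capture matches joins its region group."""
        groups = {}
        for spk in speakers:
            spk.play(probe, volume=v1)                        # first sound wave data
            members = set()
            for mic in microphones:
                captured = mic.record(seconds=len(probe) / rate)  # second sound wave data
                if waveforms_match(probe, captured):
                    members.add(mic)
            spk.stop()
            groups[spk.id] = members
        return groups

Running group_by_probe_tone over all loudspeakers yields the one-loudspeaker-to-many-microphones relationship that the later steps refine.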

According to the grouping method for multiple devices of the embodiments of the present invention, first sound wave data played by a first loudspeaker at a first volume intensity is obtained, and second sound wave data received by at least one first microphone is obtained. If the second sound wave data matches the first sound wave data, it is determined that the first loudspeaker and the at least one first microphone belong to a first region group. The relationship in which one loudspeaker corresponds to multiple microphones is thereby obtained, automatic networking among multiple microphones and multiple loudspeakers can be achieved, configuration difficulty for the user is reduced, and the user experience is improved.

According to an embodiment of the present invention, as shown in Fig. 3 and Fig. 4, a second loudspeaker and at least one second microphone belong to a second region group. When the overlap ratio between the at least one first microphone and the at least one second microphone reaches a predetermined threshold, the grouping method further includes:

S140: obtain third sound wave data played respectively by the first loudspeaker and the second loudspeaker at a second volume intensity, wherein the second volume intensity is greater than the first volume intensity.

S150: obtain fourth sound wave data received by the overlapping microphones.

S160: if the fourth sound wave data matches the third sound wave data, determine that the first loudspeaker, the second loudspeaker and the overlapping microphones belong to a third region group.

Specifically, as shown in Fig. 3, when the overlap ratio of the microphones corresponding to two different loudspeakers (for example the first loudspeaker and the second loudspeaker) reaches a predetermined threshold n%, the two loudspeakers are presumed likely to be in the same region. The control center then turns up the volume intensity of the two loudspeakers and checks whether the overlapping microphones can receive the correct sound wave data. That is, the control center controls the first loudspeaker and the second loudspeaker to respectively emit third sound wave data at a second volume intensity V2 and controls the overlapping microphones to receive that sound wave data, and the microphones upload the received sound wave data (the fourth sound wave data) to the control center. The control center determines whether the sound wave data received by the overlapping microphones matches the third sound wave data; if so, the overlapping microphones are determined to belong to the same region as the first loudspeaker and the second loudspeaker, that is, to a third region group.
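
Under the same assumptions, this merge step can be sketched as follows. The exact definition of the overlap ratio and the threshold value are not fixed by the embodiment, so the ones used here are illustrative; the groups are sets of microphone objects as returned by the earlier sketch, and waveforms_match is the helper defined there:

    def overlap_ratio(mics_a, mics_b):
        """Assumed definition: shared microphones over the smaller group."""
        if not mics_a or not mics_b:
            return 0.0
        return len(mics_a & mics_b) / min(len(mics_a), len(mics_b))

    def merge_if_same_region(spk_a, group_a, spk_b, group_b, probe, v2, threshold=0.5):
        """S140-S160: if two loudspeakers share enough microphones, each plays the
        probe again at the louder volume V2; the groups are merged only when every
        shared microphone still captures a matching waveform."""
        if overlap_ratio(group_a, group_b) < threshold:
            return None
        shared = group_a & group_b
        for spk in (spk_a, spk_b):
            spk.play(probe, volume=v2)                   # third sound wave data, V2 > V1
            heard_all = all(waveforms_match(probe, mic.record(seconds=1.0))
                            for mic in shared)           # fourth sound wave data
            spk.stop()
            if not heard_all:
                return None
        return group_a | group_b                         # third region group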

Further, in order to ensure the accuracy of region grouping, in an embodiment of the present invention, as shown in Fig. 5, after it is determined that the first loudspeaker, the second loudspeaker and the overlapping microphones belong to the third region group, the method further includes:

S170: obtain fifth sound wave data played simultaneously by the first loudspeaker and the second loudspeaker at a third volume intensity, wherein the third volume intensity is lower than the first volume intensity.

S180: obtain sixth sound wave data received by the overlapping microphones.

S190: if the sixth sound wave data matches the fifth sound wave data, determine that the third region group passes verification.

That is, after the third region group is formed, the control center also controls the first loudspeaker and the second loudspeaker to simultaneously emit sound wave data at a mild volume V3 and then checks whether the overlapping microphones receive it correctly. If so, the third region grouping is judged successful, that is, the third region group passes verification. Automatic networking among multiple microphones and multiple loudspeakers is thereby achieved, configuration difficulty for the user is reduced, and the user experience is improved.
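
A corresponding verification sketch, again reusing waveforms_match and the assumed device handles; the embodiment only requires that both loudspeakers play simultaneously at the quieter volume V3 and that the overlapping microphones still hear a matching waveform:

    def verify_merged_group(spk_a, spk_b, shared_mics, probe, v3):
        """S170-S190: both loudspeakers play the probe at volume V3 (< V1) at the
        same time; the third region group passes verification only if every
        overlapping microphone captures a matching waveform."""
        spk_a.play(probe, volume=v3)
        spk_b.play(probe, volume=v3)                     # fifth sound wave data
        verified = all(waveforms_match(probe, mic.record(seconds=1.0))
                       for mic in shared_mics)           # sixth sound wave data
        spk_a.stop()
        spk_b.stop()
        return verified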

According to the grouping method for multiple devices of the embodiments of the present invention, after the correspondence between each loudspeaker and each microphone is obtained, region grouping and confirmation are further performed on the microphones and loudspeakers according to the overlap ratio between the microphones corresponding to the individual loudspeakers. Automatic networking among multiple microphones and multiple loudspeakers is thereby further achieved, configuration difficulty for the user is reduced, and the user experience is improved.

In addition, after region grouping succeeds, in order to provide the user with a more personalized voice service later, it can further be determined which microphone is closest to which loudspeaker. To this end, in an embodiment of the present invention, as shown in Fig. 6, after it is determined that the first loudspeaker, the second loudspeaker and the overlapping microphones belong to the third region group, the method further includes:

S210: obtain seventh sound wave data played in turn by the first loudspeaker and the second loudspeaker at a fourth volume intensity, wherein the fourth volume intensity is lower than the third volume intensity.

S220: when eighth sound wave data received by one microphone in the third region group is determined to match the seventh sound wave data, determine that the one microphone is the microphone closest to the first loudspeaker and the second loudspeaker.

That is, the control center controls each loudspeaker in the same region group to emit sound wave data at a faint volume V4 in turn, controls every microphone in that region group to receive the sound wave data, and determines whether the sound wave data received by each microphone is correct. If the sound wave data received by a microphone is determined to be correct, that microphone is close to the loudspeaker that emitted the sound; otherwise, it is farther away. The microphone closest to each loudspeaker can thus be identified effectively, which lays the foundation for a personalized voice service.
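
The closest-microphone test can be sketched in the same style; which volume counts as faint (V4) is left open by the embodiment, so it is simply passed in, and waveforms_match is again the helper from the first sketch:

    def closest_microphones(speakers_in_group, mics_in_group, probe, v4):
        """S210-S220: each loudspeaker in the region group plays the probe at the
        faint volume V4 in turn; only the microphones near enough to still capture
        a matching waveform (eighth sound wave data) are kept as its closest ones."""
        closest = {}
        for spk in speakers_in_group:
            spk.play(probe, volume=v4)                   # seventh sound wave data
            closest[spk.id] = [mic for mic in mics_in_group
                               if waveforms_match(probe, mic.record(seconds=1.0))]
            spk.stop()
        return closest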

In addition, as shown in Fig. 7, the grouping method for multiple devices further includes:

S230: when any one microphone receives ninth sound wave data, determine the region group to which the microphone belongs.

S240: determine whether a human detection sensor that has detected a human body exists in that region group.

S250: if a human detection sensor that has detected a human body exists, determine that the human detection sensor and the microphone belong to the same region group.

That is, the human detection sensors can be matched to groups automatically later, whenever a person interacts with the control center. Specifically, when a person has a voice interaction with the control center, the control center detects which of the multiple human detection sensors sense the presence of the person, and the human detection sensors that detect the person are provisionally assumed to belong to the same group. As shown in Fig. 8, the grouping method for multiple devices may include the following steps:

S801: a person has a voice-instruction interaction with the control center.

S802: the control center determines that the person's voice came from microphone x, and that microphone x is in region A.

S803: at the same time, the control center determines which human detection sensors have detected the human body.

S804: determine whether the first human detection sensor has detected the human body. If so, perform step S805; if not, perform step S806.

S805: this sensor may be in the same group as microphone x.

S806: determine whether the second human detection sensor has detected the human body. If so, perform step S807; if not, perform step S808.

S807: this sensor may be in the same group as microphone x.

S808: determine whether the Nth human detection sensor has detected the human body. If so, perform step S809; if not, the judgment ends.

S809: this sensor may be in the same group as microphone x.

When the same human detection sensor is found to be active during voice interactions occurring at several different times, the sensor and the microphone are considered to belong to the same region group, so that misjudgment is avoided.
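
The self-learning assignment of the human detection sensors in Fig. 8 amounts to counting co-occurrences across interactions. A minimal sketch follows, in which the confirmation count of three interactions is an assumption rather than a value given by the embodiment:

    from collections import Counter, defaultdict

    class SensorGrouper:
        """Records, for each region group, how often each human detection sensor
        fires while a voice interaction is attributed to a microphone of that
        group; a sensor is confirmed only after enough co-occurrences."""

        def __init__(self, min_hits=3):
            self.min_hits = min_hits
            self.hits = defaultdict(Counter)   # region id -> sensor id -> count

        def on_interaction(self, region_id, active_sensor_ids):
            # Sensors that detected the person during this interaction (S804-S809).
            for sensor_id in active_sensor_ids:
                self.hits[region_id][sensor_id] += 1

        def confirmed_sensors(self, region_id):
            # Sensors seen in at least min_hits separate interactions join the group.
            return [s for s, n in self.hits[region_id].items() if n >= self.min_hits]

Requiring several separate interactions before confirming a sensor mirrors the repeated-detection rule above and avoids misjudging a sensor that fired only once.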

According to the grouping method for multiple devices of the embodiments of the present invention, the microphones and loudspeakers are first grouped by sending and receiving sound waves, and the human detection sensors are then grouped by self-learning through later interactions with people. The grouping result records which microphones, loudspeakers and human detection sensors belong to the same group and which microphones are relatively close to a given loudspeaker. With multiple loudspeakers, multiple microphones and multiple human detection sensors, the devices are therefore grouped into regions automatically by self-learning, automatic networking among the multiple microphones, multiple loudspeakers and multiple human detection sensors is achieved, configuration difficulty for the user is reduced, and the user experience is improved.

Fig. 9 is a block diagram of a grouping device for multiple devices according to an embodiment of the present invention. As shown in Fig. 9, the grouping device for multiple devices includes: a first acquisition module 10, a second acquisition module 20 and a matching module 30.

The first acquisition module 10 is configured to obtain first sound wave data played by a first loudspeaker at a first volume intensity, the second acquisition module 20 is configured to obtain second sound wave data received by at least one first microphone, and the matching module 30 is configured to determine, when the second sound wave data matches the first sound wave data, that the first loudspeaker and the at least one first microphone belong to a first region group.
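
The module split of Fig. 9 can be mirrored as three small Python classes. The interfaces used here (last_played, last_recorded, the injected matcher) are assumptions made for illustration, since the patent only defines what each module is responsible for:

    class FirstAcquisitionModule:
        """Obtains the sound wave data a loudspeaker played at a given volume."""
        def get_played(self, speaker):
            return speaker.last_played()

    class SecondAcquisitionModule:
        """Obtains the sound wave data each microphone received."""
        def get_received(self, microphones):
            return {mic.id: mic.last_recorded() for mic in microphones}

    class MatchingModule:
        """Puts a microphone into the loudspeaker's region group when its capture
        matches the played waveform, according to the supplied matcher."""
        def assign_region(self, played, received, match):
            return {mic_id for mic_id, wav in received.items() if match(played, wav)}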

According to an embodiment of the present invention, the matching module 30 is further configured to determine, when the second sound wave data does not match the first sound wave data, that the first loudspeaker and the at least one microphone belong to different region groups.

Specifically, as shown in Fig. 2, when region grouping is performed in the user's home, all room doors may first be closed, after which a one-key control puts the grouping device for multiple devices into an automatic configuration mode. The first loudspeaker in the user's home can first be controlled to emit first sound wave data at a first volume intensity V1; the first acquisition module 10 obtains this sound wave data and sends it to the matching module 30. Every microphone in the user's home is then controlled to receive the sound wave data, and each microphone uploads the sound wave data it received (the second sound wave data) to the second acquisition module 20 of the grouping device, which sends the obtained second sound wave data to the matching module 30. The matching module 30 determines whether the sound wave data received by each microphone matches the first sound wave data emitted by the first loudspeaker, that is, whether the sound waveform received by the microphone is identical to the sound waveform emitted by the first loudspeaker. If so, the matching module 30 determines that the microphone and the first loudspeaker belong to the same region, that is, to the first region group; if not, the matching module 30 determines that the microphone belongs to a region group different from that of the first loudspeaker. The relationship between the first loudspeaker and the multiple microphones is thereby obtained. The first loudspeaker is then turned off, and the relationship between each remaining loudspeaker and each microphone is determined one by one, so that the relationship between every loudspeaker and every microphone in the room is obtained. Automatic networking among multiple microphones and multiple loudspeakers is thus achieved, configuration difficulty for the user is reduced, and the user experience is improved.

According to the grouping device for multiple devices of the embodiments of the present invention, the first acquisition module obtains first sound wave data played by a first loudspeaker at a first volume intensity, the second acquisition module obtains second sound wave data received by at least one first microphone, and the matching module determines, when the second sound wave data matches the first sound wave data, that the first loudspeaker and the at least one first microphone belong to a first region group. The relationship in which one loudspeaker corresponds to multiple microphones is thereby obtained, automatic networking among multiple microphones and multiple loudspeakers can be achieved, configuration difficulty for the user is reduced, and the user experience is improved.

According to an embodiment of the present invention, a second loudspeaker and at least one second microphone belong to a second region group. When the overlap ratio between the at least one first microphone and the at least one second microphone reaches a predetermined threshold, the first acquisition module 10 is further configured to obtain third sound wave data played respectively by the first loudspeaker and the second loudspeaker at a second volume intensity, wherein the second volume intensity is greater than the first volume intensity; the second acquisition module 20 is further configured to obtain fourth sound wave data received by the overlapping microphones; and the matching module 30 is further configured to determine, when the fourth sound wave data matches the third sound wave data, that the first loudspeaker, the second loudspeaker and the overlapping microphones belong to a third region group.

Specifically, as shown in Fig. 3, when the overlap ratio of the microphones corresponding to two different loudspeakers (for example the first loudspeaker and the second loudspeaker) reaches a predetermined threshold n%, the two loudspeakers are presumed likely to be in the same region. The volume intensity of the two loudspeakers is then turned up, and whether the overlapping microphones can receive the correct sound wave data is checked. That is, the first loudspeaker and the second loudspeaker are controlled to respectively emit third sound wave data at a second volume intensity V2; the first acquisition module 10 obtains this sound wave data, the overlapping microphones are controlled to receive the sound wave data, and the microphones upload the received sound wave data (the fourth sound wave data) to the second acquisition module 20. The matching module 30 determines whether the sound wave data received by the overlapping microphones matches the third sound wave data; if so, the matching module 30 determines that the overlapping microphones and the first and second loudspeakers belong to the same region, that is, to a third region group.

Further, in order to ensure the accuracy of region grouping, in an embodiment of the present invention, as shown in Fig. 10, the first acquisition module 10 is further configured to obtain fifth sound wave data played simultaneously by the first loudspeaker and the second loudspeaker at a third volume intensity, wherein the third volume intensity is lower than the first volume intensity; the second acquisition module 20 is further configured to obtain sixth sound wave data received by the overlapping microphones; and the grouping device further includes a verification module 40 configured to determine, when the sixth sound wave data matches the fifth sound wave data, that the third region group passes verification.

That is, after the third region group is formed, the first loudspeaker and the second loudspeaker are also controlled to simultaneously emit sound wave data at a mild volume V3, and the verification module 40 then checks whether the overlapping microphones have received it correctly. If so, the verification module 40 determines that the third region grouping is successful, that is, the third region group passes verification. Automatic networking among multiple microphones and multiple loudspeakers is thereby achieved, configuration difficulty for the user is reduced, and the user experience is improved.

According to the grouping device for multiple devices of the embodiments of the present invention, after the correspondence between each loudspeaker and each microphone is obtained, region grouping and confirmation are further performed on the microphones and loudspeakers according to the overlap ratio between the microphones corresponding to the individual loudspeakers. Automatic networking among multiple microphones and multiple loudspeakers is thereby further achieved, configuration difficulty for the user is reduced, and the user experience is improved.

In addition, after region grouping succeeds, in order to provide the user with a more personalized voice service later, it can further be determined which microphone is closest to which loudspeaker. To this end, in an embodiment of the present invention, as shown in Fig. 10, the first acquisition module 10 is further configured to obtain seventh sound wave data played in turn by the first loudspeaker and the second loudspeaker at a fourth volume intensity, wherein the fourth volume intensity is lower than the third volume intensity; and the grouping device further includes a first judgment module 50 configured to determine, when eighth sound wave data received by one microphone in the third region group is determined to match the seventh sound wave data, that the one microphone is the microphone closest to the first loudspeaker and the second loudspeaker.

That is, each loudspeaker in the same region group is controlled to emit sound wave data at a faint volume V4 in turn, every microphone in that region group is controlled to receive the sound wave data, and the first judgment module 50 determines whether the sound wave data received by each microphone is correct. If the first judgment module 50 determines that the sound wave data received by a microphone is correct, that microphone is close to the loudspeaker that emitted the sound; otherwise, it is farther away. The microphone closest to each loudspeaker can thus be identified effectively, which lays the foundation for a personalized voice service.

In addition, in an embodiment of the present invention, the grouping device further includes a determination module and a second judgment module (not specifically shown in the figures). The determination module is configured to determine, when any one microphone receives ninth sound wave data, the region group to which the microphone belongs, and the second judgment module is configured to determine whether a human detection sensor that has detected a human body exists in that region group and, when such a sensor exists, to determine that the human detection sensor and the microphone belong to the same region group.

That is, the human detection sensors can be matched to groups automatically when people interact with the grouping device. Specifically, as shown in Fig. 8, when a person has a voice interaction with the grouping device, the second judgment module of the grouping device detects which of the multiple human detection sensors sense the presence of the person, and the human detection sensors that detect the person are provisionally assumed to belong to the same group. When the same human detection sensor is found to be active during voice interactions occurring at several different times, the sensor and the microphone are considered to belong to the same region group, so that misjudgment is avoided.

According to the grouping device for multiple devices of the embodiments of the present invention, the microphones and loudspeakers are first grouped by sending and receiving sound waves, and the human detection sensors are then grouped by self-learning through later interactions with people. The grouping result records which microphones, loudspeakers and human detection sensors belong to the same group and which microphones are relatively close to a given loudspeaker. With multiple loudspeakers, multiple microphones and multiple human detection sensors, the devices are therefore grouped into regions automatically by self-learning, automatic networking among the multiple microphones, multiple loudspeakers and multiple human detection sensors is achieved, configuration difficulty for the user is reduced, and the user experience is improved.

In the description of the present invention, it should be understood that the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "multiple" means at least two, for example two, three and so on, unless specifically defined otherwise.

In the present invention, unless otherwise expressly specified or limited, the terms "mounted", "connected", "coupled", "fixed" and the like should be understood broadly; for example, a connection may be a fixed connection, a detachable connection or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection or an indirect connection through an intermediary; and it may be an internal communication between two elements or an interaction between two elements, unless otherwise expressly limited. For a person of ordinary skill in the art, the specific meaning of the above terms in the present invention can be understood according to the particular situation.

In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples" and the like mean that the specific features, structures, materials or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the present invention. In this specification, schematic expressions of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, provided there is no mutual contradiction, those skilled in the art may combine the different embodiments or examples described in this specification and the features of the different embodiments or examples.

Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and cannot be construed as limiting the present invention. Persons of ordinary skill in the art may change, modify, replace and vary the above embodiments within the scope of the present invention.

Claims (12)

1. A grouping method for multiple devices, characterized by comprising the following steps:
obtaining first sound wave data played by a first loudspeaker at a first volume intensity;
obtaining second sound wave data received by at least one first microphone;
if the second sound wave data matches the first sound wave data, determining that the first loudspeaker and the at least one first microphone belong to a first region group.
2. The grouping method according to claim 1, characterized in that, if the second sound wave data does not match the first sound wave data, it is determined that the first loudspeaker and the at least one microphone belong to different region groups.
3. The grouping method according to claim 1 or 2, characterized in that a second loudspeaker and at least one second microphone belong to a second region group, and when the overlap ratio between the at least one first microphone and the at least one second microphone reaches a predetermined threshold, the grouping method further comprises:
obtaining third sound wave data played respectively by the first loudspeaker and the second loudspeaker at a second volume intensity, wherein the second volume intensity is greater than the first volume intensity;
obtaining fourth sound wave data received by the overlapping microphones;
if the fourth sound wave data matches the third sound wave data, determining that the first loudspeaker, the second loudspeaker and the overlapping microphones belong to a third region group.
4. The grouping method according to claim 3, characterized in that, after determining that the first loudspeaker, the second loudspeaker and the overlapping microphones belong to the third region group, the method further comprises:
obtaining fifth sound wave data played simultaneously by the first loudspeaker and the second loudspeaker at a third volume intensity, wherein the third volume intensity is lower than the first volume intensity;
obtaining sixth sound wave data received by the overlapping microphones;
if the sixth sound wave data matches the fifth sound wave data, determining that the third region group passes verification.
5. The grouping method according to claim 3, characterized in that, after determining that the first loudspeaker, the second loudspeaker and the overlapping microphones belong to the third region group, the method further comprises:
obtaining seventh sound wave data played in turn by the first loudspeaker and the second loudspeaker at a fourth volume intensity, wherein the fourth volume intensity is lower than the third volume intensity;
when eighth sound wave data received by one microphone in the third region group is determined to match the seventh sound wave data, determining that the one microphone is the microphone closest to the first loudspeaker and the second loudspeaker.
6. The grouping method according to claim 1, characterized by further comprising:
when any one microphone receives ninth sound wave data, determining the region group to which the microphone belongs;
determining whether a human detection sensor that has detected a human body exists in that region group;
if a human detection sensor that has detected a human body exists, determining that the human detection sensor and the microphone belong to the same region group.
7. A grouping device for multiple devices, characterized by comprising:
a first acquisition module, configured to obtain first sound wave data played by a first loudspeaker at a first volume intensity;
a second acquisition module, configured to obtain second sound wave data received by at least one first microphone;
a matching module, configured to determine, when the second sound wave data matches the first sound wave data, that the first loudspeaker and the at least one first microphone belong to a first region group.
8. The grouping device according to claim 7, characterized in that the matching module is further configured to:
determine, when the second sound wave data does not match the first sound wave data, that the first loudspeaker and the at least one microphone belong to different region groups.
9. The grouping device according to claim 7 or 8, characterized in that a second loudspeaker and at least one second microphone belong to a second region group, and when the overlap ratio between the at least one first microphone and the at least one second microphone reaches a predetermined threshold, the first acquisition module is further configured to:
obtain third sound wave data played respectively by the first loudspeaker and the second loudspeaker at a second volume intensity, wherein the second volume intensity is greater than the first volume intensity;
the second acquisition module is further configured to:
obtain fourth sound wave data received by the overlapping microphones;
the matching module is further configured to:
determine, when the fourth sound wave data matches the third sound wave data, that the first loudspeaker, the second loudspeaker and the overlapping microphones belong to a third region group.
10. The grouping device according to claim 9, characterized in that the first acquisition module is further configured to:
obtain fifth sound wave data played simultaneously by the first loudspeaker and the second loudspeaker at a third volume intensity, wherein the third volume intensity is lower than the first volume intensity;
the second acquisition module is further configured to: obtain sixth sound wave data received by the overlapping microphones;
the grouping device further comprises:
a verification module, configured to determine, when the sixth sound wave data matches the fifth sound wave data, that the third region group passes verification.
11. The grouping device according to claim 9, characterized in that the first acquisition module is further configured to:
obtain seventh sound wave data played in turn by the first loudspeaker and the second loudspeaker at a fourth volume intensity, wherein the fourth volume intensity is lower than the third volume intensity;
the grouping device further comprises:
a first judgment module, configured to determine, when eighth sound wave data received by one microphone in the third region group is determined to match the seventh sound wave data, that the one microphone is the microphone closest to the first loudspeaker and the second loudspeaker.
12. The grouping device according to claim 7, characterized by further comprising:
a determination module, configured to determine, when any one microphone receives ninth sound wave data, the region group to which the microphone belongs;
a second judgment module, configured to determine whether a human detection sensor that has detected a human body exists in that region group and, when a human detection sensor that has detected a human body is determined to exist, to determine that the human detection sensor and the microphone belong to the same region group.
CN201610515064.4A 2016-06-30 2016-06-30 Grouping method and device for multiple devices CN106131754B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610515064.4A CN106131754B (en) 2016-06-30 2016-06-30 Grouping method and device for multiple devices

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610515064.4A CN106131754B (en) 2016-06-30 2016-06-30 Grouping method and device for multiple devices
PCT/CN2016/113803 WO2018098889A1 (en) 2016-06-30 2016-12-30 Inter-device grouping method and apparatus

Publications (2)

Publication Number Publication Date
CN106131754A CN106131754A (en) 2016-11-16
CN106131754B true CN106131754B (en) 2018-06-29

Family

ID=57468104

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610515064.4A CN106131754B (en) 2016-06-30 2016-06-30 Grouping method and device for multiple devices

Country Status (2)

Country Link
CN (1) CN106131754B (en)
WO (1) WO2018098889A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106131754B (en) * 2016-06-30 2018-06-29 广东美的制冷设备有限公司 Grouping method and device for multiple devices

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007185444A (en) * 2006-01-16 2007-07-26 Aruze Corp Game machine
US8320824B2 (en) * 2007-09-24 2012-11-27 Aliphcom, Inc. Methods and systems to provide automatic configuration of wireless speakers
US8095120B1 (en) * 2007-09-28 2012-01-10 Avaya Inc. System and method of synchronizing multiple microphone and speaker-equipped devices to create a conferenced area network
JP5311863B2 (en) * 2008-03-31 2013-10-09 ヤマハ株式会社 Electronic keyboard instrument
NO332231B1 (en) * 2010-01-18 2012-08-06 Cisco Systems Int Sarl Method for a mating computers and video devices
TW201127092A (en) * 2010-01-27 2011-08-01 Mipro Electronics Co Ltd Wireless loudspeaker system and volume control method for wireless microphone and wireless loudspeaker system thereof
EP2375779A3 (en) * 2010-03-31 2012-01-18 Fraunhofer-Gesellschaft zur Förderung der Angewandten Forschung e.V. Apparatus and method for measuring a plurality of loudspeakers and microphone array
JP5729161B2 (en) * 2010-09-27 2015-06-03 ヤマハ株式会社 Communication terminal, wireless device, and wireless communication system
ES2644529T3 (en) * 2011-03-30 2017-11-29 Koninklijke Philips N.V. Determine the distance and / or acoustic quality between a mobile device and a base unit
CN103312911B (en) * 2012-03-12 2015-03-04 联想(北京)有限公司 Data processing method and electronic terminal
CN103974168A (en) * 2013-01-29 2014-08-06 联想(北京)有限公司 Information processing method and electronic devices
CN203368759U (en) * 2013-06-13 2013-12-25 杭州联汇数字科技有限公司 Indoor sound amplifying and recording device
EP2930953A1 (en) * 2014-04-07 2015-10-14 Harman Becker Automotive Systems GmbH Sound wave field generation
TWI584657B (en) * 2014-08-20 2017-05-21 國立清華大學 A method for recording and rebuilding of a stereophonic sound field
CN104954936A (en) * 2015-05-18 2015-09-30 上海斐讯数据通信技术有限公司 Electronic reading terminal and noise reduction method and system for electronic reading terminal
CN106131754B (en) * 2016-06-30 2018-06-29 广东美的制冷设备有限公司 Grouping method and device for multiple devices

Also Published As

Publication number Publication date
CN106131754A (en) 2016-11-16
WO2018098889A1 (en) 2018-06-07

Legal Events

Date Code Title Description
PB01 Publication
C06 Publication
SE01 Entry into force of request for substantive examination
C10 Entry into substantive examination
GR01 Patent grant