GB2556045A - Communication device - Google Patents

Communication device

Info

Publication number
GB2556045A
GB2556045A GB1619162.9A GB201619162A GB2556045A GB 2556045 A GB2556045 A GB 2556045A GB 201619162 A GB201619162 A GB 201619162A GB 2556045 A GB2556045 A GB 2556045A
Authority
GB
United Kingdom
Prior art keywords
communication device
user
peer
audio
communication
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1619162.9A
Inventor
Greenberg David
Taylor Clive
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Eartex Ltd
Original Assignee
Eartex Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Eartex Ltd filed Critical Eartex Ltd
Priority to GB1619162.9A
Priority to PCT/GB2017/053407
Priority to PCT/GB2017/053404
Publication of GB2556045A
Legal status: Withdrawn

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 Stereophonic arrangements
    • H04R5/033 Headphones for stereophonic communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W76/00 Connection management
    • H04W76/10 Connection setup
    • H04W76/14 Direct-mode setup
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W84/00 Network topologies
    • H04W84/18 Self-organising networks, e.g. ad-hoc networks or sensor networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/26 Devices for calling a subscriber
    • H04M1/27 Devices whereby a plurality of signals may be stored simultaneously
    • H04M1/271 Devices whereby a plurality of signals may be stored simultaneously controlled by voice recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2250/00 Details of telephonic subscriber devices
    • H04M2250/74 Details of telephonic subscriber devices with voice recognition means
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2201/00 Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/10 Details of earpieces, attachments therefor, earphones or monophonic headphones covered by H04R1/10 but not provided for in any of its subgroups
    • H04R2201/107 Monophonic and stereophonic headphones with microphone for two-way hands free communication

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A first communication device 5 comprises a peer-to-peer networking interface 7 arranged to establish a connection between the first communication device 5 and a second communication device via a peer-to-peer network. An audio input device 11 receives audio from a first user and a voice recognition module 13 initiates establishing the connection between the first communication device 5 and the second communication device based on an audio voice command received from the first user via the audio input device 11. The first communication device 5 further comprises a communication module 15 arranged to transmit, via the peer-to-peer networking interface 7 to the second communication device, audio data based on the audio received from the first user via the audio input device 11. The first communication device 5 receives audio data from the second communication device, via the peer-to-peer networking interface 7, and outputs audio, via an audio output device 17, based on the received audio data.

Description

(54) Title of the Invention: Communication device
Abstract Title: A voice command initiates a connection for audio data transmission via a peer-to-peer network
Drawings: FIG. 1, FIG. 2, FIG. 3, FIG. 4 and FIG. 5 (5 sheets).
COMMUNICATION DEVICE
TECHNICAL FIELD
This disclosure relates to a communication device, a communication system comprising a plurality of the communication devices and a method of operating a communication device.
BACKGROUND
Communication devices are often bulky, complex and expensive. This can be a problem, in particular, in a workplace environment where there are a number of employees each requiring their own communication device in order to communicate with one another.
A bulky communication device may hinder an employee's ability to go about their work, whilst a complex communication device may be difficult for an employee to use. In addition, if each individual communication device is expensive, then it will become very costly for an employer to equip their entire workforce with communication devices.
Thus, there exists a need to provide a compact, simple and inexpensive communication device.
SUMMARY
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
According to one aspect of the invention there is provided a first communication device comprising: a peer-to-peer networking interface arranged to establish a connection between the first communication device and a second communication device via a peer-to-peer network; an audio input device for receiving audio from a first user; a voice recognition module arranged to initiate establishing the connection between the first communication device and the second communication device based on an audio voice command received from the first user via the audio input device; a communication module arranged to: transmit, via the peer-to-peer networking interface to the second communication device, audio data based on audio received from the first user via the audio input device; and arranged to receive audio data from the second communication device, via the peer-to-peer networking interface; and an audio output device for outputting audio based on the received audio data.
According to another aspect of the invention there is provided a communication system comprising a plurality of the communication devices described herein connected to one another via a peer-to-peer network.
According to another aspect of the invention there is provided a method comprising: receiving audio from a first user via an audio input device at a first communication device; initiating establishing a connection between the first communication device and a second communication device via a peer-to-peer network, based on an audio voice command received from the first user via the audio input device; transmitting, via the peer-to-peer network to the second communication device, audio data based on audio received from the first user via the audio input device; receiving at the first communication device audio data from the second communication device via the peer-to-peer network; and outputting audio based on the received audio data via an audio output device at the first communication device.
According to another aspect of the invention there is provided a computer program comprising code portions which when loaded and run on a computer cause the computer to execute a method as described herein.
According to one example there is provided a first communication device comprising: a networking interface arranged to establish a connection between the first communication device and a second communication device via a network; an audio input device for receiving audio from a first user; a voice recognition module arranged to initiate establishing the connection between the first communication device and the second communication device based on an audio voice command received from the first user via the audio input device; a communication module arranged to: transmit, via the networking interface to the second communication device, audio data based on audio received from the first user via the audio input device; and arranged to receive audio data from the second communication device, via the networking interface; and an audio output device for outputting audio based on the received audio data.
According to another example there is provided a communication system comprising a plurality of the communication devices described herein connected to one another via a network.
According to another example there is provided a method comprising: receiving audio from a first user via an audio input device at a first communication device; initiating establishing a connection between the first communication device and a second communication device via a network, based on an audio voice command received from the first user via the audio input device; transmitting, via the network to the second communication device, audio data based on audio received from the first user via the audio input device; receiving at the first communication device audio data from the second communication device via the network; and outputting audio based on the received audio data via an audio output device at the first communication device.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will be described, by way of example, with reference to the following drawings, in which:
Figure 1 schematically shows the basic general architecture of a communication system;
Figure 2 shows the basic general architecture of a communication device;
Figure 3 shows a flow chart illustrating a method of activating different modes at the communication device;
Figure 4 shows a flow chart illustrating a method of using the communication device in a 'connection-enabled' mode; and
Figure 5 shows a flow chart illustrating a method of using the communication device in a 'voice-recognition' mode.
DETAILED DESCRIPTION
Referring to Figure 1, there is a communication system 1 comprising a plurality of communication headsets 3, where each headset 3 is worn by a different user 4. Each headset 3 comprises a pair of ear-defenders 3', which can be used to protect the user's ears from noise.
Each headset 3 can be connected to another headset 3 using a peer-to-peer networking interface at each headset 3. The headsets 3 may be connected to one another directly or indirectly via another headset 3 (or headsets 3). In this way, the headsets 3 are connected to one another to form a peer-to-peer network, so that the users 4 can communicate with one another. A MESH network is one type of peer-to-peer network that may be used to connect the plurality of headsets 3 to one another.
A wireless MESH network (IEEE 802.15.4) is an ad-hoc network formed by devices which are in range of one another. It is a peer-to-peer cooperative communication infrastructure in which wireless access points (APs) and nearby devices act as repeaters that transmit data from node to node. In some cases, many of the APs are not physically connected to a wired network. The APs and other devices create a mesh with each other that can route data back to a wired network via a gateway.
A wireless mesh network becomes more efficient with each additional network connection. Wireless mesh networks feature a “multi-hop” topology in which data packets “hop” short distances from one node to another until they reach their final destination. The greater the number of available nodes, the greater the distance the data packet may be required to travel. Increasing capacity or extending the coverage area can be achieved by adding more nodes, which can be fixed or mobile.
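Purely by way of illustration, the multi-hop behaviour described above can be sketched in a few lines of Python. The node names, the neighbour table and the breadth-first route search below are assumptions introduced for this example only and do not form part of the disclosed device.

from collections import deque

# Hypothetical in-range relationships between headsets (node -> neighbours).
# In a real mesh these links would be discovered over the short-range radios.
MESH_LINKS = {
    "headset_A": {"headset_B"},
    "headset_B": {"headset_A", "headset_C"},
    "headset_C": {"headset_B", "gateway"},
    "gateway":   {"headset_C"},
}

def route(source, destination, links):
    """Return a list of hops from source to destination using breadth-first search."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == destination:
            return path
        for neighbour in links.get(node, ()):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no route: destination is out of range of every relay

if __name__ == "__main__":
    # A packet from headset_A reaches the gateway by hopping via B and C.
    print(route("headset_A", "gateway", MESH_LINKS))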
The communication system 1 comprises a peer-to-peer network of headsets 3 which enables communication over the network using short-range, low-power wireless links. This can require considerably less computing and signal transmission power than in other communication devices. In addition, this can allow the headsets 3 to consume less power and to have a simpler and smaller design. The peer-to-peer network may comprise the headsets 3 only. However, in another example, the peer-to-peer network may comprise the headsets 3 as well as other devices such as the APs described above.
In the workplace environment, an employee equipped with one of the headsets 3 described herein can be reachable at all times. This may avoid the need for a general announcement system such as a public address (PA) system, which uses one loudspeaker to communicate information to many employees. A PA system may not be appropriate in some situations. Furthermore, the headset 3 can avoid the need for an employee to carry around a conventional mobile telephone.
Preferably, the headset 3 is head-mounted or ear-mounted and is sound reducing, for example, comprising ear defenders 3' for reducing noise level exposure. Thus, users' exposure to noise may be reduced.
Referring to Figure 2, the headset 3 comprises a communication device 5. In this example, the communication device 5 is integrated into one of the ear-defenders 3' of the headset 3. However, the communication device 5 may be integrated into each one of the ear-defenders 3' of the headset 3.
The communication device 5 comprises a peer-to-peer networking interface 7 and an antenna 9. The peer-to-peer networking interface 7 is arranged to establish a connection between the communication device 5 and another similar communication device via a peer-to-peer network, in which the other similar communication device also includes a peer-to-peer networking interface.
The communication device 5 comprises an audio input device 11 which is arranged to receive audio from a user wearing the headset 3. Thus, the communication device 5 is able to receive voice input from the user 4.
In the example illustrated in Figures 1 and 2, the audio input device 11 is arranged on an arm external to the ear defender 3', so that the audio input device 11 can be arranged proximate to the user's mouth. However, in another example the audio input device 11 comprises an in-ear microphone which receives amplitude-modified user speech signals conducted into the ear canal via bone material, an effect referred to as the occlusion effect. In this case, it is this occlusion effect which is used for user voice recognition. Here, the frequency spectrum of speech is modified, causing an elevation of the lower tones. This technique enables ease of user transferability, unlike conventional voice recognition systems which require stored voice samples.
The communication device 5 comprises a voice recognition module 13 which is arranged to receive voice inputs from a user 4 via the audio input device 11. The voice recognition module 13 is arranged to store a number of pre-defined voice commands, each associated with an action. The voice recognition module 13 is arranged to detect a match between a voice input and one of the pre-defined voice commands, and is arranged to perform the action associated with the matching voice command.
The communication device 5 comprises a communication module 15 which is arranged to transmit, via the peer-to-peer networking interface 7 to another communication device, audio received from the user via the audio input device 11. In addition, the communication module 15 is arranged to receive audio data from other communication devices, via the peer-to-peer networking interface 7. Typically, the communication devices 5 can conduct two-way communication between one another. However, the communication device 5 may engage in one-way communication with one or many other communication devices.
In addition, the voice recognition module 13 is arranged to control the networking interface 7 and the communication module 15. For instance, the voice recognition module 13 is arranged to cause the networking interface 7 to initiate establishing the connection between the communication device 5 and another communication device based on audio received from the user via the audio input device 11. The voice recognition module 13 may be arranged to cause the communication module 15 to communicate with another communication device.
The communication device 5 comprises an audio output device 17, such as a speaker, which is arranged to output audio received from the communication module 15 via the peer-to-peer networking interface 7. Thus, the communication device 5 is able to output audio received from a user of another communication device or devices.
The audio input device 11, the communication module 15, the peer-to-peer networking interface 7 and the audio output device 17 facilitate two-way communication between the communication devices 5.
The communication device 5 further comprises a user-interface switch 19 and a control module 21. In this example, the user-interface switch 19 is a pressure sensitive switch 19. However, any other suitable type of switch, control or contact sensor may be used.
The user-interface switch 19 and the control module 21 are arranged to activate different modes at the communication module 15. In this example, there is only one user-interface switch 19 for activating different modes at the communication module 15. Furthermore, in this example the communication device 5 comprises only one user-interface switch 19.
The control module 21 is arranged to store a number of pre-defined user-interactions with the switch 19. In addition, each pre-defined user-interaction is associated with a different action to be performed at the control module 21.
The control module 21 is arranged to detect a user interaction with the switch 19 and a match between the detected user interaction and one of the pre-defined user-interactions. Then, the control module 21 is arranged to perform the action associated with the matching detected user interaction.
The communication module 15 is configured to be able to operate in a plurality of different modes, and the control module 21 is arranged to detect whether one of a plurality of pre-defined user-interactions with the switch has occurred. The control module 21 is arranged to activate the mode associated with the detected user-interaction.
The communication device 5 further comprises an environment audio input device 23, such as an external microphone. The external microphone can be used to detect environmental noise and provide noise cancelling via the audio output device 17. The communication device 5 may provide noise cancelling during communication between devices 5. The communication device 5 may decide not to provide noise cancelling when there is no communication between devices 5.
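As a loose illustration of the noise cancelling described above, the following Python sketch subtracts a scaled copy of the external microphone signal from the audio sent to the output device. The array-based mixing and the 0.8 cancellation gain are assumptions made for this example; a practical implementation would also need latency compensation and adaptive filtering.

def mix_with_anti_noise(speech, environment, cancellation_gain=0.8):
    """Mix received speech with an inverted copy of the environmental noise.

    A crude feedforward sketch: the external microphone signal is inverted,
    scaled and summed into the output signal.
    """
    return [s - cancellation_gain * e for s, e in zip(speech, environment)]

# Example: a constant 'speech' level with additive noise seen by both microphones.
noise = [0.2, -0.1, 0.3, -0.2]
speech_at_speaker = [0.5 + n for n in noise]
print(mix_with_anti_noise(speech_at_speaker, noise))  # noise component attenuated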
The communication device 5 further comprises a storage module 24, which is arranged to store an identification parameter for the device. The identification parameter is indicative of a unique identifier for the device 5. The unique identifier may be a number for the device, a title for the user of the device and/or the user's name. This unique identifier may be used so that other communication devices 5 can establish a connection with the communication device 5.
In addition, the storage module 24 may store a database comprising a list of unique identifiers for other communication devices 5 in the peer-to-peer network, where each unique identifier corresponds with a speech label stored at the storage module 24. Each speech label may be indicative of a name, or a label, for the user of the communication device 5 to which the speech label's associated unique identifier corresponds.
Each individual user can be stored in association with a number. For instance, the lowest number, such as 'one', may refer to the most senior person.
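The identifier database described above may be pictured, purely as an illustrative sketch, as a small directory keyed by speech label. The class and field names below are assumptions introduced for this example and are not taken from the disclosure.

from dataclasses import dataclass

@dataclass
class ContactEntry:
    unique_id: str       # identifier used on the peer-to-peer network
    speech_label: str    # word the voice recognition module listens for
    seniority: int       # lower number = more senior, e.g. 1 for the supervisor

class ContactDirectory:
    """Hypothetical sketch of the contact database held by the storage module 24."""

    def __init__(self, own_id):
        self.own_id = own_id
        self.entries = {}

    def add(self, entry):
        self.entries[entry.speech_label.upper()] = entry

    def lookup(self, spoken_label):
        """Resolve a spoken label (e.g. 'SUPERVISOR') to a device identifier."""
        entry = self.entries.get(spoken_label.upper())
        return entry.unique_id if entry else None

directory = ContactDirectory(own_id="headset-007")
directory.add(ContactEntry("headset-001", "SUPERVISOR", seniority=1))
print(directory.lookup("supervisor"))  # -> headset-001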
Figure 3 shows a flow chart illustrating a method of activating different modes at the communication device 5.
In STEP 300 the communication device 5 is activated, or 'powered-on'. In this case, the communication device 5, more specifically the communication module 15, is configured to operate in a connection-enabled mode initially. In the connection-enabled mode, the communication module 15 of the communication device 5 is configured to permit transmitting or receiving of audio to or from another communication device.
The voice recognition module 13 may be deactivated, when the communication module 15 is in the connection-enabled mode initially, and the voice recognition module 13 may be configured to be activated only in response to a user interaction with the switch 19. When activated, the voice recognition module 13 is arranged to perform at least one action in response to at least one voice command of a first instruction set stored at the voice recognition module 13.
In STEP 303 the control module 21 detects a user-interaction with the switch 19. In this case, the user wishes to instruct the communication device 5 to enter a connection-disabled mode. In order to do this, the user maintains contact with the switch 19, or 'holds' the switch down, for a first time period. In this example, the user holds the switch for over five seconds until the audio output device 17 outputs an audio notification, such as a single 'beep'. Upon hearing the 'beep', the user disengages contact with the switch 19, or 'releases' the switch. The control module 21 detects this interaction with the switch 19 and instructs the communication module 15 to enter the connection-disabled mode.
In STEP 305, the communication module 15 enters the connection-disabled mode. In the connection-disabled mode the communication module 15 is not permitted to transmit or receive audio to or from another communication device. In the connection-disabled mode, the peer-to-peer networking interface may not be permitted to establish a connection between the communication device 5 and another communication device via the peer-to-peer network. In addition, in the connection-disabled mode the voice activation module 13 may be deactivated.
In STEP 307 the control module 21 detects another user-interaction with the switch 19. In this case, the user wishes to instruct the communication device 5 to re-enter the connection-enabled mode. In order to do this, the user performs a different user-interaction with the switch 19 compared with the user-interaction in STEP 303. Here, the user maintains contact with the switch 19 for a second time period, for instance two seconds longer than the first period of time.
In this example, the user holds the switch until the audio output device 17 outputs an audio notification, such as two 'beeps'. Upon hearing the second 'beep', the user knows that they have reached the second time period threshold and can disengage contact with the switch 19, or 'release' the switch. The control module 21 detects this interaction with the switch and instructs the communication module to re-enter the connection-enabled mode. Thus, the method returns to STEP 300.
In this example, the user holds the switch for the first time period until the first single beep described in STEP 303 is output. Then, the user continues to hold the switch until the second time period has elapsed, at which point the audio output device 17 outputs a second beep. Here, the second time period is seven seconds, which is two seconds longer than the first period. However, the second time period may be any length of time so long as the user is given sufficient time to respond to the first beep before the second beep occurs.
In STEP 309 the control module 21 detects another user-interaction with the switch 19. In this case, the user wishes to instruct the communication device 5 to enter a voice-control mode. In order to do this, the user performs a different user-interaction with the switch 19 compared with the user-interactions in STEPs 303 and 307. Here, the user contacts with the switch 19 multiple times within a time period. For instance, the user may activate the switch 19 twice within a time period of under five seconds. The control module 21 detects this interaction with the switch 19 and instructs the communication module 15 to enter the voice control mode.
In STEP 311, the communication module 15 enters the voice control mode, in which the communication module 15 is permitted to transmit or receive audio to or from another communication device. In addition, the voice recognition module 13 is activated when the communication module 15 is in the voice-control mode.
In the voice-control mode the voice recognition module 13 may be arranged to perform a plurality of actions each in response to at least one voice command of a second instruction set. The second instruction set of the voice-control mode may comprise a greater number of voice commands than the first instruction set of the connection-enabled mode.
In STEP 313, as in STEP 303, the control module 21 detects a user-interaction with the switch 19 where the user maintains contact with the switch 19 for over five seconds until the audio output device 17 outputs a 'beep', at which point the user disengages contact with the switch 19. As before, the control module 21 detects this interaction with the switch and instructs the communication module to re-enter the connection-disabled mode. Thus, the method returns to STEP 305.
In STEP 315, as in STEP 307, the control module 21 detects a user-interaction with the switch 19 where the user maintains contact with the switch 19, for the second time period until the audio output device 17 outputs two 'beeps' at which point the user disengages contact with the switch 19. The control module 21 detects this interaction with the switch and instructs the communication module to re-enter the connection-enabled mode. Thus, the method returns to STEP 300.
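The Figure 3 walk-through above amounts to a small state machine driven by how long, or how many times, the single switch 19 is pressed. The following Python sketch is illustrative only; the time thresholds reuse the five and seven second examples given above, and the gesture names and mode strings are assumptions introduced for the example.

# Hypothetical constants based on the example durations given above.
FIRST_HOLD_S = 5.0          # one beep  -> connection-disabled
SECOND_HOLD_S = 7.0         # two beeps -> connection-enabled
DOUBLE_PRESS_WINDOW_S = 5.0

def classify_gesture(hold_duration_s=0.0, press_count=1, press_span_s=0.0):
    """Map a raw interaction with the single switch 19 to a gesture name."""
    if press_count >= 2 and press_span_s <= DOUBLE_PRESS_WINDOW_S:
        return "double_press"
    if hold_duration_s >= SECOND_HOLD_S:
        return "long_hold"      # released after the second beep
    if hold_duration_s >= FIRST_HOLD_S:
        return "medium_hold"    # released after the first beep
    return "short_press"

def next_mode(current_mode, gesture):
    """Mode transitions sketched from the Figure 3 walk-through."""
    if gesture == "medium_hold":
        return "connection-disabled"           # STEPs 303/313
    if gesture == "long_hold":
        return "connection-enabled"            # STEPs 307/315
    if gesture == "double_press" and current_mode == "connection-enabled":
        return "voice-control"                 # STEP 309
    return current_mode

mode = "connection-enabled"
for gesture in ("double_press", "medium_hold", "long_hold"):
    mode = next_mode(mode, gesture)
    print(gesture, "->", mode)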
Figure 4 shows a flow chart illustrating a method of using the communication device 5 in the 'connection-enabled' mode.
As mentioned previously, in the connection enabled mode the communication module 15 of the communication device 5 is permitted to transmit or receive audio to or from another communication device. Thus, in STEP 400 the peer-to-peer networking interface 7 is in a waiting state where it checks to determine whether or not there is an incoming call from another communication device, or in other words a request for a connection to be made between the communication device 5 and another communication device. In addition, in the waiting state the control module 21 checks to determine whether or not there is a user-interaction with the switch 19 whilst there is not an incoming call. If there is a user-interaction with the switch 19 whilst there is not an incoming call, the method proceeds to STEP 402.
In STEP 402, control module 21 detects an interaction with the switch 19. In this case, the user wishes to provide a command to the voice recognition module 13. In order to do this, before speaking the voice command, the user maintains contact with the switch 19, for a time period, for instance less than five seconds. The control module 21 detects this interaction with the switch and, in response, activates the voice recognition module 13.
In STEP 404 voice recognition module 13 detects a voice command provided by the user. The voice recognition module identifies voice commands by detecting reserved words. The voice commands are verified by a pause preceding and following the command. For instance, the pause preceding and following the command may be a few seconds.
For instance, the user may say CALL SUPERVISOR. Next, in STEP 406 the voice recognition module 13 determines the action associated with the voice command. Then, the voice recognition module 13 outputs a confirmation request, via the audio output device 17. In this example, the confirmation request comprises outputting audio indicative of the determined action. For instance, the output may comprise repeating the voice command CALL SUPERVISOR.
In this example, the SUPERVISOR voice command may be described as a label associated with another communication device. In another example, the label may comprise a name for a user associated with the other communication device.
As described above, each user's contact name, title or number is associated with his/her communication device 5. When a user initiates a call, a message is broadcast to the peer-to-peer network for identifying the requested communication device 5. The requested communication device 5 responds and a connection is established between the calling and the receiving communication device 5.
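The call set-up just described can be illustrated with a short broadcast-and-reply sketch. The message fields, the function names and the shared list standing in for the peer-to-peer broadcast medium are assumptions made for this example only.

# A shared list stands in for the peer-to-peer broadcast medium in this
# sketch: every device can read every message that has been broadcast.
broadcast_log = []

def initiate_call(caller_id, requested_label):
    """Broadcast a message asking the device matching the label to respond."""
    broadcast_log.append({"type": "call_request",
                          "from": caller_id,
                          "requested": requested_label})

def poll_for_requests(own_id, own_labels):
    """Each device scans the broadcasts and answers requests addressed to it."""
    return [{"type": "call_accept", "from": own_id, "to": msg["from"]}
            for msg in broadcast_log
            if msg["type"] == "call_request" and msg["requested"] in own_labels]

initiate_call("headset-007", "SUPERVISOR")
print(poll_for_requests("headset-001", {"SUPERVISOR", "1"}))
# The requested device answers, after which a direct connection can be established.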
In response to the confirmation request, the voice recognition module 13 waits for the user to provide a confirmation. The user may provide the confirmation by saying an affirmative voice command, for instance by saying yes. In this case, the method proceeds to STEP 408.
On the other hand the user may decline the confirmation by saying a negative voice command, for instance by saying no. In this case, the method returns to STEP 400.
In STEP 406, if the voice recognition module 13 fails to recognise the name of the person to be called, it prompts an appropriate audible notification for a repeat command. If the repeat command is unsuccessful the method returns to STEP 400. If the repeat command is successful the method proceeds to STEP 408.
In STEP 408 the voice recognition module 13 causes the action associated with the voice command, input at STEP 404, to be performed. In this case, the voice recognition module 13 causes the peer-to-peer networking interface 7 to initiate the process of establishing a connection with a communication device associated with the supervisor.
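STEPs 404 to 408 may be summarised, as an illustrative sketch only, by the following confirm-before-dialling routine. The callables standing in for the voice recognition module 13, the audio devices 11 and 17 and the networking interface 7 are assumptions made for this example.

def handle_outgoing_command(recognise, speak, listen, dial):
    """Sketch of STEPs 404-408: recognise a command, confirm it, then act.

    `recognise`, `speak`, `listen` and `dial` are stand-in callables for the
    voice recognition module 13, audio output device 17, audio input device 11
    and peer-to-peer networking interface 7 respectively.
    """
    command = recognise()                 # e.g. "CALL SUPERVISOR"
    if command is None:                   # name not recognised: prompt for a repeat
        speak("Please repeat the command")
        command = recognise()
        if command is None:
            return False                  # give up and return to the waiting state
    speak(command)                        # confirmation request repeats the command
    if listen() == "YES":
        dial(command.removeprefix("CALL ").strip())
        return True
    return False                          # user said NO: return to the waiting state

# Minimal usage with canned responses standing in for real audio I/O.
ok = handle_outgoing_command(
    recognise=lambda: "CALL SUPERVISOR",
    speak=lambda text: print("device says:", text),
    listen=lambda: "YES",
    dial=lambda label: print("dialling", label),
)
print("call placed:", ok)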
In STEP 400 the peer-to-peer networking interface 7 checks to determine whether or not there is an incoming call from another communication device. If there is an incoming call the method proceeds to STEP 410 in which a notification is output, preferably at the audio output device 17, indicating to the user that there is an incoming call.
In STEP 412 the control module 21 checks to determine whether or not the user engages the switch 19. If the user engages the switch, for less than one second, in response to the incoming call the method proceeds to STEP 414, in which a connection is established between the communication device 5 and another communication device in the peer-to-peer network.
In STEP 416, the control module 21 determines that the user has engaged the switch 19 for less than five seconds, indicating that the user wishes to terminate the call. In STEP 418, in response to this user interaction, the control module 21 instructs the peer-to-peer networking interface 7 to disconnect the communication device 5 from the other communication device.
In STEP 420, the control module 21 checks to determine whether the switch 19 has been engaged within ten seconds of outputting the incoming call notification. If the user has not provided an interaction with the switch within this ten second time period, the method proceeds to STEP 424 in which the incoming call request is cancelled.
However, in STEP 422, if the control module 21 determines that the switch 19 has been engaged for a time period in excess of five seconds during the ten second time period, then the incoming call request is cancelled also.
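The incoming-call handling of STEPs 410 to 424 can be sketched as a simple timing rule over switch presses. The tuple representation of a press and the exact comparisons below are assumptions made for this example; only the one, five and ten second figures come from the description above.

ANSWER_MAX_S = 1.0     # a press shorter than this answers the call (STEP 412)
REJECT_HOLD_S = 5.0    # a hold longer than this cancels the request (STEP 422)
RING_TIMEOUT_S = 10.0  # no interaction within this window cancels it (STEP 420)

def handle_incoming_call(presses):
    """Decide what to do with an incoming call from a list of switch presses.

    `presses` is a list of (seconds_after_notification, hold_duration_s)
    tuples; the structure is an assumption made for this sketch.
    """
    for offset_s, hold_s in presses:
        if offset_s > RING_TIMEOUT_S:
            break                      # too late: fall through to cancellation
        if hold_s >= REJECT_HOLD_S:
            return "cancelled"         # long hold during the ring window
        if hold_s < ANSWER_MAX_S:
            return "connected"         # short press answers the call
    return "cancelled"                 # ten seconds elapsed with no valid press

print(handle_incoming_call([(2.0, 0.3)]))   # short press soon after the ring -> connected
print(handle_incoming_call([(3.0, 6.0)]))   # long hold -> cancelled
print(handle_incoming_call([]))             # no interaction -> cancelled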
Figure 5 shows a flow chart illustrating a method of using the communication device 5 in the 'voice-recognition' mode. The purpose of the voice-recognition mode is that the user can perform all required functions using voice commands rather than interacting with the switch 19. Thus, the voice recognition module 13 remains active whilst in the voice-recognition mode.
In STEP 500 the user provides a voice command. Next, in STEP 502 the voice recognition module 13 detects that the user has provided the voice command and determines an action associated with the voice command.
In STEP 504, the voice recognition module 13 outputs a confirmation request, via the audio output device 17. The confirmation request comprises outputting audio indicative of the determined action.
In response to the confirmation request, the voice recognition module 13 waits for the user to provide a confirmation. The user may accept the confirmation by saying an affirmative voice command, for instance by saying YES. In this case the method proceeds to STEP 505 in which the action associated with the voice command is performed.
On the other hand the user may decline the confirmation request by saying a negative voice command, for instance by saying NO. In this case, the method returns to STEP 500.
In STEP 502, if the voice recognition module 13 fails to recognise the voice command, for instance if the voice recognition module 13 cannot recognise the name of the person to be called, it prompts an appropriate audible notification for a repeat command. If the repeat command is unsuccessful the voice recognition module simply waits for another voice command at STEP 500.
In this example the following voice commands are available in the voice recognition mode.
If the voice recognition module 13 detects that the user has said HANG-UP, whilst a call is in session between the communication device 5 and another communication device, the voice recognition module 13 instructs the peer-to-peer networking interface 7 to disconnect the communication device 5 from the other connected communication device.
If the voice recognition module 13 detects that a user has said PICK-UP, in response to an incoming call request, the voice recognition module 13 instructs the peer-to-peer networking interface 7 to connect the communication device 5 with the other connected communication device requesting the call.
If the voice recognition module 13 detects that a user has said DECLINE, in response to an incoming call request, the voice recognition module 13 instructs the peer-to-peer networking interface 7 to refuse a request to connect the communication device 5 with the other connected communication device requesting the call.
If the voice recognition module 13 detects that a user has said CALL followed by the name of a contact, the voice recognition module 13 instructs the peer-to-peer networking interface 7 to initiate a request to connect the communication device 5 with another connected communication device associated with the contact.
If the voice recognition module 13 detects that a user has said EXIT, then the voice recognition module 13 instructs the communication module 15 to enter the connection-enabled mode.
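The voice-recognition-mode commands listed above lend themselves to a simple dispatch table. The following Python sketch is illustrative only; the returned action strings and the in_call/ringing flags are assumptions rather than part of the disclosure.

def dispatch_voice_command(text, in_call, ringing):
    """Map a recognised utterance to an action name for this sketch.

    `in_call` and `ringing` stand in for the call state tracked by the
    communication module 15; the returned strings are illustrative only.
    """
    words = text.strip().upper().split()
    if not words:
        return "ignore"
    command = words[0]
    if command == "HANG-UP" and in_call:
        return "disconnect"
    if command == "PICK-UP" and ringing:
        return "accept_incoming"
    if command == "DECLINE" and ringing:
        return "refuse_incoming"
    if command == "CALL" and len(words) > 1:
        return "call:" + " ".join(words[1:])
    if command == "EXIT":
        return "enter_connection_enabled_mode"
    return "ignore"

print(dispatch_voice_command("call supervisor", in_call=False, ringing=False))
print(dispatch_voice_command("PICK-UP", in_call=False, ringing=True))
print(dispatch_voice_command("HANG-UP", in_call=True, ringing=False))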
In the system 1, user settings for each communication device 5 can be controlled by a device connected to the mesh network, such as a computer or a MESH network enabled smartphone. One of the user setting options could include a sound pressure threshold above which the user's speech is detected and processed into instructions for execution by the voice recognition module. Otherwise, settings would normally reflect user preferences for an optimum listening experience.
Cloud computing applications, such as private clouds for company infrastructure services, may be accessed by communication devices 5 via a gateway. This can include communication links to other sites for secure inter-site calls including conference calls.
The wireless network, such as a wireless mesh network (WMN), connects via a gateway to a secure central database containing employees' routing requirements for setting up wireless communication links.
When fitting the headset 3 to the ear of a user the pressure sensitive switch 19 may be engaged accidentally. To combat this issue, the communication device 5 may comprise a sensor, such as an acoustic in-ear sensor, arranged to determine that the communication device has not been mounted on an ear of a user and, in response, to ignore any user interactions with the switch 19. When the headset 3 is correctly mounted the occlusion effect attenuates the external sound entering the ear canal, thereby creating a difference in sound levels measured by the acoustic in-ear sensor and external acoustic sensors.
The acoustic in-ear sensor may determine that the communication device 5 has not been mounted if the audio it receives is above a particular attenuated proportion of the amplitude measured by the externally mounted sensor 23, and may determine that the communication device 5 has been mounted if the amplitude of the received audio falls below that particular level.
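As an illustrative sketch of this mounting check, the in-ear amplitude can be compared against a fraction of the amplitude at the external sensor 23. The 0.25 ratio and the function name below are assumptions; the description above refers only to 'a particular attenuated amplitude'.

def is_mounted(in_ear_amplitude, external_amplitude, attenuation_ratio=0.25):
    """Sketch of the occlusion-based mounting check.

    If the in-ear sensor still picks up more than `attenuation_ratio` of the
    amplitude seen by the external sensor 23, the ear canal is not occluded
    and the headset is taken to be off the ear.
    """
    if external_amplitude <= 0:
        return True  # nothing to compare against; assume mounted
    return in_ear_amplitude < attenuation_ratio * external_amplitude

print(is_mounted(in_ear_amplitude=0.05, external_amplitude=1.0))  # True: occluded
print(is_mounted(in_ear_amplitude=0.80, external_amplitude=1.0))  # False: off the ear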
In the connection-enabled mode, and in the absence of streamed wireless audio of any kind for a certain time period, the communication device 5 may power down the transceiver into a 'beacon mode'. In the beacon mode, the peer-to-peer networking interface periodically checks for messages/activations and sends out a unique identifier which can be used to determine the device's location before returning to a sleep state. In the beacon mode, the communication device alternates between an active state and a dormant state, where a greater amount of the functionality of first communication device is activated in the active state than in the dormant state.
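The beacon mode described above is essentially a duty cycle, which the following Python sketch illustrates. The timing values, the callables standing in for the peer-to-peer networking interface 7 and the return values are assumptions made for this example.

import time

def beacon_loop(unique_id, check_messages, broadcast, cycles=3,
                awake_s=0.05, dormant_s=0.2):
    """Duty-cycle sketch of the 'beacon mode' described above.

    `check_messages` and `broadcast` are stand-in callables for the
    peer-to-peer networking interface 7; the timing values are assumptions,
    not figures from the description.
    """
    for _ in range(cycles):
        # Active state: look for activity and announce our identifier so that
        # other nodes can estimate the device's location.
        if check_messages():
            return "wake"          # leave beacon mode on incoming activity
        broadcast(unique_id)
        time.sleep(awake_s)
        # Dormant state: most functionality is powered down.
        time.sleep(dormant_s)
    return "still dormant"

result = beacon_loop("headset-007",
                     check_messages=lambda: False,
                     broadcast=lambda ident: print("beacon:", ident))
print(result)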
The communication module 15 of the communication device 5 may be configured to operate in an override mode in which the communication device 5 is able to transmit audio for output at another communication device irrespective of the mode activated at the other communication device. This enables a supervisor/manager to have connection priority to the user's device by automatically forcing acceptance of a connection request. This option could include termination of a call by the supervisor exclusively. The override mode may allow the communication device 5 to transmit audio for output at a plurality of other communication devices irrespective of the mode activated at each respective communication device. Thus, the override mode may be used in place of a conventional public address (PA) system.
The methods described herein may be performed by software in machine readable form on a tangible storage medium e.g. in the form of a computer program comprising computer program code means adapted to perform all the steps of any of the methods described herein when the program is run on a computer and where the computer program may be embodied on a computer readable medium. Examples of tangible (or non-transitory) storage media include disks, thumb drives, memory cards, etc., and do not include propagated signals. The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or simultaneously. This acknowledges that firmware and software can be valuable, separately tradable commodities. It is intended to encompass software, which runs on or controls “dumb” or standard hardware, to carry out the desired functions. It is also intended to encompass software which “describes” or defines the configuration of hardware, such as HDL (hardware description language) software, as is used for designing silicon chips, or for configuring universal programmable chips, to carry out desired functions.
The term 'computer' or 'computing device' is used herein to refer to any device with processing capability such that it can execute instructions. Those skilled in the art will realise that such processing capabilities are incorporated into many different devices and therefore the term 'computer' or 'computing device' includes PCs, servers, mobile telephones, personal digital assistants and many other devices.
Those skilled in the art will realise that storage devices utilised to store program instructions can be distributed across a network. For example, a remote computer may store an example of the process described as software. A local or terminal computer may access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realise that by utilising conventional techniques known to those skilled in the art that all, or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like.
Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person.
It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages.
Any reference to 'an' item refers to one or more of those items. The term 'comprising' is used herein to mean including the method blocks or elements identified, but that such blocks or elements do not comprise an exclusive list and a method or apparatus may contain additional blocks or elements.
The steps of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without departing from the spirit and scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought. Any of the modules described above may be implemented in hardware or software.
It will be understood that the above description of a preferred embodiment is given by way of example only and that various modifications may be made by those skilled in the art. Although various embodiments have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the scope of this invention.
A wireless mesh network as described above can be used for wireless peer-to-peer connectivity. However, in another embodiment each communication device 5 comprises a low-power sub-GHz ISM band radio that does not depend on a mesh network for wide area coverage and connects wirelessly to a remote hub without the need for hopping from node to node.
The communication system may include P2P group functions where the supervisor/manager is given the option of group ownership which may extend to multiple concurrent P2P groups using WiFi or other such technology, or a group communication system (GCS) where the network is divided into optional sub-groups.

Claims (47)

1. A first communication device comprising:
a peer-to-peer networking interface arranged to establish a connection between the first communication device and a second communication device via a peer-to-peer network;
an audio input device for receiving audio from a first user;
a voice recognition module arranged to initiate establishing the connection between the first communication device and the second communication device based on an audio voice command received from the first user via the audio input device;
a communication module arranged to: transmit, via the peer-to-peer networking interface to the second communication device, audio data based on audio received from the first user via the audio input device; and arranged to receive audio data from the second communication device, via the peer-to-peer networking interface; and an audio output device for outputting audio based on the received audio data.
2. A first communication device according to claim 1 further comprising:
a contact sensor and a control module;
wherein the communication module is configured to be able to operate in a plurality of different modes; and the control module is arranged to detect whether one of a plurality of pre-defined user-interactions with the contact sensor has occurred; wherein each one of the pre-defined user-interactions is associated with a different mode of the communication module; and the control module is arranged to activate the mode associated with the detected user-interaction.
3. A first communication device according to claim 2 wherein a first one of the pre-defined user-interactions comprises activating the contact sensor for a first predetermined time period.
4. A first communication device according to claim 3 wherein a second one of the pre-defined user-interactions comprises activating the contact sensor for a second predetermined time period that is longer than the first predetermined time period.
5. A first communication device according to any one of claims 2 to 4 wherein a third one of the pre-defined user interactions comprises activating the contact sensor multiple times within a third predetermined time period.
6. A first communication device according to claim 5 wherein the third pre-defined user-interaction comprises activating the contact sensor twice within the third predetermined time period.
7. A first communication device according to any one of claims 2 to 6 wherein the communication module is configured to be able to operate in:
a connection-disabled mode in which the communication module is not permitted to transmit or receive audio data to or from the second communication device.
8. A first communication device according to claim 7 wherein in the connection-disabled mode: the peer-to-peer networking interface is not permitted to establish a connection between the first communication device and the second communication device via the peer-to-peer network.
9. A first communication device according to claim 7 or claim 8 wherein in the connection-disabled mode the voice activation module is deactivated.
10. A first communication device according to any one of claims 2 to 9 wherein the communication module is configured to be able to operate in:
a connection-enabled mode in which the communication module is permitted to transmit or receive audio data to or from the second communication device.
11. A first communication device according to claim 10 wherein in the connection-enabled mode the voice recognition module is only activated in response to a user interaction with the contact sensor.
12. A first communication device according to claim 10 or claim 11 wherein in the connection-enabled mode the voice recognition module is arranged to perform at least one action in response to at least one voice command of a first instruction set.
13. A first communication device according to claim 12 wherein the voice recognition module is arranged to request a confirmation from the user before performing the at least one action in response to the at least one voice command; and if the user provides a confirmation, the voice recognition module proceeds with performing the at least one action.
14. A first communication device according to claim 12 or claim 13 wherein the at least one action performed in response to at least one voice command of the first instruction set comprises:
establishing a connection with the second communication device; and the at least one voice command comprises:
inputting audio indicative of a label associated with the second communication device.
15. A first communication device according to claim 14 wherein the label comprises a name for a user associated with the second communication device.
16. A first communication device according to any one of claims 10 to 15 wherein in the connection-enabled mode:
the control module is arranged to activate the voice recognition module, in response to a user interaction with the contact sensor.
17. A first communication device according to any one of claims 2 to 16 wherein the communication module has:
a voice-control mode in which the communication module is permitted to transmit or receive audio data to or from the second communication device;
wherein the voice recognition module is activated when the communication module is in the voice-control mode.
18. A first communication device according to claim 17 wherein in the voice-control mode the voice recognition module is arranged to perform a plurality of actions each in response to at least one voice command of a second instruction set.
19. A first communication device according to claim 12 and claim 18 wherein the second instruction set comprises a greater number of voice commands than the first instruction set.
20. A first communication device according to claim 18 or claim 19 wherein the voice recognition module is arranged to request a confirmation from the user before performing the at least one action in response to the at least one voice command; and if the user provides a confirmation, the voice recognition module proceeds with performing the at least one action.
21. A first communication device according to any one of the preceding claims wherein there is only one contact sensor for activating the modes at the communication module.
22. A first communication device according to any one of the preceding claims comprising only one contact sensor.
23. A first communication device according to any one of the preceding claims wherein the audio input device comprises an in-ear microphone.
24. A first communication device according to any one of claims 2 to 23 further comprising:
a sensor arranged to determine that the communication device has not been mounted on an ear of a user; and, in response, ignoring user interactions with the contact sensor.
25. A first communication device according to any one of the preceding claims wherein the communication module has a beacon mode in which the first communication device alternates between an active state and a dormant state, where a greater amount of the functionality of first communication device is activated in the active state than in the dormant state.
26. A first communication device according to any one of the preceding claims wherein the second communication device has a plurality of modes; and the communication module of the first communication device has an override mode in which the first communication device is able to transmit audio data for output of audio based on the audio data at the second communication device irrespective of the mode activated at the second communication device.
27. A communication system comprising:
a plurality of the communication devices of any one of the preceding claims connected to one another via a peer-to-peer network.
28. A method comprising:
receiving audio from a first user via an audio input device at a first communication device;
initiating establishing a connection between the first communication device and a second communication device via a peer-to-peer network, based on an audio voice command received from the first user via the audio input device;
transmitting, via the peer-to-peer network to the second communication device, audio data based on audio received from the first user via the audio input device;
receiving at the first communication device audio data from the second communication device via the peer-to-peer network; and outputting audio based on the received audio data via an audio output device at the first communication device.
29. A method according to claim 28 further comprising:
detecting whether one of a plurality of pre-defined user-interactions with a contact sensor has occurred; wherein each one of the pre-defined user-interactions is associated with a different mode; and activating the mode associated with the detected user-interaction.
30. A method according to claim 29 further comprising:
not permitting transmitting or receiving audio data to or from the second communication device when the first communication device is in a connection-disabled mode.
31. A method according to claim 30 further comprising:
not permitting establishing a connection with the second communication device via the peer-to-peer network when the first communication device is in a connection-disabled mode.
32. A method according to claim 30 or claim 31 further comprising:
deactivating voice activation when the first communication device is in the connection-disabled mode.
33. A method according to any one of claims 29 to 32 further comprising:
permitting transmitting or receiving audio data to or from the second communication device when the first communication device is in a connection-enabled mode.
34. A method according to claim 33 further comprising:
activating voice recognition only in response to a user interaction with the contact sensor when in the connection-enabled mode.
35. A method according to claim 33 or claim 34 further comprising:
performing at least one action in response to at least one voice command of a first instruction set when in the connection-enabled mode.
36. A method according to claim 35 further comprising:
requesting confirmation from the user before performing the at least one action in response to the at least one voice command; and performing the at least one action, if the user provides a confirmation.
37. A method according to any one of claims 33 to 36 further comprising:
activating the voice recognition, in response to a user interaction with the contact sensor when in the connection-enabled mode.
38. A method according to any one of claims 29 to 37 further comprising:
permitting transmitting or receiving audio data to or from the second communication device, when the first communication device is in a voice-control mode; and activating voice recognition when in the voice-control mode.
39. A method according to any one of claims 29 to 38 further comprising:
performing a plurality of actions each in response to at least one voice command of a second instruction set.
40. A method according to claim 35 and claim 39 wherein the second instruction set comprises a greater number of voice commands than the first instruction set.
41. A method according to claim 39 or claim 40 further comprising:
requesting a confirmation from the user before performing the at least one action in response to the at least one voice command; and if the user provides a confirmation, performing the at least one action.
42. A method according to any one of claims 28 to 41 further comprising:
alternating between an active state and a dormant state, where a greater amount of the functionality of first communication device is activated in the active state than in the dormant state.
43. A method according to any one of claims 28 to 41 wherein the second communication device is able to operate in a plurality of modes; and the method further comprises:
transmitting audio data from the first communication device, in an override mode, for output at the second communication device irrespective of the mode activated at the second communication device.
44. A computer program comprising code portions which when loaded and run on a computer cause the computer to execute a method according to any of claims 28 to 43.
45. A communication system substantially as herein described with reference to the accompanying drawings.
46. A communication device substantially as herein described with reference to the accompanying drawings.
47. A method substantially as herein described with reference to the accompanying drawings.
Intellectual Property Office
Application No: GB 1619162.9 Examiner: Dan Hickery
GB1619162.9A 2016-11-11 2016-11-11 Communication device Withdrawn GB2556045A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB1619162.9A GB2556045A (en) 2016-11-11 2016-11-11 Communication device
PCT/GB2017/053407 WO2018087570A1 (en) 2016-11-11 2017-11-10 Improved communication device
PCT/GB2017/053404 WO2018087567A1 (en) 2016-11-11 2017-11-10 Communication device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1619162.9A GB2556045A (en) 2016-11-11 2016-11-11 Communication device

Publications (1)

Publication Number Publication Date
GB2556045A true GB2556045A (en) 2018-05-23

Family

ID=60421810

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1619162.9A Withdrawn GB2556045A (en) 2016-11-11 2016-11-11 Communication device

Country Status (2)

Country Link
GB (1) GB2556045A (en)
WO (1) WO2018087567A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050202843A1 (en) * 2004-03-15 2005-09-15 Fors Steven L. Method and system for utilizing wireless voice technology within a radiology workflow
WO2015009122A1 (en) * 2013-07-19 2015-01-22 Samsung Electronics Co., Ltd. Method and device for communication
US20160078870A1 (en) * 2014-09-16 2016-03-17 Toyota Motor Engineering & Manufacturing North America, Inc. Method for initiating a wireless communication link using voice recognition

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6339706B1 (en) * 1999-11-12 2002-01-15 Telefonaktiebolaget L M Ericsson (Publ) Wireless voice-activated remote control device
US7366535B2 (en) * 2004-04-21 2008-04-29 Nokia Corporation Push-to-talk mobile communication terminals
US20060206310A1 (en) * 2004-06-29 2006-09-14 Damaka, Inc. System and method for natural language processing in a peer-to-peer hybrid communications network
US20060178159A1 (en) * 2005-02-07 2006-08-10 Don Timms Voice activated push-to-talk device and method of use
US20070225049A1 (en) * 2006-03-23 2007-09-27 Andrada Mauricio P Voice controlled push to talk system


Also Published As

Publication number Publication date
WO2018087567A1 (en) 2018-05-17

Similar Documents

Publication Publication Date Title
JP6312843B2 (en) Establishing a connection between the mobile device and the vehicle's hands-free system based on their distance
US8868137B2 (en) Alert processing devices and systems for noise-reducing headsets and methods for providing alerts to users of noise-reducing headsets
US9521360B2 (en) Communication system and method
US10404762B2 (en) Communication system and method
JP4282721B2 (en) Mobile terminal device
US20140269531A1 (en) Intelligent connection management in wireless devices
US20120202425A1 (en) System and method for initiating ad-hoc communication between mobile headsets
WO2014137524A1 (en) Wireless device pairing
US8971946B2 (en) Privacy control in push-to-talk
WO2021254160A1 (en) Bluetooth device and bluetooth preemption method and apparatus therefor, and computer-readable storage medium
US10827455B1 (en) Method and apparatus for sending a notification to a short-range wireless communication audio output device
US11032675B2 (en) Electronic accessory incorporating dynamic user-controlled audio muting capabilities, related methods and communications terminal
US9516476B2 (en) Teleconferencing system, method of communication, computer program product and master communication device
EP3217638B1 (en) Transferring information from a sender to a recipient during a telephone call under noisy environment
US10356232B1 (en) Dual-transceiver wireless calling
US20120045990A1 (en) Intelligent Audio Routing for Incoming Calls
US20200244788A1 (en) Opportunistic initiation of voice or video calls between smart speaker devices
WO2023130105A1 (en) Bluetooth enabled intercom with hearing aid functionality
EP3836582B1 (en) Relay device for voice commands to be processed by a voice assistant, voice assistant and wireless network
JP5973289B2 (en) Portable terminal, voice control program, and voice control method
US20240171953A1 (en) Earphone communication method, earphone device and computer-readable storage medium
JP5890289B2 (en) Portable terminal, voice control program, and voice control method
JP6257746B2 (en) COMMUNICATION SYSTEM, SERVER DEVICE, AND COMMUNICATION METHOD
GB2556045A (en) Communication device
GB2476705A (en) Routing multiple communication sessions within a local area such as a home or apartment

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)