GB2605121A - An electronics module for a wearable article, a system, and a method of activation of an electronics module for a wearable article - Google Patents

An electronics module for a wearable article, a system, and a method of activation of an electronics module for a wearable article

Info

Publication number
GB2605121A
GB2605121A GB2101686.0A GB202101686A GB2605121A GB 2605121 A GB2605121 A GB 2605121A GB 202101686 A GB202101686 A GB 202101686A GB 2605121 A GB2605121 A GB 2605121A
Authority
GB
United Kingdom
Prior art keywords
controller
electronics module
voice data
primary controller
secondary controller
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2101686.0A
Other versions
GB202101686D0 (en)
Inventor
John Lynch Michael
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Prevayl Innovations Ltd
Original Assignee
Prevayl Innovations Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Prevayl Innovations Ltd filed Critical Prevayl Innovations Ltd
Priority to GB2101686.0A priority Critical patent/GB2605121A/en
Publication of GB202101686D0 publication Critical patent/GB202101686D0/en
Publication of GB2605121A publication Critical patent/GB2605121A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof
    • G06F1/32Means for saving power
    • G06F1/3203Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3234Power saving characterised by the action undertaken
    • G06F1/3293Power saving characterised by the action undertaken by switching to a less power-consuming processor, e.g. sub-CPU
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/28Constructional details of speech recognition systems
    • G10L15/30Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/163Wearable computers, e.g. on a belt
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof
    • G06F1/32Means for saving power
    • G06F1/3203Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3234Power saving characterised by the action undertaken
    • G06F1/3287Power saving characterised by the action undertaken by switching off individual functional units in the computer system
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/28Constructional details of speech recognition systems
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/06Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063Training
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/08Speech classification or search
    • G10L2015/088Word spotting

Abstract

A system 10 comprises a primary controller, a secondary controller coupled to the primary controller and a microphone coupled to the secondary controller. The secondary controller is configured to operate at a lower power than the primary controller. The secondary controller is further configured to: receive incoming voice data input via the microphone, detect the presence of a keyword from the incoming voice data, and in response to the detection of a keyword, transmit a command to the primary controller. The command may be to configure the primary controller to receive and process subsequent incoming voice data from the secondary controller. The secondary controller may be configured to receive the subsequent incoming voice data from the microphone, and to pass on the data to the primary controller after processing them. The system may comprise an electronics module 100 for a wearable article 200 and a user electronic device 300 communicatively coupled to the electronics module, wherein the user electronic device comprises the primary controller and the electronics module comprises the secondary controller. The secondary controller may send the subsequent voice data to the primary controller via a wireless communicator to activate a smart assistant service.

Description

AN ELECTRONICS MODULE FOR A WEARABLE ARTICLE, A SYSTEM, AND A METHOD OF ACTIVATION OF AN ELECTRONICS MODULE FOR A WEARABLE ARTICLE

The present invention is directed towards an electronics module for a wearable article and a method of activation of an electronics module for a wearable article. The present invention is also directed towards a system comprising an electronics module, a wearable article in the form of a garment, and a user electronic device communicatively coupled to the electronics module. More particularly, the wearable article comprises a biosignal measuring apparatus for sensing biosignals from a wearer of the wearable article, and which incorporates a sensor assembly and the electronics module. The electronics module is arranged to transmit biosignal data to the user electronic device or other remote device. The present invention is also directed towards a controller for an electronics module.
Background
Wearable articles, such as garments, incorporating sensors are wearable electronics used to measure and collect information from a wearer. Such wearable articles are commonly referred to as 'smart clothing'. It is advantageous to measure biosignals of the wearer during exercise, or other scenarios.
It is known to provide a garment, or other wearable article, to which an electronic device (i.e. an electronic module, and/or related components) is attached in a prominent position, such as on the chest or between the shoulder blades. Advantageously, the electronic device is a detachable device. The electronic device is configured to process the incoming signals, and the output from the processing is stored and/or displayed to a user in a suitable way. A sensor senses a biosignal such as electrocardiogram (ECG) signals, and the biosignals are coupled to the electronic device via an interface.
The sensors may be coupled to the interface by means of conductors which are connected to terminals provided on the interface to enable coupling of the signals from the sensor to the interface.
Electronics modules for wearable articles such as garments are known to communicate with user electronic devices over wireless communication protocols such as Bluetooth® and Bluetooth® Low Energy. These electronics modules are typically removably attached to the wearable article, interface with internal electronics of the wearable article, and comprise a Bluetooth® antenna for communicating with the user electronic device.
The electronic device includes drive and sensing electronics comprising components and associated circuitry, to provide the required functionality.
The drive and sensing electronics include a power source to power the electronic device and the associated components of the drive and sensing circuitry.
A number of devices include what are known as Smart Assistant services. Examples are Google Assistant, Amazon's Alexa, Samsung's Bixby and Apple's Siri.
These services are commonly used in smart speakers in the home such as the Google Home series of speakers or Amazon's Dot, Echo and Show devices. All these devices are powered by the mains. Smart Assistants can also be implemented in mobile devices such as smart watches and mobile phones.
Smart assistants enable voice activation of the device and functions carried out by the device.
Users primarily interact with the Smart Assistant through natural voice, with the user initiating a command with a keyword, such as 'Okay, Google...' or 'Alexa...'. The Assistant is able to search the internet, schedule events and alarms, play music, adjust hardware settings on the user's device, and show information.
In smart watches these Smart Assistants must be enabled manually by the user opening a dedicated app on the watch, often with the user's phone tethered at the same time, as the Assistant is unable to work without a server connection.
An object of the present invention is to provide an improved method and system for voice activation of services on a mobile device such as a smartwatch, mobile phone, or other wearable article.
Summary
According to a first aspect of the invention, there is provided a system comprising a primary controller, a secondary controller coupled to the primary controller and a microphone coupled to the secondary controller. The secondary controller is configured to operate at a lower power than the primary controller. The secondary controller is further configured to: receive incoming voice data input via the microphone; detect the presence of a keyword from the incoming voice data; and in response to the detection of a keyword, transmit a command to the primary controller.
The command may be to configure the primary controller to receive and process subsequent incoming voice data from the secondary controller.
The secondary controller may be configured to receive the subsequent incoming voice data from the microphone, and to pass on the subsequent voice data to the primary controller.
The secondary controller may be configured to process the subsequent voice data prior to passing the subsequent voice data to the primary controller.
The system may comprise an electronics module for a wearable article, wherein the electronics module comprises the primary controller and the secondary controller.
The system may comprise an electronics module for a wearable article and a user electronic device communicatively coupled to the electronics module, wherein the user electronic device comprises the primary controller and the electronics module comprises the secondary controller.
The secondary controller may include a wireless communicator, and the secondary controller may be configured to send the subsequent voice data to the user electronic device via the wireless communicator.
The command may be configured to activate a smart assistant service on the user electronic device.
The secondary controller may be integrated with the primary controller. Alternatively, the secondary controller is separate from the primary controller.
According to a second aspect of the present invention, there is provided an electronics module for a wearable article comprising a primary controller, a secondary controller coupled to the primary controller and a microphone coupled to the secondary controller. The secondary controller is configured to operate at a lower power than the primary controller. The secondary controller is further configured to: receive incoming voice data input via the microphone; detect the presence of a keyword from the incoming voice data; and in response to the detection of a keyword, transmit a command to the primary controller.
The primary controller may be configured to receive sensing data from sensing components of the wearable article.
The electronics module may further comprise an interface arranged to couple the primary controller to the sensing components of the wearable article.
The primary controller may be configured to perform a predetermined function in response to receiving the command. The predetermined function may be to activate a smart assistant service on the user electronic device. Alternatively, the predetermined function may be to begin processing incoming biosignal data, or any other function of the electronics module.
According to a third aspect of the present invention, there is provided a method. The method comprises deploying a speech recognition model to provide a speech recognition function for an electronics module. The method further comprises listening for incoming voice data. The method further comprises processing the incoming voice data. The method further comprises transmitting a command to a secondary device in response to a keyword detected as a result of the processing of the incoming voice data.
The transmitted command may be sent to a primary controller of the electronics module. The transmitted command may be a command to listen for subsequent voice data.
The transmitted command may be sent to a user electronics device.
The transmitted command may be a command to listen for subsequent voice data.
The transmitted command may be a command to activate a smart assistant service. The smart assistant service may be provided on the user electronic device.
The method may further comprise, prior to deploying the speech recognition model, providing a speech recognition model on a secondary controller of an electronics module and training the speech recognition model with a voice dataset.
The speech recognition model may be provided and trained on the electronics module.
The speech recognition model may be provided and trained on a remote device and transmitted to an electronics module for deployment.
The remote device may be a server communicatively coupled to the electronics module.
An advantage of the system is the user experience: there is no need for user intervention, such as having to press and hold a button to activate the service.
A second benefit is that the average power consumption of the electronics module is reduced, allowing for smaller devices or longer battery life.
Brief Description of the Drawings
Examples of the present disclosure will now be described with reference to the accompanying drawings, in which:
Figure 1 shows a schematic diagram for an example system according to aspects of the present disclosure;
Figure 2 illustrates a user electronic device displaying an ECG signal trace;
Figure 3 shows a schematic diagram for an example electronics module according to aspects of the present disclosure;
Figure 4 shows a schematic diagram for another example electronics module according to aspects of the present disclosure;
Figure 5 shows a schematic diagram for an example analogue-to-digital converter used in the example electronics modules of Figures 3 and 4 according to aspects of the present disclosure;
Figure 6 shows a detailed schematic diagram of the components of an example electronics module according to aspects of the present disclosure;
Figure 7 shows a flow diagram for an example method according to aspects of the present disclosure;
Figure 8 shows a swim flow diagram for a second example method according to aspects of the present disclosure;
Figure 9 shows a swim flow diagram for a third example method according to aspects of the present disclosure;
Figure 10 shows a swim flow diagram for a fourth example method according to aspects of the present disclosure;
Figure 11 shows a swim flow diagram for a fifth example method according to aspects of the present disclosure;
Figure 12 shows a swim flow diagram for a sixth example method according to aspects of the present disclosure;
Figure 13 shows a swim flow diagram for a seventh example method according to aspects of the present disclosure;
Figure 14 shows a swim flow diagram for an eighth example method according to aspects of the present disclosure;
Figure 15 shows a swim flow diagram for a ninth example method according to aspects of the present disclosure; and
Figure 16 shows a swim flow diagram for a tenth example method according to aspects of the present disclosure.
Detailed Description
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the disclosure as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to the bibliographical meanings but are merely used by the inventor to enable a clear and consistent understanding of the disclosure. Accordingly, it should be apparent to those skilled in the art that the following description of various embodiments of the disclosure is provided for illustration purpose only and not for the purpose of limiting the disclosure as defined by the appended claims and their equivalents.
It is to be understood that the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise.
"Wearable article" as referred to throughout the present disclosure may refer to any form of device interface which may be worn by a user such as a smart watch, necklace, garment, bracelet, or glasses. The wearable article may be a textile article. The wearable article may be a garment. The garment may refer to an item of clothing or apparel. The garment may be a top. The top may be a shirt, t-shirt, blouse, sweater, jacket/coat, or vest. The garment may be a dress, garment brassiere, shorts, pants, arm or leg sleeve, vest, jacket/coat, glove, armband, underwear, headband, hat/cap, collar, wristband, stocking, sock, or shoe, athletic clothing, personal protective equipment, including hard hats, swimwear, wetsuit or dry suit.
The term "wearer" includes a user who is using, wearing, or otherwise holding, the wearable article.
The type of wearable garment may dictate the type of biosignals to be detected. For example, a hat or cap may be used to detect electroencephalogram or magnetoencephalogram signals.
The wearable article/garment may be constructed from a woven or a non-woven material. The wearable article/garment may be constructed from natural fibres, synthetic fibres, or a natural fibre blended with one or more other materials which can be natural or synthetic. The yarn may be cotton. The cotton may be blended with polyester and/or viscose and/or polyamide according to the application. Silk may also be used as the natural fibre. Cellulose, wool, hemp and jute are also natural fibres that may be used in the wearable article/garment. Polyester, polycotton, nylon and viscose are synthetic fibres that may be used in the wearable article/garment.
The garment may be a tight-fitting garment. Beneficially, a tight-fitting garment helps ensure that the sensor devices of the garment are held in contact with or in the proximity of a skin surface of the wearer. The garment may be a compression garment. The garment may be an athletic garment such as an elastomeric athletic garment.
The garment has sensing units provided on an inside surface which are held in close proximity to a skin surface of a wearer wearing the garment. This enables the sensing units to measure biosignals for the wearer wearing the garment.
The sensing units may be arranged to measure one or more biosignals of a wearer wearing the garment.
"Biosignal" as referred to throughout the present disclosure may refer to signals from living beings that can be continually measured or monitored. Biosignals may be electrical or nonelectrical signals. Signal variations can be time variant or spatially variant.
Sensing components may be used for measuring one or a combination of bioelectrical, bioimpedance, biochemical, biomechanical, bioacoustics, biooptical or biothermal signals of the wearer 600. The bioelectrical measurements include electrocardiograms (ECG), electrogastrograms (EGG), electroencephalograms (EEG), and electromyography (EMG). The bioimpedance measurements include plethysmography (e.g., for respiration), body composition (e.g., hydration, fat, etc.), and electroimpedance tomography (EIT). The biomagnetic measurements include magnetoneurograms (MNG), magnetoencephalography (MEG), magnetogastrogram (MGG), magnetocardiogram (MCG). The biochemical measurements include glucose/lactose measurements which may be performed using chemical analysis of the wearer 600's sweat. The biomechanical measurements include blood pressure. The bioacoustics measurements include phonocardiograms (PCG). The biooptical measurements include orthopantomogram (OPG). The biothermal measurements include skin temperature and core body temperature measurements.
Referring to Figures 1 to 6, there is shown an example system 10 according to aspects of the present disclosure. The system 10 comprises an electronics module 100, a wearable article in the form of a garment 200, and a user electronic device 300. The garment 200 is worn by a user who in this embodiment is the wearer 600 of the garment 200.
The electronics module 100 is arranged to integrate with sensing units 400 incorporated into the garment 200 to obtain signals from the sensing units 400.
The electronics module 100 and the wearable article 200, including the sensing units 400, together comprise a wearable assembly 500.
The sensing units 400 comprise one or more sensors 209, 211 with associated conductors 203, 207 and other components and circuitry. The electronics module 100 is further arranged to wirelessly communicate data to the user electronic device 300. Various protocols enable wireless communication between the electronics module 100 and the user electronic device 300. Example communication protocols include Bluetooth®, Bluetooth® Low Energy, and near-field communication (NFC).
The garment 200 has an electronics module holder in the form of a pocket 201. The pocket 201 is sized to receive the electronics module 100. When disposed in the pocket 201, the electronics module 100 is arranged to receive sensor data from the sensing units 400. The electronics module 100 is therefore removable from the garment 200.
The present disclosure is not limited to electronics module holders in the form of pockets.
Alternatively, the electronics module 100 may be configured to be releasably mechanically coupled to the garment 200. The mechanical coupling of the electronic module 100 to the garment 200 may be provided by a mechanical interface such as a clip, a plug and socket arrangement, etc. The mechanical coupling or mechanical interface may be configured to maintain the electronic module 100 in a particular orientation with respect to the garment 200 when the electronic module 100 is coupled to the garment 200. This may be beneficial in ensuring that the electronic module 100 is securely held in place with respect to the garment 200 and/or that any electronic coupling of the electronic module 100 and the garment 200 (or a component of the garment 200) can be optimized. The mechanical coupling may be maintained using friction or using a positively engaging mechanism, for example.
Beneficially, the removable electronic module 100 may contain all the components required for data transmission and processing such that the garment 200 only comprises the sensing units 400 e.g. the sensors 209, 211 and communication pathways 203, 207. In this way, manufacture of the garment 200 may be simplified. In addition, it may be easier to clean a garment 200 which has fewer electronic components attached thereto or incorporated therein. Furthermore, the removable electronic module 100 may be easier to maintain and/or troubleshoot than embedded electronics. The electronic module 100 may comprise flexible electronics such as a flexible printed circuit (FPC).
The electronic module 100 may be configured to be electrically coupled to the garment 200.
Referring to Figure 3, there is shown a schematic diagram of an example of the electronics module 100 of Figure 1.
A more detailed block diagram of the electronic components of the electronics module 100 and garment 200 is shown in Figure 4.
The electronics module 100 comprises an interface 101, a controller 103, a power source 105, and one or more communication devices which, in the exemplary embodiment, comprise a first antenna 107, a second antenna 109 and a wireless communicator 159. The electronics module 100 also includes an input unit such as a proximity sensor or a motion sensor 111, for example in the form of an inertial measurement unit (IMU).
The electronics module 100 also includes additional peripheral devices that are used to perform specific functions as will be described in further detail herein.
The interface 101 is arranged to communicatively couple with the sensing unit 400 of the garment 200. The sensing unit 400 comprises, in this example, the two sensors 209, 211 coupled to respective first and second electrically conductive pathways 203, 207, each with respective termination points 213, 215. The interface 101 receives signals from the sensors 209, 211. The controller 103 is communicatively coupled to the interface 101 and is arranged to receive the signals from the interface 101 for further processing.
The interface 101 of the embodiment described herein comprises first and second contacts 163, 165 which are arranged to be communicatively coupled to the termination points 213, 215 of the respective first and second electrically conductive pathways 203, 207. The coupling between the termination points 213, 215 and the respective first and second contacts 163, 165 may be conductive or a wireless (e.g. inductive) communication coupling.
In this example the sensors 209, 211 are used to measure electropotential signals such as electrocardiogram (ECG) signals, although the sensors 209, 211 could be configured to measure other biosignal types as also discussed above.
The power source 105 may comprise a plurality of power sources. The power source 105 may be a battery. The battery may be a rechargeable battery. The battery may be a rechargeable battery adapted to be charged wirelessly such as by inductive charging. The power source 105 may comprise an energy harvesting device. The energy harvesting device may be configured to generate electric power signals in response to kinetic events such as kinetic events performed by the wearer 600 of the garment 200. The kinetic event could include walking, running, exercising or respiration of the wearer 600. The energy harvesting material may comprise a piezoelectric material which generates electricity in response to mechanical deformation of the material. The energy harvesting device may harvest energy from body heat of the wearer 600 of the garment. The energy harvesting device may be a thermoelectric energy harvesting device. The power source 105 may be a super capacitor, or an energy cell.
The first antenna 107 is arranged to communicatively couple with the user electronic device 300 using a first communication protocol. In the example described herein the first antenna 107 is a passive tag such as a passive Radio Frequency Identification (RFID) tag or Near Field Communication (NFC) tag. These tags comprise a communication module as well as a memory which stores the information, and a radio chip. The user electronic device 300 is powered to induce a magnetic field in an antenna of the user electronic device 300. When the user electronic device 300 is placed in the magnetic field of the communication module antenna 107, the user electronic device 300 induces current in the communication module antenna 107. This induced current is used to retrieve the information from the memory of the tag and transmit the same back to the user electronic device 300. The controller 103 is arranged to energize the first antenna 107 to transmit information.
In an example operation, the user electronic device 300 is brought into proximity with the electronics module 100. In response to this, the electronics module 100 is configured to energize the first antenna 107 to transmit information to the user electronic device 300 over the first wireless communication protocol. Beneficially, this means that the act of the user electronic device 300 approaching the electronics module 100 energizes the first antenna 107 to transmit the information to the user electronic device 300.
The information may comprise a unique identifier for the electronics module 100. The unique identifier for the electronics module 100 may be an address for the electronics module 100 such as a MAC address or Bluetooth® address.
The information may comprise authentication information used to facilitate the pairing between the electronics module 100 and the user electronic device 300 over the second wireless communication protocol. This means that the transmitted information is used as part of an out of band (OOB) pairing process.
The information may comprise application information which may be used by the user electronic device 300 to start an application on the user electronic device 300 or configure an application running on the user electronic device 300. The application may be started on the user electronic device 300 automatically (e.g. without wearer 600 input). Alternatively, the application information may cause the user electronic device 300 to prompt the wearer 600 to start the application on the user electronic device. The information may comprise a uniform resource identifier such as a uniform resource location to be accessed by the user electronic device, or text to be displayed on the user electronic device for example. It will be appreciated that the same electronics module 100 can transmit any of the above example information either alone or in combination. The electronics module 100 may transmit different types of information depending on the current operational state of the electronics module 100 and based on information it receives from other devices such as the user electronic device 300.
The second antenna 109 is arranged to communicatively couple with the user electronic device 300 over a second wireless communication protocol. The second wireless communication protocol may be a Bluetooth® protocol, Bluetooth® 5 or a Bluetooth® Low Energy protocol but is not limited to any particular communication protocol. In the present embodiment, the second antenna 109 is integrated into the controller 103. The second antenna 109 enables communication between the user electronic device 300 and the controller 103 for configuration and set up of the controller 103 and the peripheral devices as may be required. Configuration of the controller 103 and peripheral devices utilises the Bluetooth® protocol.
The wireless communicator 159 may be an alternative to, or in addition to, the first and second antennas 107, 109.
Other wireless communication protocols can also be used, such as those used for communication over: a wireless wide area network (WWAN), a wireless metro area network (WMAN), a wireless local area network (WLAN), a wireless personal area network (WPAN), Bluetooth® Low Energy, Bluetooth® Mesh, Thread, Zigbee, IEEE 802.15.4, Ant, a Global Navigation Satellite System (GNSS), a cellular communication network, or any other electromagnetic RF communication protocol. The cellular communication network may be a fourth generation (4G) LTE, LTE Advanced (LTE-A), LTE Cat-M1, LTE Cat-M2, NB-IoT, fifth generation (5G), sixth generation (6G), and/or any other present or future developed cellular wireless network.
The electronics module 100 includes a clock unit in the form of a real time clock (RTC) 153 coupled to the controller 103 and, for example, to be used for data logging, clock building, time stamping, timers, and alarms. As an example, the RTC 153 is driven by a low frequency clock source or crystal operated at 32.768 kHz.
The electronics module 100 also includes a location device 161 such as a GNSS (Global Navigation Satellite System) device which is arranged to provide location and position data for applications as required. In particular, the location device 161 provides geographical location data at least to a nation state level. Any device suitable for providing location, navigation or for tracking the position could be utilised. The GNSS device may include Global Positioning System (GPS), BeiDou Navigation Satellite System (BDS) and Galileo system devices.
The power source 105 in this example is a lithium polymer battery 105. The battery 105 is rechargeable and charged via a USB C input 131 of the electronics module 100. Of course, the present disclosure is not limited to recharging via USB and instead other forms of charging such as inductive or far-field wireless charging are within the scope of the present disclosure.
Additional battery management functionality is provided in terms of a charge controller 133, battery monitor 135 and regulator 147. These components may be provided through use of a dedicated power management integrated circuit (PMIC).
The USB C input 131 is also coupled to the controller 103 to enable direct communication between the controller 103 and an external device if required.
The controller 103 is communicatively connected to a battery monitor 135 so that the controller 103 may obtain information about the state of charge of the battery 105.
The controller 103 has an internal memory 167 and is also communicatively connected to an external memory 143 which in this example is a NAND Flash memory. The memory 143 is used for the storage of data when no wireless connection is available between the electronics module 100 and a user electronic device 300. The memory 143 may have a storage capacity of at least 1 GB and preferably at least 2 GB. The electronics module 100 also comprises a temperature sensor 145 and a light emitting diode 147 for conveying status information. The electronic module 100 also comprises conventional electronics components including a power-on-reset generator 149, a development connector 151, the real time clock 153 and a FROG header 155.
Additionally, the electronics module 100 may comprise a haptic feedback unit 157 for providing a haptic (vibrational) feedback to the wearer 600.
The wireless communicator 159 may provide wireless communication capabilities for the garment 200 and enables the garment to communicate via one or more wireless communication protocols to a remote server 700. Wireless communications may include: a wireless wide area network (WWAN), a wireless metro area network (WMAN), a wireless local area network (WLAN), a wireless personal area network (WPAN), Bluetooth® Low Energy, Bluetooth® Mesh, Bluetooth® 5, Thread, Zigbee, IEEE 802.15.4, Ant, near field communication (NFC), Near Field Magnetic Induction (NFMI), a Global Navigation Satellite System (GNSS), a cellular communication network, or any other electromagnetic RF communication protocol. The cellular communication network may be a fourth generation (4G) LTE, LTE Advanced (LTE-A), LTE Cat-M1, LTE Cat-M2, NB-IoT, fifth generation (5G), sixth generation (6G), and/or any other present or future developed cellular wireless network.
The electronics module 100 may additionally comprise a Universal Integrated Circuit Card (UICC) that enables the garment to access services provided by a mobile network operator (MNO) or virtual mobile network operator (VMNO). The UICC may include at least a read-only memory (ROM) configured to store an MNO or VMNO profile that the garment can utilize to register and interact with an MNO or VMNO. The UICC may be in the form of a Subscriber Identity Module (SIM) card. The electronics module 100 may have a receiving section arranged to receive the SIM card. In other examples, the UICC is embedded directly into a controller of the electronics module 100. That is, the UICC may be an electronic/embedded UICC (eUICC).
An eUICC is beneficial as it removes the need to store a number of MNO profiles, i.e. electronic Subscriber Identity Modules (eSIMs). Moreover, eSIMs can be remotely provisioned to garments. The electronics module 100 may comprise a secure element that represents an embedded Universal Integrated Circuit Card (eUICC). In the present disclosure, the electronics module may also be referred to as an electronics device or unit. These terms may be used interchangeably.
The controller 103 is connected to the interface 101 via an analog-to-digital converter (ADC) front end 139 and an electrostatic discharge (ESD) protection circuit 141.
Figure 5 is a schematic illustration of the component circuitry for the ADC front end 139.
In the example described herein the ADC front end 139 is an integrated circuit (IC) chip which converts the raw analogue biosignal received from the sensors 209, 211 into a digital signal for further processing by the controller 103. ADC IC chips are known, and any suitable one can be utilised to provide this functionality. ADC IC chips for ECG applications include, for example, the MAX30003 chip produced by Maxim Integrated Products Inc. The ADC front end 139 includes an input 169 and an output 171.
Raw biosignals from the electrodes 209, 211 are input to the ADC front end 139, where received signals are processed in an ECG channel 175 and subject to appropriate filtering through high pass and low pass filters for static discharge and interference reduction as well as for reducing bandwidth prior to conversion to digital signals. The reduction in bandwidth is important to remove or reduce motion artefacts that give rise to noise in the signal due to movement of the sensors 209, 211.
The output digital signals may be decimated to reduce the sampling rate prior to being passed to a serial peripheral interface (SPI) 173 of the ADC front end 139.
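As an illustration of this signal conditioning stage, the following Python sketch band-limits a raw ECG signal and decimates it. The sample rates, cut-off frequencies and use of SciPy are assumptions made for the example and are not details specified in this disclosure.

```python
# A minimal sketch of the filtering and decimation described above,
# assuming a 512 Hz raw sample rate and a 4:1 decimation ratio
# (illustrative values only).
import numpy as np
from scipy import signal

def condition_ecg(raw: np.ndarray, fs: int = 512, decimation: int = 4) -> np.ndarray:
    """Band-limit a raw ECG signal and reduce its sampling rate."""
    # High-pass at 0.5 Hz removes baseline wander and static offsets.
    hp = signal.butter(2, 0.5, btype="highpass", fs=fs, output="sos")
    # Low-pass at 40 Hz reduces interference and motion-artefact bandwidth.
    lp = signal.butter(4, 40.0, btype="lowpass", fs=fs, output="sos")
    filtered = signal.sosfiltfilt(lp, signal.sosfiltfilt(hp, raw))
    # Decimate to lower the rate passed over the SPI to the controller.
    return signal.decimate(filtered, decimation)
```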
ADC front end IC chips suitable for ECG applications may be configured to determine information from the input biosignals such as heart rate and the QRS complex, including the R-R interval. Support circuitry 177 provides base voltages for the ECG channel 175.
The determining of the QRS complex can be implemented for example using the known Pan-Tompkins algorithm as described in Pan, Jiapu; Tompkins, Willis J. (March 1985). "A Real-Time QRS Detection Algorithm". IEEE Transactions on Biomedical Engineering. BME-32 (3): 230-236.
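The sketch below outlines the classic Pan-Tompkins stages (band-pass filtering, differentiation, squaring, moving-window integration and peak thresholding). The fixed threshold is a simplification for illustration; the published algorithm adapts its thresholds over time.

```python
# A compact sketch of the Pan-Tompkins QRS detection stages.
# Window lengths and the fixed threshold are illustrative assumptions.
import numpy as np
from scipy import signal

def pan_tompkins_r_peaks(ecg: np.ndarray, fs: int = 128) -> np.ndarray:
    sos = signal.butter(2, [5, 15], btype="bandpass", fs=fs, output="sos")
    bandpassed = signal.sosfiltfilt(sos, ecg)
    derivative = np.diff(bandpassed)        # emphasise the steep QRS slope
    squared = derivative ** 2               # rectify and amplify the QRS energy
    window = int(0.15 * fs)                 # ~150 ms moving integration window
    integrated = np.convolve(squared, np.ones(window) / window, mode="same")
    threshold = 0.5 * integrated.max()      # crude fixed threshold for the sketch
    peaks, _ = signal.find_peaks(integrated, height=threshold,
                                 distance=int(0.25 * fs))
    # Sample indices of detected R peaks; R-R intervals follow from np.diff(peaks).
    return peaks
```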
Signals are output to the controller 103 via the SPI 173.
The controller 103 can also be configured to apply digital signal processing (DSP) to the digital signal from the ADC front end 139.
The DSP may include noise filtering additional to that carried out in the ADC front end 139 and may also include additional processing to determine further information about the signal from the ADC front end 139.
The controller 103 is configured to send the biosignals to the user electronic device 300 using any of the first antenna 107, second antenna 109, or wireless communicator 159.
In some examples, the input unit, such as a proximity sensor or motion sensor, is arranged to detect a displacement of the electronics module 100. These displacements of the electronics module 100 may be caused by an object being tapped against the electronics module 100 or by the wearer 600 of the electronics module 100 being in motion, for example walking or running, or simply getting up from a recumbent position.
In the exemplary embodiment described herein, motion detection is provided by the IMU 111 which may comprise an accelerometer and optionally one or both of a gyroscope and a magnetometer. A gyroscope/magnetometer is not required in all examples, and instead only an accelerometer may be provided, or a gyroscope/magnetometer may be present but put into a low power state.
The input event could be provided by artificial intelligence (AI) and, as such, the input unit could be an AI system, machine or engine.
The IMU 111 can therefore be used to detect orientation and gestures, with event-detection interrupts enabling motion tracking and contextual awareness. It has recognition of free-fall events, tap and double-tap sensing, activity or inactivity, stationary/motion detection, and wake-up events in addition to 6D orientation. A single tap, for example, can be used to enable toggling through various modes or waking the electronics module 100 from a low power mode.
Known examples of IMUs that can be used for this application include the ST LSM6DSOX manufactured by STMicroelectronics. This IMU is a system-in-package IMU featuring a 3D digital accelerometer and a 3D digital gyroscope.
Another example of a known IMU suitable for this application is the LSM6DSO, also by STMicroelectronics.
The IMU 111 can include machine learning functionality, for example as provided in the ST LSM6DSOX. The machine learning functionality is implemented in a machine learning core (MLC).
The machine learning processing capability uses decision-tree logic. The MLC is an embedded feature of the IMU 111 and comprises a set of configurable parameters and decision trees.
As is understood in the art, a decision tree is a mathematical tool composed of a series of configurable nodes. Each node is characterized by an "if-then-else" condition, where an input signal (represented by statistical parameters calculated from the sensor data) is evaluated against a threshold.
Decision trees are stored and generate results in the dedicated output registers. The results of the decision tree can be read by the application processor at any time. Furthermore, there is the possibility to generate an interrupt for every change in the result of the decision tree, which is beneficial in maintaining low-power consumption.
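As a loose illustration of this node logic, the Python sketch below evaluates a two-node tree. The feature names, thresholds and class labels are invented for the example; a real MLC configuration would instead be generated offline with the tools mentioned below and written to the IMU's configuration registers.

```python
# Illustrative sketch of the "if-then-else" decision-tree node logic.
# All features, thresholds and labels here are hypothetical.
from dataclasses import dataclass
from typing import Union

@dataclass
class Node:
    feature: str                  # statistical parameter computed from sensor data
    threshold: float
    if_true: Union["Node", str]   # child node or class label
    if_false: Union["Node", str]

def classify(node, features):
    # Walk the tree until a leaf (class label string) is reached.
    while isinstance(node, Node):
        node = node.if_true if features[node.feature] <= node.threshold else node.if_false
    return node

tree = Node("accel_variance", 0.02,
            if_true="stationary",
            if_false=Node("accel_peak", 1.8, if_true="walking", if_false="tap"))
print(classify(tree, {"accel_variance": 0.05, "accel_peak": 2.3}))  # -> "tap"
```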
Decision trees can be generated using known machine learning tools such as Weka, developed by the University of Waikato, or using MATLAB or Python.
In an example operation, the wearer 600 has positioned the electronics module 100 within the pocket 201 (Figure 1) of the garment 200 and is wearing the garment 200. The wearer 600 taps their hand or mobile phone 300 against the pocket 201 and this tap event is detected by the input unit, which in this exemplary embodiment is the IMU 111. The IMU 111 sends a signal to the controller 103 to wake up the controller 103 from the low power mode.
A processor of the IMU 111 may perform processing tasks to classify different types of detected motion. The processor of the IMU 111 may use the machine-learning functions so as to perform this classification. Performing the processing operations on the IMU 111 rather than the controller 103 is beneficial as it reduces power consumption and leaves the controller 103 free to perform other tasks. In addition, it allows for motion events to be detected even when the controller 103 is operating in a low power mode.
The IMU 111 may be configured to detect when the electronic device 100 has been stationary but then begins to move, for example when left on a surface but then attached to the garment 200. The IMU 111 may be configured to detect that the wearer 600 of the garment 200, with the electronic device attached, is resting, or is moving, for example during exercise.
The IMU 111 communicates with the controller 103 over a serial protocol such as the Serial Peripheral Interface (SPI), Inter-Integrated Circuit (I2C), Controller Area Network (CAN), and Recommended Standard 232 (RS-232). Other serial protocols are within the scope of the present disclosure. The IMU 111 is also able to send interrupt signals to the controller 103 when required so as to transition the controller 103 from a low power mode to a normal power mode when a motion event is detected, for example, or vice versa. The interrupt signals may be transmitted via one or more dedicated interrupt pins.
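By way of a hedged illustration, the sketch below shows how such an interrupt-driven wake-up might look in MicroPython-style firmware. The pin number, the use of the `machine` module, and `lightsleep()` as the low power mode are assumptions for the example, not details taken from this disclosure.

```python
# Illustrative MicroPython-style sketch of wake-on-motion via a dedicated
# interrupt pin. Pin number and APIs are assumptions, not from the patent.
import machine
from machine import Pin

IMU_INT_PIN = 4  # hypothetical interrupt pin wired to the IMU

def on_motion(pin):
    # Execution resumes here after lightsleep(); the controller can now
    # service the motion event at its normal power level.
    print("motion interrupt: leaving low power mode")

irq = Pin(IMU_INT_PIN, Pin.IN, Pin.PULL_DOWN)
irq.irq(trigger=Pin.IRQ_RISING, handler=on_motion)
machine.lightsleep()  # remain in a low power mode until the IMU interrupt fires
```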
The electronics module 100 also includes a low-power secondary controller 110 coupled to the controller 103. The controller 103 of the electronics module 100 acts as a primary controller for the electronics module 100.
The secondary controller 110 is also coupled to a pulse-density modulated (PDM) microphone 112.
The primary controller 103 and/or secondary controller 110 include speech recognition functionality.
Speech recognition can be implemented using machine learning comprising neural network models. Such neural network models typically use the Recurrent Neural Network (RNN) or Convolutional Neural Network (CNN) architecture, for example.
The neural networks break down incoming speech from the microphone 112 into components, which are processed and analysed. The model trains on a dataset of known spoken words or phrases, and makes predictions on new sounds, forming a hypothesis about what the user is saying.
In one embodiment of the invention, the speech recognition model is built on the secondary controller 110, and the model can then be trained and used to subsequently recognise input voice commands.
Speech recognition can also be implemented on the primary controller 103 or on the user electronics device 300.
Voice recognition and voice activation use received voice signals, which are then processed to recognise keywords and commands to deploy machine functions. This may include machine learning in order to recognise and interpret particular phrases and voices and so improve the functionality of the voice recognition and activation functionality.
Using machine learning requires the collection of voice data, from which features are extracted and which are then used to train a voice recognition machine learning model. Once the model is trained, it can be deployed to carry out voice recognition.
Models may be, for example, a convolutional neural network, which comprises an input layer, an output layer and a number of hidden layers, and which performs operations on the input and derives a predictive output.
Voice datasets are available for machine learning applications. One example is the Google Speech commands dataset. Additional words can be added to the dataset if required. An example of this would be a particular keyword.
The next step is to extract features from the raw voice data that can be used as the input to the machine learning model. As an example, the model can use Mel Frequency Cepstral Coefficients (MFCCs). MFCCs are representations of the short-term power spectrum of a sound.
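As an illustration, MFCCs can be extracted with an off-the-shelf audio library such as librosa; the library choice, sample rate and file name below are assumptions made for the example.

```python
# A minimal sketch of MFCC feature extraction for the model input.
import librosa

def extract_mfccs(wav_path: str, n_mfcc: int = 13):
    audio, sr = librosa.load(wav_path, sr=16000)  # 16 kHz is typical for speech
    # Each column is one frame's short-term power-spectrum representation.
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)

mfccs = extract_mfccs("prevayl_keyword.wav")  # hypothetical recording
print(mfccs.shape)  # (13, n_frames)
```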
The voice dataset is split into a training set, a validation set, and a test set. The training set trains the model, the validation set assesses the accuracy of each pass of training, and the test set is used at the end to test the model.
Training of the speech recognition model comprises a calibration exercise and is used to detect a number of spoken commands. The microphone 112 is used to detect voice input for the training, and for subsequent operation. The model can be continuously trained by listening to additional voice data.
The voice recognition functionality in the secondary controller 110 is typically hardcoded.
The model could be trained on the server 700 and then transferred directly to the electronics module 100 via a cellular radio network.
Speech recognition can be implemented using, for example, TensorFlow. TensorFlow is a software library developed by Google for dataflow and differentiable programming across a range of tasks. It is a symbolic math library and is also used for machine learning applications such as neural networks. TensorFlow uses the programming language Python to provide a convenient front-end application programming interface and enables the neural network models to be built so as to recognize spoken words.
When the wearer 600 voices an activation command, the secondary controller 110 recognises the command as a keyword. In response, the secondary controller 110 is configured to send a command to the primary controller 103 to begin audio interpretation of subsequent incoming voice data, i.e. the voice data that comes after the keyword.
An example of a keyword could be "Prevayl". The subsequent voice data could be an instruction to perform a selected function or operation.
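As a hedged sketch of such a keyword-spotting model, the following TensorFlow/Keras snippet builds a small CNN over MFCC inputs. The input shape, layer sizes and two-class output ('Prevayl' versus background) are illustrative assumptions, not the specific model of this disclosure.

```python
# A minimal CNN keyword-spotting sketch. Input assumes 13 MFCCs over
# ~1 s of audio (98 frames); all dimensions are illustrative.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(13, 98, 1)),       # MFCCs x frames x channel
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # keyword / not-keyword
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# After training (model.fit on the labelled MFCC dataset), the model could be
# converted for a low-power microcontroller with tf.lite.TFLiteConverter.
```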
Optionally, the secondary controller 110 may include a wireless communicator such as a Bluetooth® stack, so that, when the keyword is spoken, the secondary controller 110 is operable to send a Bluetooth command to the user electronic device 300 to activate the smart assistant application on the user electronic device 300. In this case, voice recognition is also implemented on the user electronic device 300.
Alternatively, in response to the keyword, the secondary controller 110 could be configured to transition to an audio streaming mode in which subsequent incoming voice data from the microphone 112, is sent to the user electronic device 300 for further processing on the user electronics device 300. The controller 305 of the user electronic device 300 could then process the incoming voice command and respond accordingly.
This means, for example, that the wearer 600 could issue commands such as:
'Prevayl, start a workout'
'Prevayl, pause'
'Prevayl, stop'
More complex commands could be:
'Prevayl, I'm taking a break/rest'
'Prevayl, what is my training zone?'
'Prevayl, did I beat my personal best?'
Understanding more complex commands would require additional training and computing power.
In an alternative, the secondary controller 110 could be integrated with the controller 103.
In a further alternative, the controller 103 may have an ultra-low power mode of operation or can be configured to run at a reduced clock rate. When the keyword is detected, the controller 103 is configured to switch to a high power mode or increase its clock speed to process any subsequent voice data accordingly.
The user electronic device 300 in the example of Figures 2 and 6 is in the form of a mobile phone or tablet and comprises a controller 305, a memory 304, a wireless communicator 307, a display 301, a user input unit 306, a capturing device in the form of a camera 303 and an inertial measurement unit (IMU) 309. The controller 305 provides overall control to the user electronic device 300.
The user input unit 306 receives inputs from the user such as a user credential.
The memory 304 stores information for the user electronic device 300.
The display 301 is arranged to display a user interface 302 for applications operable on the user electronic device 300.
The IMU 309 provides motion and/or orientation detection and may comprise an accelerometer and optionally one or both of a gyroscope and a magnetometer.
The user electronic device 300 may also include a biometric sensor. The biometric sensor may be used to identify a user or users of the device based on unique physiological features. The biometric sensor may be: a fingerprint sensor used to capture an image of a user's fingerprint; an iris scanner or a retina scanner configured to capture an image of a user's iris or retina; an ECG module used to measure the user's ECG; or the camera of the user electronic device arranged to capture the face of the user. The biometric sensor may be an internal module of the user electronic device. The biometric module may be an external (stand-alone) device which may be coupled to the user electronic device by a wired or wireless link.
The controller 305 is configured to launch an application which is configured to display insights derived from the biosignal data processed by the ADC front end 139 of the electronics module 100, input to electronics module controller 103, and then transmitted from the electronics module 100. The transmitted data is received by the wireless communicator 307 of the user electronic device 300 and input to the controller 305.
Insights include, but are not limited to, an ECG signal trace i.e. the QRS complex, heart rate, respiration rate, core temperature but can also include identification data for the wearer 600 using the wearable assembly 500.
The display 301 is also configured to display a signal trace 800 as part of the user interface 302, for example an ECG signal trace 800 as illustrated in Figure 2.
The display 301 may be a presence-sensitive display and therefore may comprise the user input unit 306. The presence-sensitive display may include a display component and a presence-sensitive input component. The presence sensitive display may be a touch-screen display arranged as part of the user interface 302.
In one embodiment of the invention, the controller 305 is configured to launch a smart assistant application. Smart assistant applications, also known as virtual assistants, are configured to perform tasks or services, or answer questions. The smart assistant includes voice recognition.
The controller 305 is configured to build and to train a speech recognition model. The controller 305 then communicates the trained model to the electronics module 100 using the wireless communicator 307.
User electronic devices in accordance with the present invention are not limited to mobile phones or tablets and may take the form of any electronic device which may be used by a user to perform the methods according to aspects of the present invention. The user electronic device 300 may be an electronic device such as a smartphone, tablet personal computer (PC), mobile phone, smart phone, video telephone, laptop PC, netbook computer, personal digital assistant (PDA), mobile medical device, camera or wearable device. The user electronic device 300 may include a head-mounted device such as an Augmented Reality, Virtual Reality or Mixed Reality head-mounted device. The user electronic device 300 may be a desktop PC, workstation, television apparatus or a projector, e.g. arranged to project a display onto a surface.
In use, the electronics module 100 is configured to receive raw biosignal data from the sensors 209, 211, which are coupled to the controller 103 via the interface 101 and the ADC front end 139, for further processing and transmission to the user electronic device 300 as described above. The data transmitted to the user electronic device 300 includes raw or processed biosignal data such as ECG data, heart rate, respiration data, core temperature and other insights as determined.
The controller 305 of the user electronics device 300 is also operable to launch an application which is configured to receive, process and display data, such as raw or processed biosignal data, from the electronics module 100. A user, such as the wearer 600, is able to configure the application, using user inputs, to receive, process and display the received data in accordance with these user inputs.
The user electronic device 300 is arranged to receive the transmitted data from the electronics module 100 via the wireless communicator 307, which is coupled to the controller 305, and then to process and display the data in accordance with the user configuration.
In terms of ECG data, the controller 305 of the user electronics device 300 is operable to display an ECG trace 800 on the display 301 as part of the user interface 302. Other insights and data can be displayed on the display 301 as part of the user interface 302 as required.
The ECG trace 800 can be displayed in real-time.
The ECG trace 800 provides the user e.g. the wearer 600, with a visual representation of the wearer's heart's QRS complex.
Referring to Figure 7, there is shown a flow diagram for an example method according to aspects of the present disclosure.
Step S201 of the method comprises providing an electronics module 100 and a user electronic device 300. The electronics module is communicatively coupled to sensors on a wearable article. Step S202 of the method comprises providing a speech recognition model on the electronics module.
In step S203 the speech recognition model is trained using the input of a selection of voice commands and then deployed.
In step S204, the electronics module is in a state where the microphone 112 coupled to the secondary controller 110 is operating to detect incoming voice data.
When voice data is detected by the microphone 112, the secondary controller 110 is operable, at step S205, to process the voice data and determine if the incoming voice data comprises the required keyword.
If the keyword is detected, at step S206, then a command is sent to the primary controller 103, or to the user electronics device 300, to configure the primary controller 103, or the user electronics device, at step S207, to receive and process subsequent incoming voice data which is passed from the microphone 112, via the secondary controller 110, to the primary controller 103 or the user electronics device 300. In response to receiving the subsequent voice data, the primary controller 103 (or user electronics device 300) is then operable, at step S208, to respond to the processed subsequent voice data, for example to execute commands in response to the processed voice data.
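The always-on keyword gate on the secondary controller can be pictured as below. This reuses the centroid model from the earlier training sketch; the one-byte wake opcode, the feature function and all names are illustrative assumptions rather than anything specified in the patent.

```python
import numpy as np

WAKE_COMMAND = b"\x01"  # illustrative one-byte wake opcode, not from the patent

def features(samples, n_bins=13):
    """Crude log band energies, matching the training sketch above."""
    spec = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    return np.log(np.array([b.sum() + 1e-8 for b in np.array_split(spec, n_bins)]))

def keyword_hit(model, frame):
    """Nearest-centroid decision: is the frame closer to the keyword
    centroid than to the background centroid?"""
    f = features(frame)
    return (np.linalg.norm(f - model["keyword_centroid"])
            < np.linalg.norm(f - model["background_centroid"]))

def secondary_controller_loop(model, mic_frames, send_to_primary):
    """Always-on loop on the low-power secondary controller (steps S204-S205):
    score each incoming microphone frame and, on a keyword hit, send the wake
    command to the primary controller (step S206) so it can take over the
    subsequent voice data."""
    for frame in mic_frames:
        if keyword_hit(model, frame):
            send_to_primary(WAKE_COMMAND)
            return True
    return False

# Toy demonstration with random stand-in data.
rng = np.random.default_rng(1)
model = {"keyword_centroid": rng.standard_normal(13),
         "background_centroid": rng.standard_normal(13)}
frames = [rng.standard_normal(16000) for _ in range(3)]
secondary_controller_loop(model, frames, lambda cmd: print("wake:", cmd))
```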
Referring to Figure 8, there is shown a swim flow diagram for another example method according to aspects of the present disclosure.
In this example, a wearer 600 is wearing a wearable assembly 500 comprising a garment 200 and an electronics module 100. The electronics module 100 is communicatively coupled to the user electronic device 300.
At step S301 a speech recognition model is built on the user electronics device 300.
At step S302 the speech recognition model is trained on the user electronic device 300.
At step S303, the trained model is transmitted to the electronics module 100, for example using the wireless communicator 307.
At step S304, the trained model is deployed and ready for use. In step S305, the electronics module is in a state where the microphone 112 coupled to the secondary controller 110 is operating to detect incoming voice data.
When voice data is detected by the microphone 112, the secondary controller 110 is operable, at step S306, to process the voice data and determine if the incoming voice data comprises the required keyword.
If the keyword is detected, at step S307, then a command is sent to the primary controller 103 to configure the primary controller, at step S308, to receive and process further subsequent incoming voice data which is passed from the microphone 112, via the secondary controller 110, to the primary controller 103. The primary controller 103 is then operable, at step S309, to respond to the processed incoming voice data, for example to execute a command in response to the received voice data.
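The model hand-off of step S303 needs some framing over the wireless link. The patent says nothing beyond "using the wireless communicator 307", so the sketch below substitutes a length-prefixed exchange over a local socket pair purely as a stand-in transport; the 4-byte length prefix and pickle encoding are assumptions.

```python
import pickle
import socket
import struct

def send_model(sock, model):
    """User-device side (step S303): length-prefix the pickled model and
    push it over the link."""
    blob = pickle.dumps(model)
    sock.sendall(struct.pack("<I", len(blob)) + blob)

def _read_exact(sock, n):
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("link dropped mid-transfer")
        data += chunk
    return data

def recv_model(sock):
    """Module side (step S304): read the 4-byte length, then the payload."""
    (n,) = struct.unpack("<I", _read_exact(sock, 4))
    return pickle.loads(_read_exact(sock, n))

# Local socket pair standing in for the wireless link.
dev_side, module_side = socket.socketpair()
send_model(dev_side, {"keyword_centroid": [0.0] * 13,
                      "background_centroid": [1.0] * 13})
print(recv_model(module_side))
```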
Referring to Figure 9, there is shown a swim flow diagram for another example method according to aspects of the present disclosure.
In this example, a wearer 600 is wearing a wearable assembly 500 comprising a garment 200 and an electronics module 100. The electronics module 100 is communicatively coupled to the user electronic device 300.
At step S401 a speech recognition model is built on the secondary controller 110.
At step S402 the speech recognition model is trained and deployed on the electronics module 100.
In step S403, the electronics module is in a state where the microphone 112 coupled to the secondary controller 110 is operating to detect incoming voice data.
When voice data is detected by the microphone 112, the secondary controller 110 is operable, at step S404, to process the voice data and determine if the incoming voice data comprises the required keyword.
If the keyword is detected, at step S405, then a command is sent, at step S406, to the user electronics device 300 to activate the smart assistant application on the user electronics device 300 at step S407.
In response, the smart assistant is operable, at step S408, to enable the controller 305 of the user electronic device 300 to execute a command in response to the received voice data.
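On the phone side, the activation command from the module could be dispatched by something as small as the handler below. The 0x02 opcode and the handler shape are invented for illustration; the patent does not define the command encoding or the assistant API.

```python
ACTIVATE_ASSISTANT = 0x02  # illustrative opcode; the patent defines no encoding

class UserDeviceCommandHandler:
    """Phone-side dispatcher: a keyword hit on the module arrives as a small
    command frame (step S406) and launches the smart assistant (step S407)."""

    def __init__(self, launch_assistant):
        self.launch_assistant = launch_assistant

    def on_command(self, opcode):
        if opcode == ACTIVATE_ASSISTANT:
            self.launch_assistant()  # step S408: assistant handles the voice data

handler = UserDeviceCommandHandler(lambda: print("smart assistant listening"))
handler.on_command(ACTIVATE_ASSISTANT)
```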
Referring to Figure 10, there is shown a swim flow diagram for another example method according to aspects of the present disclosure.
In this example, a wearer 600 is wearing a wearable assembly 500 comprising a garment 200 and an electronics module 100. The electronics module 100 is communicatively coupled to the user electronic device 300.
At step S501 a speech recognition model is built on the user electronics device 300.
At step S502 the speech recognition model is trained on the user electronic device 300.
At step S503, the trained model is transmitted to the electronics module 100, for example using the wireless communicator 307.
At step S504, the trained model is deployed on the secondary controller 110 of the electronics module 100.
In step S505, the electronics module is in a state where the microphone 112 coupled to the secondary controller 110 is operating to detect voice data.
When voice data is detected by the microphone 112, the secondary controller 110 is operable, at step S506, to process the voice data and determine if the incoming voice data comprises the required keyword.
If the keyword is detected, at step S507, then a command is sent, at step S508, to the user electronics device 300 to activate the smart assistant application on the user electronics device 300 at step S509.
In response, the smart assistant is operable, at step S510, to enable the controller 305 of the user electronic device 300 to execute a command in response to the received voice data.
Referring to Figure 11, there is shown a swim flow diagram for another example method according to aspects of the present disclosure.
In this example, a wearer 600 is wearing a wearable assembly 500 comprising a garment 200 and an electronics module 100. The electronics module 100 is communicatively coupled to the user electronic device 300.
At step S601 a speech recognition model is built on the secondary controller 110. At step S602 the speech recognition model is trained and deployed on the user electronic device 300. In step S603, the electronics module is in a state where the microphone 112 coupled to the secondary controller 110 is operating to detect voice data.
When voice data is detected by the microphone 112, the secondary controller 110 is operable, at step S604, to process the voice data and determine if the incoming voice data comprises the required keyword.
If the keyword is detected, at step S605, the secondary controller 110 is configured, in response to the detected keyword, at step S606, to operate in a voice data streaming mode in which voice data can be transmitted from the secondary controller 110.
The primary controller 103 is configured, at step S607, to receive incoming voice data which is passed from the microphone 112, via the secondary controller 110, to the primary controller 103.
The primary controller 103 is then operable, at step S608, to transmit the subsequent voice data to the user electronics device 300, for example using the second antenna 109.
The controller 305 of the user electronic device 300 is then configured, at step S609, to receive and process the received voice data and, at step S610, to execute a command in response to the received and processed voice data.
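The streaming mode of steps S606 to S608 is essentially a relay: the secondary controller queues frames, and the primary controller drains the queue toward the user device. The sketch below models that with a thread and a queue; the function names and end-of-stream sentinel are assumptions, and real firmware would use interrupt- or DMA-driven machinery rather than Python threads.

```python
import queue
import threading

def streaming_mode(mic_frames, send_to_user_device):
    """Voice-data streaming mode (steps S606-S608): the secondary controller
    queues raw microphone frames; a primary-controller worker drains the queue
    and relays each frame onward, e.g. via the second antenna 109."""
    relay = queue.Queue()

    def primary_controller():
        while True:
            frame = relay.get()
            if frame is None:          # end-of-stream sentinel
                break
            send_to_user_device(frame)

    worker = threading.Thread(target=primary_controller)
    worker.start()
    for frame in mic_frames:           # secondary controller side
        relay.put(frame)
    relay.put(None)
    worker.join()

streaming_mode([b"frame-1", b"frame-2"], lambda f: print("relayed", f))
```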
Referring to Figure 12, there is shown a swim flow diagram for another example method according to aspects of the present disclosure.
In this example, a wearer 600 is wearing a wearable assembly 500 comprising a garment 200 and an electronics module 100. The electronics module 100 is communicatively coupled to the user electronic device 300.
At step S701 a speech recognition model is built on the user electronics device 300. At step S702 the speech recognition model is trained on the user electronic device 300.
At step S703, the trained model is transmitted to the electronics module 100, for example using the wireless communicator 307.
At step S704, the trained model is deployed on the secondary controller of the electronics module 100.
In step S705, the electronics module is in a state where the microphone 112 coupled to the secondary controller 110 is operating to detect voice data.
When voice data is detected by the microphone 112, the secondary controller 110 is operable, at step S706, to process the voice data and determine if the incoming voice data comprises the required keyword.
If the keyword is detected, at step S707, the secondary controller 110 is configured, in response to the detected keyword, at step S708, to operate in a voice data streaming mode in which voice data can be transmitted from the secondary controller 110.
The primary controller 103 is configured, at step S709, to receive incoming voice data which is passed from the microphone 112, via the secondary controller 110, to the primary controller 103.
The primary controller 103 is then operable, at step S710, to transmit the subsequent voice data to the user electronics device 300, for example using the second antenna 109.
The controller 305 of the user electronic device 300 is then configured, at step S711, to receive and process the received voice data and, at step S712 to execute a command in response to the received and processed voice data.
Referring to Figure 13, there is shown a swim flow diagram for another example method according to aspects of the present disclosure.
In this example, a wearer 600 is wearing a wearable assembly 500 comprising a garment 200 and an electronics module 100. The electronics module 100 is communicatively coupled to the user electronic device 300.
At step S801 a speech recognition model is built on the server 700.
At step S802 the speech recognition model is trained on the server 700.
At step S803, the trained model is transmitted to the electronics module 100, for example using a cellular radio network.
At step S804, the trained model is deployed and ready for use. In step S805, the electronics module is in a state where the microphone 112 coupled to the secondary controller 110 is operating to detect incoming voice data.
When voice data is detected by the microphone 112, the secondary controller 110 is operable, at step S806, to process the voice data and determine if the incoming voice data comprises the required keyword.
If the keyword is detected, at step S807, then a command is sent to the primary controller 103 to configure the primary controller, at step S808, to receive and process further subsequent incoming voice data which is passed from the microphone 112, via the secondary controller 110, to the primary controller 103. The primary controller 103 is then operable, at step S809, to respond to the processed incoming voice data, for example to execute a command in response to the received voice data.
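When the server 700 trains the model, the module has to pull it over the network before deployment (steps S803 and S804). A minimal fetch might look like the sketch below; the URL, the HTTPS transport standing in for "a cellular radio network", and the pickle payload are all assumptions for illustration.

```python
import pickle
import urllib.request

MODEL_URL = "https://example.invalid/models/keyword-v1.pkl"  # hypothetical endpoint

def fetch_trained_model(url=MODEL_URL, timeout=30):
    """Pull the server-trained model over the network (step S803) and
    deserialize it for deployment on the module (step S804). Unpickling
    should only be done for payloads from a trusted server."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return pickle.loads(resp.read())
```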
Referring to Figure 14, there is shown a swim flow diagram for another example method according to aspects of the present disclosure.
In this example, a wearer 600 is wearing a wearable assembly 500 comprising a garment 200 and an electronics module 100. The electronics module 100 is communicatively coupled to the user electronics device 300 and the server 700.
At step S901 a speech recognition model is built on the server 700.
At step S902 the speech recognition model is trained on the server 700.
At step S903, the trained model is transmitted to the electronics module 100, for example using a cellular radio network.
At step S904, the trained model is deployed on the secondary controller 110 of the electronics module 100.
In step S905, the electronics module is in a state where the microphone 112 coupled to the secondary controller 110 is operating to detect voice data.
When voice data is detected by the microphone 112, the secondary controller 110 is operable, at step S906, to process the voice data and determine if the incoming voice data comprises the required keyword.
If the keyword is detected, at step S907, then a command is sent, at step S908, to the user electronics device 300 to activate the smart assistant application on the user electronics device 300 at step S909.
In response, the smart assistant is operable, at step S910, to enable the controller 305 of the user electronic device 300 to execute a command in response to the received voice data.
Referring to Figure 15, there is shown a swim flow diagram for another example method according to aspects of the present disclosure.
In this example, a wearer 600 is wearing a wearable assembly 500 comprising a garment 200 and an electronics module 100. The electronics module 100 is communicatively coupled to the user electronic device 300 and the server 700.
At step S1001 a speech recognition model is built on the server 700.
At step S1002 the speech recognition model is trained on the server 700.
At step S1003, the trained model is transmitted to the electronics module 100, for example using the wireless communicator 307.
At step S1004, the trained model is deployed on the secondary controller of the electronics module 100.
In step S1005, the electronics module is in a state where the microphone 112 coupled to the secondary controller 110 is operating to detect voice data.
When voice data is detected by the microphone 112, the secondary controller 110 is operable, at step S1006, to process the voice data and determine if the incoming voice data comprises the required keyword.
If the keyword is detected, at step S1007, the secondary controller 110 is configured, in response to the detected keyword, at step S1008, to operate in a voice data streaming mode in which voice data can be transmitted from the secondary controller 110.
The primary controller 103 is configured, at step S1009, to receive incoming voice data which is passed from the microphone 112, via the secondary controller 110, to the primary controller 103.
The primary controller 103 is then operable, at step S1010, to transmit the subsequent voice data to the user electronics device 300, for example using the second antenna 109.
The controller 305 of the user electronic device 300 is then configured, at step S1011, to receive and process the received voice data and, at step S1012 to execute a command in response to the received and processed voice data.
Referring to Figure 16, there is shown a swim flow diagram for another example method according to aspects of the present disclosure.
In this example, a wearer 600 is wearing a wearable assembly 500 comprising a garment 200 and an electronics module 100. The electronics module 100 is communicatively coupled to the server 700.
At step S1101 a speech recognition model is built on the server 700.
At step S1102 the speech recognition model is trained on the server 700.
At step S1103, the trained model is transmitted to the electronics module 100, for example using the wireless communicator 307.
At step S1104, the trained model is deployed on the secondary controller of the electronics module 100.
In step S1105, the electronics module is in a state where the microphone 112 coupled to the secondary controller 110 is operating to detect voice data.
When voice data is detected by the microphone 112, the secondary controller 110 is operable, at step S1106, to process the voice data and determine if the incoming voice data comprises the required keyword.
If the keyword is detected, at step S1107, the secondary controller 110 is configured, in response to the detected keyword, at step S1108, to operate in a voice data streaming mode in which voice data can be transmitted from the secondary controller 110.
The primary controller 103 is configured, at step S1109, to receive incoming voice data which is passed from the microphone 112, via the secondary controller 110, to the primary controller 103.
The primary controller 103 is then operable, at step S1110, to transmit the subsequent voice data to the server 700, for example using the second antenna 109.
The server 700 is then configured, at step S1111, to receive and process the received voice data and, at step S1112, to execute a command in response to the received and processed voice data.
Whilst the steps of the example embodiments described above are implemented on specific components of the system 10, it will be understood that other combinations are possible. For example, steps implemented on the wearable assembly 500, user electronic device 300 or server 700 could equally be carried out on another of the wearable assembly 500, user electronic device or server. In some embodiments, the described elements may be configured to reside on a tangible, persistent, addressable storage medium and may be configured to execute on one or more processors. These functional elements may in some embodiments include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
Although the example embodiments have been described with reference to the components, modules and units discussed herein, such functional elements may be combined into fewer elements or separated into additional elements. Various combinations of optional features have been described herein, and it will be appreciated that described features may be combined in any suitable combination. In particular, the features of any one example embodiment may be combined with features of any other embodiment, as appropriate, except where such combinations are mutually exclusive. Throughout this specification, the term "comprising" or "comprises" means including the component(s) specified but not to the exclusion of the presence of others.
All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.
Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
The invention is not restricted to the details of the foregoing embodiment(s). The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed.

Claims (22)

  1. A system comprising a primary controller, a secondary controller coupled to the primary controller and a microphone coupled to the secondary controller, wherein the secondary controller is configured to operate at a lower power than the primary controller and wherein the secondary controller is further configured to: receive incoming voice data input via the microphone; detect the presence of a keyword from the incoming voice data; and in response to the detection of a keyword, transmit a command to the primary controller.
  2. A system according to claim 1, wherein the command is to configure the primary controller to receive and process subsequent incoming voice data from the secondary controller.
  3. A system according to claim 1 or claim 2, wherein the secondary controller is configured to receive the subsequent incoming voice data from the microphone, and to pass on the subsequent voice data to the primary controller.
  4. A system according to claim 3, wherein the secondary controller is configured to process the subsequent voice data prior to passing the subsequent voice data to the primary controller.
  5. A system according to any preceding claim, comprising an electronics module for a wearable article, wherein the electronics module comprises the primary controller and the secondary controller.
  6. A system according to any of claims 1 to 4, comprising an electronics module for a wearable article and a user electronic device communicatively coupled to the electronics module, wherein the user electronic device comprises the primary controller and the electronics module comprises the secondary controller.
  7. A system according to claim 6, wherein the secondary controller includes a wireless communicator, and the secondary controller is configured to send the subsequent voice data to the primary controller via the wireless communicator.
  8. A system according to claim 6 or 7, wherein the command is configured to activate a smart assistant service on the user electronics device.
  9. A system according to any of claims 1 to 5, wherein the secondary controller is integrated with the primary controller.
  10. An electronics module for a wearable article comprising a primary controller, a secondary controller coupled to the primary controller and a microphone coupled to the secondary controller, wherein the secondary controller is configured to operate at a lower power than the primary controller and wherein the secondary controller is further configured to: receive incoming voice data input via the microphone; detect the presence of a keyword from the incoming voice data; and in response to the detection of a keyword, transmit a command to the primary controller.
  11. An electronics module according to claim 10, wherein the primary controller is configured to receive sensing data from sensing components of the wearable article.
  12. An electronics module according to claim 11, further comprising an interface arranged to couple the primary controller to the sensing components of the wearable article.
  13. An electronics module according to any of claims 10 to 12, wherein the primary controller is configured to perform a predetermined function in response to receiving the command.
  14. A method comprising: deploying a speech recognition model to provide a speech recognition function for an electronics module; listening for incoming voice data; processing the incoming voice data; and transmitting a command to a secondary device in response to a keyword detected as a result of the processing of the incoming voice data.
  15. A method according to claim 14, wherein the transmitted command is sent to a primary controller of the electronics module.
  16. A method according to claim 15, wherein the transmitted command is a command to listen for subsequent voice data.
  17. A method according to claim 14, wherein the transmitted command is sent to a user electronics device.
  18. A method according to claim 17, wherein the transmitted command is a command to listen for subsequent voice data.
  19. A method according to claim 17, wherein the transmitted command is a command to activate a smart assistant service.
  20. A method according to any of claims 14 to 19, further comprising, prior to deploying the speech recognition model, providing a speech recognition model on a secondary controller of a user electronics module and training the speech recognition model with a voice dataset.
  21. A method according to claim 20, wherein the speech recognition model is provided and trained on the electronics module.
  22. A method according to claim 20, wherein the speech recognition model is provided and trained on a remote device and transmitted to an electronics module for deployment.