AU2023200418A1 - Wireless Headset and Tablet Sign Language Communication System and Method - Google Patents
Wireless Headset and Tablet Sign Language Communication System and Method
- Publication number
- AU2023200418A1
- Authority
- AU
- Australia
- Prior art keywords
- text
- module
- language
- speech
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72475—User interfaces specially adapted for cordless or mobile telephones specially adapted for disabled users
- H04M1/72478—User interfaces specially adapted for cordless or mobile telephones specially adapted for disabled users for hearing-impaired users
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B21/00—Teaching, or communicating with, the blind, deaf or mute
- G09B21/009—Teaching or communicating with deaf persons
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/52—Details of telephonic subscriber devices including functional features of a camera
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/58—Details of telephonic subscriber devices including a multilanguage function
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/74—Details of telephonic subscriber devices with voice recognition means
Abstract
The wireless headset and tablet sign language communication system is a private communication system. The wireless headset and tablet sign language communication system comprises a routing device, a plurality of communication stations, and a plurality of wireless communication links. The wireless headset and tablet sign language communication system supports audio and text-based communication between the plurality of communication stations. The wireless headset and tablet sign language communication system further supports translation services that translate audible and text-based messages into visible sign language messages. The wireless headset and tablet sign language communication system further supports translation services that translate visible sign language messages into audible and text-based messages.
Description
TITLE OF INVENTION

Wireless Headset and Tablet Sign Language Communication System and Method.

CROSS REFERENCES TO RELATED APPLICATIONS

This non-provisional patent application claims priority to provisional patent application 63/303,923 that was filed by the applicant, Mr. Ronald Snagg, on January 27, 2022.

This non-provisional patent application incorporates by reference subject matter of US Patent 11,013,050, which was also filed by the applicant, Mr. Ronald Snagg.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH

Not Applicable

REFERENCE TO APPENDIX

Not Applicable

FIELD OF THE INVENTION

The present invention relates to the field of electricity and electric communication technique including wireless communication networks, more specifically, a direct mode connection management device. (H04W76/14)
SUMMARY OF INVENTION

The wireless headset and tablet sign language communication system is a private communication system. The wireless headset and tablet sign language communication system comprises a routing device, a plurality of communication stations, and a plurality of wireless communication links. The wireless headset and tablet sign language communication system supports audio and text-based communication between the plurality of communication stations. Optionally, the sign language communication system may be used for a text-based communication system in lieu of the speech-based communication system. The wireless headset and tablet sign language communication system supports a broadcast model of communication. By broadcast is meant that: a) any first individual communication station selected from the plurality of communication stations receives any audio or text-based message transmitted by any second individual communication station selected from the plurality of communication stations; and, b) any audio or text-based message transmitted by any first individual communication station selected from the plurality of communication stations is received by all the unselected individual communication stations remaining in the plurality of communication stations. The routing device establishes a wireless communication link selected from the plurality of wireless communication links with each individual communication station contained in the plurality of communication stations. The routing device is a repeater that receives an audio or text-based message from any first individual communication station selected from the plurality of communication stations and retransmits the received message to each individual communication station remaining in the plurality of communication stations.
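For illustration only, the broadcast model described above can be sketched in a few lines of Python. The sketch is not part of the disclosure; the Router and Station classes and their method names are hypothetical stand-ins for the routing device and the communication stations.

```python
# Illustrative sketch of the broadcast (repeater) model: the routing
# device retransmits every received message to every registered station
# except the sender. All names are hypothetical.

class Router:
    def __init__(self):
        self.stations = []          # the plurality of communication stations

    def register(self, station):
        """Establish a (simulated) wireless communication link."""
        self.stations.append(station)

    def broadcast(self, sender, message):
        """Repeat a message from one station to all remaining stations."""
        for station in self.stations:
            if station is not sender:
                station.receive(message)

class Station:
    def __init__(self, name):
        self.name = name

    def receive(self, message):
        print(f"{self.name} received: {message}")

router = Router()
headset, tablet = Station("headset-1"), Station("tablet-1")
router.register(headset)
router.register(tablet)
router.broadcast(headset, "hello")   # delivered to every station but the sender
```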
These together with additional objects, features and advantages of the wireless headset and tablet sign language communication system will be readily apparent to those of ordinary skill in the art upon reading the following detailed description of the presently preferred, but nonetheless illustrative, embodiments when taken in conjunction with the accompanying drawings.

In this respect, before explaining the current embodiments of the wireless headset and tablet sign language communication system in detail, it is to be understood that the wireless headset and tablet sign language communication system is not limited in its applications to the details of construction and arrangements of the components set forth in the following description or illustration. Those skilled in the art will appreciate that the concept of this disclosure may be readily utilized as a basis for the design of other structures, methods, and systems for carrying out the several purposes of the wireless headset and tablet sign language communication system. It is therefore important that the claims be regarded as including such equivalent construction insofar as they do not depart from the spirit and scope of the wireless headset and tablet sign language communication system. It is also to be understood that the phraseology and terminology employed herein are for purposes of description and should not be regarded as limiting.
The accompanying drawings, which are included to provide a further understanding of the invention, are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention, and together with the description serve to explain the principles of the invention. They are meant to be exemplary illustrations provided to enable persons skilled in the art to practice the disclosure and are not intended to limit the scope of the appended claims.
Figure 1 is a perspective view of an embodiment of the disclosure.

Figure 2 is a detail view of an embodiment of the disclosure.

Figure 3 is a detail view of an embodiment of the disclosure.

Figure 4 is a schematic diagram of an embodiment of the disclosure.

Figure 5 is an in-use view of an embodiment of the disclosure.

Figure 6 is a detail view of an embodiment of the disclosure.

Figure 7 is a detail view of an embodiment of the disclosure.

Figure 8 is a detail view of an embodiment of the disclosure.

Figure 9 is a schematic view of an embodiment of the disclosure.

Figure 10 is a view of an embodiment with a pair of headsets.
DETAILED DESCRIPTION OF THE EMBODIMENT

The following detailed description is merely exemplary in nature and is not intended to limit the described embodiments of the application and uses of the described embodiments. As used herein, the word "exemplary" or "illustrative" means "serving as an example, instance, or illustration." Any implementation described herein as "exemplary" or "illustrative" is not necessarily to be construed as preferred or advantageous over other implementations. All of the implementations described below are exemplary implementations provided to enable persons skilled in the art to practice the disclosure and are not intended to limit the scope of the appended claims. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description.

Detailed reference will now be made to one or more potential embodiments of the disclosure, which are illustrated in Figures 1 through 10.
The wireless headset and tablet sign language communication system 100 (hereinafter invention) is a private communication system. The invention 100 comprises a routing device 101, a plurality of communication stations 102, and a plurality of wireless communication links 103. The plurality of wireless communication links 103 establishes communication links between the routing device 101 and each of the plurality of communication stations 102.

It shall be noted that the invention 100 may be used for a text-based communication system in lieu of the speech-based communication system. This would involve the same processes, but instead of speech would utilize a text-based system. The claims listed below will reflect both iterations.
The invention 100 supports audio and text-based communication between the plurality of communication stations 102. The invention 100 supports a broadcast model of communication. By broadcast is meant that: a) any first individual communication station selected from the plurality of communication stations 102 receives any audio or text-based message transmitted by any second individual communication station selected from the plurality of communication stations 102; and, b) any audio or text-based message transmitted by any first individual communication station selected from the plurality of communication stations is received by all the unselected individual communication stations remaining in the plurality of communication stations 102.
The applicant notes that the primary advantage of the invention 100 is that it allows a plurality of clients to communicate messages between themselves through a preferred mode. By preferred mode is meant that any first client selected from the plurality of clients can choose to exchange messages with any second client selected from the plurality of clients using a mode selected from the group consisting of: a) a text based message; b) a speech based message; and, c) a sign language based message. Each client selected from the plurality of clients has the ability to select a preferred mode of message exchange. A consequence of allowing each client to select a preferred mode is that a duality must exist in the conversion technology that translates a message between any first selected preferred mode and any second selected preferred mode. Without this duality, two-way communication between a first client using a first selected preferred mode and a second client using a second selected preferred mode would not be possible.
To make this duality more explicit, this disclosure assumes that the existence of a speech to text conversion in an instantiation of the invention 100 implies the existence of a text to speech conversion in the same instantiation of the invention 100. The disclosure also assumes that the existence of a sign language to text conversion in an instantiation of the invention 100 implies the existence of a text to sign language conversion in the same instantiation of the invention 100. The disclosure finally assumes that the existence of a sign language to speech conversion in an instantiation of the invention 100 implies the existence of a speech to sign language conversion in the same instantiation of the invention 100. The sign language to text translation process may be a module.
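For illustration only, this duality can be pictured as a set of converter pairs that is closed under inversion: whenever a converter for one direction exists, its inverse must also exist. The following Python sketch is hypothetical and not part of the disclosure.

```python
# Illustrative sketch of the conversion duality described above. For
# every (source mode, target mode) converter an instantiation supports,
# the reverse converter must also be available; otherwise two-way
# communication between the two modes would be impossible.

CONVERTERS = {
    ("speech", "text"),
    ("sign", "text"),
    ("sign", "speech"),
}

def with_duals(converters):
    """Return the converter set closed under inversion."""
    return converters | {(dst, src) for (src, dst) in converters}

ALL_CONVERTERS = with_duals(CONVERTERS)
assert ("text", "speech") in ALL_CONVERTERS   # dual of speech-to-text
assert ("speech", "sign") in ALL_CONVERTERS   # dual of sign-to-speech
```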
The routing device 101 establishes a wireless communication link selected from the plurality of wireless communication links 103 with each individual communication station contained in the plurality of communication stations 102. The routing device 101 is a repeater that receives an audio or text-based message from any first individual communication station selected from the plurality of communication stations 102 and retransmits the audio or text-based message to each unselected individual communication station remaining in the plurality of communication stations 102.
Each of the plurality of wireless communication links 103 is a structured data exchange mechanism that is established between the routing device 101 and an individual headset 121 selected from the plurality of headsets 104. Each of the plurality of wireless communication links 103 uses a wireless IEEE 802.11x communication protocol such that a physical electric connection between any individual headset 121 selected from the plurality of headsets 104 and the routing device 101 is not required.
The routing device 101 is an electrical device. The routing device 101 acts as a repeater. The routing device 101 establishes a wireless communication link selected from the plurality of wireless communication links 103 with each individual communication station selected from the plurality of communication stations 102. The routing device 101 receives an audio message from any first individual communication station selected from the plurality of communication stations 102 and retransmits the audio message to each unselected individual communication station remaining in the plurality of communication stations 102. The routing device 101 terminates the plurality of wireless communication links 103. Each wireless communication link established by the routing device 101 uses an IEEE 802.11x communication protocol. The routing device 101 is a commercially available electric device commonly sold as a "WiFi™ router". The design and use of the routing device 101 described above are well-known and documented in the electrical arts.
The routing device 101 comprises a plurality of routing transceivers 111, a logical device 112, and a reset switch 113. The plurality of routing transceivers 111, the logical device 112, and the reset switch 113 are electrically interconnected.

Each of the plurality of routing transceivers 111 is an electrical device. The transceiver is defined elsewhere in this disclosure. Each of the plurality of routing transceivers 111 creates and maintains the plurality of wireless communication links 103 between the routing device 101 and the plurality of communication stations 102. Each of the plurality of routing transceivers 111 is a multichannel device. By multichannel device is meant that each of the plurality of routing transceivers 111 simultaneously manages and maintains each of the multiple wireless communication links selected from the plurality of wireless communication links 103 formed between the plurality of communication stations 102 and the routing device 101.
The plurality of routing transceivers 111 comprises a headset routing transceiver 114 and a text station routing transceiver 115.
The headset routing transceiver 114 is an electrical device. The transceiver is defined elsewhere in this disclosure. The headset routing transceiver 114 creates and maintains the plurality of wireless communication links 103 between the routing device 101 and the plurality of headsets 104. The headset routing transceiver 114 is a multichannel device. By multichannel device is meant that the headset routing transceiver 114 simultaneously manages and maintains each of the multiple wireless communication links selected from the plurality of wireless communication links 103 formed between the plurality of headsets 104 and the routing device 101.

The text station routing transceiver 115 is an electrical device. The transceiver is defined elsewhere in this disclosure. The text station routing transceiver 115 creates and maintains the plurality of wireless communication links 103 between the routing device 101 and the plurality of text stations 105. The text station routing transceiver 115 is a multichannel device. By multichannel device is meant that the text station routing transceiver 115 simultaneously manages and maintains each of the multiple wireless communication links selected from the plurality of wireless communication links 103 formed between the plurality of text stations 105 and the routing device 101.
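For illustration only, the multichannel behavior described above can be sketched as a single transceiver object that concurrently tracks one link per station. The class and method names below are hypothetical.

```python
# Illustrative sketch of a multichannel routing transceiver: one
# transceiver object simultaneously maintains one link per station.
# Names are hypothetical and not part of the disclosure.

class MultichannelTransceiver:
    def __init__(self, kind):
        self.kind = kind            # e.g. "headset" or "text station"
        self.links = {}             # station id -> link state

    def open_link(self, station_id):
        self.links[station_id] = "established"

    def close_all(self):
        self.links.clear()          # e.g. on reset-switch actuation

radio = MultichannelTransceiver("headset")
radio.open_link("headset-1")
radio.open_link("headset-2")
print(len(radio.links))             # two links maintained simultaneously
```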
The logical device 112 is an electric circuit. The logical device 112 manages, regulates, and operates the routing device 101. The logical device 112 controls the operation of the plurality of routing transceivers 111. The logical device 112 monitors the reset switch 113. The logical device 112 operates: a) a speech to text technology 701; and, b) a text to speech technology 702. The speech to text technology 701 and the text to speech technology 702 are known technologies used in speech recognition systems. The logical device 112 processes audio messages received from the plurality of headsets 104 through the speech to text technology 701 such that the logical device 112 transmits a text translation of the received audio message to the plurality of text stations 105 through the text station routing transceiver 115. The logical device 112 processes text-based messages received from the plurality of text stations 105 through the text to speech technology 702 such that the logical device 112 transmits an audio transcription of the received text-based message to the plurality of headsets 104 through the headset routing transceiver 114.
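For illustration only, the routing behavior of the logical device 112 can be sketched as follows, with hypothetical speech_to_text and text_to_speech placeholders standing in for technologies 701 and 702.

```python
# Illustrative sketch of the logic module's routing: audio from headsets
# is run through speech-to-text (701) before reaching the text stations,
# and text from the text stations is run through text-to-speech (702)
# before reaching the headsets. Both functions are placeholders.

def speech_to_text(audio):          # stands in for technology 701
    return f"<text of {audio!r}>"

def text_to_speech(text):           # stands in for technology 702
    return f"<audio of {text!r}>"

def route(message, source):
    if source == "headset":         # audio message -> text stations
        return ("text stations", speech_to_text(message))
    if source == "text station":    # text message -> headsets
        return ("headsets", text_to_speech(message))
    raise ValueError(f"unknown source: {source}")

print(route("good morning", "headset"))
print(route("good morning", "text station"))
```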
The reset switch 113 is a momentary switch. The actuation of the reset switch 113 indicates to the logical device 112 that the plurality of routing transceivers 111 should terminate each individual wireless communication link selected from the plurality of wireless communication links 103 such that the communication network formed by the invention 100 is shut down. The communication network is then reestablished by reestablishing each individual wireless communication link selected from the plurality of wireless communication links 103.
The plurality of communication stations 102 comprises a plurality of headsets 104 and a plurality of text stations 105.
Each of the plurality of headsets 104 is a hands-free communication device selected from the group consisting of earbuds and headphones. The term earbuds is defined elsewhere in this disclosure. The term headphones is defined elsewhere in this disclosure. Each of the plurality of headsets 104 forms a wireless communication link selected from the plurality of wireless communication links 103 with the routing device 101. Each wireless communication link uses an IEEE 802.11x communication protocol. Each of the plurality of headsets 104 generates the audio message transmitted over the wireless communication link. Each of the plurality of headsets 104 announces any audio messages received over the wireless communication link.
Any first individual headset 121 selected from the plurality of headsets 104 transmits an audio message over the wireless communication link to the routing device 101. Any first individual headset 121 selected from the plurality of headsets 104 receives any audio messages transmitted over the wireless communication link by the routing device 101 that were transmitted by any second individual headset 121 selected from the plurality of headsets 104.

The plurality of headsets 104 comprises a collection of individual headsets 121.
The individual headset 121 is a wireless communication device. The individual headset 121 is selected from the group consisting of headphones and earbuds. The individual headset 121 establishes a wireless communication link selected from the plurality of wireless communication links 103 with the routing device 101. The individual headset 121 generates and transmits an audio message to the routing device 101 over the wireless communication link. The individual headset 121 receives an audio message that is transmitted from the routing device 101 over the wireless communication link. Each individual headset 121 further comprises an individual headset 121 transceiver 122, an individual headset 121 speaker 123, an individual headset 121 microphone 124, and an individual headset 121 linking switch 125. The individual headset 121 transceiver 122, the individual headset 121 speaker 123, the individual headset 121 microphone 124, and the individual headset 121 linking switch 125 are electrically interconnected.
The individual headset 121 transceiver 122 is an electrical device. The transceiver is defined elsewhere in this disclosure. The individual headset 121 transceiver 122 creates and maintains the wireless communication link selected from the plurality of wireless communication links 103 between the routing device 101 and the individual headset 121 transceiver 122.

The individual headset 121 speaker 123 is a transducer. The individual headset 121 speaker 123 receives electrical signals from the individual headset 121 transceiver 122. The individual headset 121 speaker 123 converts the received electrical signals into acoustic energy that is used to announce the received audio message.

The individual headset 121 microphone 124 is a transducer. The individual headset 121 microphone 124 detects acoustic energy in the vicinity of the individual headset 121 and converts the detected acoustic energy into an electric signal. The individual headset 121 microphone 124 transmits the generated electric signal to the individual headset 121 transceiver 122. The individual headset 121 transceiver 122 transmits the received electric signals to the routing device 101 over the wireless communication link as an audio message.

The individual headset 121 transceiver 122 monitors the individual headset 121 linking switch 125. The individual headset 121 linking switch 125 is a momentary switch. The actuation of the individual headset 121 linking switch 125 indicates to the individual headset 121 transceiver 122 that the individual headset 121 should establish a wireless communication link selected from the plurality of wireless communication links 103 with the routing device 101 such that the individual headset 121 joins the communication network formed by the invention 100.
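For illustration only, the linking-switch behavior can be sketched as a momentary-switch handler that establishes the wireless link and joins the headset to the network. All names below are hypothetical; the Router class is a minimal stand-in.

```python
# Illustrative sketch of the linking switch: actuating the momentary
# switch asks the headset transceiver to establish a link with the
# routing device, joining the headset to the network.

class Router:
    def __init__(self):
        self.linked_stations = []

    def register(self, station):
        self.linked_stations.append(station)

class Headset:
    def __init__(self, name, router):
        self.name = name
        self.router = router
        self.linked = False

    def press_linking_switch(self):
        """Momentary actuation: establish the link and join the network."""
        self.router.register(self)
        self.linked = True

router = Router()
headset = Headset("headset-1", router)
headset.press_linking_switch()
print(headset.linked, len(router.linked_stations))   # True 1
```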
In the first potential embodiment of the disclosure, the plurality of headsets 104 comprises a first headset 131 and a second headset 141.

The first headset 131 further comprises a first headset 131 transceiver 132, a first headset 131 speaker 133, a first headset 131 microphone 134, and a first headset 131 linking switch 135. The first headset 131 transceiver 132, the first headset 131 speaker 133, the first headset 131 microphone 134, and the first headset 131 linking switch 135 are electrically interconnected.

The first headset 131 is a headset used to join the communication network formed by the invention 100. The first headset 131 transceiver 132 is the individual headset 121 transceiver 122 associated with the first headset 131. The first headset 131 speaker 133 is the individual headset 121 speaker 123 associated with the first headset 131. The first headset 131 microphone 134 is the individual headset 121 microphone 124 associated with the first headset 131. The first headset 131 linking switch 135 is the individual headset 121 linking switch 125 associated with the first headset 131.

The second headset 141 further comprises a second headset 141 transceiver 142, a second headset 141 speaker 143, a second headset 141 microphone 144, and a second headset 141 linking switch 145. The second headset 141 transceiver 142, the second headset 141 speaker 143, the second headset 141 microphone 144, and the second headset 141 linking switch 145 are electrically interconnected.

The second headset 141 is a headset used to join the communication network formed by the invention 100. The second headset 141 transceiver 142 is the individual headset 121 transceiver 122 associated with the second headset 141. The second headset 141 speaker 143 is the individual headset 121 speaker 123 associated with the second headset 141. The second headset 141 microphone 144 is the individual headset 121 microphone 124 associated with the second headset 141. The second headset 141 linking switch 145 is the individual headset 121 linking switch 125 associated with the second headset 141.
Each of the plurality of text stations 105 is a text based communication device, such as a tablet. Each of the plurality of text stations 105 forms a wireless communication link selected from the plurality of wireless communication links 103 with the routing device 101. Each wireless communication link uses an IEEE 802.11x communication protocol. The plurality of text stations 105 comprises a collection of individual text stations 521.
The individual text station 521 is a wireless communication device. The individual text station 521 establishes a wireless communication link selected from the plurality of wireless communication links 103 with the routing device 101. The individual text station 521 generates and transmits a text-based message to the routing device 101 over the wireless communication link. The individual text station 521 receives the text-based message that is transmitted from the routing device 101 over the wireless communication link. The individual text station 521 transceiver 522 creates and maintains the wireless communication link selected from the plurality of wireless communication links 103 between the text station routing transceiver 115 and each individual text station 521 transceiver 522.
Any first individual text station 521 selected from the plurality of text stations 105 transmits a text-based message over the wireless communication link to the routing device 101. Any first individual text station 521 selected from the plurality of text stations 105 receives any text-based messages transmitted over the wireless communication link by the routing device 101 that were transmitted by any second individual text station 521. Each of the plurality of text stations 105 generates the text-based message transmitted over the wireless communication link. Each of the plurality of text stations 105 displays the text-based messages received over the wireless communication link. Each individual text station 521 further comprises an individual text station 521 transceiver 522, an individual text station 521 logic module 523, an individual text station 521 display 524, and an individual text station 521 interface 525.
The individual text station 521 transceiver 522, the individual text station 521 logic module 523, the individual text station 521 display 524, and the individual text station 521 interface 525 are electrically interconnected.

The individual text station 521 transceiver 522 is an electrical device. The transceiver is defined elsewhere in this disclosure. The individual text station 521 transceiver 522 creates and maintains the wireless communication link selected from the plurality of wireless communication links 103 between the text station routing transceiver 115 and each individual text station 521 transceiver 522.

The individual text station 521 logic module 523 is a programmable electrical device. The individual text station 521 logic module 523 receives a text-based message from the individual text station 521 transceiver 522. The individual text station 521 logic module 523 converts the received electrical signals into a text-based message that is displayed on the individual text station 521 display 524.

The individual text station 521 display 524 is an electrical device used to display the text-based message transmitted from the individual text station 521 logic module 523. The individual text station 521 interface 525 receives a text-based message from the user of the individual text station 521 and transmits the text-based message to the individual text station 521 logic module 523 for transmission to the plurality of text stations 105 by the individual text station 521 transceiver 522.
In the first potential embodiment of the disclosure, the plurality of text stations 105 further comprises a first text station 531 and a second text station 541.

The first text station 531 further comprises a first text station 531 transceiver 532, a first text station 531 logic module 533, a first text station 531 display 534, and a first text station 531 interface 535. The first text station 531 transceiver 532, the first text station 531 logic module 533, the first text station 531 display 534, and the first text station 531 interface 535 are electrically interconnected.

The first text station 531 is a text station used to join the communication network formed by the invention 100. The first text station 531 transceiver 532 is the individual text station 521 transceiver 522 associated with the first text station 531. The first text station 531 logic module 533 is the individual text station 521 logic module 523 associated with the first text station 531. The first text station 531 display 534 is the individual text station 521 display 524 associated with the first text station 531. The first text station 531 interface 535 is the individual text station 521 interface 525 associated with the first text station 531.

The second text station 541 further comprises a second text station 541 transceiver 542, a second text station 541 logic module 543, a second text station 541 display 544, and a second text station 541 interface 545. The second text station 541 transceiver 542, the second text station 541 logic module 543, the second text station 541 display 544, and the second text station 541 interface 545 are electrically interconnected.

The second text station 541 is a text station used to join the communication network formed by the invention 100. The second text station 541 transceiver 542 is the individual text station 521 transceiver 522 associated with the second text station 541. The second text station 541 logic module 543 is the individual text station 521 logic module 523 associated with the second text station 541. The second text station 541 display 544 is the individual text station 521 display 524 associated with the second text station 541. The second text station 541 interface 545 is the individual text station 521 interface 525 associated with the second text station 541.
In a second potential embodiment of the disclosure, the logical device 112 further comprises a plurality of language modules 802. The plurality of language modules 802 further comprises a sign language to text translation module 703 and a text to sign language translation module 704. The logical device 112 forms the platform that supports the operation of the sign language to text translation module 703 and the text to sign language translation module 704.

Each language module selected from the plurality of language modules 802 forms a natural language processing structure. Each language module selected from the plurality of language modules 802 receives a first natural language communication. Each language module selected from the plurality of language modules 802 translates the intellectual content of the first natural language message into a second natural language. In the second potential embodiment of the disclosure, sign language is chosen as the natural language communication for a natural language selected from the group consisting of the first natural language and the second natural language.
The sign language to text translation module 703 is the natural language processing structure that translates sign language as the first natural language into the second natural language. The sign language to text translation module 703 is an automated process. The sign language to text translation module 703 comprises an image sensor 813, a pattern recognition module 821, an image to text natural language module 822, and a text generation module 824. The pattern recognition module 821 further comprises a first output 841 and a first input 851. The image to text natural language module 822 further comprises a second output 842 and a second input 852. The text generation module 824 further comprises a third output 844 and a third input 854.
The sign language to text translation module 703 generates the translated second natural language communication into a text based third output 844. The third output 844 is transmitted to the logical device 112. The logical device 112 processes the third output 844 as a message selected from the group consisting of: a) a text based message that is displayed on an individual text station 521; and, b) an audible message that is announced over an individual headset 121.
The image sensor 813 is an electric sensor. The image sensor 813 converts light into one or more electric signals. The image sensor 813 captures the images of the sign language communication that forms the first natural language communication. The image sensor 813 transmits the captured images of the first natural language communication to the pattern recognition module 821.
The pattern recognition module 821 receives the first input 851 from the image sensor 813. The pattern recognition module 821 generates the first output 841 that becomes the second input 852 of the image to text natural language module 822. The image to text natural language module 822 generates the second output 842 that becomes the third input 854 of the text generation module 824. The text generation module 824 generates the third output 844 that becomes an input into the logical device 112.
The pattern recognition module 821 receives the captured images of the sign language communication from the image sensor 813. The pattern recognition module 821 identifies within the captured images of the sign language communication the signs and gestures that form the sign language communication. The pattern recognition module 821 identifies within the captured images of the sign language communication the order in which the signs and gestures occur. The pattern recognition module 821 transmits the identified order of the signs and gestures identified within the captured images of the sign language communication to the first output 841.
The image to text natural language module 822 receives the identified order of the signs and gestures from the first output 841 as the second input 852. The image to text natural language module 822 translates the identified order of the signs and gestures received from the pattern recognition module 821 into a sentiment expressed in the second natural language. The image to text natural language module 822 transmits the translated sentiment to the second output 842.
The text generation module 824 receives the translated sentiment from the second output 842 as the third input 854. The text generation module 824 converts the translated sentiment into a text based data file that is transmitted to the third output 844. The data file generated by the text generation module 824 provides a text version of the translated sentiment in the second language that is suitable for display on the individual text station 521 display 524.
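For illustration only, this pipeline can be sketched as three chained stages mirroring the pattern recognition module 821, the image to text natural language module 822, and the text generation module 824. The function bodies below are hypothetical placeholders, not implementations of the actual modules.

```python
# Illustrative sketch of the sign-language-to-text pipeline (module 703):
# captured images flow through pattern recognition (821), the image-to-text
# natural language module (822), and the text generation module (824).

def pattern_recognition(images):            # 821: first input -> first output
    """Identify the signs/gestures and the order in which they occur."""
    return ["HELLO", "YOU", "NAME", "WHAT"]  # hypothetical gesture glosses

def image_to_text_nl(gesture_sequence):     # 822: second input -> second output
    """Translate the ordered gestures into a sentiment in the second language."""
    return "Hello, what is your name?"

def text_generation(sentiment):             # 824: third input -> third output
    """Render the sentiment as a text data file suitable for display."""
    return {"type": "text", "body": sentiment}

def sign_language_to_text(images):
    return text_generation(image_to_text_nl(pattern_recognition(images)))

print(sign_language_to_text(["frame-1", "frame-2"]))
```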
This paragraph and the following paragraph describe the operation of the logical device 112 in relation to the operation of the sign language to text translation module 703. The logical device 112 receives the text based third output 844 from the text generation module 824 of the sign language to text translation module 703. The logical device 112 determines the delivery requirement of the third output 844. The delivery requirement is selected from the group consisting of: a) an announcement of the third output 844 through an individual headset 121; and, b) a visibly displayed text message displayed on an individual text station 521. The logical device 112 determines the delivery requirement from an externally provided input method.
If the logical device 112 is required to process the third output 844 through an announcement on the individual headset 121, the text based message received as the third output 844 is initially processed through the text to speech technology 702 and then subsequently transmitted to the individual headset 121. If the logical device 112 is required to process the third output 844 through the visual display of text, the logical device 112 transmits the third output 844 directly to the individual text station 521.
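For illustration only, this delivery-requirement decision can be sketched as a two-way dispatch. The text_to_speech placeholder below is hypothetical, standing in for the text to speech technology 702.

```python
# Illustrative sketch of the delivery-requirement decision: text destined
# for a headset is first converted with text-to-speech (702); text
# destined for a text station is forwarded for display unchanged.

def text_to_speech(text):                   # stands in for technology 702
    return f"<audio of {text!r}>"

def deliver(third_output, requirement):
    if requirement == "announce":           # announce over a headset
        return ("headset", text_to_speech(third_output))
    if requirement == "display":            # display on a text station
        return ("text station", third_output)
    raise ValueError(f"unknown delivery requirement: {requirement}")

print(deliver("Hello, what is your name?", "announce"))
print(deliver("Hello, what is your name?", "display"))
```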
The text to sign language translation module 704 is the natural language processing structure that translates the first natural language into sign language as the second natural language. The text to sign language translation module 704 is an automated process. The text to sign language translation module 704 further comprises a text to image natural language module 825 and an image generation module 826. The text to image natural language module 825 further comprises a fourth output 845 and a fourth input 855. The image generation module 826 further comprises a fifth output 846 and a fifth input 856.
The logical device 112 transmits a text message that becomes the fourth input 855 of the text to image natural language module 825. The text to image natural language module 825 generates the fourth output 845 that becomes an input into the fifth input 856 of the image generation module 826. The image generation module 826 generates the fifth output 846 that becomes an input into the logical device 112.
The text to sign language translation module 704 generates the translated second natural language communication into a visually based fifth output 846 in the form of sign language. The fifth output 846 is transmitted to the logical device 112. The logical device 112 transmits the fifth output 846 as a video image that is displayed on the individual text station 521 display 524 of the individual text station 521.
The text to image natural language module 825 receives a text format communication in a first natural language. The text to image natural language module 825 converts the received text format communication into the ordered series of signs and gestures of a sign language based second natural language. The text to image natural language module 825 transmits the signs and gestures of the sign language based communication to the fourth output 845.
The image generation module 826 receives the identified order of the signs and gestures from the fourth output 845 as the fifth input 856. The image generation module 826 translates the identified order of the signs and gestures received from the text to image natural language module 825 into a sentiment expressed in the visual images of the sign language that forms the second natural language. The image generation module 826 transmits the sentiment of the text based communication in the form of the generated images of the sign language used to form the second natural language.
This paragraph and the following paragraph describe the operation of the logical device 112 in relation to the operation of the text to sign language translation module 704. The logical device 112 receives a communication from a message source selected from the group consisting of: a) a speech based message generated from an individual headset 121; and, b) a text based message generated from an individual text station 521.
If the text to sign language translation module 704 is required to generate a sign language translation of a text message, the logical device 112 transmits the received text based message directly to the fourth input 855 of the text to image natural language module 825. If the text to sign language translation module 704 is required to generate a sign language translation of an audible message, the logical device 112 first processes the received audible message through the speech to text technology 701 and then subsequently transmits the converted text format message to the fourth input 855 of the text to image natural language module 825. The logical device 112 receives the visual images of the sign language translation of the message received from the message source through the fifth output 846 of the image generation module 826. The logical device 112 transmits the received sign language translation to the individual text station 521.
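For illustration only, the two entry paths into the text to sign language translation module 704 can be sketched as follows; all function names are hypothetical placeholders for modules 825 and 826 and for technology 701.

```python
# Illustrative sketch of the text-to-sign-language path (module 704).
# A text message goes directly to the text-to-image natural language
# module (825); an audible message is first converted by the speech to
# text technology (701). The image generation module (826) then renders
# the gesture sequence as displayable frames.

def speech_to_text(audio):                  # stands in for technology 701
    return f"<text of {audio!r}>"

def text_to_image_nl(text):                 # 825: fourth input -> fourth output
    """Convert the text into an ordered series of signs and gestures."""
    return ["HELLO", "FRIEND"]              # hypothetical gesture glosses

def image_generation(gestures):             # 826: fifth input -> fifth output
    """Render the gesture sequence as displayable video frames."""
    return [f"<frame:{g}>" for g in gestures]

def text_to_sign_language(message, source):
    if source == "headset":                 # audible message: convert first
        message = speech_to_text(message)
    return image_generation(text_to_image_nl(message))

print(text_to_sign_language("hello friend", "text station"))
print(text_to_sign_language("hello friend", "headset"))
```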
The following six paragraphs describe the operating theory for the primary conversion processes implemented by the invention 100. Specifically, the following conversion processes are described: a) the speech to text conversion process; b) the text to speech conversion process; c) the sign language to text conversion process; d) the text to sign language conversion process; e) the sign language to speech conversion process; and, f) the speech to sign language conversion process.
This paragraph describes the operating theory for the speech to text conversion process. The individual headset 121 receives a speech based message that is intended for delivery as a text based message. The individual headset 121 transmits the speech based message to the headset routing transceiver 114. The headset routing transceiver 114 retransmits the speech based message to the logical device 112. The logical device 112 processes the received speech based message through the speech to text technology 701 to convert the sentiment conveyed through the speech based message into a corresponding text based message. The logical device 112 transmits the corresponding text based message to the text station routing transceiver 115. The text station routing transceiver 115 retransmits the corresponding text based message to the individual text station 521 display 524 designated to receive the originally generated message.
This paragraph describes the operating theory for the text to speech conversion process. The individual text station 521 receives a text based message that is intended for delivery as a speech based message. The individual text station 521 transmits the text based message to the text station routing transceiver 115. The text station routing transceiver 115 retransmits the text based message to the logical device 112. The logical device 112 processes the received text based message through the text to speech technology 702 to convert the sentiment conveyed through the text based message into a corresponding speech based message. The logical device 112 transmits the corresponding speech based message to the headset routing transceiver 114. The headset routing transceiver 114 retransmits the corresponding speech based message to the individual headset 121 designated to receive the originally generated message.
This paragraph describes the operating theory for the sign language to text conversion process. The image sensor 813 receives a sign language based message that is intended for delivery as a text based message. The image sensor 813 transmits the captured sign language based message to the pattern recognition module 821. The pattern recognition module 821 analyzes the gesture pattern captured by the image sensor 813 to determine the words and syntax of the captured sign language based message. The pattern recognition module 821 transmits the words and syntax information to the image to text natural language module 822. The image to text natural language module 822 determines the sentiment of the words and syntax of the captured sign language based message. The image to text natural language module 822 transmits the determined sentiment to the text generation module 824, which generates the text based message corresponding to the sentiment expressed by the original sign language based message. The text generation module 824 transmits the corresponding text based message to the logical device 112. The logical device 112 retransmits the corresponding text based message to the individual text station 521 display 524 designated to receive the originally generated message.
This paragraph describes the operating theory for the text to sign language conversion process. The logical device 112 receives a text based message that is intended for delivery as a sign language based message. The logical device 112 transmits the received text based message to the text to image natural language module 825. The text to image natural language module 825 analyzes the received text based message to determine the gestures required to convey the sentiment of the received text based message into a sign language based message. The text to image natural language module 825 transmits the required gestures to the image generation module 826. The image generation module 826 generates the images necessary for the visual presentation of the sign language based message. The image generation module 826 transmits the necessary visual display to the logical device 112. The logical device 112 retransmits the necessary visual display to the individual text station 521 display 524 designated to receive the originally generated message. The individual text station 521 display 524 visually displays the original message as a sign language based message.
This paragraph describes the operating theory for the sign language to speech conversion process. The image sensor 813 receives a sign language based message that is intended for delivery as a speech based message. The image sensor 813 transmits the captured sign language based message to the pattern recognition module 821. The pattern recognition module 821 analyzes the gesture pattern captured by the image sensor 813 to determine the words and syntax of the captured sign language based message. The pattern recognition module 821 transmits the words and syntax information to the image to text natural language module 822. The image to text natural language module 822 determines the sentiment of the words and syntax of the captured sign language based message. The image to text natural language module 822 transmits the determined sentiment to the text generation module 824, which generates the text based message corresponding to the sentiment expressed by the original sign language based message. The text generation module 824 transmits the corresponding text based message to the logical device 112. The logical device 112 processes the received text based message through the text to speech technology 702 to convert the sentiment conveyed through the text based message into a corresponding speech based message. The logical device 112 retransmits the corresponding speech based message through the headset routing transceiver 114 to the individual headset 121 designated to receive the originally generated message.
This paragraph describes the operating theory for the speech to sign language conversion process. The individual headset 121 receives a speech based message that is intended for delivery as a sign language based message. The individual headset 121 transmits the speech based message to the headset routing transceiver 114. The headset routing transceiver 114 retransmits the speech based message to the logical device 112. The logical device 112 processes the received speech based message through the speech to text technology 701. The speech to text technology 701 converts the speech based message into a text based message. The logical device 112 receives the converted text based message. The logical device 112 transmits the converted text based message to the text to image natural language module 825. The text to image natural language module 825 analyzes the received text based message to determine the gestures required to convey the sentiment of the received text based message into a sign language based message. The text to image natural language module 825 transmits the required gestures to the image generation module 826. The image generation module 826 generates the images necessary for the visual presentation of the sign language based message. The image generation module 826 transmits the necessary visual display to the logical device 112. The logical device 112 retransmits the necessary visual display to the individual text station 521 display 524 designated to receive the originally generated message. The individual text station 521 display 524 visually displays the original message as a sign language based message.
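For illustration only, the six conversion processes can be summarized as a dispatch table keyed by the source mode and the delivery mode, reusing the hypothetical placeholder names from the sketches above.

```python
# Illustrative summary of the six conversion processes described above,
# expressed as a dispatch table keyed by (source mode, delivery mode).
# Stage names are the hypothetical placeholders sketched earlier.

PIPELINES = {
    ("speech", "text"):  ["speech_to_text (701)"],
    ("text", "speech"):  ["text_to_speech (702)"],
    ("sign", "text"):    ["pattern_recognition (821)",
                          "image_to_text_nl (822)",
                          "text_generation (824)"],
    ("text", "sign"):    ["text_to_image_nl (825)",
                          "image_generation (826)"],
    ("sign", "speech"):  ["pattern_recognition (821)",
                          "image_to_text_nl (822)",
                          "text_generation (824)",
                          "text_to_speech (702)"],
    ("speech", "sign"):  ["speech_to_text (701)",
                          "text_to_image_nl (825)",
                          "image_generation (826)"],
}

for (src, dst), stages in PIPELINES.items():
    print(f"{src} -> {dst}: " + " -> ".join(stages))
```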
The following definitions were used in this disclosure:

Algorithm: As used in this disclosure, an algorithm is a previously defined procedure used to perform a specified task. A device that is capable of implementing an algorithm randomly selected from a plurality of algorithms is called a programmable device.
Announce: As used in this disclosure, to announce means to generate audible sounds over a transducer.

Application or App: As used in this disclosure, an application or app is a self-contained piece of software that is especially designed or downloaded for use with a personal data device.
Artificial Intelligence Device: As used in this disclosure, an artificial intelligence device (AI device) refers to a device that is configured to perform tasks in a manner that simulates human intelligence. By simulating human intelligence is meant that: a) the AI device is autonomous; b) is capable of receiving inputs from and generating outputs into an operating environment; c) that the received inputs are processed through a utility function; d) that the utility function generates the outputs; e) that the generated outputs of the utility function are optimized in some fashion (such as the use of a maximum likelihood function); and f) the utility function is modified over time through the use of a feedback mechanism (often referred to as training). See feedback, pattern recognition software, and error function.
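A minimal sketch of items c) through f) above, assuming a linear utility function and a simple error-driven feedback update; both the linear form and the update rule are assumptions for illustration, not part of the disclosure:

```python
# Sketch of a feedback-modified utility function (illustrative only).
def utility(weights: list[float], inputs: list[float]) -> float:
    """Utility function: maps environment inputs to an output score."""
    return sum(w * x for w, x in zip(weights, inputs))

def feedback_update(weights, inputs, reference, rate=0.01):
    """Feedback step ("training"): nudge the weights so the generated
    output moves toward a reference, reducing the output error."""
    error = utility(weights, inputs) - reference
    return [w - rate * error * x for w, x in zip(weights, inputs)]

weights = [0.5, -0.2]
for _ in range(100):  # the utility function is modified over time
    weights = feedback_update(weights, [1.0, 2.0], reference=3.0)
```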
Audio: As used in this disclosure, audio refers to the reproduction of a sound that simulates the sound that was originally created.
Audio Device: As used in this disclosure, an audio device is a device that generates audible sound waves.
Audio Source: As used in this disclosure, an audio source is a device that generates electrical signals that can be converted into audible sounds by an audio device such as a speaker.
Broadcast: As used in this disclosure, a broadcast refers to a radio frequency transmission intended to be received by a plurality of receivers.
Communication Link: As used in this disclosure, a communication link refers to the structured exchange of data between two objects.
Disability: As used in this disclosure, a disability refers to a physiologically induced limitation of the senses or of the movement of an individual.
Display: As used in this disclosure, a display is a surface upon which is presented an image, potentially including, but not limited to, graphic images and speech, that is interpretable in a meaningful manner by an individual viewing the projected image. A display device refers to an electrical device used to present these images.
Earphone: As used in this disclosure, an earphone refers to a device, worn or held in contact with the ear, that converts electrical signals into audible sounds.
Email: As used in this disclosure, email describes a communication between a sender and one or more receivers that is delivered through a network wherein the nodes of the network comprise a plurality of logical devices. An email will generally comprise a speech based communication component.
Facial Recognition: As used in this disclosure, facial recognition refers to a series of algorithms used to identify an individual by comparing a captured image of the face of the individual with a database of one or more previously captured and stored images of the faces of people.
Feedback: As used in this disclosure, feedback refers to a system, including engineered systems, or a subsystem further comprising an "input" and an "output" wherein the difference between the output of the engineered system or subsystem and a reference is used as, or fed back into, a portion of the input of the system or subsystem. Examples of feedback in engineered systems include, but are not limited to, a fluid level control device such as those typically used in a toilet tank, a cruise control in an automobile, a fly ball governor, a thermostat, and almost any electronic device that comprises an amplifier. Feedback systems in nature include, but are not limited to, thermal regulation in animals and blood clotting in animals (wherein the platelets involved in blood clotting release chemicals to attract other platelets).
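One of the engineered examples above, the thermostat, reduced to a minimal sketch in which the difference between the output (room temperature) and a reference (setpoint) is fed back into the input (heater power); the gains and dynamics are illustrative assumptions:

```python
# Minimal thermostat feedback loop (illustrative sketch only).
def thermostat_step(temperature: float, setpoint: float) -> float:
    """Derive heater power from the fed-back output/reference error."""
    error = setpoint - temperature           # fed-back difference
    return max(0.0, min(1.0, 0.5 * error))   # clamp to valid power range

temperature = 15.0
for _ in range(50):
    power = thermostat_step(temperature, setpoint=20.0)
    temperature += 0.8 * power - 0.1         # heating minus heat loss
```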
Hands-Free: As used in this disclosure, hands-free refers to a design characteristic of a device that allows the device to be used or operated without the use of the hands.
Headphone: As used in this disclosure, a headphone is a device that comprises one or two earphones that are held to the ear, typically through the use of a band placed on top of the head. The headphone comprises one or more speakers and an optional microphone to allow for: 1) private access to an audio communication system; and, 2) hands-free access to an audio communication system. Headset is a synonym for headphone.
IEEE: As used in this disclosure, the IEEE (pronounced "I triple E") is an acronym for the Institute of Electrical and Electronics Engineers.
Image: As used in this disclosure, an image is an optical representation or reproduction of an indicia or of the appearance of something or someone. See indicia, sentiment, and optical character recognition. See label.
Indicia: As used in this disclosure, the term indicia refers to a set of markings that identify a sentiment. See sentiment.
Instantiation: As used in this disclosure, an instantiation refers to a specific physical object or process that is created using a specification.
Interface: As used in this disclosure, an interface is a physical or virtual boundary that separates two different systems across which information is exchanged.
Language: As used in this disclosure, a language comprises a system of articulations, symbols, physical motions, and rules that are used to communicate data and information between members of a community. A machine language, often called a programming language, refers to a system of symbols and rules used to exchange data and instructions between an individual and a logical device such as a logic module or a personal data device.
Logic Module: As used in this disclosure, a logic module is a readily and commercially available electrical device that accepts digital and analog inputs, processes the digital and analog inputs according to previously specified logical processes, and provides the results of these previously specified logical processes as digital or analog outputs. The disclosure allows, but does not assume, that the logic module is programmable.
Logical Device: As used in this disclosure, a logical device is an electrical device that processes externally provided inputs to generate outputs that are determined from previously determined logical functions. A logical device may or may not be programmable.
Messaging Facility: As used in this disclosure, a messaging facility is a previously determined formatting structure through which a speech or image (referred to in this definition as speech) based communication is transmitted for delivery. A messaging facility is selected from the group consisting of a traditional messaging facility, a direct messaging facility, and a broadcast messaging facility. A traditional messaging facility includes the delivery of a physical object containing the speech based communication. The direct messaging facility includes communications that are addressed to a previously identified group of recipients. The broadcast messaging facility includes communications that are transmitted without the prior identification of the intended group of recipients. Examples of a traditional messaging facility include, but are not limited to, postal delivery. Examples of direct messaging facilities include, but are not limited to, email and SMS messages. A social media service is an example of a broadcast messaging facility.
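The three-way selection defined above can be expressed as a small taxonomy; the class name and the two distinguishing tests below are illustrative assumptions, not part of the disclosure:

```python
# Sketch of the three messaging facilities as a taxonomy.
from enum import Enum, auto

class MessagingFacility(Enum):
    TRADITIONAL = auto()  # physical delivery, e.g. postal mail
    DIRECT = auto()       # previously identified recipients, e.g. email, SMS
    BROADCAST = auto()    # no prior recipient list, e.g. social media

def classify(has_physical_object: bool, recipients_known: bool) -> MessagingFacility:
    """Select a facility using the definition's distinguishing tests."""
    if has_physical_object:
        return MessagingFacility.TRADITIONAL
    return MessagingFacility.DIRECT if recipients_known else MessagingFacility.BROADCAST
```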
Microphone: As used in this disclosure, a microphone is a transducer that converts the energy from vibration into electrical energy. The sources of vibrations include, but are not limited to, acoustic energy.
Momentary Switch: As used in this disclosure, a momentary switch is a biased switch in the sense that the momentary switch has a baseline position that only changes when the momentary switch is actuated (for example, when a pushbutton switch is pushed or a relay coil is energized). The momentary switch then returns to the baseline position once the actuation is completed. This baseline position is called the "normal" position. For example, a "normally open" momentary switch interrupts (opens) the electric circuit in the baseline position and completes (closes) the circuit when the momentary switch is activated. Similarly, a "normally closed" momentary switch will complete (close) an electric circuit in the baseline position and interrupt (open) the circuit when the momentary switch is activated.
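A minimal sketch of the "normal" position convention described above; the modeling choices are assumptions for illustration:

```python
# Sketch of normally-open vs. normally-closed momentary switch behavior.
class MomentarySwitch:
    def __init__(self, normally_open: bool = True):
        self.normally_open = normally_open
        self.actuated = False  # baseline: not pushed / not energized

    def circuit_closed(self) -> bool:
        """Normally open: closed only while actuated.
        Normally closed: open only while actuated."""
        return self.actuated if self.normally_open else not self.actuated

switch = MomentarySwitch(normally_open=True)
assert not switch.circuit_closed()  # baseline interrupts the circuit
switch.actuated = True
assert switch.circuit_closed()      # actuation completes the circuit
```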
Natural Language: As used in this disclosure, a natural language refers to a language used by individuals within a society to communicate directly with each other.
Natural Language Processing: As used in this disclosure, natural language processing refers to a collection of algorithms that use one or more natural languages as an input. The elements of natural language processing include, but are not limited to: a) capturing a sample of a first natural language from spoken or speech (written) based sources; b) comprehending the captured sample of the first natural language; c) acting on the comprehension of the captured sample of the first natural language to generate an output; and, d) presenting the output as a natural language response. The natural language response is presented in a language selected from the group consisting of: e) the first natural language; or, f) a second natural language that is different from the first natural language. A device that processes a first natural language into a second natural language is called a translation device.
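Elements a) through d) above, together with the language selection in e) and f), sketched as a four-stage pipeline; every stage is a trivial named placeholder (the upper-casing stands in for a real translation device), not a disclosed algorithm:

```python
# Sketch of the four natural language processing elements (a)-(d).
def capture(source: str) -> str:
    return source.strip()                 # (a) capture the sample

def comprehend(sample: str) -> dict:
    return {"tokens": sample.split()}     # (b) comprehend it

def act(meaning: dict) -> str:
    return " ".join(meaning["tokens"])    # (c) act to generate an output

def present(output: str, second_language: bool = False) -> str:
    # (d) present the response in the first language (e) or a second
    # language (f); upper-casing is a placeholder for translation.
    return output.upper() if second_language else output

print(present(act(comprehend(capture("hello there"))), second_language=True))
```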
Network: As used in this disclosure, a network refers to a data communication or data exchange structure where data is electronically transferred between nodes, also known as terminals, which are electrically attached to the network. In common usage, the operator of the network is often used as an adjective to describe the network. For example, a telecommunication network would refer to a network run by a telecommunication organization while a banking network will refer to a network operated by an organization involved in banking.
Pattern Recognition Software: As used in this disclosure, pattern recognition software refers to a series of algorithms used to identify a pattern from a database of one or more previously captured and stored data structures. The captured data structure is assumed to be captured by a sensor. The pattern recognition software is often associated with artificial intelligence.
Repeater: As used in this disclosure, a repeater is an electrical device that receives a first signal from a first communication channel and transmits a duplicate second signal over a second communication channel. When a radio frequency wireless communication channel is used as both the first communication channel and the second communication channel, the frequencies of operation of the first communication channel and the second communication channel may or may not be identical.
Sentiment: As used in this disclosure, a sentiment refers to a symbolic meaning or message that is communicated through the use of an image, potentially including a speech based image. See image and optical character recognition. A sentiment can also refer to a symbolic meaning or message that is communicated through the announcement of an audible sound.
Sign Language: As used in this disclosure, a sign language is a natural language that is based on visually distinct signs and gestures. The sign language is commonly used by individuals with hearing disabilities.
Speaker: As used in this disclosure, a speaker is an electrical transducer that converts an electrical signal into an audible sound.
Speech Recognition: As used in this disclosure, speech recognition refers to a collection of commercially available algorithms that capture and process a digital representation of an audible sound in a manner that allows an electronically operated device, such as a computer, to extract data from the digital representation of an audible sound and take a subsequent action based on the data extracted from the audible sound.
Switch: As used in this disclosure, a switch is an electrical device that starts and stops the flow of electricity through an electric circuit by completing or interrupting an electric circuit. The act of completing or breaking the electrical circuit is called actuation. Completing or interrupting an electric circuit with a switch is often referred to as closing or opening a switch respectively. Completing or interrupting an electric circuit is also often referred to as making or breaking the circuit respectively.
Transceiver: As used in this disclosure, a transceiver is a device that is used to generate, transmit, and receive electromagnetic radiation such as radio signals.
Transducer: As used in this disclosure, a transducer is a device that converts a physical quantity, such as pressure or brightness, into an electrical signal or a device that converts an electrical signal into a physical quantity.
Translate: As used in this disclosure, to translate means to convert data contained in a first organizational or operational structure into a second organizational or operational structure. The term translate often refers to the conversion of data existing in a first natural language into a second natural language.
WiFi™: As used in this disclosure, WiFi™ refers to the physical implementation of a collection of wireless electronic communication standards commonly referred to as IEEE 802.11x.
Wireless: As used in this disclosure, wireless is an adjective that is used to describe a communication channel between two devices that does not require the use of physical cabling.
With respect to the above description, it is to be realized that the optimum dimensional relationships for the various components of the invention described above and in Figures 1 through 10, including variations in size, materials, shape, form, function, and manner of operation, assembly, and use, are deemed readily apparent and obvious to one skilled in the art, and all equivalent relationships to those illustrated in the drawings and described in the specification are intended to be encompassed by the invention.
It shall be noted that those skilled in the art will readily recognize numerous adaptations and modifications which can be made to the various embodiments of the present invention which will result in an improved invention, yet all of which will fall within the spirit and scope of the present invention as defined in the following claims. Accordingly, the invention is to be limited only by the scope of the following claims and their equivalents.
CLAIMS
What is claimed is:
1. A translation process comprising
    a logical device;
    wherein the logical device further comprises a plurality of language modules;
    wherein the plurality of language modules further comprises a sign language to speech translation module and a speech to sign language translation module;
    wherein the logical device forms the platform that supports the operation of the sign language to speech translation module and the speech to sign language translation module.
2. The translation process according to claim 1
    wherein each language module selected from the plurality of language modules forms a natural language processing structure;
    wherein each language module selected from the plurality of language modules receives a first natural language communication;
    wherein each language module selected from the plurality of language modules translates the intellectual content of the first natural language message into a second natural language;
    wherein a sign language is chosen as the natural language communication for a natural language selected from the group consisting of the first natural language and the second natural language.
3. The translation process according to claim 2
    wherein the sign language to speech translation module is the natural language processing structure that translates sign language as the first natural language into the second natural language;
    wherein the sign language to speech translation module is an automated process.
4. The translation process according to claim 3
    wherein the speech to sign language translation module is the natural language processing structure that translates the first natural language into sign language as the second natural language;
    wherein the speech to sign language translation module is an automated process.
5. The translation process according to claim 4
    wherein the sign language to speech translation module comprises an image sensor, a pattern recognition module, an image to speech natural language module, and a speech generation module;
    wherein the pattern recognition module further comprises a first output and a first input;
    wherein the image to speech natural language module further comprises a second output and a second input;
    wherein the speech generation module further comprises a third output and a third input;
    wherein the sign language to speech translation module generates the translated second natural language communication into a speech based third output;
    wherein the third output is transmitted to the logical device;
    wherein the logical device processes the third output as a message selected from the group consisting of: a) a speech based message that is displayed on an individual speech station; and, b) an audible message that is announced over an individual headset.
6. The translation process according to claim 5
    wherein the speech to sign language translation module further comprises a speech to image natural language module and an image generation module;
    wherein the speech to image natural language module further comprises a fourth output and a fourth input;
    wherein the image generation module further comprises a fifth output and a fifth input;
    wherein the logical device transmits a speech message that becomes the fourth input of the speech to image natural language module;
    wherein the speech to image natural language module generates the fourth output that becomes an input into the fifth input of the image generation module;
    wherein the image generation module generates the fifth output that becomes an input into the logical device;
    wherein the speech to sign language translation module generates the translated second natural language communication into a visually based fifth output in the form of sign language.
7. The translation process according to claim 6
    wherein the image sensor is an electric sensor;
    wherein the image sensor converts light into one or more electric signals;
    wherein the image sensor captures the images of the sign language communication that forms the first natural language communication;
    wherein the image sensor transmits the captured images of the first natural language communication to the pattern recognition module.
8. The translation process according to claim 7
    wherein the pattern recognition module receives the first input from the image sensor;
    wherein the pattern recognition module generates the first output that becomes the second input of the image to speech natural language module;
    wherein the image to speech natural language module generates the second output that becomes the third input of the speech generation module;
    wherein the speech generation module generates the third output that becomes an input into the logical device;
    wherein the pattern recognition module receives the captured images of the sign language communication from the image sensor;
    wherein the pattern recognition module identifies within the captured images of the sign language communication the signs and gestures that form the sign language communication;
    wherein the pattern recognition module identifies within the captured images of the sign language communication the order in which the signs and gestures occur.
9. The translation process according to claim 8
    wherein the pattern recognition module transmits the identified order of the signs and gestures identified within the captured images of the sign language communication to the first output;
    wherein the image to speech natural language module receives the identified order of the signs and gestures from the first output as the second input;
    wherein the image to speech natural language module translates the identified order of the signs and gestures received from the pattern recognition module into a sentiment expressed in the second natural language;
    wherein the image to speech natural language module transmits the translated sentiment to the second output.
10. The translation process according to claim 9
    wherein the speech generation module receives the translated sentiment from the second output as the third input;
    wherein the speech generation module converts the translated sentiment into a speech based data file that is transmitted to the third output;
    wherein the data file generated by the speech generation module provides a speech version of the translated sentiment in the second language that is suitable for display on the individual speech station display.
11. The translation process according to claim 10
    wherein the logical device receives the speech based third output from the speech generation module of the sign language to speech translation module;
    wherein the logical device determines the delivery requirement of the third output;
    wherein the delivery requirement is selected from the group consisting of: a) an announcement of the third output through an individual headset; and b) a visibly displayed speech message displayed on an individual speech station;
    wherein the logical device determines the delivery requirement from an externally provided input method;
    wherein if the logical device is required to process the third output through an announcement on the individual headset, the speech based message received as the third output is initially processed through the speech to speech technology and then subsequently transmitted to the individual headset;
    wherein if the logical device is required to process the third output through the visual display of speech, the logical device transmits the third output directly to the individual speech station.
12. The translation process according to claim 11 wherein the logical device transmits the fifth output as a video image that is displayed on the individual speech station display of the individual speech station.
13. The translation process according to claim 12
    wherein the speech to image natural language module receives a speech format communication in a first natural language;
    wherein the speech to image natural language module converts the received speech format communication into the order of a series of the signs and gestures of a sign language based second natural language;
    wherein the speech to image natural language module transmits the signs and gestures of the sign language based communication to the fourth output.
14. The translation process according to claim 13
    wherein the image generation module receives the identified order of the signs and gestures from the fourth output as the fifth input;
    wherein the image generation module translates the identified order of the signs and gestures received from the speech to image natural language module into a sentiment expressed in the visual images of the sign language that forms the second natural language;
    wherein the image generation module transmits the sentiment of the speech based communication in the form of the generated images of the sign language used to form the second natural language.
15. The translation process according to claim 14
    wherein the logical device receives a communication from a message source selected from the group consisting of: a) a speech based message generated from an individual headset; and, b) a speech based message generated from an individual speech station;
    wherein if the speech to sign language translation module is required to generate a sign language translation of a speech message, the logical device transmits the received speech based message directly to the fourth input of the speech to image natural language module;
    wherein if the speech to sign language translation module is required to generate a sign language translation of an audible message, the logical device first processes the received audible message through the speech to speech technology and then subsequently transmits the converted speech format message to the fourth input of the speech to image natural language module;
    wherein the logical device receives the visual images of the sign language translation of the message received from the message source through the fifth output of the image generation module;
    wherein the logical device transmits the received sign language translation to the individual speech station.
16. A translation process comprising
    a logical device;
    wherein the logical device further comprises a plurality of language modules;
    wherein the plurality of language modules further comprises a sign language to text translation module and a text to sign language translation module;
    wherein the logical device forms the platform that supports the operation of the sign language to text translation module and the text to sign language translation module.
17. The translation process according to claim 16
    wherein each language module selected from the plurality of language modules forms a natural language processing structure;
    wherein each language module selected from the plurality of language modules receives a first natural language communication;
    wherein each language module selected from the plurality of language modules translates the intellectual content of the first natural language message into a second natural language;
    wherein a sign language is chosen as the natural language communication for a natural language selected from the group consisting of the first natural language and the second natural language.
18. The translation process according to claim 17
    wherein the sign language to text translation module is the natural language processing structure that translates sign language as the first natural language into the second natural language;
    wherein the sign language to text translation module is an automated process.
19. The translation process according to claim 18
    wherein the text to sign language translation module is the natural language processing structure that translates the first natural language into sign language as the second natural language;
    wherein the text to sign language translation module is an automated process.
20. The translation process according to claim 19
    wherein the sign language to text translation module comprises an image sensor, a pattern recognition module, an image to text natural language module, and a text generation module;
    wherein the pattern recognition module further comprises a first output and a first input;
    wherein the image to text natural language module further comprises a second output and a second input;
    wherein the text generation module further comprises a third output and a third input;
    wherein the sign language to text translation module generates the translated second natural language communication into a text based third output;
    wherein the third output is transmitted to the logical device;
    wherein the logical device processes the third output as a message selected from the group consisting of: a) a text based message that is displayed on an individual text station; and, b) an audible message that is announced over an individual headset.
21. The translation process according to claim 20
    wherein the text to sign language translation module further comprises a text to image natural language module and an image generation module;
    wherein the text to image natural language module further comprises a fourth output and a fourth input;
    wherein the image generation module further comprises a fifth output and a fifth input;
    wherein the logical device transmits a text message that becomes the fourth input of the text to image natural language module;
    wherein the text to image natural language module generates the fourth output that becomes an input into the fifth input of the image generation module;
    wherein the image generation module generates the fifth output that becomes an input into the logical device;
    wherein the text to sign language translation module generates the translated second natural language communication into a visually based fifth output in the form of sign language.
22. The translation process according to claim 21
    wherein the image sensor is an electric sensor;
    wherein the image sensor converts light into one or more electric signals;
    wherein the image sensor captures the images of the sign language communication that forms the first natural language communication;
    wherein the image sensor transmits the captured images of the first natural language communication to the pattern recognition module.
23. The translation process according to claim 22
    wherein the pattern recognition module receives the first input from the image sensor;
    wherein the pattern recognition module generates the first output that becomes the second input of the image to text natural language module;
    wherein the image to text natural language module generates the second output that becomes the third input of the text generation module;
    wherein the text generation module generates the third output that becomes an input into the logical device;
    wherein the pattern recognition module receives the captured images of the sign language communication from the image sensor;
    wherein the pattern recognition module identifies within the captured images of the sign language communication the signs and gestures that form the sign language communication;
    wherein the pattern recognition module identifies within the captured images of the sign language communication the order in which the signs and gestures occur.
24. The translation process according to claim 23
    wherein the pattern recognition module transmits the identified order of the signs and gestures identified within the captured images of the sign language communication to the first output;
    wherein the image to text natural language module receives the identified order of the signs and gestures from the first output as the second input;
    wherein the image to text natural language module translates the identified order of the signs and gestures received from the pattern recognition module into a sentiment expressed in the second natural language;
    wherein the image to text natural language module transmits the translated sentiment to the second output.
25. The translation process according to claim 24
    wherein the text generation module receives the translated sentiment from the second output as the third input;
    wherein the text generation module converts the translated sentiment into a text based data file that is transmitted to the third output;
    wherein the data file generated by the text generation module provides a text version of the translated sentiment in the second language that is suitable for display on the individual text station display.
26. The translation process according to claim 25
    wherein the logical device receives the text based third output from the text generation module of the sign language to text translation module;
    wherein the logical device determines the delivery requirement of the third output;
    wherein the delivery requirement is selected from the group consisting of: a) an announcement of the third output through an individual headset; and b) a visibly displayed text message displayed on an individual text station;
    wherein the logical device determines the delivery requirement from an externally provided input method;
    wherein if the logical device is required to process the third output through an announcement on the individual headset, the text based message received as the third output is initially processed through the text to speech technology and then subsequently transmitted to the individual headset;
    wherein if the logical device is required to process the third output through the visual display of text, the logical device transmits the third output directly to the individual text station.
27. The translation process according to claim 26 wherein the logical device transmits the fifth output as a video image that is displayed on the individual text station display of the individual text station.
28. The translation process according to claim 27
    wherein the text to image natural language module receives a text format communication in a first natural language;
    wherein the text to image natural language module converts the received text format communication into the order of a series of the signs and gestures of a sign language based second natural language;
    wherein the text to image natural language module transmits the signs and gestures of the sign language based communication to the fourth output.
29. The translation process according to claim 28
    wherein the image generation module receives the identified order of the signs and gestures from the fourth output as the fifth input;
    wherein the image generation module translates the identified order of the signs and gestures received from the text to image natural language module into a sentiment expressed in the visual images of the sign language that forms the second natural language;
    wherein the image generation module transmits the sentiment of the text based communication in the form of the generated images of the sign language used to form the second natural language.
30. The translation process according to claim 29
    wherein the logical device receives a communication from a message source selected from the group consisting of: a) a speech based message generated from an individual headset; and, b) a text based message generated from an individual text station;
    wherein if the text to sign language translation module is required to generate a sign language translation of a text message, the logical device transmits the received text based message directly to the fourth input of the text to image natural language module;
    wherein the text to sign language translation module is able to operate in reverse by first receiving the received audible message and thereafter converting it into the text message;
    wherein if the text to sign language translation module is required to generate a sign language translation of an audible message, the logical device first processes the received audible message through the speech to text technology and then subsequently transmits the converted text format message to the fourth input of the text to image natural language module;
    wherein the logical device receives the visual images of the sign language translation of the message received from the message source through the fifth output of the image generation module;
    wherein the logical device transmits the received sign language translation to the individual text station.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202263303923P | 2022-01-27 | 2022-01-27 | |
US63/303,923 | 2022-01-27 | ||
US202217868116A | 2022-07-19 | 2022-07-19 | |
US17/868,116 | 2022-07-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
AU2023200418A1 true AU2023200418A1 (en) | 2023-08-10 |
Family
ID=85476654
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2023200418A Pending AU2023200418A1 (en) | 2022-01-27 | 2023-01-26 | Wireless Headset and Tablet Sign Language Communication System and Method |
Country Status (3)
Country | Link |
---|---|
AU (1) | AU2023200418A1 (en) |
CA (1) | CA3187860A1 (en) |
GB (1) | GB2616719A (en) |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101605158A (en) * | 2008-06-13 | 2009-12-16 | 鸿富锦精密工业(深圳)有限公司 | Mobile phone dedicated for deaf-mutes |
KR102450803B1 (en) * | 2016-02-11 | 2022-10-05 | 한국전자통신연구원 | Duplex sign language translation apparatus and the apparatus for performing the duplex sign language translation method |
AU2021101436A4 (en) * | 2021-03-20 | 2021-05-13 | Assadi, Mustafa Shihab MR | Wearable sign language detection system |
2023
- 2023-01-26 AU AU2023200418A patent/AU2023200418A1/en active Pending
- 2023-01-27 GB GB2301216.4A patent/GB2616719A/en active Pending
- 2023-01-27 CA CA3187860A patent/CA3187860A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
GB202301216D0 (en) | 2023-03-15 |
CA3187860A1 (en) | 2023-07-27 |
GB2616719A (en) | 2023-09-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9111545B2 (en) | Hand-held communication aid for individuals with auditory, speech and visual impairments | |
US11792577B2 (en) | Differential amplification relative to voice of speakerphone user | |
WO2016052018A1 (en) | Home appliance management system, home appliance, remote control device, and robot | |
JP2007537650A (en) | Method for transmitting message from recipient to recipient, message transmission system and message conversion means | |
EP1526706A2 (en) | System and method for providing communication channels that each comprise at least one property dynamically changeable during social interactions | |
WO2017068816A1 (en) | Information processing system and information processing method | |
CN104010267A (en) | Method and system for supporting a translation-based communication service and terminal supporting the service | |
KR20090065212A (en) | Robot chatting system and method | |
Matthews et al. | Scribe4Me: Evaluating a mobile sound transcription tool for the deaf | |
Ramirez-Garibay et al. | MyVox—Device for the communication between people: blind, deaf, deaf-blind and unimpaired | |
WO2018136111A1 (en) | Privacy control in a connected environment based on speech characteristics | |
TW202347096A (en) | Smart glass interface for impaired users or users with disabilities | |
Chen et al. | From Gap to Synergy: Enhancing Contextual Understanding through Human-Machine Collaboration in Personalized Systems | |
Budkov et al. | Event-driven content management system for smart meeting room | |
AU2023200418A1 (en) | Wireless Headset and Tablet Sign Language Communication System and Method | |
Kumar et al. | Voice Email Based on SMTP For Physically Handicapped | |
US11917092B2 (en) | Systems and methods for detecting voice commands to generate a peer-to-peer communication link | |
Angkananon et al. | Technology enhanced interaction framework and method for accessibility in Thai museums | |
Sawhney | Contextual awareness, messaging and communication in nomadic audio environments | |
Aashritha et al. | Assistive Technology for Blind and Deaf People: A Case Study | |
Heckendorf | Assistive technology for individuals who are deaf or hard of hearing | |
CA3086585C (en) | Wireless communication headset system | |
Thibodeau | Advanced practices: assistive technology in the age of smartphones and tablets | |
US20210100048A1 (en) | Wireless communication headset system | |
KR20150115436A (en) | Method and apparatus for providing relay communication service |