WO2021189652A1 - Language output method, head-mounted device, storage medium and electronic device - Google Patents

Language output method, head-mounted device, storage medium and electronic device

Info

Publication number
WO2021189652A1
Authority
WO
WIPO (PCT)
Prior art keywords
language
head-mounted device
translation result
target object
Prior art date
Application number
PCT/CN2020/093753
Other languages
English (en)
French (fr)
Inventor
刘若鹏
栾琳
Original Assignee
深圳光启超材料技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳光启超材料技术有限公司
Publication of WO2021189652A1

Classifications

    • A HUMAN NECESSITIES
    • A42 HEADWEAR
    • A42B HATS; HEAD COVERINGS
    • A42B3/00 Helmets; Helmet covers; Other protective head coverings
    • A42B3/04 Parts, details or accessories of helmets
    • A42B3/0406 Accessories for helmets
    • A HUMAN NECESSITIES
    • A42 HEADWEAR
    • A42B HATS; HEAD COVERINGS
    • A42B3/00 Helmets; Helmet covers; Other protective head coverings
    • A42B3/04 Parts, details or accessories of helmets
    • A42B3/30 Mounting radio sets or communication systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/005 Language recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/28 Constructional details of speech recognition systems
    • G10L15/30 Distributed recognition, e.g. in client-server systems, for mobile phones or network applications

Definitions

  • the present invention relates to the field of communication, and in particular to a language output method, a head-mounted device, a storage medium and an electronic device.
  • A smart helmet is an ordinary helmet that technical personnel have upgraded with high-tech products so as to provide the smart functions users require. Smart helmets have been widely used in the security industry.
  • In the related art, existing head-mounted devices have a single function and cannot perform voice translation on the speech they receive, and no effective solution to this problem has been proposed.
  • The embodiments of the present invention provide a language output method, a head-mounted device, a storage medium, and an electronic device, so as to at least solve the technical problem in the related art that existing head-mounted devices have a single function and cannot perform voice translation on the speech they receive.
  • According to one embodiment, a language output method is provided, including: receiving, through a head-mounted device, a first language uttered by a target object; in response to the first language, controlling the head-mounted device to acquire a translation result of the first language, wherein the translation result is at least used to indicate a second language corresponding to the first language; and outputting the second language through the head-mounted device.
  • Optionally, before controlling the head-mounted device to acquire the translation result of the first language, the above method further includes: configuring the correspondence between the first language and the second language, so that the head-mounted device can acquire the second language corresponding to the first language.
  • Optionally, the first language is an encrypted first language.
  • Optionally, after responding to the first language, the above method further includes: decrypting the first language.
  • Optionally, before outputting the second language through the head-mounted device, the method further includes: encrypting the second language.
  • Optionally, after receiving the first language through the head-mounted device, the method further includes: generating a normalized table according to the received first language.
  • Optionally, receiving the first language uttered by the target object through the head-mounted device includes: directly receiving, through the head-mounted device, the first language uttered by the target object; or receiving, through the head-mounted device, the first language uttered by the target object and forwarded by a wearable device.
  • Optionally, before receiving the first language uttered by the target object through the head-mounted device, the above method further includes: receiving a turn-on instruction from the wearable device of the target object, wherein the turn-on instruction is used to turn on the translation function of the head-mounted device, and when the translation function is turned on, the head-mounted device is allowed to respond to the first language and acquire the translation result of the first language.
  • Optionally, controlling the head-mounted device to obtain the translation result of the first language includes: transmitting the received first language to a voice server, so that the voice server determines the translation result of the first language; and receiving the translation result produced by the voice server.
  • outputting the second language through the head-mounted device includes: outputting the second language through a headset or a speaker of the head-mounted device.
  • Optionally, after outputting the second language, the above method further includes: saving the translation result corresponding to the first language in the head-mounted device, so that when the head-mounted device receives the first language next time, it outputs the translation result corresponding to the first language through the head-mounted device.
  • According to another embodiment, a head-mounted device is provided, including: a receiving module, configured to receive a first language uttered by a target object; a processing module, configured to, in response to the first language, control the head-mounted device to obtain the translation result of the first language, wherein the translation result is at least used to indicate a second language corresponding to the first language; and an output module, configured to output the second language.
  • the processing module is further configured to configure the correspondence between the first language and the second language, so that the head-mounted device can obtain the second language corresponding to the first language.
  • the processing module is also used to decrypt the first language.
  • the output module is also used to encrypt the second language.
  • the receiving module is further configured to generate a normalized table according to the received first language.
  • the above-mentioned receiving module is further configured to directly receive the first language sent by the target object through the head-mounted device; or receive the first language sent by the target object forwarded by the wearable device through the head-mounted device.
  • Optionally, the above-mentioned receiving module is further configured to receive a turn-on instruction from the wearable device of the target object, wherein the turn-on instruction is used to turn on the translation function of the head-mounted device, and when the translation function is turned on, the head-mounted device is allowed to respond to the first language and acquire the translation result of the first language.
  • the aforementioned processing module is further configured to transmit the received first language to a voice server, so that the voice server determines the translation result of the first language; and receives the translation result translated by the voice server.
  • the aforementioned output module is further configured to output the second language through the earphone or speaker of the head-mounted device.
  • Optionally, the above-mentioned head-mounted device further includes a saving module, configured to save the translation result corresponding to the first language in the head-mounted device, so that when the head-mounted device receives the first language next time, it outputs the translation result corresponding to the first language through the head-mounted device.
  • a computer-readable storage medium in which a computer program is stored, wherein the computer program is configured to execute the above-mentioned language output method when running .
  • According to another embodiment, an electronic device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the above-mentioned language output method through the computer program.
  • In the embodiments of the present invention, the first language uttered by the target object is received through the head-mounted device; in response to the first language, the head-mounted device is controlled to obtain the translation result of the first language, wherein the translation result is at least used to indicate the second language corresponding to the first language; and the second language is output through the head-mounted device. This technical solution solves the problem in the related art that existing head-mounted devices have a single function and cannot perform voice translation on the speech they receive: the first language uttered by the target object and received by the head-mounted device is translated, and the second language required by the target object is output, which broadens the usage scenarios of the head-mounted device.
  • Fig. 1 is a schematic diagram of a product application of a language output method according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a terminal system framework of a smart helmet according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of the composition of a terminal system of a smart helmet according to an embodiment of the present invention.
  • Fig. 4 is a schematic diagram of an application environment of a language output method according to an embodiment of the present invention.
  • Figure 5 is a flowchart of a language output method according to an embodiment of the present invention.
  • Fig. 6 is a flowchart of an optional head-mounted device operating to execute a voice translation function application according to an embodiment of the present invention
  • Fig. 7 is a schematic structural diagram of a head-mounted device according to an embodiment of the present invention.
  • Fig. 8 is a schematic structural diagram of an optional electronic device according to an embodiment of the present invention.
  • Fig. 1 is a schematic diagram of a product application of a language output method according to an embodiment of the present invention. Taking the helmet body as a reference, the smart helmet (equivalent to the "head-mounted device" in the embodiments of the present invention) is divided into seven areas: the outer front side 1, the outer top side 2, the outer left and right sides 3, the outer rear side 4, the inner front side 5, the inner top side 6, and the inner rear side 7.
  • the outer front side is the information collection area, which is used to place the camera.
  • the outer top side is the communication area
  • the outer rear side is the energy supply area
  • the inner top side is the main board and the heat dissipation area
  • the outer left and right sides are the functional areas
  • the inner front side is the AR module.
  • the technical solution of the embodiment of the present invention is applied to the functional areas on the left and right sides of the exterior, and the language information received by the head-mounted device is translated through the main board in the functional area.
  • The terminal system framework of the smart helmet is shown in Fig. 2, with the following system layers. Functional layer: in terms of function, the smart helmet must meet the requirements of informatization, intelligence, and modernization. In addition to the most basic protection, communication, and live video functions, it also provides navigation, face recognition, ID card recognition, license plate recognition, event push, voice translation and other functions.
  • Supporting layer: in addition to the smart helmet and smart watch in the hardware part, it also includes a back-end smart helmet management system and a third-party Internet application service platform, which provide hardware and service support for the intelligentization of the terminal system of the wearable smart helmet.
  • Resource layer: the smart helmet terminal system connects, through cloud services and the smart helmet back-end server, to the face recognition retrieval database, the "one person, one file" and "one vehicle, one file" databases, the OSS cloud, RDS and other databases of the relevant platforms, truly realizing the intelligence and informatization of the wearable smart helmet.
  • the terminal system composition of the smart helmet is shown in Figure 3.
  • The smart helmet terminal system composed of the smart helmet, the smart watch, and the smart helmet management system improves the convenience of language communication between the user of the smart helmet and a target object with whom there is a language barrier, and improves communication efficiency when language barriers occur in actual work.
  • a language output method is provided.
  • the above-mentioned language output method can be, but is not limited to, applied to the application environment as shown in FIG. 4.
  • the head-mounted device 102 runs an application capable of recognizing the first language and performing translation
  • the head-mounted device 102 includes a motherboard.
  • the head-mounted device 102 may receive the first language uttered by the target object; in response to the first language, control the head-mounted device to obtain the translation result of the first language, wherein the translation result is at least used to indicate the second language corresponding to the first language; and output the second language through the head-mounted device. The server 104 may be a voice server.
  • the embodiment of the present invention does not limit this, and the above is only an example.
  • the embodiments of the present application are not limited here.
  • FIG. 5 is a flowchart of a language output method according to an embodiment of the present invention. As shown in FIG. 5, the above-mentioned language output method flow includes the following steps:
  • Step S202: receiving the first language uttered by the target object through the head-mounted device;
  • Step S204: in response to the first language, controlling the head-mounted device to obtain a translation result of the first language, wherein the translation result is at least used to indicate a second language corresponding to the first language;
  • Step S206: outputting the second language through the head-mounted device.
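  • For illustration only, the following minimal Python sketch shows one way the three steps above could be organized in software. The class and method names (HeadMountedDevice, translate, output_second_language) are assumptions made for this sketch and are not taken from the embodiments.

```python
# Hypothetical sketch of the S202/S204/S206 flow; all names are illustrative only.
from dataclasses import dataclass


@dataclass
class TranslationResult:
    source_text: str   # recognized content of the first language
    target_text: str   # corresponding second language indicated by the result


class HeadMountedDevice:
    def __init__(self, translator):
        # `translator` is any component that maps first-language input to a
        # TranslationResult (an on-device translator or a voice-server client).
        self.translator = translator

    def on_first_language_received(self, first_language: str) -> None:
        # Step S202: the first language uttered by the target object is received.
        # Step S204: in response, obtain the translation result of the first language.
        result = self.translator.translate(first_language)
        # Step S206: output the second language through the head-mounted device.
        self.output_second_language(result.target_text)

    def output_second_language(self, text: str) -> None:
        print(f"[earphone/speaker] {text}")  # placeholder for the audio output path
```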
  • Through the above steps, the first language uttered by the target object is received through the head-mounted device; in response to the first language, the head-mounted device is controlled to obtain the translation result of the first language, wherein the translation result is at least used to indicate the second language corresponding to the first language; and the second language is output through the head-mounted device. This technical solution solves the problem in the related art that existing head-mounted devices have a single function and cannot perform voice translation on the speech they receive: the first language uttered by the target object and received by the head-mounted device can be translated and the second language required by the target object can be output, which broadens the usage scenarios of the head-mounted device.
  • Optionally, before controlling the head-mounted device to obtain the translation result of the first language in response to the first language, the above method further includes: configuring the correspondence between the first language and the second language, so that the head-mounted device can acquire the second language corresponding to the first language. That is, when the first language uttered by the target object is to be translated into the second language through the head-mounted device, the correspondence between the first language and the second language needs to be configured in advance, so that the head-mounted device can accurately translate the first language into the second language required by the target object.
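  • As one possible illustration (the embodiments do not specify a data structure for the correspondence), the pre-configured mapping between the first language and the required second language could be as simple as a lookup table consulted before translation; the language codes below are examples only.

```python
# Hypothetical pre-configured correspondence between first and second languages.
LANGUAGE_CORRESPONDENCE = {
    # first language -> second language required by the target object
    "en": "zh",
    "fr": "zh",
    "zh": "en",
}


def second_language_for(first_language: str) -> str:
    # Look up the second language configured in advance for a given first language.
    try:
        return LANGUAGE_CORRESPONDENCE[first_language]
    except KeyError:
        raise ValueError(f"no correspondence configured for {first_language!r}")
```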
  • Optionally, the first language is an encrypted first language, and after responding to the first language, the above method further includes: decrypting the first language. In order to protect language security, the language of the target object obtained by the head-mounted device is the encrypted first language; when performing language translation, the first language first needs to be decrypted, and the translation is then performed according to the configured correspondence between the first language and the second language.
  • Optionally, before outputting the second language through the head-mounted device, the method further includes: encrypting the second language. In order to protect language security, when the head-mounted device outputs the translation result of the first language, the second language obtained from the translation result is encrypted.
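  • The embodiments do not name a particular encryption scheme. Purely as an illustration, a symmetric cipher such as Fernet from the third-party `cryptography` package could protect the first language on the way in and the second language on the way out; the function names and key handling below are assumptions.

```python
# Illustrative only: the encryption scheme, key handling and function names are assumptions.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice the key would be provisioned, not generated here
cipher = Fernet(key)


def decrypt_first_language(encrypted_first_language: bytes) -> bytes:
    # Decrypt the received (encrypted) first language before translating it.
    return cipher.decrypt(encrypted_first_language)


def encrypt_second_language(second_language: bytes) -> bytes:
    # Encrypt the translated second language before it is output or transmitted.
    return cipher.encrypt(second_language)
```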
  • Optionally, after receiving the first language uttered by the target object through the head-mounted device, the method further includes: generating a normalized table according to the received first language. That is, after receiving the first language uttered by the target object, the head-mounted device can build a normalized table from the feedback results for the first language, where the normalized table records language entries to which the head-mounted device can respond quickly, which improves translation efficiency.
  • receiving the first language uttered by the target object through the head-mounted device can be achieved by the following implementations: 1) Directly receiving the first language uttered by the target object through the head-mounted device, that is, through the head-mounted device itself Perform translation feedback on the language of the target object; 2) receive the first language sent by the target object forwarded by the wearable device through the head-mounted device.
  • the wearable device can be a smart watch
  • the target object can speak the first language that needs to be translated to the smart watch
  • the wearable device forwards the first language to the head-mounted device
  • the head-mounted device can execute the first language
  • the operation of translating into a second language language can also be that the headset sends the first language to the voice server, and the first language is translated through the voice server to obtain the second language.
  • Optionally, before receiving the first language uttered by the target object through the head-mounted device, the above method further includes: receiving a turn-on instruction from the wearable device of the target object, wherein the turn-on instruction is used to turn on the translation function of the head-mounted device, and when the translation function is turned on, the head-mounted device is allowed to respond to the first language and acquire the translation result of the first language. That is, according to the turn-on instruction issued via the target object's wearable device, the head-mounted device turns on the translation function; when the head-mounted device receives the first language uttered by the target object, it responds in time and translates the first language into the required second language, so that the head-mounted device can obtain the translation result of the first language for communication.
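  • As a sketch of this gating behaviour (the message format of the turn-on instruction is an assumption, since the embodiments do not define one), the instruction from the wearable device could simply set a flag that allows the translation response:

```python
# Hypothetical gating of the translation function by a turn-on instruction
# received from the target object's wearable device (e.g. a smart watch).
from typing import Optional


class TranslationFunction:
    def __init__(self):
        self.enabled = False

    def on_turn_on_instruction(self, instruction: dict) -> None:
        # Example payload forwarded by the wearable device: {"action": "enable_translation"}
        if instruction.get("action") == "enable_translation":
            self.enabled = True

    def respond(self, first_language: str, translator) -> Optional[str]:
        # Only while the translation function is turned on is the head-mounted
        # device allowed to respond to the first language and fetch its translation.
        if not self.enabled:
            return None
        return translator.translate(first_language)
```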
  • Optionally, in response to the first language, controlling the head-mounted device to obtain the translation result of the first language includes: transmitting the received first language to a voice server so that the voice server determines the translation result of the first language, and receiving the translation result produced by the voice server. That is, the received first language is uploaded to the voice server connected to the head-mounted device, the voice server searches the first language, finds the second language corresponding to the first language, completes the translation operation, and sends the translation result back to the head-mounted device. It should be noted that the language translation operation in the above embodiment is implemented by the voice server; in an optional embodiment, the received language can also be analyzed by the head-mounted device itself, that is, the head-mounted device itself has the function of translating the language.
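  • A minimal client-side sketch of this round trip is shown below, assuming a hypothetical HTTP endpoint on the voice server; the URL, JSON field names, and response format are illustrative assumptions and are not defined by the embodiments.

```python
# Illustrative upload of the first language to a voice server and retrieval of the
# translation result; the endpoint and JSON fields are hypothetical.
import json
import urllib.request

VOICE_SERVER_URL = "http://voice-server.example/translate"  # placeholder address


def request_translation(first_language_text: str, target_language: str) -> str:
    payload = json.dumps({
        "text": first_language_text,   # content of the received first language
        "target": target_language,     # second language required by the target object
    }).encode("utf-8")
    req = urllib.request.Request(
        VOICE_SERVER_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return result["translation"]       # translation result returned by the voice server
```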
  • Optionally, outputting the second language through the head-mounted device includes: outputting the second language through an earphone or a speaker of the head-mounted device. After the head-mounted device converts the first language into the second language through the translation function, in order to help the target object understand in time, the second language is converted into voice information and output through the earphone or speaker on the head-mounted device.
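  • One way to realize this voice output is with an off-the-shelf text-to-speech engine; the sketch below uses the third-party `pyttsx3` package only as an example, since the embodiments do not name a speech-synthesis component.

```python
# Illustrative conversion of the second language into voice information.
import pyttsx3  # third-party offline text-to-speech engine, used here only as an example


def speak_second_language(text: str) -> None:
    engine = pyttsx3.init()
    engine.say(text)        # queue the second-language text for synthesis
    engine.runAndWait()     # play it through the head-mounted device's earphone/speaker
```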
  • Optionally, after outputting the second language through the head-mounted device, the above method further includes: saving the translation result corresponding to the first language in the head-mounted device, so that when the head-mounted device receives the first language next time, it outputs the translation result corresponding to the first language through the head-mounted device. In other words, the head-mounted device can save the translation result corresponding to the first language locally; the next time the same first language appears, the head-mounted device can quickly output the corresponding translation result, and a pack of commonly used first-language entries is generated accordingly, which makes it convenient for the target object to communicate across different languages.
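  • For illustration, the reuse described above could be realized with a simple local cache keyed by the received first-language content; the structure below is an assumption, not the embodiments' implementation.

```python
# Hypothetical local cache of translation results for previously seen first-language inputs.
class TranslationCache:
    def __init__(self):
        self._store = {}   # first-language text -> saved translation result

    def get(self, first_language_text: str):
        # Return the saved translation result, or None if this input is new.
        return self._store.get(first_language_text)

    def save(self, first_language_text: str, translation_result: str) -> None:
        # Save the result so it can be output directly the next time the same
        # first language is received.
        self._store[first_language_text] = translation_result


cache = TranslationCache()
cache.save("你好，请问出口在哪里？", "Hello, where is the exit, please?")
assert cache.get("你好，请问出口在哪里？") is not None
```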
  • It should be noted that the above-mentioned output of the second language through the head-mounted device may specifically be output through the speaker of the head-mounted device, or the translation result may be displayed on the display screen of the head-mounted device, which is not limited in the embodiments of the present invention.
  • It should be noted that, in an optional embodiment, the computer device 104 in the above-mentioned embodiment can run, together with the above-mentioned background server, in the background of the head-mounted device, and the background client can be a client running on the computer device 104.
  • the manager can operate the computer device 104 to perform processing such as clicking and viewing on the background client.
  • In an optional embodiment of the present invention, the first language uttered by the target object is received through the head-mounted device; in response to the first language, the head-mounted device is controlled to obtain the translation result of the first language, wherein the translation result is at least used to indicate the second language corresponding to the first language and the correspondence between the first language and the second language is configured in advance; and the second language is output through the head-mounted device. Fig. 6 is a flowchart of the head-mounted device operating to execute the voice translation function application, in which the logical relationships of the head-mounted device performing the voice translation function are listed in Table 1 of the description (provided only as an image in the original publication).
  • In view of the problems and shortcomings of current security terminal devices, the embodiments and optional embodiments of the present invention provide a head-mounted device, such as a smart helmet, whose main form is an intelligent, information-based wearable device integrating cloud computing, big data, the Internet of Things, communications, artificial intelligence and augmented reality technology. This device (equivalent to the head-mounted device in the embodiments of the present invention) connects to related background systems through Bluetooth connection, voice input, manual input and other control methods, and can realize mobile communication, photo and video recording, GPS/BeiDou positioning, intelligent voice, intelligent image/video recognition and other functions. It can effectively improve the work efficiency of security personnel and enhance wearing comfort and safety, finally achieving the goal of the "three modernizations", namely: (1) intelligence: on-site voice, image and video data interact in real time with the background business systems and big data of the relevant platforms, making the security terminal intelligent, freeing the hands of security personnel, and making duty work more advanced and efficient; (2) integration: body protection, information collection and input, communication, and information feedback and output are integrated into one terminal, making duty work safer and more convenient; (3) human-centered design: technologies such as high-tech heat-insulating and cooling materials and ergonomic lightweight design make the terminal more comfortable to wear and easier to maintain.
  • Through the description of the above embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes a number of instructions for enabling a terminal device (which can be a mobile phone, a computer, a server, a network device, or the like) to execute the methods of the embodiments of the present invention.
  • Fig. 7 is a schematic structural diagram of a head-mounted device according to an embodiment of the present invention; the device includes:
  • the receiving module 42, configured to receive the first language uttered by the target object;
  • the processing module 44, configured to, in response to the first language, control the head-mounted device to obtain the translation result of the first language, wherein the translation result is at least used to indicate the second language corresponding to the first language;
  • the output module 46, configured to output the second language.
  • Through the above device, the first language uttered by the target object is received through the head-mounted device; in response to the first language, the head-mounted device is controlled to obtain the translation result of the first language, wherein the translation result is at least used to indicate the second language corresponding to the first language; and the second language is output through the head-mounted device. This technical solution solves the problem in the related art that existing head-mounted devices have a single function and cannot perform voice translation on the speech they receive: the first language uttered by the target object and received by the head-mounted device can be translated and the second language required by the target object can be output, which broadens the usage scenarios of the head-mounted device.
  • Optionally, the processing module is further configured to configure the correspondence between the first language and the second language, so that the head-mounted device obtains the second language corresponding to the first language. That is, when the first language uttered by the target object is to be translated into the second language through the head-mounted device, the correspondence between the first language and the second language needs to be configured in advance, so that the head-mounted device can accurately translate the first language into the second language required by the target object.
  • Optionally, the processing module is also used to decrypt the first language. In order to protect language security, the language of the target object acquired by the head-mounted device is the encrypted first language; when performing language translation, the first language first needs to be decrypted, and the translation is then performed according to the configured correspondence between the first language and the second language.
  • Optionally, the output module is further configured to encrypt the second language. In order to protect language security, when the head-mounted device outputs the translation result of the first language, it encrypts the second language obtained from the translation result.
  • Optionally, the receiving module is further configured to generate a normalized table according to the received first language. After receiving the first language uttered by the target object, the head-mounted device can build a normalized table from the feedback results for the first language, where the normalized table records language entries to which the head-mounted device can respond quickly, which improves translation efficiency.
  • Optionally, the above-mentioned receiving module is further configured to directly receive, through the head-mounted device, the first language uttered by the target object, that is, the head-mounted device itself translates and responds to the language of the target object; or to receive, through the head-mounted device, the first language uttered by the target object and forwarded by a wearable device. For example, the wearable device can be a smart watch: the target object speaks the first language that needs to be translated to the smart watch, the wearable device forwards the first language to the head-mounted device, and the head-mounted device then performs the operation of translating the first language into the second language.
  • Optionally, the above-mentioned receiving module is further configured to receive a turn-on instruction from the wearable device of the target object, wherein the turn-on instruction is used to turn on the translation function of the head-mounted device, and when the translation function is turned on, the head-mounted device is allowed to respond to the first language and acquire the translation result of the first language. That is, according to the turn-on instruction issued via the target object's wearable device, the head-mounted device turns on the translation function; when the head-mounted device receives the first language uttered by the target object, it responds in time and translates the first language into the required second language, so that the head-mounted device can obtain the translation result of the first language for communication.
  • Optionally, the above-mentioned processing module is further configured to transmit the received first language to a voice server so that the voice server determines the translation result of the first language, and to receive the translation result produced by the voice server. That is, the received first language is uploaded to the voice server connected to the head-mounted device, the voice server searches the first language, finds the second language corresponding to the first language, completes the translation operation, and sends the translation result back to the head-mounted device. It should be noted that the language translation operation in the above embodiment is implemented by the voice server; in an optional embodiment, the received language can also be analyzed by the head-mounted device itself, that is, the head-mounted device itself has the function of translating the language.
  • Optionally, the aforementioned output module is further configured to output the second language through the earphone or speaker of the head-mounted device. After the head-mounted device converts the first language into the second language through the translation function, in order to help the target object understand in time, the second language is converted into voice information and output through the earphone or speaker on the head-mounted device.
  • Optionally, the above-mentioned head-mounted device further includes a saving module, configured to save the translation result corresponding to the first language in the head-mounted device, so that when the head-mounted device receives the first language next time, it outputs the translation result corresponding to the first language through the head-mounted device. In other words, the head-mounted device can save the translation result corresponding to the first language locally; the next time the same first language appears, the head-mounted device can quickly output the corresponding translation result, and a pack of commonly used first-language entries is generated accordingly, which makes it convenient for the target object to communicate across different languages.
  • a storage medium in which a computer program is stored, wherein the computer program is configured to execute the steps in any one of the foregoing method embodiments when running.
  • the aforementioned storage medium may be configured to store a computer program for executing the following steps:
  • S1: receiving the first language uttered by the target object through the head-mounted device; S2: in response to the first language, controlling the head-mounted device to obtain a translation result of the first language, wherein the translation result is at least used to indicate a second language corresponding to the first language; S3: outputting the second language through the head-mounted device.
  • the storage medium may include: flash disk, ROM (Read-Only Memory), RAM (Random Access Memory), magnetic disk or optical disk, etc.
  • an electronic device for implementing the above-mentioned language output method.
  • the electronic device may be the above-mentioned head-mounted device or other devices that apply the above-mentioned language output method.
  • this is not limited in the embodiments of the present invention.
  • the electronic device includes a memory 1002 and a processor 1004.
  • the memory 1002 stores a computer program.
  • the processor 1004 is configured to execute, through the computer program, the steps in any one of the above method embodiments.
  • the above-mentioned electronic device may be located in at least one network device among a plurality of network devices in a computer network.
  • the foregoing processor may be configured to execute the following steps through a computer program:
  • S1: receiving the first language uttered by the target object through the head-mounted device; S2: in response to the first language, controlling the head-mounted device to obtain a translation result of the first language, wherein the translation result is at least used to indicate a second language corresponding to the first language; S3: outputting the second language through the head-mounted device.
  • the structure shown in FIG. 8 is only for illustration, and the electronic device may also be a smart phone (such as an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, and a mobile Internet device (Mobile Internet Devices, MID), PAD and other terminal devices.
  • FIG. 8 does not limit the structure of the above-mentioned electronic device.
  • the electronic device may also include more or fewer components (such as a network interface, etc.) than shown in FIG. 8, or have a configuration different from that shown in FIG. 8.
  • the memory 1002 can be used to store software programs and modules, such as the language output method and program instructions/modules corresponding to the head-mounted device in the embodiment of the present invention.
  • the processor 1004 runs the software programs and modules stored in the memory 1002, thereby performing various functional applications and data processing, that is, realizing the above-mentioned language output method.
  • the memory 1002 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 1002 may further include a memory remotely provided with respect to the processor 1004, and these remote memories may be connected to the terminal through a network.
  • Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • the foregoing memory 1002 may, but is not limited to, include the receiving module 42, the processing module 44, and the output module 46 in the foregoing head-mounted device. In addition, it may also include, but is not limited to, other module units in the aforementioned head-mounted device, which will not be repeated in this example.
  • the aforementioned transmission device 1006 is used to receive or send data via a network.
  • the above-mentioned specific examples of the network may include a wired network and a wireless network.
  • the transmission device 1006 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices and routers through a network cable so as to communicate with the Internet or a local area network.
  • the transmission device 1006 is a radio frequency (RF) module, which is used to communicate with the Internet in a wireless manner.
  • the above-mentioned electronic device further includes: a display 1008 for displaying the shooting results of the camera of the head-mounted device; and a connection bus 1010 for connecting various module components in the above-mentioned electronic device.
  • In other embodiments, the aforementioned terminal or server may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by the multiple nodes connected through network communication.
  • nodes can form a peer-to-peer (P2P, Peer To Peer) network, and any form of computing equipment, such as servers, terminals, and other electronic devices, can become a node in the blockchain system by joining the peer-to-peer network.
  • the storage medium may include: a flash disk, a read-only memory (Read-Only Memory, ROM), a random access device (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
  • If the integrated unit in the foregoing embodiments is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in the foregoing computer-readable storage medium.
  • the technical solution of the present invention essentially or the part that contributes to the existing technology or all or part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium, A number of instructions are included to enable one or more computer devices (which may be personal computers, servers, or network devices, etc.) to execute all or part of the steps of the methods in the various embodiments of the present invention.
  • the disclosed client can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of units is only a logical function division.
  • In actual implementation there may be other division manners; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, units or modules, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Machine Translation (AREA)

Abstract

The present invention discloses a language output method, a head-mounted device, a storage medium and an electronic device. The language output method includes: receiving, through a head-mounted device, a first language uttered by a target object; in response to the first language, controlling the head-mounted device to acquire a translation result of the first language, wherein the translation result is at least used to indicate a second language corresponding to the first language; and outputting the second language through the head-mounted device. This technical solution solves the problem in the related art that existing head-mounted devices have a single function and cannot perform voice translation on the speech they receive; the first language uttered by the target object and received by the head-mounted device can then be translated and the second language required by the target object can be output, which broadens the usage scenarios of the head-mounted device.

Description

Language output method, head-mounted device, storage medium and electronic device
Technical Field
The present invention relates to the field of communication, and in particular to a language output method, a head-mounted device, a storage medium and an electronic device.
Background
A smart helmet is an ordinary helmet that technical personnel have upgraded with high-tech products so as to provide the smart functions required by users; smart helmets have been widely used in the security industry.
At present, security personnel are in notably short supply, and front-line security personnel face heavy workloads and burdens. The helmet is one of the indispensable items of personal protective equipment when security personnel carry out their tasks. When using a smart helmet, they may still need to carry computing devices such as a PDA and a radio, which to some extent suffer from shortcomings such as a heavy load, low functional integration and a poor service experience; the environment and needs of actual situations are not sufficiently considered, the back-end system provides weak support for some of the smart helmet's services, the system responds slowly, and the application software does not fit the needs of actual situations well. Moreover, in actual work, personnel speaking different languages are encountered, and during such exchanges security personnel can neither accurately understand the information nor communicate promptly in the other language.
Therefore, to address the problem that security personnel cannot communicate effectively with people who speak different languages in the course of their work, a head-mounted device capable of supporting voice translation is urgently needed.
In the related art, existing head-mounted devices have a single function and cannot perform voice translation on the speech they receive, and no effective solution has yet been proposed.
Summary of the Invention
Embodiments of the present invention provide a language output method, a head-mounted device, a storage medium and an electronic device, so as to at least solve the technical problem in the related art that existing head-mounted devices have a single function and cannot perform voice translation on the speech they receive.
According to one embodiment of the present invention, a language output method is provided, including: receiving, through a head-mounted device, a first language uttered by a target object; in response to the first language, controlling the head-mounted device to acquire a translation result of the first language, wherein the translation result is at least used to indicate a second language corresponding to the first language; and outputting the second language through the head-mounted device.
Optionally, before controlling the head-mounted device to acquire the translation result of the first language in response to the first language, the above method further includes: configuring the correspondence between the first language and the second language, so that the head-mounted device acquires the second language corresponding to the first language.
Optionally, the first language is an encrypted first language, and after responding to the first language, the above method further includes: decrypting the first language.
Optionally, before outputting the second language through the head-mounted device, the method further includes: encrypting the second language.
Optionally, after receiving the first language uttered by the target object through the head-mounted device, the method further includes: generating a normalized table according to the received first language.
Optionally, receiving the first language uttered by the target object through the head-mounted device includes: directly receiving, through the head-mounted device, the first language uttered by the target object; or receiving, through the head-mounted device, the first language uttered by the target object and forwarded by a wearable device.
Optionally, before receiving the first language uttered by the target object through the head-mounted device, the above method further includes: receiving a turn-on instruction from the wearable device of the target object, wherein the turn-on instruction is used to turn on the translation function of the head-mounted device, and when the translation function is turned on, the head-mounted device is allowed to respond to the first language and acquire the translation result of the first language.
Optionally, in response to the first language, controlling the head-mounted device to acquire the translation result of the first language includes: transmitting the received first language to a voice server, so that the voice server determines the translation result of the first language; and receiving the translation result produced by the voice server.
Optionally, outputting the second language through the head-mounted device includes: outputting the second language through an earphone or a speaker of the head-mounted device.
Optionally, after outputting the second language through the head-mounted device, the above method further includes: saving the translation result corresponding to the first language in the head-mounted device, so that when the head-mounted device receives the first language next time, it outputs the translation result corresponding to the first language through the head-mounted device.
According to another embodiment of the present invention, a head-mounted device is provided, including: a receiving module, configured to receive a first language uttered by a target object; a processing module, configured to, in response to the first language, control the head-mounted device to acquire a translation result of the first language, wherein the translation result is at least used to indicate a second language corresponding to the first language; and an output module, configured to output the second language.
Optionally, the processing module is further configured to configure the correspondence between the first language and the second language, so that the head-mounted device acquires the second language corresponding to the first language.
Optionally, the processing module is further configured to decrypt the first language.
Optionally, the output module is further configured to encrypt the second language.
Optionally, the receiving module is further configured to generate a normalized table according to the received first language.
Optionally, the receiving module is further configured to directly receive, through the head-mounted device, the first language uttered by the target object; or to receive, through the head-mounted device, the first language uttered by the target object and forwarded by a wearable device.
Optionally, the receiving module is further configured to receive a turn-on instruction from the wearable device of the target object, wherein the turn-on instruction is used to turn on the translation function of the head-mounted device, and when the translation function is turned on, the head-mounted device is allowed to respond to the first language and acquire the translation result of the first language.
Optionally, the processing module is further configured to transmit the received first language to a voice server, so that the voice server determines the translation result of the first language, and to receive the translation result produced by the voice server.
Optionally, the output module is further configured to output the second language through an earphone or a speaker of the head-mounted device.
Optionally, the head-mounted device further includes a saving module, configured to save the translation result corresponding to the first language in the head-mounted device, so that when the head-mounted device receives the first language next time, it outputs the translation result corresponding to the first language through the head-mounted device.
According to yet another aspect of the embodiments of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, wherein the computer program is configured to execute the above language output method when run.
According to yet another aspect of the embodiments of the present invention, an electronic device is further provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the above language output method through the computer program.
In the embodiments of the present invention, the first language uttered by the target object is received through the head-mounted device; in response to the first language, the head-mounted device is controlled to acquire the translation result of the first language, wherein the translation result is at least used to indicate the second language corresponding to the first language; and the second language is output through the head-mounted device. This technical solution solves the problem in the related art that existing head-mounted devices have a single function and cannot perform voice translation on the speech they receive; the first language uttered by the target object and received by the head-mounted device can then be translated and the second language required by the target object can be output, which broadens the usage scenarios of the head-mounted device.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of the present invention and constitute a part of this application; the illustrative embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
Fig. 1 is a schematic diagram of a product application of a language output method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the terminal system framework of a smart helmet according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the terminal system composition of a smart helmet according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of an application environment of a language output method according to an embodiment of the present invention;
Fig. 5 is a flowchart of a language output method according to an embodiment of the present invention;
Fig. 6 is a flowchart of an optional head-mounted device operating to execute a voice translation function application according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a head-mounted device according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an optional electronic device according to an embodiment of the present invention.
Detailed Description of the Embodiments
In order to enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the scope of protection of the present invention.
It should be noted that the terms "first", "second" and the like in the specification, claims and drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the present invention described here can be implemented in orders other than those illustrated or described here. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device that includes a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product or device.
In order to better understand the technical solutions of the embodiments and optional embodiments of the present invention, application scenarios that may appear in the embodiments and optional embodiments of the present invention are described below, but this is not intended to limit the application of the following scenarios.
Fig. 1 is a schematic diagram of a product application of a language output method according to an embodiment of the present invention. Taking the helmet body as a reference, the smart helmet (equivalent to the "head-mounted device" in the embodiments of the present invention) is divided into seven areas: the outer front side 1, the outer top side 2, the outer left and right sides 3, the outer rear side 4, the inner front side 5, the inner top side 6 and the inner rear side 7. The outer front side is the information collection area, used to house the camera; the outer top side is the communication area; the outer rear side is the energy supply area; the inner top side is the main board and heat dissipation area; the outer left and right sides are the functional areas; and the inner front side is the AR module. Optionally, the technical solution of the embodiments of the present invention is applied to the functional areas on the outer left and right sides, and the language information received by the head-mounted device is translated by the main board in the functional area.
The terminal system framework of the smart helmet is shown in Fig. 2, and the specific system layers are as follows:
Functional layer: in terms of function, the smart helmet must meet the requirements of informatization, intelligence and modernization. In addition to the most basic protection, communication and live video functions, it also provides navigation, face recognition, ID card recognition, license plate recognition, event push, voice translation and other functions.
Supporting layer: in addition to the smart helmet and smart watch in the hardware part, it also includes a back-end smart helmet management system and a third-party Internet application service platform, which provide hardware and service support for the intelligentization of the terminal system of the wearable smart helmet.
Resource layer: the smart helmet terminal system connects, through cloud services and the smart helmet back-end server, to the face recognition retrieval database, the "one person, one file" and "one vehicle, one file" databases, the OSS cloud, RDS and other databases of the relevant platforms, truly realizing the intelligence and informatization of the wearable smart helmet.
The terminal system composition of the smart helmet is shown in Fig. 3. The smart helmet terminal system composed of the smart helmet, the smart watch and the smart helmet management system improves the convenience of language communication between the user of the smart helmet and a target object with whom there is a language barrier, and improves communication efficiency when language barriers occur in actual work.
According to one aspect of the embodiments of the present invention, a language output method is provided. Optionally, the above language output method can be, but is not limited to being, applied in the application environment shown in Fig. 4. As shown in Fig. 4, the head-mounted device 102 runs an application capable of recognizing the first language and performing translation, and the head-mounted device 102 includes a main board. The head-mounted device 102 may receive, through the head-mounted device, the first language uttered by the target object; in response to the first language, control the head-mounted device to acquire the translation result of the first language, wherein the translation result is at least used to indicate the second language corresponding to the first language; and output the second language through the head-mounted device. The server 104 may be a voice server. The embodiments of the present invention do not limit this; the above is only an example, and the embodiments of the present application are not limited here.
In this embodiment, a language output method is provided. Fig. 5 is a flowchart of a language output method according to an embodiment of the present invention. As shown in Fig. 5, the above language output method includes the following steps:
Step S202: receiving, through the head-mounted device, the first language uttered by the target object;
Step S204: in response to the first language, controlling the head-mounted device to acquire a translation result of the first language, wherein the translation result is at least used to indicate a second language corresponding to the first language;
Step S206: outputting the second language through the head-mounted device.
Through the above steps, the first language uttered by the target object is received through the head-mounted device; in response to the first language, the head-mounted device is controlled to acquire the translation result of the first language, wherein the translation result is at least used to indicate the second language corresponding to the first language; and the second language is output through the head-mounted device. This technical solution solves the problem in the related art that existing head-mounted devices have a single function and cannot perform voice translation on the speech they receive; the first language uttered by the target object and received by the head-mounted device can then be translated and the second language required by the target object can be output, which broadens the usage scenarios of the head-mounted device.
Optionally, before controlling the head-mounted device to acquire the translation result of the first language in response to the first language, the above method further includes: configuring the correspondence between the first language and the second language, so that the head-mounted device acquires the second language corresponding to the first language. That is, when the first language uttered by the target object is to be translated into the second language through the head-mounted device, the correspondence between the first language and the second language needs to be configured in advance, so that the head-mounted device can accurately translate the first language into the second language required by the target object.
Optionally, the first language is an encrypted first language, and after responding to the first language, the above method further includes: decrypting the first language. In order to protect language security, the language of the target object acquired by the head-mounted device is the encrypted first language; when performing language translation, the first language needs to be decrypted first, and the translation is then performed according to the configured correspondence between the first language and the second language.
Optionally, before outputting the second language through the head-mounted device, the method further includes: encrypting the second language. In order to protect language security, when the head-mounted device outputs the translation result of the first language, the second language obtained from the translation result is encrypted.
Optionally, after receiving the first language uttered by the target object through the head-mounted device, the method further includes: generating a normalized table according to the received first language. That is, after receiving the first language uttered by the target object, the head-mounted device can build a normalized table from the feedback results for the first language, where the normalized table records language entries to which the head-mounted device can respond quickly, which improves translation efficiency.
Optionally, receiving the first language uttered by the target object through the head-mounted device can be achieved in the following ways: 1) directly receiving the first language uttered by the target object through the head-mounted device, that is, the head-mounted device itself translates and responds to the language of the target object; or 2) receiving, through the head-mounted device, the first language uttered by the target object and forwarded by a wearable device. For example, the wearable device can be a smart watch: the target object speaks the first language that needs to be translated to the smart watch, the wearable device forwards the first language to the head-mounted device, and the head-mounted device then performs the operation of translating the first language into the second language; alternatively, the head-mounted device sends the first language to a voice server, and the first language is translated by the voice server to obtain the second language.
Optionally, before receiving the first language uttered by the target object through the head-mounted device, the above method further includes: receiving a turn-on instruction from the wearable device of the target object, wherein the turn-on instruction is used to turn on the translation function of the head-mounted device, and when the translation function is turned on, the head-mounted device is allowed to respond to the first language and acquire the translation result of the first language. That is, according to the turn-on instruction issued via the target object's wearable device, the head-mounted device turns on the translation function; when the head-mounted device receives the first language uttered by the target object, it responds in time and translates the first language into the required second language, so that the head-mounted device can obtain the translation result of the first language for communication.
Optionally, in response to the first language, controlling the head-mounted device to acquire the translation result of the first language includes: transmitting the received first language to a voice server so that the voice server determines the translation result of the first language, and receiving the translation result produced by the voice server. That is, the received first language is uploaded to the voice server connected to the head-mounted device, the voice server searches the first language, finds the second language corresponding to the first language, completes the translation operation, and sends the translation result back to the head-mounted device.
It should be noted that the language translation operation in the above embodiment is implemented by the voice server; in an optional embodiment, the received language can also be analyzed by the head-mounted device itself, that is, the head-mounted device itself has the function of translating the language.
Optionally, outputting the second language through the head-mounted device includes: outputting the second language through an earphone or a speaker of the head-mounted device. After the head-mounted device converts the first language into the second language through the translation function, in order to help the target object understand in time, the second language is converted into voice information and output through the earphone or speaker on the head-mounted device.
Optionally, after outputting the second language through the head-mounted device, the above method further includes: saving the translation result corresponding to the first language in the head-mounted device, so that when the head-mounted device receives the first language next time, it outputs the translation result corresponding to the first language through the head-mounted device.
The head-mounted device can also save the translation result corresponding to the first language locally; the next time the same first language appears, the head-mounted device can quickly output the corresponding translation result, and a pack of commonly used first-language entries is generated accordingly, which makes it convenient for the target object to communicate across different languages.
It should be noted that the above output of the second language through the head-mounted device may specifically be output through the speaker of the head-mounted device, or the translation result may be displayed on the display screen of the head-mounted device, which is not limited in the embodiments of the present invention.
It should be noted that, in an optional embodiment, the computer device 104 in the above embodiment may run, together with the above back-end server, in the background of the head-mounted device; the back-end client may be a client running on the computer device 104, and an administrator may operate the computer device 104 to perform processing such as clicking and viewing on the back-end client.
In order to better understand the flow of the above language output method, the following description is given with reference to an optional embodiment, which is not intended to limit the technical solutions of the embodiments of the present invention.
In an optional embodiment of the present invention, the first language uttered by the target object is received through the head-mounted device; in response to the first language, the head-mounted device is controlled to acquire the translation result of the first language, wherein the translation result is at least used to indicate the second language corresponding to the first language, and the correspondence between the first language and the second language is configured in advance; and the second language is output through the head-mounted device. Fig. 6 is a flowchart of the head-mounted device operating to execute the voice translation function application, in which the logical relationships of the head-mounted device performing the voice translation function are as shown in Table 1 below:
Table 1 (provided only as images, Figure PCTCN2020093753-appb-000001 and Figure PCTCN2020093753-appb-000002, in the original publication)
Through the description of the relevant application scenarios and the optional embodiments of the present invention, and in view of the problems and shortcomings of current security terminal devices, the embodiments and optional embodiments of the present invention provide a head-mounted device, with the smart helmet as its main form: an intelligent, information-based wearable device integrating cloud computing, big data, the Internet of Things, communications, artificial intelligence and augmented reality technology. This device (equivalent to the head-mounted device of the embodiments of the present invention) connects to related back-end systems through control methods such as Bluetooth connection, voice input and manual input, and can realize mobile communication, photo and video recording, GPS/BeiDou positioning, intelligent voice, intelligent image/video recognition and other functions. It can effectively improve the work efficiency of security personnel and enhance wearing comfort and safety, finally achieving the goal of the "three modernizations", namely:
(1) Intelligence: on-site voice, image and video data interact and associate in real time with the back-end business systems and big data of the relevant platforms, making the security terminal intelligent, freeing the hands of security personnel and making duty work more advanced and efficient;
(2) Integration: body protection, information collection and input, communication, and information feedback and output are integrated into one terminal, making duty work safer and more convenient;
(3) Human-centered design: technologies such as high-tech heat-insulating and cooling materials and ergonomic lightweight design make the terminal more user-friendly, so that it is more comfortable for personnel to wear and easier to maintain.
Through the description of the above embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes a number of instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods of the embodiments of the present invention.
This embodiment also provides a head-mounted device, which is used to implement the above embodiments and preferred implementations; what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and conceivable. Fig. 7 is a schematic structural diagram of a head-mounted device according to an embodiment of the present invention; the device includes:
(1) a receiving module 42, configured to receive a first language uttered by a target object;
(2) a processing module 44, configured to, in response to the first language, control the head-mounted device to acquire a translation result of the first language, wherein the translation result is at least used to indicate a second language corresponding to the first language;
(3) an output module 46, configured to output the second language.
Through the above device, the first language uttered by the target object is received through the head-mounted device; in response to the first language, the head-mounted device is controlled to acquire the translation result of the first language, wherein the translation result is at least used to indicate the second language corresponding to the first language; and the second language is output through the head-mounted device. This technical solution solves the problem in the related art that existing head-mounted devices have a single function and cannot perform voice translation on the speech they receive; the first language uttered by the target object and received by the head-mounted device can then be translated and the second language required by the target object can be output, which broadens the usage scenarios of the head-mounted device.
Optionally, the processing module is further configured to configure the correspondence between the first language and the second language, so that the head-mounted device acquires the second language corresponding to the first language. That is, when the first language uttered by the target object is to be translated into the second language through the head-mounted device, the correspondence between the first language and the second language needs to be configured in advance, so that the head-mounted device can accurately translate the first language into the second language required by the target object.
Optionally, the processing module is further configured to decrypt the first language. In order to protect language security, the language of the target object acquired by the head-mounted device is the encrypted first language; when performing language translation, the first language needs to be decrypted first, and the translation is then performed according to the configured correspondence between the first language and the second language.
Optionally, the output module is further configured to encrypt the second language. In order to protect language security, when the head-mounted device outputs the translation result of the first language, the second language obtained from the translation result is encrypted.
Optionally, the receiving module is further configured to generate a normalized table according to the received first language information. After receiving the first language uttered by the target object, the head-mounted device can build a normalized table from the feedback results for the first language, where the normalized table records language entries to which the head-mounted device can respond quickly, which improves translation efficiency.
Optionally, the receiving module is further configured to directly receive, through the head-mounted device, the first language uttered by the target object, that is, the head-mounted device itself translates and responds to the language of the target object; or to receive, through the head-mounted device, the first language uttered by the target object and forwarded by a wearable device. For example, the wearable device can be a smart watch: the target object speaks the first language that needs to be translated to the smart watch, the wearable device forwards the first language to the head-mounted device, and the head-mounted device then performs the operation of translating the first language into the second language.
Optionally, the receiving module is further configured to receive a turn-on instruction from the wearable device of the target object, wherein the turn-on instruction is used to turn on the translation function of the head-mounted device, and when the translation function is turned on, the head-mounted device is allowed to respond to the first language and acquire the translation result of the first language.
That is, according to the turn-on instruction issued via the target object's wearable device, the head-mounted device turns on the translation function; when the head-mounted device receives the first language uttered by the target object, it responds in time and translates the first language into the required second language, so that the head-mounted device can obtain the translation result of the first language for communication.
Optionally, the processing module is further configured to transmit the received first language to a voice server so that the voice server determines the translation result of the first language, and to receive the translation result produced by the voice server. That is, the received first language is uploaded to the voice server connected to the head-mounted device, the voice server searches the first language, finds the second language corresponding to the first language, completes the translation operation, and sends the translation result back to the head-mounted device.
It should be noted that the language translation operation in the above embodiment is implemented by the voice server; in an optional embodiment, the received language can also be analyzed by the head-mounted device itself, that is, the head-mounted device itself has the function of translating the language.
Optionally, the output module is further configured to output the second language through an earphone or a speaker of the head-mounted device. After the head-mounted device converts the first language into the second language through the translation function, in order to help the target object understand in time, the second language is converted into voice information and output through the earphone or speaker on the head-mounted device.
Optionally, the head-mounted device further includes a saving module, configured to save the translation result corresponding to the first language in the head-mounted device, so that when the head-mounted device receives the first language next time, it outputs the translation result corresponding to the first language through the head-mounted device.
The head-mounted device can also save the translation result corresponding to the first language locally; the next time the same first language appears, the head-mounted device can quickly output the corresponding translation result, and a pack of commonly used first-language entries is generated accordingly, which makes it convenient for the target object to communicate across different languages.
According to yet another aspect of the embodiments of the present invention, a storage medium is further provided, in which a computer program is stored, wherein the computer program is configured to execute the steps in any one of the above method embodiments when run.
Optionally, in this embodiment, the above storage medium may be configured to store a computer program for executing the following steps:
S1: receiving, through the head-mounted device, the first language uttered by the target object;
S2: in response to the first language, controlling the head-mounted device to acquire a translation result of the first language, wherein the translation result is at least used to indicate a second language corresponding to the first language;
S3: outputting the second language through the head-mounted device.
Optionally, in this embodiment, those of ordinary skill in the art can understand that all or part of the steps of the methods of the above embodiments can be completed by instructing the hardware related to a terminal device through a program, which can be stored in a computer-readable storage medium; the storage medium may include a flash disk, a ROM (Read-Only Memory), a RAM (Random Access Memory), a magnetic disk, an optical disk, and the like.
According to yet another aspect of the embodiments of the present invention, an electronic device for implementing the above language output method is further provided. The electronic device may be the above head-mounted device or another device to which the above language output method is applied, which is not limited in the embodiments of the present invention. As shown in Fig. 8, the electronic device includes a memory 1002 and a processor 1004; the memory 1002 stores a computer program, and the processor 1004 is configured to execute, through the computer program, the steps in any one of the above method embodiments.
Optionally, in this embodiment, the above electronic device may be located in at least one of a plurality of network devices in a computer network.
Optionally, in this embodiment, the above processor may be configured to execute the following steps through the computer program:
S1: receiving, through the head-mounted device, the first language uttered by the target object;
S2: in response to the first language, controlling the head-mounted device to acquire a translation result of the first language, wherein the translation result is at least used to indicate a second language corresponding to the first language;
S3: outputting the second language through the head-mounted device.
Optionally, those of ordinary skill in the art can understand that the structure shown in Fig. 8 is only illustrative; the electronic device may also be a terminal device such as a smartphone (for example, an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID) or a PAD. Fig. 8 does not limit the structure of the above electronic device. For example, the electronic device may also include more or fewer components (such as a network interface) than shown in Fig. 8, or have a configuration different from that shown in Fig. 8.
The memory 1002 can be used to store software programs and modules, such as the program instructions/modules corresponding to the language output method and head-mounted device in the embodiments of the present invention. The processor 1004 runs the software programs and modules stored in the memory 1002, thereby performing various functional applications and data processing, that is, realizing the above language output method. The memory 1002 may include a high-speed random access memory and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1002 may further include memories remotely provided with respect to the processor 1004, and these remote memories may be connected to the terminal through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network and combinations thereof. As an example, the above memory 1002 may include, but is not limited to, the receiving module 42, the processing module 44 and the output module 46 of the above head-mounted device. In addition, it may also include, but is not limited to, other module units of the above head-mounted device, which are not repeated in this example.
Optionally, the above transmission device 1006 is used to receive or send data via a network. Specific examples of the above network may include wired networks and wireless networks. In one example, the transmission device 1006 includes a network interface controller (NIC), which can be connected to other network devices and routers through a network cable so as to communicate with the Internet or a local area network. In one example, the transmission device 1006 is a radio frequency (RF) module, which is used to communicate with the Internet wirelessly.
In addition, the above electronic device further includes: a display 1008, used to display the shooting results of the camera of the head-mounted device; and a connection bus 1010, used to connect the module components in the above electronic device.
In other embodiments, the above terminal or server may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by the plurality of nodes connected through network communication. The nodes may form a peer-to-peer (P2P) network, and any form of computing device, such as a server, a terminal or another electronic device, can become a node in the blockchain system by joining the peer-to-peer network.
Optionally, in this embodiment, those of ordinary skill in the art can understand that all or part of the steps of the methods of the above embodiments can be completed by instructing the hardware related to a terminal device through a program, which can be stored in a computer-readable storage medium; the storage medium may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.
If the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the above computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for enabling one or more computer devices (which may be personal computers, servers, network devices, or the like) to execute all or part of the steps of the methods of the embodiments of the present invention.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis. For parts not detailed in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed client may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units or modules, and may be in electrical or other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The above are only preferred embodiments of the present invention. It should be pointed out that for those of ordinary skill in the art, several improvements and refinements can be made without departing from the principles of the present invention, and these improvements and refinements should also be regarded as falling within the scope of protection of the present invention.

Claims (12)

  1. A language output method, characterized by comprising:
    receiving, through a head-mounted device, a first language uttered by a target object;
    in response to the first language, controlling the head-mounted device to acquire a translation result of the first language, wherein the translation result is at least used to indicate a second language corresponding to the first language; and
    outputting the second language through the head-mounted device.
  2. The language output method according to claim 1, wherein, before controlling the head-mounted device to acquire the translation result of the first language in response to the first language, the method further comprises:
    configuring the correspondence between the first language and the second language, so that the head-mounted device acquires the second language corresponding to the first language.
  3. The language output method according to claim 1, characterized in that the first language is an encrypted first language, and after responding to the first language, the method further comprises:
    decrypting the first language.
  4. The language output method according to claim 1, characterized in that, before outputting the second language through the head-mounted device, the method further comprises:
    encrypting the second language.
  5. The language output method according to claim 1, characterized in that, after receiving the first language uttered by the target object through the head-mounted device, the method further comprises:
    generating a normalized table according to the received first language.
  6. The method according to claim 1, characterized in that receiving the first language uttered by the target object through the head-mounted device comprises:
    directly receiving, through the head-mounted device, the first language uttered by the target object; or
    receiving, through the head-mounted device, the first language uttered by the target object and forwarded by a wearable device.
  7. The method according to claim 1, characterized in that, before receiving the first language uttered by the target object through the head-mounted device, the method further comprises:
    receiving a turn-on instruction from the wearable device of the target object, wherein the turn-on instruction is used to turn on the translation function of the head-mounted device, and when the translation function is turned on, the head-mounted device is allowed to respond to the first language and to be controlled to acquire the translation result of the first language.
  8. The method according to claim 1, characterized in that, in response to the first language, controlling the head-mounted device to acquire the translation result of the first language comprises:
    transmitting the received first language to a voice server, so that the voice server determines the translation result of the first language; and
    receiving the translation result produced by the voice server.
  9. The method according to claim 1, characterized in that outputting the second language through the head-mounted device comprises:
    outputting the second language through an earphone or a speaker of the head-mounted device.
  10. The method according to any one of claims 1 to 9, characterized in that, after outputting the second language through the head-mounted device, the method further comprises:
    saving the translation result corresponding to the first language in the head-mounted device, so that when the head-mounted device receives the first language next time, it outputs the translation result corresponding to the first language through the head-mounted device.
  11. A head-mounted device, characterized by comprising:
    a receiving module, configured to receive a first language uttered by a target object;
    a processing module, configured to, in response to the first language, control the head-mounted device to acquire a translation result of the first language, wherein the translation result is at least used to indicate a second language corresponding to the first language; and
    an output module, configured to output the second language.
  12. A computer-readable storage medium, comprising a stored program, wherein when the program runs, the method according to any one of claims 1 to 10 is executed.
PCT/CN2020/093753 2020-03-27 2020-06-01 Language output method, head-mounted device, storage medium and electronic device WO2021189652A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010231570.7A CN111476040A (zh) 2020-03-27 2020-03-27 Language output method, head-mounted device, storage medium and electronic device
CN202010231570.7 2020-03-27

Publications (1)

Publication Number Publication Date
WO2021189652A1 (zh) 2021-09-30

Family

ID=71747929

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/093753 WO2021189652A1 (zh) 2020-03-27 2020-06-01 Language output method, head-mounted device, storage medium and electronic device

Country Status (2)

Country Link
CN (1) CN111476040A (zh)
WO (1) WO2021189652A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100250231A1 (en) * 2009-03-07 2010-09-30 Voice Muffler Corporation Mouthpiece with sound reducer to enhance language translation
CN107590135A (zh) * 2016-07-07 2018-01-16 三星电子株式会社 自动翻译方法、设备和系统
CN107832309A (zh) * 2017-10-18 2018-03-23 广东小天才科技有限公司 一种语言翻译的方法、装置、可穿戴设备及存储介质
US20180322875A1 (en) * 2016-07-08 2018-11-08 Panasonic Intellectual Property Management Co., Ltd. Translation device
CN108923810A (zh) * 2018-06-15 2018-11-30 Oppo广东移动通信有限公司 翻译方法及相关设备
CN109067965A (zh) * 2018-06-15 2018-12-21 Oppo广东移动通信有限公司 翻译方法、翻译装置、可穿戴装置及存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6457706B1 (ja) * 2018-03-26 2019-02-06 株式会社フォルテ 翻訳システム、翻訳方法、及び翻訳装置
CN108710615B (zh) * 2018-05-03 2020-03-03 Oppo广东移动通信有限公司 翻译方法及相关设备
CN109275057A (zh) * 2018-08-31 2019-01-25 歌尔科技有限公司 一种翻译耳机语音输出方法、系统及翻译耳机和存储介质
CN110889294A (zh) * 2018-09-06 2020-03-17 重庆好德译信息技术有限公司 一种在短时间内提供准确翻译的辅助系统及方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100250231A1 (en) * 2009-03-07 2010-09-30 Voice Muffler Corporation Mouthpiece with sound reducer to enhance language translation
CN107590135A (zh) * 2016-07-07 2018-01-16 三星电子株式会社 自动翻译方法、设备和系统
US20180322875A1 (en) * 2016-07-08 2018-11-08 Panasonic Intellectual Property Management Co., Ltd. Translation device
CN107832309A (zh) * 2017-10-18 2018-03-23 广东小天才科技有限公司 一种语言翻译的方法、装置、可穿戴设备及存储介质
CN108923810A (zh) * 2018-06-15 2018-11-30 Oppo广东移动通信有限公司 翻译方法及相关设备
CN109067965A (zh) * 2018-06-15 2018-12-21 Oppo广东移动通信有限公司 翻译方法、翻译装置、可穿戴装置及存储介质

Also Published As

Publication number Publication date
CN111476040A (zh) 2020-07-31

Similar Documents

Publication Publication Date Title
Satyanarayanan et al. An open ecosystem for mobile-cloud convergence
WO2021189650A1 (zh) Real-time video stream display method, head-mounted device, storage medium and electronic device
US20220256647A1 (en) Systems and Methods for Collaborative Edge Computing
JP6442441B2 (ja) Translation system and translation method
US20150172538A1 (en) Wearable Camera Systems
WO2021136114A1 (zh) Method for occupying device, and electronic device
US10440103B2 (en) Method and apparatus for digital media control rooms
US11823675B2 (en) Display mode dependent response generation with latency considerations
CN109670159A (zh) View creation and management method and apparatus, electronic device and storage medium
US9332580B2 (en) Methods and apparatus for forming ad-hoc networks among headset computers sharing an identifier
WO2021189648A1 (zh) Trajectory information display method, head-mounted device, storage medium and electronic device
WO2021189647A1 (zh) Multimedia information determination method, head-mounted device, storage medium and electronic device
WO2021189652A1 (zh) Language output method, head-mounted device, storage medium and electronic device
US20220113690A1 (en) Method and system for providing energy audits
CN114025215A (zh) File processing method, mobile terminal and storage medium
WO2023093092A1 (zh) Conference recording method, terminal device and conference recording system
WO2021189651A1 (zh) Trajectory determination method and apparatus for head-mounted device, storage medium and electronic apparatus
CN113554542A (zh) Emergency command and dispatch platform for sudden public health events based on a 5G mobile epidemiological investigation device
KR102254821B1 (ko) Interactive content providing system
WO2021189653A1 (zh) Call process handling method, head-mounted device, storage medium and electronic device
KR102621301B1 (ko) Smart work support system for metaverse-based contactless office sales
Matthews et al. Cost-Effective Medical Robotic Telepresence Solution using Plastic Mannequin.
CN114765675A (zh) Command center control method, apparatus and system
US20230127607A1 (en) Methods, devices, and computer program products for authenticating peripheral device
WO2022170156A1 (en) Systems and methods for collaborative edge computing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20927690

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20927690

Country of ref document: EP

Kind code of ref document: A1