CN111476040A - Language output method, head-mounted device, storage medium, and electronic device - Google Patents

Language output method, head-mounted device, storage medium, and electronic device

Info

Publication number
CN111476040A
CN111476040A (application CN202010231570.7A)
Authority
CN
China
Prior art keywords
language
head
mounted device
target object
translation result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010231570.7A
Other languages
Chinese (zh)
Inventor
刘若鹏
栾琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Kuang Chi Space Technology Co Ltd
Original Assignee
Shenzhen Kuang Chi Super Material Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Kuang Chi Super Material Technology Co ltd
Priority to CN202010231570.7A
Priority to PCT/CN2020/093753 (published as WO2021189652A1)
Publication of CN111476040A
Legal status: Pending

Classifications

    • A - HUMAN NECESSITIES
    • A42 - HEADWEAR
    • A42B - HATS; HEAD COVERINGS
    • A42B3/00 - Helmets; Helmet covers; Other protective head coverings
    • A42B3/04 - Parts, details or accessories of helmets
    • A42B3/0406 - Accessories for helmets
    • A - HUMAN NECESSITIES
    • A42 - HEADWEAR
    • A42B - HATS; HEAD COVERINGS
    • A42B3/00 - Helmets; Helmet covers; Other protective head coverings
    • A42B3/04 - Parts, details or accessories of helmets
    • A42B3/30 - Mounting radio sets or communication systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 - Sound input; Sound output
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/005 - Language recognition
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/28 - Constructional details of speech recognition systems
    • G10L15/30 - Distributed recognition, e.g. in client-server systems, for mobile phones or network applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Theoretical Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a language output method, a head-mounted device, a storage medium, and an electronic device. The language output method includes: receiving, by a head-mounted device, a first language uttered by a target object; controlling the head-mounted device, in response to the first language, to obtain a translation result of the first language, where the translation result at least indicates a second language corresponding to the first language; and outputting the second language through the head-mounted device. This technical solution addresses the problems in the related art that existing head-mounted devices have a single function and cannot perform speech translation on received speech: the head-mounted device can translate the first language uttered by the target object into the second language the target object needs and output it, broadening the usage scenarios of head-mounted devices.

Description

Language output method, head-mounted device, storage medium, and electronic device
Technical Field
The invention relates to the field of communication, in particular to a language output method, a head-mounted device, a storage medium and an electronic device.
Background
A smart helmet is an ordinary helmet that technicians have upgraded with high-tech components to provide the intelligent functions users require; smart helmets are widely used in the security industry.
At present, security personnel are in clear short supply, and front-line security personnel face high working intensity and a heavy burden. A helmet is one of the indispensable pieces of personal protective equipment when security personnel carry out tasks, yet when using a smart helmet they may still need to carry a Personal Digital Assistant (PDA), a radio, and other computing devices, which to some extent brings drawbacks such as extra load, low functional integration, and poor operational experience.
Therefore, to solve the problem that security personnel cannot communicate effectively with speakers of different languages in their work, a head-mounted device that supports speech translation is urgently needed.
For the problems in the related art that existing head-mounted devices have a single function and cannot perform speech translation on received speech, no effective solution has yet been proposed.
Disclosure of Invention
The embodiments of the invention provide a language output method, a head-mounted device, a storage medium, and an electronic device, which at least solve the technical problems in the related art that existing head-mounted devices have a single function and cannot perform speech translation on received speech.
According to an embodiment of the present invention, there is provided a language output method including: receiving, by a headset, a first language spoken by a target object; controlling the head-mounted device to obtain a translation result of the first language in response to the first language, wherein the translation result is at least used for indicating a second language corresponding to the first language; outputting, by the headset device, the second language.
Optionally, before controlling the headset to obtain the translation result of the first language in response to the first language, the method further includes: and configuring the corresponding relation between the first language and the second language so that the head-mounted device acquires the second language corresponding to the first language.
Optionally, the first language is an encrypted first language, and after responding to the first language, the method further includes: decrypting the first language.
Optionally, before outputting the second language through the headset, the method further includes: encrypting the second language.
Optionally, after receiving, by the head-mounted device, the first language spoken by the target object, the method further includes: generating a normalized table from the received first language.
Optionally, receiving, by the head-mounted device, the first language spoken by the target object includes: receiving, directly through the head-mounted device, a first language spoken by the target object; or receiving, through the head-mounted device, a first language uttered by the target object and forwarded by a wearable device.
Optionally, before receiving, by the headset, the first language spoken by the target object, the method further includes: receiving an opening instruction of the wearable device of the target object, wherein the opening instruction is used for opening a translation function of the head-mounted device, and when the translation function is opened, the head-mounted device is allowed to respond to the first language and control the head-mounted device to acquire a translation result of the first language.
Optionally, controlling the headset to obtain the translation result of the first language in response to the first language includes: transmitting the received first language to a voice server so that the voice server determines a translation result of the first language; and receiving a translation result translated by the voice server.
Optionally, outputting, by the headset device, the second language includes: outputting the second language through headphones or speakers of the headset.
Optionally, after the second language is output by the head-mounted device, the method further includes: storing, by the head-mounted device, the translation result corresponding to the first language, so that the next time the head-mounted device receives the first language, it outputs the stored translation result corresponding to the first language.
According to another embodiment of the present invention, there is provided a head-mounted device including: a receiving module, configured to receive a first language uttered by a target object; a processing module, configured to, in response to the first language, control the head-mounted device to obtain a translation result of the first language, where the translation result at least indicates a second language corresponding to the first language; and an output module, configured to output the second language.
Optionally, the processing module is further configured to configure a corresponding relationship between the first language and the second language, so that the headset acquires the second language corresponding to the first language.
Optionally, the processing module is further configured to decrypt the first language.
Optionally, the output module is further configured to encrypt the second language.
Optionally, the receiving module is further configured to generate a normalized table according to the received first language.
Optionally, the receiving module is further configured to directly receive, through the headset, a first language spoken by the target object; or receiving, by the headset device, a first language emitted by a target object forwarded by the wearable device.
Optionally, the receiving module is further configured to receive an opening instruction of the wearable device of the target object, where the opening instruction is used to open a translation function of the head-mounted device, and when the translation function is opened, the head-mounted device is allowed to respond to the first language and control the head-mounted device to obtain a translation result of the first language.
Optionally, the processing module is further configured to transmit the received first language to a voice server, so that the voice server determines a translation result of the first language; and receiving a translation result translated by the voice server.
Optionally, the output module is further configured to output the second language through an earphone or a speaker of the headset.
Optionally, the headset further includes a saving module, configured to save, by the headset, the translation result corresponding to the first language, so that when the headset receives the first language next time, the translation result corresponding to the first language is output by the headset.
According to still another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to execute the above-mentioned language output method when running.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the language output method through the computer program.
In the embodiment of the invention, a first language uttered by a target object is received through a head-mounted device; the head-mounted device is controlled, in response to the first language, to obtain a translation result of the first language, where the translation result at least indicates a second language corresponding to the first language; and the second language is output through the head-mounted device. This technical solution solves the problems in the related art that existing head-mounted devices have a single function and cannot perform speech translation on received speech: the device can translate the first language uttered by the target object into the second language the target object needs and output it, broadening the usage scenarios of head-mounted devices.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of a language output method in a product application according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a terminal system framework of an intelligent helmet according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a terminal system of an intelligent helmet according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an application environment of a language output method according to an embodiment of the present invention;
FIG. 5 is a flow diagram of a method of language output according to an embodiment of the present invention;
FIG. 6 is a flowchart of a head-mounted device performing the speech translation function in an alternative embodiment of the present invention;
FIG. 7 is a schematic diagram of a headset configuration according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an alternative electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In order to better understand the technical solutions of the embodiments and the alternative embodiments of the present invention, the following description is made on possible application scenarios in the embodiments and the alternative embodiments of the present invention, but is not limited to the application of the following scenarios.
FIG. 1 is a schematic diagram of a language output method in a product application according to an embodiment of the present invention; the intelligent helmet (corresponding to the "head-mounted device" in the embodiment of the present invention) is divided into seven regions, namely, an external front side 1, an external top side 2, an external left and right sides 3, an external rear side 4, an internal front side 5, an internal top side 6, and an internal rear side 7, by using the helmet body as a reference. Optionally, the technical scheme of the embodiment of the invention is applied to the functional areas on the left and right sides of the outside, and language information received by the head-mounted device is translated through the main board in the functional area.
A schematic diagram of a terminal system framework of the intelligent helmet is shown in fig. 2, and specific system layers are as follows:
Functional layer: functionally, the smart helmet meets the requirements of informatization, intelligence, and modernization; it provides not only the basic functions of protection, communication, and live video, but also navigation, face recognition, ID-card recognition, license-plate recognition, event pushing, speech translation, and other functions.
Support layer: in addition to the smart helmet and smart watch hardware, the layer further includes a background smart-helmet management system and a third-party Internet application service platform, providing hardware and service support for realizing an intelligent wearable smart-helmet terminal system.
Resource layer: through cloud services and the smart-helmet background server, the smart-helmet terminal system connects to the system's face-recognition retrieval database, the "one person, one file" and "one car, one file" databases of related platforms, OSS cloud storage, RDS, and other databases, truly realizing the intelligence and informatization of the wearable smart helmet.
The terminal system of the smart helmet is shown in fig. 3. The smart-helmet terminal system, composed of the smart helmet, the smart watch, and the smart-helmet management system, improves the convenience of communication between the smart-helmet wearer and target objects who speak a different language, and improves communication efficiency when a language barrier arises in actual work.
According to an aspect of an embodiment of the present invention, there is provided a language output method. Optionally, the language output method may be applied, but is not limited, to the application environment shown in fig. 4. As shown in fig. 4, the head-mounted device 102 runs an application capable of recognizing and translating a first language, and the head-mounted device 102 includes a motherboard. The head-mounted device 102 may receive a first language spoken by a target object; in response to the first language, the head-mounted device is controlled to obtain a translation result of the first language, where the translation result at least indicates a second language corresponding to the first language; the head-mounted device then outputs the second language. The server 104 may be a voice server. The above is only an example, and the embodiment of the present invention is not limited thereto.
In this embodiment, a language output method is provided, and fig. 5 is a flowchart of a language output method according to an embodiment of the present invention, and as shown in fig. 5, the flow of the language output method includes the following steps:
step S202, receiving a first language emitted by a target object through a head-mounted device;
step S204, responding to the first language, controlling the head-mounted device to obtain a translation result of the first language, wherein the translation result is at least used for indicating a second language corresponding to the first language;
step S206, outputting the second language through the head-mounted device.
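Steps S202 to S206 can be sketched end to end as a minimal pipeline. The phrase table, the function names, and the sample phrase pair below are illustrative stand-ins, not part of the patent:

```python
def receive_first_language(audio_source):
    # Step S202: receive the first-language utterance from the target object.
    return audio_source()

def translate(first_language_text, phrase_table):
    # Step S204: obtain the translation result indicating the second language.
    # Unknown phrases pass through unchanged in this sketch.
    return phrase_table.get(first_language_text, first_language_text)

def output_second_language(text, sink):
    # Step S206: output the second language through the head-mounted device.
    sink(text)

# Illustrative usage with a stand-in phrase table and output sink.
phrase_table = {"bonjour": "hello"}
outputs = []
utterance = receive_first_language(lambda: "bonjour")
result = translate(utterance, phrase_table)
output_second_language(result, outputs.append)
```

In a real device the audio source would be a microphone and the sink an earphone or speaker; here both are replaced by plain callables so the control flow of the three steps stays visible.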
Through the above steps, a first language uttered by the target object is received through the head-mounted device; the head-mounted device is controlled, in response to the first language, to obtain a translation result of the first language, where the translation result at least indicates a second language corresponding to the first language; and the second language is output through the head-mounted device. This technical solution solves the problems in the related art that existing head-mounted devices have a single function and cannot perform speech translation on received speech: the device can translate the first language uttered by the target object into the second language the target object needs and output it, broadening the usage scenarios of head-mounted devices.
Optionally, before controlling the head-mounted device to obtain the translation result of the first language in response to the first language, the method further includes: configuring the correspondence between the first language and the second language so that the head-mounted device acquires the second language corresponding to the first language. That is, for the head-mounted device to translate the acquired first language uttered by the target object into the second language, the correspondence between the two languages needs to be configured in advance, so that the device can accurately translate the first language into the second language the target object requires.
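A minimal sketch of pre-configuring the first-to-second-language correspondence; the `TranslationConfig` name, the language codes, and the phrase pair are assumptions for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class TranslationConfig:
    """Pre-configured correspondence between a first and a second language
    (illustrative shape; the patent does not define a concrete format)."""
    source: str = "fr"          # assumed first-language code
    target: str = "zh"          # assumed second-language code
    pairs: dict = field(default_factory=dict)  # phrase-level correspondence

    def lookup(self, phrase: str):
        # Return the configured second-language phrase, or None if absent.
        return self.pairs.get(phrase)

# Illustrative usage: configure one phrase pair before translation starts.
config = TranslationConfig(pairs={"merci": "谢谢"})
```

The point of the sketch is only that the mapping exists before any translation request is handled, matching the "configure in advance" requirement above.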
Optionally, the first language is an encrypted first language, and after responding to the first language, the method further includes: decrypting the first language. To protect language security, the target object's speech acquired by the head-mounted device is the encrypted first language; during translation, the first language must first be decrypted and then translated according to the configured correspondence between the first and second languages.
Optionally, before outputting the second language through the head-mounted device, the method further includes: encrypting the second language. To protect language security, the second language obtained from the translation result is encrypted before the head-mounted device outputs it.
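The patent does not specify a cipher for the encrypt-before-output and decrypt-after-receipt steps. As a stand-in, the sketch below derives an XOR keystream from a shared key with SHA-256; this illustrates the round trip only and is not a production-grade cipher:

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    # Derive a repeatable keystream from the shared key (illustration only,
    # not a vetted cryptographic construction).
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:length]

def encrypt(plaintext: str, key: bytes) -> bytes:
    # Encrypt the second language before output (or the first before sending).
    data = plaintext.encode("utf-8")
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def decrypt(ciphertext: bytes, key: bytes) -> str:
    # Decrypt the received first language before translating it.
    ks = _keystream(key, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks)).decode("utf-8")
```

A real deployment would use an established authenticated cipher; the sketch only shows where encryption and decryption sit relative to translation.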
Optionally, after receiving, by the head-mounted device, the first language spoken by the target object, the method further includes: generating a normalized table from the received first language. That is, after the first language uttered by the target object is received, the head-mounted device may generate a normalized table from the feedback results for the first language; the normalized table contains the phrases the head-mounted device can respond to quickly, which improves translation efficiency.
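One plausible reading of the "normalized table" is a frequency-ranked table of received phrases, letting the device answer the most common ones quickly. The structure below is an assumption, since the patent does not define the table's format:

```python
from collections import Counter

def build_normalized_table(received_phrases, top_n=3):
    """Rank received first-language phrases by frequency so the device can
    respond quickly to the most common ones. Normalization here means
    trimming whitespace and lower-casing (an illustrative choice)."""
    counts = Counter(p.strip().lower() for p in received_phrases)
    return [phrase for phrase, _ in counts.most_common(top_n)]

# Illustrative usage with a small stream of received utterances.
table = build_normalized_table(["Hello", "hello ", "where", "hello", "where"])
```

The most frequent normalized phrase sorts first, so the device's fast-response set tracks what the target object actually says most often.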
Optionally, receiving, by the head-mounted device, the first language uttered by the target object may be implemented in two ways: 1) directly receiving the first language uttered by the target object through the head-mounted device, that is, the head-mounted device itself performs translation feedback on the target object's speech; 2) receiving, through the head-mounted device, a first language uttered by the target object and forwarded by a wearable device. For example, the wearable device may be a smart watch: the target object speaks the first language to be translated to the smart watch, and the wearable device forwards it to the head-mounted device, which then either translates the first language into the second language itself, or sends the first language to a voice server, which translates it to obtain the second language.
Optionally, before receiving, by the head-mounted device, the first language spoken by the target object, the method further includes: receiving an opening instruction from the wearable device of the target object, where the opening instruction is used to open the translation function of the head-mounted device; when the translation function is open, the head-mounted device is allowed to respond to the first language and obtain its translation result. That is to say, upon the opening instruction sent by the target object, the head-mounted device enables the translation function; when it then receives the first language uttered by the target object, it responds promptly, translates the first language into the required second language, and obtains the translation result for communication.
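The opening-instruction gating can be sketched as a small state machine; the class and method names are illustrative, not from the patent:

```python
class TranslationGate:
    """Ignore speech until the wearable device sends an opening instruction
    (interpretation of the patent's gating; names are illustrative)."""

    def __init__(self):
        self.enabled = False

    def receive_open_instruction(self):
        # The opening instruction from the wearable device enables translation.
        self.enabled = True

    def respond(self, first_language):
        if not self.enabled:
            return None  # translation function not yet opened
        return "translated({})".format(first_language)

# Illustrative usage: speech before the instruction is ignored.
gate = TranslationGate()
before = gate.respond("hola")
gate.receive_open_instruction()
after = gate.respond("hola")
```

Gating this way keeps the device from translating ambient speech before the wearer has explicitly opened the function.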
Optionally, controlling the head-mounted device to obtain the translation result of the first language in response to the first language includes: transmitting the received first language to a voice server so that the voice server determines the translation result of the first language; and receiving the translation result from the voice server. That is, the received first language is uploaded to the voice server connected to the head-mounted device; the voice server searches for the first language, finds the corresponding second language, completes the translation, and sends the result back to the head-mounted device.
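The round trip to the voice server can be sketched with an injected transport function standing in for the network call. The JSON payload shape and the `fake_server` below are assumptions, since the patent does not specify a protocol:

```python
import json

def make_request(text: str, source: str, target: str) -> str:
    # Serialize the first language into a request payload (assumed shape).
    return json.dumps({"text": text, "from": source, "to": target})

def translate_via_server(text, source, target, transport):
    """Send the first language to a voice server and return its translation.
    `transport` stands in for the network call; a real deployment would use
    HTTP or a socket to reach the voice server described in the patent."""
    reply = transport(make_request(text, source, target))
    return json.loads(reply)["translation"]

def fake_server(payload: str) -> str:
    # Stand-in for the voice server's search-and-translate step.
    request = json.loads(payload)
    lookup = {("gracias", "es", "en"): "thank you"}
    key = (request["text"], request["from"], request["to"])
    return json.dumps({"translation": lookup.get(key, request["text"])})

# Illustrative usage against the stand-in server.
result = translate_via_server("gracias", "es", "en", fake_server)
```

Injecting the transport keeps the device-side logic testable without a live server, while preserving the request/response split the embodiment describes.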
It should be noted that, in the above embodiment, the operation of translating the language is implemented by a voice server, and in an alternative embodiment, the received language may also be analyzed by the headset itself, that is, the headset itself has a function of translating the language.
Optionally, outputting the second language through the head-mounted device includes: outputting the second language through the earphone or speaker of the head-mounted device. After the device converts the first language into the second language via the translation function, the second language is converted into speech and output through the earphone or speaker on the head-mounted device so that the target object learns it promptly.
Optionally, after the second language is output by the head-mounted device, the method further includes: storing, by the head-mounted device, the translation result corresponding to the first language, so that the next time the head-mounted device receives the first language, it outputs the stored translation result corresponding to the first language.
The head-mounted device can store the translation result corresponding to the first language, so that the next time the same first language occurs, the device quickly outputs the stored result. A package of common first-language phrases is thus built up, making communication easier for the target object when languages differ.
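Storing translation results so a repeated first-language phrase is answered locally can be sketched as a simple cache; the stand-in remote translator below is illustrative:

```python
class TranslationCache:
    """Store translation results on the device so a repeated first-language
    phrase is answered locally instead of re-querying the server."""

    def __init__(self, translate_remote):
        self._translate_remote = translate_remote  # e.g. a voice-server call
        self._store = {}
        self.remote_calls = 0  # counts round trips, for illustration

    def translate(self, phrase: str) -> str:
        if phrase not in self._store:
            self.remote_calls += 1
            self._store[phrase] = self._translate_remote(phrase)
        return self._store[phrase]

# Illustrative usage: the second request for the same phrase hits the cache.
cache = TranslationCache(lambda p: p.upper())  # stand-in remote translator
first = cache.translate("hola")
second = cache.translate("hola")
```

A real device would bound the store's size and persist it across sessions; the sketch only shows the save-then-reuse behavior the embodiment describes.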
It should be noted that, the outputting of the second language by the head-mounted device may specifically be outputting by a speaker of the head-mounted device, or displaying the translation result on a display screen of the head-mounted device, which is not limited in this embodiment of the present invention.
It should be noted that, in an alternative embodiment, the server 104 in the foregoing embodiment may run as part of the background of the head-mounted device together with the background server; a background client may run on the server 104, and an administrator may operate it to click, view, and perform similar operations on the background client.
In order to better understand the flow of the above language output method, the following description is made with reference to an alternative embodiment, but the technical solution of the embodiment of the present invention is not limited thereto.
In an alternative embodiment of the invention, a first language uttered by a target object is received by the head-mounted device; in response to the first language, the head-mounted device is controlled to obtain a translation result of the first language, where the translation result at least indicates a second language corresponding to the first language, and the correspondence between the first and second languages is configured in advance; the second language is then output by the head-mounted device. Fig. 6 is a flowchart of the head-mounted device performing the speech translation function during operation; the logic by which the head-mounted device performs the speech translation function is shown in Table 1 below:
TABLE 1
Based on the application scenarios and alternative embodiments described above, and to address the problems and shortcomings of current security terminal devices, the embodiments of the invention provide a head-mounted device: an intelligent, information-based wearable device integrating cloud computing, big data, Internet of Things, communication, artificial intelligence, and augmented-reality technologies. The device (equivalent to the head-mounted device in the embodiments of the invention) connects to the relevant background system through control modes such as Bluetooth, voice input, and manual input; it can realize functions such as mobile communication, photo and video capture, GPS/BeiDou positioning, intelligent speech, and intelligent image/video recognition, effectively improving the working efficiency of security personnel and enhancing wearing comfort and safety, ultimately achieving three goals:
(1) Intelligence: by associating on-site voice, image, and video data in real time with the background service system and the big data of related platforms, the security terminal becomes intelligent, freeing the hands of security personnel and making duty work more advanced and efficient;
(2) Integration: by integrating body protection, information collection and input, communication, and information feedback and output, the terminal becomes integrated, making duty work safer and more convenient;
(3) Humanization: by adopting technologies such as high-tech heat-insulating and cooling materials and an ergonomic lightweight design, the terminal becomes humanized, making it more comfortable for staff to wear and easier to maintain.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, though the former is in many cases the better implementation. Based on this understanding, the technical solution of the present invention, or the part of it that contributes over the prior art, can be embodied in the form of a software product stored on a storage medium (e.g., ROM/RAM, magnetic disk, or optical disk) and including instructions for causing a terminal device (e.g., a mobile phone, computer, server, or network device) to perform the methods of the embodiments of the present invention.
In this embodiment, a head-mounted device is further provided. The device is used to implement the above embodiments and preferred embodiments; what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated. Fig. 7 is a schematic diagram of a head-mounted device according to an embodiment of the invention; the device includes:
(1) a receiving module 42, configured to receive a first language uttered by a target object;
(2) a processing module 44, configured to, in response to the first language, control the headset to obtain a translation result of the first language, where the translation result is at least used to indicate a second language corresponding to the first language;
(3) an output module 46 for outputting the second language.
With this device, a first language uttered by a target object is received via the head-mounted device; in response to the first language, the head-mounted device is controlled to obtain a translation result of the first language, where the translation result is at least used to indicate a second language corresponding to the first language; and the second language is output through the head-mounted device. This technical solution solves the problem in the related art that existing head-mounted devices have a single function and cannot perform speech translation on received speech: the first language received from the target object is translated into the second language the target object needs and then output, broadening the usage scenarios of the head-mounted device.
Optionally, the processing module is further configured to configure the correspondence between the first language and the second language, so that the head-mounted device can acquire the second language corresponding to the first language. That is, for the head-mounted device to translate the acquired first language uttered by the target object into the second language, the correspondence between the two must be configured in advance, so that the head-mounted device translates the first language accurately into the second language the target object needs.
Optionally, the processing module is further configured to decrypt the first language. To protect language security, the language of the target object acquired by the head-mounted device is an encrypted first language; when performing translation, the first language must first be decrypted, after which translation proceeds according to the configured correspondence between the first language and the second language.
Optionally, the output module is further configured to encrypt the second language. To protect language security, when the head-mounted device outputs the translation result of the first language, the second language obtained from the translation is encrypted.
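The decrypt-then-translate-then-encrypt flow in the two paragraphs above can be illustrated as follows. The patent does not specify a cipher, so the XOR scheme and key below are deliberately toy placeholders; a real device would use a proper algorithm such as AES:

```python
# Toy illustration of the decrypt -> translate -> encrypt flow described
# above. A real device would use a proper cipher (e.g. AES); the XOR
# scheme and the key below are placeholders for illustration only.

KEY = 0x5A  # assumed shared key, illustration only

def xor_bytes(data):
    """XOR each byte with the key; applying it twice restores the input."""
    return bytes(b ^ KEY for b in data)

def decrypt_first_language(ciphertext):
    """Recover the first-language text before translation."""
    return xor_bytes(ciphertext).decode("utf-8")

def encrypt_second_language(plaintext):
    """Encrypt the translated second language before output."""
    return xor_bytes(plaintext.encode("utf-8"))

table = {"hola": "hello"}                       # assumed correspondence
incoming = xor_bytes("hola".encode("utf-8"))    # encrypted first language
first = decrypt_first_language(incoming)        # decrypt before translating
second = table[first]                           # translate per the table
outgoing = encrypt_second_language(second)      # encrypt before output
print(decrypt_first_language(outgoing))  # -> hello
```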
Optionally, the receiving module is further configured to generate a normalized table from the received first-language information. After receiving the first language uttered by the target object, the head-mounted device may build the normalized table from the feedback results for the first language; the table contains the phrases to which the head-mounted device can respond quickly, which improves translation efficiency.
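One plausible reading of the normalized table above is a frequency-ranked phrase table: phrases heard often enough are promoted for quick response. The class, threshold, and method names below are assumptions for illustration:

```python
# Sketch of the "normalized table" idea above: first-language phrases
# the headset has seen often are promoted so they can be answered
# quickly. The threshold and all names are illustrative assumptions.

from collections import Counter

class NormalizedTable:
    def __init__(self, fast_threshold=3):
        self.counts = Counter()
        self.fast_threshold = fast_threshold

    def record(self, first_language):
        """Record one occurrence of a received first-language phrase."""
        self.counts[first_language] += 1

    def fast_phrases(self):
        """Phrases frequent enough for a quick (pre-prepared) response."""
        return {p for p, n in self.counts.items() if n >= self.fast_threshold}

table = NormalizedTable()
for _ in range(3):
    table.record("ni hao")
table.record("zai jian")
print(table.fast_phrases())  # -> {'ni hao'}
```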
Optionally, the receiving module is further configured to receive the first language uttered by the target object either directly through the head-mounted device, in which case the target object's language can be translated and fed back by the head-mounted device itself, or as forwarded by a wearable device. For example, the wearable device may be a smart watch: the target object speaks the first language to be translated to the smart watch, the wearable device forwards it to the head-mounted device, and the head-mounted device performs the operation of translating the first language into the second language.
Optionally, the receiving module is further configured to receive an open instruction from the wearable device of the target object, where the open instruction is used to open the translation function of the head-mounted device; when the translation function is open, the head-mounted device is allowed to respond to the first language and is controlled to obtain the translation result of the first language.
In other words, upon the open instruction issued by the target object, the head-mounted device enables its translation function; when it then receives the first language uttered by the target object, it responds in time, translates the first language into the corresponding required second language, and the translation result of the first language is available for communication.
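The gating behaviour just described (respond only after the wearable's open instruction) can be sketched as below; the class and method names are assumptions:

```python
# Sketch of the "open instruction" gating described above: the headset
# only responds to the first language after the wearable sends an open
# instruction. Class and method names are illustrative assumptions.

class TranslationGate:
    def __init__(self):
        self.enabled = False

    def handle_open_instruction(self):
        """Called when the wearable's open instruction is received."""
        self.enabled = True

    def respond(self, first_language, table):
        """Translate only while the translation function is open."""
        if not self.enabled:
            return None  # function closed: do not respond
        return table.get(first_language)

gate = TranslationGate()
table = {"bonjour": "hello"}
print(gate.respond("bonjour", table))  # -> None (function not yet open)
gate.handle_open_instruction()
print(gate.respond("bonjour", table))  # -> hello
```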
Optionally, the processing module is further configured to transmit the received first language to a voice server, so that the voice server determines the translation result of the first language, and to receive the translation result from the voice server. That is, the received first language is uploaded to the voice server connected to the head-mounted device; the voice server looks up the first language, finds the corresponding second language, completes the translation operation, and sends the translation result back to the head-mounted device.
It should be noted that in the above embodiment the translation is performed by a voice server; in an alternative embodiment, the received language may also be analyzed by the head-mounted device itself, i.e., the head-mounted device itself has the language translation function.
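The two translation paths above (voice server, or the headset's own capability) suggest a simple fallback arrangement. The server stub and all names below are assumptions; a real implementation would make a network call:

```python
# Sketch of the two translation paths described above: send the first
# language to a voice server, or fall back to the headset's own local
# table. The server stub and all names are illustrative assumptions.

LOCAL_TABLE = {"merci": "thank you"}  # headset's own capability

def voice_server_translate(first_language):
    """Stub for the remote voice server; returns None when not found."""
    remote_table = {"danke": "thank you", "merci": "thank you"}
    return remote_table.get(first_language)

def get_translation_result(first_language, server_available=True):
    if server_available:
        result = voice_server_translate(first_language)
        if result is not None:
            return result
    # Headset translates by itself when the server path is unavailable.
    return LOCAL_TABLE.get(first_language)

print(get_translation_result("danke"))                          # -> thank you
print(get_translation_result("merci", server_available=False))  # -> thank you
```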
Optionally, the output module is further configured to output the second language through an earphone or a speaker of the head-mounted device. After the head-mounted device converts the first language into the second language through the translation function, the second language is output as voice information through the earphone or speaker of the head-mounted device so that the target object is informed in a timely manner.
Optionally, the head-mounted device further includes a saving module, configured to save the translation result corresponding to the first language, so that when the head-mounted device receives the same first language next time, it outputs the saved translation result.
By storing the translation result corresponding to the first language on the head-mounted device, the device can rapidly output that result the next time the same first language appears; a package of common first-language phrases is thereby generated, which facilitates communication between target objects speaking different languages.
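The saving behaviour above amounts to memoizing translation results: translate once on the slow path, then answer repeats from the saved result. The class and names below are assumptions:

```python
# Sketch of the saving module described above: the first translation is
# computed on the slow path, then stored so the same first language is
# answered from the saved result next time. Names are assumptions.

class TranslationStore:
    def __init__(self, translate_fn):
        self.translate_fn = translate_fn
        self.saved = {}      # first language -> saved translation result
        self.lookups = 0     # counts slow-path translations performed

    def output(self, first_language):
        if first_language in self.saved:
            return self.saved[first_language]   # fast path: saved result
        self.lookups += 1
        result = self.translate_fn(first_language)
        self.saved[first_language] = result     # save for next time
        return result

store = TranslationStore(lambda s: {"ciao": "hello"}.get(s))
store.output("ciao")
store.output("ciao")
print(store.lookups)  # -> 1 (second call served from the saved result)
```

Python's `functools.lru_cache` would give the same effect with bounded memory; an explicit store is shown here to mirror the saving module in the text.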
According to a further aspect of embodiments of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, receiving a first language uttered by the target object through the head-mounted device;
s2, responding to the first language, controlling the head-mounted device to obtain a translation result of the first language, wherein the translation result is at least used for indicating a second language corresponding to the first language;
s3, outputting the second language through the head-mounted device.
Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, ROM (Read-Only Memory), RAM (Random Access Memory), magnetic or optical disks, and the like.
According to another aspect of the embodiments of the present invention, an electronic device for implementing the above language output method is also provided. The electronic device may be the head-mounted device described above or another device to which the language output method is applied; this is not limited here. As shown in fig. 8, the electronic device includes a memory 1002 and a processor 1004; the memory 1002 stores a computer program, and the processor 1004 is configured to execute the steps in any one of the above method embodiments through the computer program.
Optionally, in this embodiment, the electronic device may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, receiving a first language uttered by the target object through the head-mounted device;
s2, responding to the first language, controlling the head-mounted device to obtain a translation result of the first language, wherein the translation result is at least used for indicating a second language corresponding to the first language;
s3, outputting the second language through the head-mounted device.
Alternatively, it can be understood by those skilled in the art that the structure shown in fig. 8 is only illustrative, and the electronic device may also be a terminal device such as a smart phone (e.g., an Android or iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), or a PAD. Fig. 8 illustrates one possible structure of the electronic device; for example, the electronic device may also include more or fewer components (e.g., network interfaces) than shown in fig. 8, or have a different configuration than shown in fig. 8.
The memory 1002 may be used to store software programs and modules, such as program instructions/modules corresponding to the language output method and the head-mounted device in the embodiment of the present invention, and the processor 1004 executes various functional applications and data processing by running the software programs and modules stored in the memory 1002, that is, implementing the language output method. The memory 1002 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1002 may further include memory located remotely from the processor 1004, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. As an example, the memory 1002 may include, but is not limited to, the receiving module 42, the processing module 44, and the output module 46 in the head-mounted device. In addition, other module units in the head-mounted device may also be included, but are not limited to these, and are not described in detail in this example.
Optionally, the transmission device 1006 is used to receive or transmit data via a network. Examples of the network may include wired and wireless networks. In one example, the transmission device 1006 includes a network interface controller (NIC) that can be connected to a router via a network cable to communicate with the internet or a local area network. In another example, the transmission device 1006 is a radio frequency (RF) module used to communicate with the internet wirelessly.
In addition, the electronic device further includes: a display 1008 for displaying a shooting result of a camera of the head-mounted device; and a connection bus 1010 for connecting the respective module parts in the above-described electronic apparatus.
In other embodiments, the terminal or the server may be a node in a distributed system, where the distributed system may be a blockchain system formed by connecting a plurality of nodes through network communication. The nodes can form a peer-to-peer (P2P) network, and any type of computing device, such as a server, a terminal, or another electronic device, can become a node in the blockchain system by joining the peer-to-peer network.
Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention, or the part of it that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including instructions for causing one or more computer devices (which may be personal computers, servers, or network devices) to execute all or part of the steps of the methods of the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, a division of a unit is merely a division of a logic function, and an actual implementation may have another division, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or may not be executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that it is obvious to those skilled in the art that various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be considered as the protection scope of the present invention.

Claims (12)

1. A method of language output, comprising:
receiving, by a headset, a first language spoken by a target object;
controlling the head-mounted device to obtain a translation result of the first language in response to the first language, wherein the translation result is at least used for indicating a second language corresponding to the first language;
outputting, by the headset device, the second language.
2. The language output method according to claim 1, wherein before controlling the head-mounted device to obtain a translation result of the first language in response to the first language, the method further comprises:
and configuring the corresponding relation between the first language and the second language so that the head-mounted device acquires the second language corresponding to the first language.
3. The language output method according to claim 1, wherein the first language is an encrypted first language, and in response to the first language, the method further comprises:
decrypting the first language.
4. The language output method according to claim 1, wherein before outputting the second language by the head-mounted device, the method further comprises:
encrypting the second language.
5. The language output method according to claim 1, wherein after receiving, by the head-mounted device, the first language uttered by the target object, the method further comprises:
generating a normalized table from the received first language.
6. The method of claim 1, wherein receiving, by the headset device, the first language spoken by the target object comprises:
receiving, directly through the head-mounted device, a first language spoken by the target object; or
receiving, by the head-mounted device, the first language uttered by the target object and forwarded by a wearable device.
7. The method of claim 1, wherein prior to receiving, by the headset, the first language spoken by the target object, the method further comprises:
receiving an open instruction from the wearable device of the target object, wherein the open instruction is used to open a translation function of the head-mounted device, and when the translation function is open, the head-mounted device is allowed to respond to the first language and is controlled to obtain a translation result of the first language.
8. The method of claim 1, wherein controlling the headset to obtain translation results in the first language in response to the first language comprises:
transmitting the received first language to a voice server so that the voice server determines a translation result of the first language;
and receiving a translation result translated by the voice server.
9. The method of claim 1, wherein outputting, by the headset, the second language comprises:
outputting the second language through headphones or speakers of the headset.
10. The method of any of claims 1 to 9, wherein after outputting the second language through the head-mounted device, the method further comprises:
saving, by the head-mounted device, the translation result corresponding to the first language, so that when the head-mounted device receives the first language next time, the head-mounted device outputs the saved translation result corresponding to the first language.
11. A head-mounted device, comprising:
the receiving module is used for receiving a first language sent by a target object;
the processing module is used for responding to the first language and controlling the head-mounted equipment to obtain a translation result of the first language, wherein the translation result is at least used for indicating a second language corresponding to the first language;
and the output module is used for outputting the second language.
12. A computer-readable storage medium comprising a stored program, wherein the program when executed performs the method of any of claims 1 to 10.
CN202010231570.7A 2020-03-27 2020-03-27 Language output method, head-mounted device, storage medium, and electronic device Pending CN111476040A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010231570.7A CN111476040A (en) 2020-03-27 2020-03-27 Language output method, head-mounted device, storage medium, and electronic device
PCT/CN2020/093753 WO2021189652A1 (en) 2020-03-27 2020-06-01 Language output method, head-mounted device, storage medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010231570.7A CN111476040A (en) 2020-03-27 2020-03-27 Language output method, head-mounted device, storage medium, and electronic device

Publications (1)

Publication Number Publication Date
CN111476040A true CN111476040A (en) 2020-07-31

Family

ID=71747929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010231570.7A Pending CN111476040A (en) 2020-03-27 2020-03-27 Language output method, head-mounted device, storage medium, and electronic device

Country Status (2)

Country Link
CN (1) CN111476040A (en)
WO (1) WO2021189652A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108710615A (en) * 2018-05-03 2018-10-26 Oppo广东移动通信有限公司 Interpretation method and relevant device
CN109275057A (en) * 2018-08-31 2019-01-25 歌尔科技有限公司 A kind of translation earphone speech output method, system and translation earphone and storage medium
WO2019186639A1 (en) * 2018-03-26 2019-10-03 株式会社フォルテ Translation system, translation method, translation device, and speech input/output device
CN110889294A (en) * 2018-09-06 2020-03-17 重庆好德译信息技术有限公司 Auxiliary system and method for providing accurate translation in short time

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100250231A1 (en) * 2009-03-07 2010-09-30 Voice Muffler Corporation Mouthpiece with sound reducer to enhance language translation
KR102565274B1 (en) * 2016-07-07 2023-08-09 삼성전자주식회사 Automatic interpretation method and apparatus, and machine translation method and apparatus
JP6364629B2 (en) * 2016-07-08 2018-08-01 パナソニックIpマネジメント株式会社 Translation apparatus and translation method
CN107832309B (en) * 2017-10-18 2021-10-01 广东小天才科技有限公司 Language translation method and device, wearable device and storage medium
CN109067965B (en) * 2018-06-15 2020-12-22 Oppo广东移动通信有限公司 Translation method, translation device, wearable device and storage medium
CN108923810A (en) * 2018-06-15 2018-11-30 Oppo广东移动通信有限公司 Interpretation method and relevant device


Also Published As

Publication number Publication date
WO2021189652A1 (en) 2021-09-30

Similar Documents

Publication Publication Date Title
Satyanarayanan et al. An open ecosystem for mobile-cloud convergence
US10637970B2 (en) Packet processing method and apparatus
US9093069B2 (en) Privacy-sensitive speech model creation via aggregation of multiple user models
US20150304417A1 (en) Data synchronization method, and device
WO2021189650A1 (en) Real-time video stream display method, headset, storage medium, and electronic device
KR20170092436A (en) Translation system and method
CN103916978A (en) Wireless connection establishing method and electronic devices
CN108538289A (en) The method, apparatus and terminal device of voice remote control are realized based on bluetooth
CN111475130A (en) Display method of track information, head-mounted device, storage medium and electronic device
KR20160069271A (en) Method and system for providing video conference using screen mirroring
WO2021189647A1 (en) Multimedia information determination method, head-mounted device, storage medium, and electronic device
CN111476040A (en) Language output method, head-mounted device, storage medium, and electronic device
CN116056076B (en) Communication system, method and electronic equipment
CN111526241A (en) Call progress processing method, head-mounted device, storage medium and electronic device
KR102254821B1 (en) A system for providing dialog contents
JP2023510518A (en) Voice verification and restriction method of voice terminal
WO2021189651A1 (en) Trajectory determination method and apparatus for head-mounted device, storage medium, and electronic apparatus
CN203180927U (en) Beidou satellite based communication terminal equipment
KR102621301B1 (en) Smart work support system for non-face-to-face office selling based on metaverse
CN109005210A (en) The method and apparatus of information exchange
CN112214108A (en) Control method of head-mounted device, and storage medium
US12032812B1 (en) Systems and methods for intent-based augmented reality virtual assistant
US12020387B2 (en) Intelligent data migration via mixed reality
CN111526178A (en) Identity information determination method, head-mounted device and storage medium
CN116436704B (en) Data processing method and data processing equipment for user privacy data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201009

Address after: 518057 2 / F, software building, No.9, Gaoxin Middle Road, Nanshan District, Shenzhen, Guangdong Province

Applicant after: SHENZHEN KUANG-CHI SPACE TECH. Co.,Ltd.

Address before: 518057 Guangdong city of Shenzhen province Nanshan District Guangdong streets in a high road No. 9 building two layer software

Applicant before: SHENZHEN KUANG-CHI SUPER MATERIAL TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20200731
