WO2021218631A1 - Interactive information processing method, device, equipment, and medium - Google Patents

Interactive information processing method, device, equipment, and medium

Info

Publication number
WO2021218631A1
WO2021218631A1 · PCT/CN2021/087097 (CN2021087097W)
Authority
WO
WIPO (PCT)
Prior art keywords
language type
voice data
participating
source language
user
Prior art date
Application number
PCT/CN2021/087097
Other languages
English (en)
French (fr)
Inventor
赵立
陈可蓉
杨晶生
苗天时
徐文铭
Original Assignee
北京字节跳动网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司
Publication of WO2021218631A1
Priority to US17/882,032 (published as US20220374618A1)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/40Processing or translation of natural language
    • G06F40/58Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/005Language recognition
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00Speaker identification or verification techniques
    • G10L17/06Decision making techniques; Pattern matching strategies
    • G10L17/14Use of phonemic categorisation or speech recognition prior to speaker recognition or verification
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00Speaker identification or verification techniques

Definitions

  • the embodiments of the present disclosure relate to the field of computer data processing technology, and in particular to an interactive information processing method, device, equipment, and medium.
  • the server can obtain the voice information of some users and the text information published by all users, and play and display the voice information and text information after processing.
  • the embodiments of the present disclosure provide an interactive information processing method, device, equipment, and medium, so as to convert the voice data of other participating users into a target language type and obtain translation data, thereby making it easier for users to understand the voice information of other participating users based on the translation data and improving information interaction efficiency.
  • embodiments of the present disclosure provide an interactive information processing method, the method including: collecting voice data of at least one participating user when users interact based on the real-time interactive interface; determining the source language type of each participating user based on the voice data; converting the voice data of the participating users from the source language type to a target language type to obtain translation data; and
  • displaying the translation data on the target client.
  • embodiments of the present disclosure also provide an interactive information processing device, which includes:
  • the voice data collection module is used to collect voice data of at least two participating users when the user interacts based on the real-time interactive interface;
  • the source language type determining module is configured to determine the source language type of each participating user based on the voice data;
  • the translation data conversion module is used to convert the voice data of the participating users from the source language type to the target language type to obtain the translation data;
  • the translation data display module is used to display the translation data on the target client.
  • embodiments of the present disclosure also provide an electronic device, the electronic device including:
  • one or more processors;
  • a storage device for storing one or more programs,
  • which, when executed by the one or more processors, cause the one or more processors to implement the interactive information processing method according to any one of the embodiments of the present disclosure.
  • the embodiments of the present disclosure also provide a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform the interactive information processing method according to any one of the embodiments of the present disclosure.
  • according to the technical solution of the embodiments of the present disclosure, after the collected voice data is converted into the target language type and the translation data is obtained, the translation data can be displayed intuitively on the client for users to read. This solves the technical problem that, when the language types of other participating users differ greatly from one's own, their interactive content cannot be understood, effective communication with them is impossible, and interaction efficiency is low. The voice data of other participating users is converted into the target language type to obtain the translation data, which is displayed on the client for users to read, so that the user can determine the interactive content of other participating users based on the translation data, thereby improving interaction efficiency and user experience.
  • FIG. 1 is a schematic flowchart of an interactive information processing method provided by Embodiment 1 of the present disclosure
  • FIG. 2 is a schematic flowchart of an interactive information processing method provided by Embodiment 2 of the present disclosure;
  • FIG. 3 is a schematic flowchart of an interactive information processing method provided by Embodiment 3 of the present disclosure;
  • FIG. 4 is a schematic structural diagram of an interactive information processing device provided by Embodiment 4 of the present disclosure; and
  • FIG. 5 is a schematic structural diagram of an electronic device provided by Embodiment 5 of the present disclosure.
  • FIG. 1 is a schematic flow diagram of an interactive information processing method provided by Embodiment 1 of the present disclosure.
  • the embodiment of the present disclosure is suitable for the situation in which, in a real-time interactive application scenario supported by the Internet, the voice information exchanged by users is converted into a target language type and translation data is obtained.
  • the method can be implemented by an interactive information processing device, which can be implemented in the form of software and/or hardware and, optionally, by an electronic device, which can be a mobile terminal, a PC (Personal Computer), a server, or the like.
  • Real-time interactive application scenarios can usually be implemented by the client and the server.
  • the method provided in this embodiment can be executed by the client, the server, or both.
  • the method of this embodiment includes:
  • S110 Collect voice data of at least one participating user when the user interacts based on the real-time interactive interface.
  • the real-time interactive interface is any interactive interface in the real-time interactive application scenario.
  • Real-time interactive application scenarios can be implemented through the Internet and computer technology, such as interactive applications implemented through native programs or web (web page) programs.
  • the real-time interactive interface may be an interactive interface during a video conference, an interactive interface during a live video broadcast, and/or a group chat interactive interface.
  • multiple users may be allowed to interact in various forms of interactive behavior, such as inputting text, voice, video, and sharing of content objects.
  • the voice information of each participating user can be collected, and the collected voice information can be used as voice data.
  • the voice information of the participating user may be the voice information that the user has when interacting through voice, video and other interactive behaviors.
  • the participating user may include a speaking user, and the speaking user may be a user who participates in the real-time interactive interface and makes speech for interaction.
  • each participating user can trigger the voice information conversion control, and the client can generate the voice information conversion request information and send it to the server.
  • the server can collect the voice data of the participating users based on the request information. For example, during a video conference, if participant A triggers the voice information conversion control, the server can receive the request information for converting the voice information, and start collecting voice data of the participating users participating in the video conference based on the request information.
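A minimal sketch of this trigger-and-collect flow follows. It is an illustration under assumed names (ConversionRequest, Server.handle_request, and so on), not the patented implementation.

```python
# Minimal sketch of the trigger flow described above; every name here is
# hypothetical, not part of the disclosure.
from dataclasses import dataclass


@dataclass
class ConversionRequest:
    client_id: str    # client whose user triggered the voice conversion control
    session_id: str   # the real-time interactive session (e.g., a video conference)


class Server:
    def __init__(self) -> None:
        # session_id -> set of participants whose voice data is being collected
        self.collecting: dict = {}

    def handle_request(self, req: ConversionRequest, participants: list) -> None:
        # On receiving the request information, start collecting the voice
        # data of every user participating in the session.
        self.collecting[req.session_id] = set(participants)


server = Server()
server.handle_request(ConversionRequest("client-A", "conf-1"),
                      ["user-A", "user-B", "user-C"])
```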
  • S120 Determine the source language type of each participating user based on the voice data.
  • voice data of a preset duration can be collected, and the source language type of the participating user can be determined based on the voice data of the preset duration.
  • the preset duration can be, for example, 1 s to 2 s.
  • the source language type may be the language type used when the participating user interacts, that is, the language type corresponding to the participating user.
  • the language type corresponding to the voice data can be determined, that is, the source language type of the participating user.
  • the language type determined at this time can be used as the source language type of the participating user corresponding to the voice data.
  • the voice data of four participating users are collected, and the four participating users can be marked as participating user A, participating user B, participating user C, and participating user D, respectively.
  • the language type of participating user A is Chinese.
  • Chinese can be used as the source language type of participating user A; by performing language type discrimination on the voice data of participating user B, it is determined that the language type corresponding to participating user B is English, so English is the source language type of participating user B. The voice data of each participating user is processed in turn to determine the source language type of each participating user.
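A minimal sketch of this pairwise discrimination step, assuming an external scoring function (score_language, hypothetical) that rates how well a short 1-2 s sample matches a given language type:

```python
# Hypothetical sketch: pick the source language type of a short (1-2 s)
# sample by scoring it against every supported language type and keeping
# the best match. score_language stands in for a real language-ID model.
from typing import Callable, Dict, List


def detect_source_language(sample: bytes,
                           language_types: List[str],
                           score_language: Callable[[bytes, str], float]) -> str:
    scores: Dict[str, float] = {lang: score_language(sample, lang)
                                for lang in language_types}
    return max(scores, key=scores.get)   # best-scoring language type


# Usage with a toy scorer that always prefers Chinese:
print(detect_source_language(b"...", ["zh", "en"],
                             lambda s, l: 1.0 if l == "zh" else 0.0))  # 'zh'
```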
  • S130 Convert the voice data of the participating users from the source language type to the target language type to obtain translation data.
  • the translation data may be the data after the translation processing of the voice information.
  • the language type corresponding to the translation data may be used as the target language type. For example, if the language type corresponding to the translation data is Chinese, the target language type is Chinese.
  • the target language type corresponds to the target client to which the participating user belongs.
  • the target language type can be determined based on the language type of the participating user to whom the current client belongs. That is to say, when converting voice data into translation data, the language type of the participating user to whom the client belongs can be determined, and the language type determined at this time is used as the target language type, thereby converting the voice data into translation data of the same type as the target language type.
  • the translation data may include each participating user, the voice data associated with the participating user, and the translation data corresponding to the voice data.
  • the target language type of the target terminal to which the participating user belongs can be predetermined, and when the voice data of the participating user is collected, the voice data can be translated into the target language type to obtain the translation data.
  • whenever it is detected that a participating user has triggered the voice conversion control, the server can perform the above steps to determine the target language type of that participating user and convert the collected voice data into translation data matching the target language type.
  • if the target language type is a minority language for which the server cannot provide the corresponding translation data,
  • the collected voice data can be converted into a common language type; for example, the collected voice data can be converted into English translation data.
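This conversion-with-fallback step might look like the following sketch; transcribe and translate are assumed external services (not real APIs), and the fallback to English mirrors the common-language behavior described above.

```python
# Sketch of converting voice data into translation data, falling back to a
# common language (English) when the target language type cannot be served.
from typing import Callable, Dict, Set


def to_translation_data(voice_data: bytes, source_lang: str, target_lang: str,
                        supported: Set[str],
                        transcribe: Callable[[bytes, str], str],
                        translate: Callable[[str, str, str], str]) -> Dict[str, str]:
    text = transcribe(voice_data, source_lang)                # speech -> source text
    lang = target_lang if target_lang in supported else "en"  # common-language fallback
    return {"language": lang, "text": translate(text, source_lang, lang)}
```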
  • S140 Display the translation data on the target client.
  • the client to which each participating user belongs can be used as the target client.
  • the voice data of other participating users can be converted into the target language type to obtain the translation data, and the translation data can be displayed on the target client so that the participating user corresponding to the target client can preview it. Since the translation data includes the participating users and the translations corresponding to their voice information, the speech and opinions expressed by other participating users can be understood quickly, so as to achieve the technical effect of effective communication and interaction.
  • the target language type of the client A to which the participating user A belongs is Chinese.
  • the Chinese translation data may be displayed on the display interface of the client A.
  • once the client has collected the voice data, it needs to convert the voice data, process it into the target language type, obtain the translation data, and display it on the client.
  • the translated data can be displayed in the target area of the client.
  • the display area of the translation data can be preset and the preset display area can be used as the target area.
  • the target area may be, for example, an area around the main interaction area, such as the top, bottom, or side edges.
  • for example, in a video conference scenario, the video interaction window is the main interaction area, occupying 2/3 of the screen, and the area for displaying translation data can be the 1/3 area at the side.
  • accordingly, this 1/3 side area is the target area, and the translation data can be displayed in it.
  • the display mode of the translation data can be static display or dynamic display.
  • the dynamic display can be to display the translation data in the target area in the form of a barrage.
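As a rough illustration of this layout and display choice (all field names here are assumptions, not part of the disclosure):

```python
# Rough sketch of the display configuration described above: the main
# interaction area takes 2/3 of the screen and the translation data is
# shown in the remaining 1/3 side region, statically or as a barrage.
from dataclasses import dataclass


@dataclass
class TranslationDisplay:
    screen_width: int
    mode: str = "static"            # "static" or "barrage" (bullet comments)

    def target_area_width(self) -> int:
        # side region = what remains after the 2/3 main interaction area
        return self.screen_width - (self.screen_width * 2) // 3


display = TranslationDisplay(screen_width=1920, mode="barrage")
print(display.target_area_width())   # 640
```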
  • the translated data can be visually displayed on the client for users to read. This solves the technical problem that, when the language type of other participating users differs greatly from the language type of the present user, their interactive content cannot be understood, effective communication with them is impossible, and interaction efficiency is low.
  • the voice data of other participating users is converted into the target language type, the translated data is obtained, and the translated data is displayed on the client for users to read, so that the user can determine the interactive content of other participating users based on the translated data, thereby improving interaction efficiency and user experience.
  • FIG. 2 is a schematic flowchart of an interactive information processing method provided by Embodiment 2 of the present disclosure.
  • on the basis of the foregoing embodiment, before the source language type of a participating user is determined from the voice data, the corresponding candidate source language types may also be determined according to the voice data of each participating user, so as to determine the source language type from the candidate source language types and improve the efficiency of determining the source language type.
  • the method includes:
  • S210 Collect voice data of at least one participating user when the user interacts based on the real-time interactive interface.
  • S220 Perform voiceprint recognition on the voice data to determine the identity information of the participating user corresponding to the voice data.
  • voiceprint recognition is a kind of biometric technology, which is used for identity recognition based on the sound wave characteristics of participating users. Since the voice of each participating user has a unique voiceprint, different participating users can be distinguished accordingly.
  • after the voice data is collected, acoustic characteristic processing can be performed on it to determine the identity information of each participating user, so as to determine, based on the identity information, whether the server has stored a source language type corresponding to that identity information.
  • the client usually has a corresponding client account or client ID to distinguish different clients.
  • however, when multiple users at one client participate and interact together at the same time, those users cannot be distinguished by the client ID alone.
  • voiceprint recognition can therefore be further performed on the voice data of each user; each person's voice has a unique voiceprint, and the user's identity information can be determined based on it. Users can then be marked as, for example, client ID-user A and client ID-user B, so as to distinguish different participating users under the same client.
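A minimal sketch of this disambiguation, assuming a speaker-embedding ("voiceprint") vector has already been extracted by some model; the enrollment table, threshold, and naming scheme are all illustrative:

```python
# Hypothetical sketch: distinguish several users behind one client ID by
# comparing a voiceprint embedding against the prints enrolled under that
# client. The embeddings, threshold, and labels are illustrative only.
import math
from typing import Dict, List


def identify_user(sample_print: List[float],
                  enrolled: Dict[str, List[float]],
                  client_id: str,
                  threshold: float = 0.75) -> str:
    def cosine(a: List[float], b: List[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(y * y for y in b)))

    best_user, best_sim = None, 0.0
    for user, enrolled_print in enrolled.items():
        sim = cosine(sample_print, enrolled_print)
        if sim > best_sim:
            best_user, best_sim = user, sim
    # Mark as "client ID-user X" so users under one client stay distinct.
    return f"{client_id}-{best_user}" if best_sim >= threshold else f"{client_id}-unknown"


print(identify_user([1.0, 0.0],
                    {"user A": [0.9, 0.1], "user B": [0.0, 1.0]},
                    "clientID"))   # 'clientID-user A'
```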
  • S230 According to the identity information of the participating users, determine the candidate source language type corresponding to the identity information, so as to determine the source language type from the candidate source language types based on the voice data.
  • while the server performs voiceprint recognition on the voice information to determine user identity information, it can also perform language type recognition on the voice data to obtain the current language type corresponding to the participating user.
  • the identity identifier is associated with the current language type and stored, so that when identifying the identity information, the language type associated with the identity information can be retrieved through the identity identifier, and the associated language type can be used as a candidate source language type.
  • the server may record the language types of different participating users.
  • the candidate source language type may be a language type associated with a certain identity information recorded by the server.
  • participating user A took part in two real-time interactions. By performing voiceprint recognition on the voice data collected in the two interactions, participating user A can be identified, and by determining the language category of the voice data in the two interactions it can be determined that
  • the language types of participating user A are Chinese and English respectively; Chinese and English can then be associated with participating user A, that is, the candidate source languages corresponding to participating user A can be Chinese and English.
  • since the language type is mainly determined by pairwise comparison between two languages, when no candidate source language types have been determined, language types must be selected from a large number of language types and matched pairwise with the voice data to determine the source language type corresponding to the voice data; the workload is then not only large but the efficiency is also low. If the candidate source language types corresponding to the participating users are determined in advance, the source language type corresponding to the voice data can be determined from the candidate source language types, which not only improves the efficiency of determining the source language type but also achieves the technical effect of saving resources.
  • determining the language type corresponding to voice data is mainly done by comparing two language types at a time; therefore, when determining the source language type of the voice data, a large number of language type comparisons would otherwise be required.
  • not only the identity information but also a client identifier, for example an account number, can be used to determine the candidate source language types.
  • each participating user has an account corresponding to it. Before real-time interaction, it is generally necessary to log in to the account to realize real-time interaction.
  • the server can record the associated information of each account.
  • optionally, the language types associated with the client are recorded, so that when the source language type is determined, the candidate source language types associated with the participating user can be determined based on the account logged in on the client, and the source language type is then determined from the candidate source language types.
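The candidate lookup might be kept as a simple server-side cache keyed by identity or account; this sketch is illustrative only and the class name is an assumption.

```python
# Sketch of a server-side candidate-language cache keyed by identity or
# account. Detection then only compares against the short candidate list,
# falling back to the full language inventory when nothing is cached.
from typing import Dict, List, Set


class CandidateLanguageCache:
    def __init__(self) -> None:
        self._candidates: Dict[str, Set[str]] = {}   # identity -> language types

    def record(self, identity: str, language: str) -> None:
        # Associate the language type recognized in this session with the identity.
        self._candidates.setdefault(identity, set()).add(language)

    def candidates(self, identity: str, full_inventory: List[str]) -> List[str]:
        cached = self._candidates.get(identity)
        return sorted(cached) if cached else list(full_inventory)


cache = CandidateLanguageCache()
cache.record("user A", "zh")
cache.record("user A", "en")
print(cache.candidates("user A", ["zh", "en", "fr", "de"]))   # ['en', 'zh']
```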
  • S240 Convert the voice data of the participating users from the source language type to the target language type to obtain translation data.
  • determining the target language type may include at least one of the following methods: acquiring the language type preset on the target client as the target language type; and acquiring the login address of the target client and determining, based on the login address, the target language type corresponding to the geographic location of the target client.
  • the first way can be as follows: in a possible implementation, when a participating user triggers the language type conversion operation, that is, chooses the language type in which the translation data is displayed, the converted language type can be set, and the set language type is used as the target language type.
  • when a participating user triggers a control for language type conversion on the client, a language selection list may pop up on the client for the participating user to select from. Participating users can select any language type; for example, if the user triggers the Chinese language type in the language selection list and clicks the confirm button, the server or client can confirm that the participating user has selected the Chinese language type and use it as the target language type.
  • the voice information of each participating user can be converted into Chinese translation data and displayed on the display interface.
  • the user can set the language type on the client in advance, for example, the user sets the language type when the user registers.
  • the client determines the target language type according to the language type preset by the user.
  • the second way can be as follows: if it is detected that a participating user has triggered the language conversion control, the client's login address, that is, the client's IP address, can be obtained to determine the region to which the client belongs according to the login address, and the language type used in that region
  • is then used as the target language type. For example, when the user triggers the language conversion control, the login address of the client is obtained, and if it is determined based on the login address that the region to which the client belongs is China, the target language type is Chinese.
  • the translation data displayed on the client in this way is more in line with the reading habits of the participating users, so that participating users can quickly understand the interactive information of other participating users, thereby improving interaction efficiency.
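A sketch of these two ways of fixing the target language type follows; geolocate and the region-to-language map are assumptions, not part of the disclosure.

```python
# Sketch of the two ways described above: a preset on the client takes
# priority; otherwise the region is inferred from the login (IP) address.
from typing import Callable, Optional

REGION_LANGUAGE = {"CN": "zh", "US": "en", "FR": "fr"}   # illustrative mapping


def target_language(preset: Optional[str], login_ip: str,
                    geolocate: Callable[[str], str]) -> str:
    if preset:                        # way 1: language type set on the client
        return preset
    region = geolocate(login_ip)      # way 2: region of the login address
    return REGION_LANGUAGE.get(region, "en")


print(target_language(None, "203.0.113.7", lambda ip: "CN"))   # 'zh'
```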
  • the participating users and the translation data corresponding to the voice data of the participating users are displayed on the display interface of the client in association with each other.
  • the translation data corresponding to the target language type can be associated with participating users and pushed to the client to display the translation data on the client.
  • the technical solution of the embodiment of the present disclosure determines the user's identity information by performing voiceprint recognition on the collected voice data, and then determines the candidate source language types associated with the identity information, so as to determine the source language type from the candidate source language types, greatly improving the efficiency of determining the source language type.
  • FIG. 3 is a schematic flowchart of an interactive information processing method provided by Embodiment 3 of the present disclosure.
  • the voice data of each participating user can be collected at regular intervals, and whether the corresponding source language type has changed is determined based on the voice data, after which voice conversion is performed according to the updated source language type.
  • it is also possible to detect whether the target language type set on each client has changed, so as to convert the collected voice information into the updated target language type.
  • the method includes:
  • S310 Collect voice data of at least one participating user when the user interacts based on the real-time interactive interface.
  • S320 Determine the source language type of each participating user based on the voice data.
  • S330 Convert the voice data of the participating user from the source language type to the target language type to obtain translation data.
  • S350 Collect voice data of participating users regularly, and update the source language type of the participating users based on the voice data.
  • the regular interval may be a relative time point; for example, a source language type detection operation may be triggered once per set interval. If it is detected that the source language type has changed, the source language type of the participating users can be updated based on the changed source language type.
  • the collected voice data may be processed every ten minutes. According to the processing result, if the source language type of the participating user is determined to be English, it indicates that the source language category of the participating user has changed. English can be used as the source language type of the participating user, and the voice data of the participating user can be converted from English to the target language type.
  • when the source language type is determined based on voice data, it is mainly determined by processing the voice data within one to two seconds. It may happen that the source language type of a participating user is Chinese but the interactive content includes English terminology. Even if the source language type of the participating user has not changed, the regularly collected voice data may happen to be exactly the voice data corresponding to the English terms; in that case the processing result would be that the source language type of the participating user is English.
  • to avoid this situation, optionally, when it is determined based on the voice data that the source language type of the participating user has changed, the voice data of the participating user within a preset duration is obtained, and the source language type is recognized based on the voice data within the preset duration so as to update the source language type of the participating user.
  • that is, if it is determined from the regularly collected voice data that the source language type of a participating user has changed, voice data within a preset duration, optionally within 5 s or 10 s, can be collected to further determine whether the source language type of the participating user has changed; if the voice data within the preset duration confirms the change, the source language type corresponding to the participating user is updated to the source language type identified at this time.
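A sketch of this periodic re-check with a confirmation window; classify and fetch_long_sample are assumed external functions.

```python
# Sketch of the periodic re-check described above: a change detected on a
# short (1-2 s) sample is only committed after it is confirmed on a longer
# (e.g., 5-10 s) window, so a stray English term inside Chinese speech
# does not flip the source language type.
from typing import Callable


def refresh_source_language(current: str,
                            short_sample: bytes,
                            fetch_long_sample: Callable[[], bytes],
                            classify: Callable[[bytes], str]) -> str:
    detected = classify(short_sample)        # regular 1-2 s check
    if detected == current:
        return current                       # no apparent change
    # Apparent change: confirm on a longer window before updating.
    return classify(fetch_long_sample())
```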
  • it further includes: periodically detecting the current target language type to which the client belongs, and when the current target language type is different from the predetermined target language type, updating the target language type based on the current target language type.
  • the language type set on the client is the language type expected to be displayed by the participating users, that is, the language type of the translation data.
  • regularly detecting the current target language type of the client makes it possible to determine in time the language type expected by the participating user corresponding to the client, so as to convert the voice data of other participating users into the target language type and obtain the translation data, achieving the technical effect of being convenient for users to read.
  • the voice data and the target language type set on the client can also be collected regularly to determine the source language type and/or target language type.
  • the voice data of each participating user can be translated into translation data corresponding to the updated target language type in a timely manner, so that users can quickly understand the interactive content of other participating users based on the translation data, thereby improving the efficiency of interactive interaction.
  • FIG. 4 is a schematic structural diagram of an interactive information processing device provided by Embodiment 4 of the present disclosure. As shown in FIG. 4, the device includes: a voice data collection module 410, a source language type determination module 420, a translation data conversion module 430, and a translation data determination module 440. Among them,
  • the voice data collection module is used to collect the voice data of at least two participating users when the user interacts based on the real-time interactive interface; the source language type determination module is used to determine the source language type of each participating user based on the voice data.
  • the translation data conversion module is used to convert the voice data of the participating users from the source language type to the target language type to obtain the translation data; the translation data display module is used to display the translation data on the target client.
  • the source language type determining module further includes:
  • An identity information recognition unit configured to perform voiceprint recognition on the voice data to determine the identity information of the participating user corresponding to the voice data
  • the candidate source language determining unit is configured to determine the candidate source language type corresponding to the identity information according to the identity information of the participating user, so as to determine the source language type from the candidate source language types based on the voice data.
  • the device further includes a target language type determining module, configured to: obtain the language type set on the target client as the target language type; or obtain the login address of the target client and determine, based on the login address, the target language type corresponding to the geographic location of the target client.
  • the translation data display module is also used to display the participating users and the translation data corresponding to the voice data of the participating users on the display interface of the client in association with each other.
  • the device further includes: a timing collection module, configured to periodically collect voice data of participating users, and update the source language type of the participating users based on the voice data.
  • the timing collection module is also used to periodically collect the voice data of each participating user, and when it is determined based on the voice data that the source language type of the participating user has changed, obtain the participation The voice data within the preset time period of the user; the source language type is identified based on the voice data within the preset time period, so as to update the source language type of the participating user.
  • the translation data conversion module is also used to translate the voice data of the participating users from the source language type into translation data of multiple language types corresponding to the target language types of one or more target clients.
  • the real-time interactive interface is a video conference interactive interface, a live video interactive interface, or a group chat interactive interface.
  • according to the technical solution of the embodiments of the present disclosure, after the collected voice data is converted into the target language type and the translation data is obtained, the translation data can be displayed intuitively on the client for users to read. This solves the technical problem that, when the language types of other participating users differ greatly from one's own, their interactive content cannot be understood, effective communication with them is impossible, and interaction efficiency is low. The voice data of other participating users is converted into the target language type to obtain the translation data, which is displayed on the client for users to read, so that the user can determine the interactive content of other participating users based on the translation data, thereby improving interaction efficiency and user experience.
  • the interactive information processing device provided by the embodiment of the present disclosure can execute the interactive information processing method provided by any embodiment of the present disclosure, and has the corresponding functional modules and beneficial effects for the execution method.
  • FIG. 5 shows a schematic structural diagram of an electronic device (for example, the terminal device or the server in FIG. 5) 500 suitable for implementing the embodiments of the present disclosure.
  • the terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (Personal Digital Assistants), PADs (tablet computers), PMPs (Portable Media Players), and in-vehicle terminals (for example, in-vehicle navigation terminals), and fixed terminals such as digital TVs (televisions), desktop computers, and the like.
  • the electronic device shown in FIG. 5 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
  • the electronic device 500 may include a processing device (such as a central processing unit, a graphics processor, etc.) 501, which may execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503.
  • in the RAM 503, various programs and data required for the operation of the electronic device 500 are also stored.
  • the processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504.
  • An input/output (I/O) interface 505 is also connected to the bus 504.
  • the following devices can be connected to the I/O interface 505: an input device 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 507 including, for example, a liquid crystal display (LCD), speakers, vibrators, etc.; a storage device 508 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 509.
  • the communication device 509 may allow the electronic device 500 to perform wireless or wired communication with other devices to exchange data.
  • although FIG. 5 shows an electronic device 500 having various devices, it should be understood that it is not required to implement or have all of the illustrated devices; more or fewer devices may alternatively be implemented or provided.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network through the communication device 509, or installed from the storage device 508, or installed from the ROM 502.
  • when the computer program is executed by the processing device 501, the above-mentioned functions defined in the method of the embodiments of the present disclosure are executed.
  • the embodiment of the present disclosure provides a computer storage medium on which a computer program is stored, and when the program is executed by a processor, the interactive information processing method provided in the above-mentioned embodiment is implemented.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
  • the client and server can communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication (e.g., a communication network) in any form or medium.
  • examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (for example, the Internet), and peer-to-peer networks (for example, ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
  • the aforementioned computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: collect voice data of at least one participating user when users interact based on the real-time interactive interface; determine the source language type of each participating user based on the voice data; convert the voice data of the participating users from the source language type to a target language type to obtain translation data; and
  • display the translation data on the target client.
  • the computer program code used to perform the operations of the present disclosure can be written in one or more programming languages or a combination thereof.
  • the above-mentioned programming languages include, but are not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in the flowchart or block diagram may represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logical function.
  • it should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in a different order from the order marked in the drawings. For example, two blocks shown one after another can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagram and/or flowchart, and the combination of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or can be realized by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure can be implemented in software or hardware.
  • the name of a unit/module does not constitute a limitation on the unit itself under certain circumstances; for example, the voice data collection module can also be described as a "data collection module".
  • exemplary types of hardware logic components that can be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium, which may contain or store a program for use by the instruction execution system, apparatus, or device or in combination with the instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • the machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any suitable combination of the foregoing.
  • machine-readable storage media would include electrical connections based on one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the foregoing.
  • Example 1 provides an interactive information processing method, which includes: collecting voice data of at least one participating user when users interact based on the real-time interactive interface; determining the source language type of each participating user based on the voice data; converting the voice data of the participating users from the source language type to a target language type to obtain translation data; and
  • displaying the translation data on the target client.
  • Example 2 provides an interactive information processing method, which further includes:
  • before the determining of the source language type of each participating user based on the voice data, performing voiceprint recognition on the voice data to determine the identity information of the participating user corresponding to the voice data; and
  • determining, according to the identity information of the participating user, the candidate source language type corresponding to the identity information, so as to determine the source language type from the candidate source language types based on the voice data.
  • Example 3 provides an interactive information processing method, which further includes:
  • determining the target language type includes at least one of the following: the language type preset on the target client is acquired as the target language type; and
  • the login address of the target client is acquired, and the target language type corresponding to the geographic location of the target client is determined based on the login address.
  • Example 4 provides an interactive information processing method, which further includes:
  • the displaying the translation data on the target client includes:
  • the participating users and the translation data corresponding to the voice data of the participating users are displayed on the display interface of the client in association with each other.
  • Example 5 provides an interactive information processing method, which further includes: regularly collecting voice data of the participating users, and updating the source language type of the participating users based on the voice data.
  • Example 6 provides an interactive information processing method, which further includes:
  • the regularly collecting of voice data of participating users and updating of the source language type of the participating users based on the voice data includes: when it is determined based on the voice data that the source language type of the participating user has changed, obtaining the voice data of the participating user within a preset duration; and
  • recognizing the source language type based on the voice data within the preset duration, so as to update the source language type of the participating user.
  • Example 7 provides an interactive information processing method, which further includes:
  • the current target language type to which the client belongs is periodically detected, and when the current target language type is different from the predetermined target language type, the target language type is updated based on the current target language type.
  • Example 8 provides an interactive information processing method, which further includes:
  • the converting of the voice data of the participating users from the source language type to the target language type to obtain the translation data includes: translating the voice data of the participating users from the source language type into translation data of multiple language types corresponding to the target language types of one or more target clients.
  • Example 9 provides an interactive information processing method, which further includes:
  • the real-time interactive interface is a video conference interactive interface, a live video interactive interface, or a group chat interactive interface.
  • Example 10 provides an interactive information processing method, which further includes:
  • the participating users include speaking users.
  • Example 11 provides an interactive information processing device, which includes:
  • the voice data collection module is used to collect voice data of at least two participating users when the user interacts based on the real-time interactive interface;
  • the source language type determining module is configured to determine the source language type of each participating user based on the voice data
  • the translation data conversion module is used to convert the voice data of the participating users from the source language type to the target language type to obtain the translation data;
  • the translation data display module is used to display the translation data on the target client.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Game Theory and Decision Science (AREA)
  • Information Transfer Between Computers (AREA)
  • Machine Translation (AREA)

Abstract

The embodiments of the present disclosure disclose an interactive information processing method, device, equipment, and medium. The method includes: collecting voice data of at least one participating user when users interact based on a real-time interactive interface; determining the source language type of each participating user based on the voice data; converting the voice data of the participating users from the source language type into a target language type to obtain translation data; and displaying the translation data on a target client. The technical solution of the embodiments of the present disclosure solves the problem that, when the languages of participating users differ considerably, the interactive content of other participating users cannot be understood and interaction therefore fails. It converts the voice data of other participating users into the target language type to obtain translation data and displays the translation data on the client for users to read, so that a user can determine the interactive content of other participating users based on the translation data, thereby improving interaction efficiency and user experience.

Description

Interactive information processing method, device, equipment, and medium
This application claims priority to the Chinese patent application No. 202010366967.7, entitled "Interactive information processing method, device, equipment, and medium", filed with the Chinese Patent Office on April 30, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present disclosure relate to the field of computer data processing technology, and in particular to an interactive information processing method, device, equipment, and medium.
Background
At present, in real-time interactive application scenarios such as Internet-based multimedia conferences and live video streaming, the server can obtain the voice information of some users and the text information published by all users, and play and display the voice information and text information after processing them.
In practice, it is inevitable that users of different languages participate in the real-time interaction at the same time, so the language type of other participating users may differ from that of the present user. When other users interact, their interactive content may therefore not be understood, making effective communication with them impossible and greatly reducing interaction efficiency and user experience.
Summary
The embodiments of the present disclosure provide an interactive information processing method, device, equipment, and medium, so as to convert the voice data of other participating users into a target language type and obtain translation data, thereby making it convenient for a user to understand the voice information of other participating users based on the translation data and improving information interaction efficiency.
In a first aspect, an embodiment of the present disclosure provides an interactive information processing method, the method including:
collecting voice data of at least one participating user when users interact based on a real-time interactive interface;
determining the source language type of each of the participating users based on the voice data;
converting the voice data of the participating users from the source language type into a target language type to obtain translation data; and
displaying the translation data on the target client.
In a second aspect, an embodiment of the present disclosure further provides an interactive information processing device, the device including:
a voice data collection module, configured to collect voice data of at least two participating users when users interact based on a real-time interactive interface;
a source language type determination module, configured to determine the source language type of each of the participating users based on the voice data;
a translation data conversion module, configured to convert the voice data of the participating users from the source language type into a target language type to obtain translation data; and
a translation data display module, configured to display the translation data on the target client.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, the electronic device including:
one or more processors; and
a storage device for storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the interactive information processing method according to any embodiment of the present disclosure.
In a fourth aspect, an embodiment of the present disclosure further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform the interactive information processing method according to any embodiment of the present disclosure.
According to the technical solution of the embodiments of the present disclosure, after the collected voice data is converted into the target language type and translation data is obtained, the translation data can be displayed intuitively on the client for users to read. This solves the technical problem that, when the language types of other participating users differ considerably from one's own, their interactive content cannot be understood, effective communication with them is impossible, and interaction efficiency is low. The voice data of other participating users is converted into the target language type to obtain the translation data, which is displayed on the client for users to read, so that a user can determine the interactive content of other participating users based on the translation data, thereby improving interaction efficiency and user experience.
Brief Description of the Drawings
The above and other features, advantages, and aspects of the embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, identical or similar reference numerals denote identical or similar elements. It should be understood that the drawings are schematic and that parts and elements are not necessarily drawn to scale.
FIG. 1 is a schematic flowchart of an interactive information processing method provided by Embodiment 1 of the present disclosure;
FIG. 2 is a schematic flowchart of an interactive information processing method provided by Embodiment 2 of the present disclosure;
FIG. 3 is a schematic flowchart of an interactive information processing method provided by Embodiment 3 of the present disclosure;
FIG. 4 is a schematic structural diagram of an interactive information processing device provided by Embodiment 4 of the present disclosure; and
FIG. 5 is a schematic structural diagram of an electronic device provided by Embodiment 5 of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although some embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit its scope of protection.
It should be understood that the steps described in the method embodiments of the present disclosure may be executed in a different order and/or in parallel. In addition, the method embodiments may include additional steps and/or omit some of the illustrated steps. The scope of the present disclosure is not limited in this respect.
As used herein, the term "include" and its variants are open-ended, i.e., "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different devices, modules, or units, and are not used to limit the order of, or interdependence between, the functions performed by these devices, modules, or units.
It should be noted that the modifiers "a/an" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
Embodiment 1
FIG. 1 is a schematic flowchart of an interactive information processing method provided by Embodiment 1 of the present disclosure. This embodiment of the present disclosure is applicable to the situation in which, in a real-time interactive application scenario supported by the Internet, the voice information exchanged by users is converted into a target language type to obtain translation data. The method may be executed by an interactive information processing device, which may be implemented in the form of software and/or hardware and, optionally, by an electronic device, which may be a mobile terminal, a PC (Personal Computer), a server, or the like. A real-time interactive application scenario is usually implemented jointly by a client and a server; the method provided in this embodiment may be executed by the client, by the server, or by both in cooperation.
As shown in FIG. 1, the method of this embodiment includes:
S110. Collect voice data of at least one participating user when users interact based on a real-time interactive interface.
Here, the real-time interactive interface is any interactive interface in a real-time interactive application scenario. Real-time interactive application scenarios can be implemented by means of the Internet and computer technology, for example as interactive applications realized through native programs or web programs. The real-time interactive interface may be an interactive interface during a video conference, an interactive interface during a live video broadcast, and/or a group chat interactive interface. In the real-time interactive interface, multiple users may be allowed to interact through various forms of interactive behavior, for example at least one of inputting text, voice, video, and the sharing of content objects. There may be multiple users participating and interacting in the real-time interactive interface, and the users who participate in the real-time interactive interface and interact are taken as the participating users. While the participating users interact, the voice information of each participating user can be collected, and the collected voice information is taken as the voice data. The voice information of a participating user may be the voice information produced when the user interacts through voice, video, and other interactive behaviors. In a possible implementation, the participating users may include a speaking user, that is, a user who participates in the real-time interactive interface and makes speech for interaction.
Specifically, when multiple users interact based on the real-time interactive interface, each participating user may trigger a voice information conversion control, and the client may generate voice information conversion request information and send it to the server. After receiving the request information, the server may collect the voice data of the participating users based on the request information. For example, during a video conference, if participating user A triggers the voice information conversion control, the server receives the request information for converting voice information and, based on the request information, starts collecting the voice data of the participating users in the video conference.
S120. Determine the source language type of each participating user based on the voice data.
Here, voice data of a preset duration can be collected, and the source language type of the participating user is determined based on this voice data; optionally, the preset duration may be, for example, 1 s to 2 s. The source language type may be the language type a participating user uses when interacting, that is, the language type corresponding to that participating user. By comparing the voice data within the preset duration pairwise with data of the preset language types, the language type corresponding to the voice data, that is, the source language type of the participating user, can be determined. Optionally, voice data within one to two seconds is obtained and compared pairwise with data of the preset language types to determine the language type corresponding to the voice data, and the language type so determined can be taken as the source language type of the participating user corresponding to the voice data.
Exemplarily, while interaction takes place based on the real-time interactive interface, the voice data of four participating users is collected; the four participating users may be marked as participating user A, participating user B, participating user C, and participating user D. By performing language type discrimination on the voice data of participating user A, it can be determined that the language type of participating user A is Chinese, and Chinese can then be taken as the source language type of participating user A; by performing language type discrimination on the voice data of participating user B, it is determined that the language type corresponding to participating user B is English, so English is the source language type of participating user B. The voice data of each participating user is processed in turn, so that the source language type of each participating user can be determined.
S130. Convert the voice data of the participating users from the source language type into the target language type to obtain translation data.
Here, the translation data may be the data obtained after translating the voice information. The language type corresponding to the translation data can be taken as the target language type; for example, if the language type corresponding to the translation data is Chinese, the target language type is Chinese. The target language type corresponds to the target client to which a participating user belongs. The target language type can be determined based on the language type of the participating user to whom the current client belongs. That is, when converting voice data into translation data, the language type of the participating user to whom the client belongs can be determined, and the language type so determined is taken as the target language type, whereby the voice data is converted into translation data of the same type as the target language type. The translation data may include each participating user, the voice data associated with the participating user, and the translation data corresponding to the voice data.
Specifically, the target language type of the target terminal to which a participating user belongs can be determined in advance; when the voice data of the participating users is collected, the voice data can be translated into the target language type to obtain the translation data.
It should be noted that there may be multiple participating users in the real-time interaction. Whenever it is detected that a participating user has triggered the voice conversion control, the server can execute the above steps, determine the target language type of the participating user who triggered the voice conversion control, and convert the collected voice data into translation data matching that target language type. Of course, if the target language type is a minority language for which the server cannot provide the corresponding translation data, the collected voice data can be converted into a common language type; for example, the collected voice data can be converted into translation data whose language type is English.
S140. Display the translation data on the target client.
Here, the client to which each participating user belongs can be taken as a target client.
Specifically, after the target language type corresponding to each client is determined, the voice data of the other participating users can be converted into the target language type to obtain the translation data, and the translation data is displayed on the target client so that the participating user corresponding to the target client can preview it. Since the translation data includes the participating users and the translations corresponding to their voice information, the speech and opinions expressed by other participating users can be understood quickly, achieving the technical effect of effective communication and interaction.
Exemplarily, the target language type of client A, to which participating user A belongs, is Chinese; after the voice data of the participating users is converted into Chinese, the Chinese translation data can be displayed on the display interface of client A.
It should be noted that the voice information of other participating users is processed and displayed only when the user has triggered the voice conversion control on the client; if another participating user has not triggered the voice conversion control, the above operations need not be performed.
It should also be noted that once it is detected that a participating user has triggered the voice conversion control and the client has collected the voice data, the voice data needs to be converted and processed into the target language type to obtain the translation data, which is then displayed on the client.
On the above basis, the translation data can be displayed in a target area of the client.
Here, the display area of the translation data can be preset, and the preset display area is taken as the target area. The target area may be, for example, an area around the main interaction area, such as the top, bottom, or side edges. For example, in a video conference scenario, the video interaction window is the main interaction area and occupies 2/3 of the screen, and the area for displaying the translation data can be the 1/3 area at the side; accordingly, the 1/3 side area is the target area, and the translation data can be displayed in it. Of course, the display mode of the translation data can be static or dynamic; optionally, dynamic display can mean displaying the translation data in the target area in the form of bullet-screen comments.
According to the technical solution of this embodiment of the present disclosure, after the collected voice data is converted into the target language type and the translation data is obtained, the translation data can be displayed intuitively on the client for users to read. This solves the technical problem that, when the language types of other participating users differ considerably from that of the present user, their interactive content cannot be understood, effective communication with them is impossible, and interaction efficiency is low. The voice data of other participating users is converted into the target language type to obtain the translation data, which is displayed on the client for users to read, so that a user can determine the interactive content of other participating users based on the translation data, thereby improving interaction efficiency and user experience.
实施例二
图2为本公开实施例二所提供的一种互动信息处理方法流程示意图。在前述实施例的基础上,在根据语音数据确定参与用户的源语种类型之前,还可以根据各参与用户的语音数据,确定对应的候选源语种类型,从而从候选源语种类型中确定出源语种类型,以提高源语种类型的确定效率。
如图2所示,所述方法包括:
S210、在用户基于实时互动界面进行互动时,采集至少一个参与用户的语音数据。
S220、对语音数据进行声纹识别,以确定语音数据所对应参与用户的身份信息。
其中,声纹识别是一种生物识别技术,用于根据参与用户的声波特性进行身份辨识。由于每个参与用户的语音具有独特的声纹,可据此来区分不同的参与用户。
具体的,在采集到语音数据后,可以对语音数据进行声波特性处理,通过处理可以确定各参与用户的身份信息,以基于身份信息确定服务端是否存储有与身份信息相对应的源语种类型。
通常,客户端会有对应的客户端账号或客户端ID,从而区分不同客户端。但是当某个客户端同时有多个用户在一起参会互动时,则多个用户无法通过客户端ID进行区分。由此,可以进一步针对各个用户的语音数据进行声纹识别,每个人的语音具有独特的声纹,可据此确定用户的身份信息。而后可标记为,例如,客户端ID-用户A、客户端ID-用户B,从而区分相同客户端下的不同参与用户。
S230: Determine, from the identity information of the participating user, the candidate source language types corresponding to the identity information, so that the source language type is determined from the candidate source language types based on the voice data.
It should be noted that while the server performs voiceprint recognition on the voice information to determine the user's identity information, it may also perform language-type recognition on the voice data to obtain the current language type corresponding to the participating user. The identity identifier corresponding to the identity information may be associated with the current language type and stored, so that when the identity information is determined, the language types associated with it can be retrieved through the identity identifier; the associated language types may be taken as the candidate source language types.
While interaction takes place on the real-time interactive interface, the server may record the language types of the different participating users. A candidate source language type may be a language type that the server has recorded in association with a given piece of identity information. For example, suppose participating user A has taken part in two real-time interactions. By performing voiceprint recognition on the voice data collected in the two interactions, participating user A can be identified, and by performing language-type discrimination on the voice data from the two interactions, the language types of participating user A are determined to be Chinese and English respectively. Chinese and English may then be associated with participating user A; that is, the candidate source languages corresponding to participating user A may be Chinese and English.
Since a language type is determined mainly by comparing two languages at a time, when no candidate source language types have been determined, language types must be selected from a large pool and matched pairwise against the voice data to determine the source language type corresponding to it, which is not only labor-intensive but also inefficient. If the candidate source language types corresponding to a participating user have been determined in advance, the source language type corresponding to the voice data can be determined from among the candidate source language types, which not only improves the efficiency of determining the source language type but also achieves the technical effect of saving resources.
It should be noted that determining the language type corresponding to voice data relies mainly on comparing two language types at a time, so determining the source language type of the voice data would otherwise require a large number of language-type comparisons. To improve the efficiency of determining the language type of the voice data, voiceprint recognition may be performed on the voice data in advance to determine the identity information of the corresponding participating user, and it may be determined whether candidate source language types corresponding to that identity information are stored, so that the source language type is determined from among the candidate source language types. This reduces the number of language-type comparisons and thus improves the efficiency of determining the source language type.
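Combining S220 and S230, the narrowed detection might be sketched as follows; the in-memory CANDIDATE_STORE is an assumption standing in for whatever persistence the server uses, and detect_source_language and participant_key are the sketches given above.

    # Assumed in-memory store mapping identity keys to previously recorded languages.
    CANDIDATE_STORE: dict[str, set[str]] = {"clientID-userA": {"zh", "en"}}

    def resolve_source_language(client_id: str, voice_data: bytes) -> str:
        identity = participant_key(client_id, voice_data)
        # Restrict the pairwise comparison to the stored candidates when they
        # exist; otherwise fall back to the full preset language list.
        candidates = CANDIDATE_STORE.get(identity) or PRESET_LANGUAGES
        lang = detect_source_language(voice_data, candidates)
        CANDIDATE_STORE.setdefault(identity, set()).add(lang)  # remember for next time
        return lang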
In this embodiment, the candidate source language types may be determined not only from identity information but also from a client identifier, such as an account. Currently, every participating user has a corresponding account, and before the real-time interaction the account generally needs to be logged in for the real-time interaction to take place. During the real-time interaction, the server may record information associated with each account. Optionally, the language types associated with the client are recorded, so that when the source language type is to be determined, the candidate source language types associated with the participating user can be determined based on the account logged in on the client, and the source language type then determined from among the candidate source language types.
S240: Convert the voice data of the participating users from the source language type into the target language type to obtain translation data.
In this embodiment, determining the target language type may include at least one of the following approaches: acquiring the language type preset on the target client as the target language type; or acquiring the login address of the target client and determining, based on the login address, the target language type corresponding to the geographic location of the target client.
In other words, the target language type may be determined in at least two ways. The first way may be as follows. In one possible implementation, when a participating user triggers the language-type conversion operation, that is, chooses which language type the translation data should be displayed in, the conversion language type may be set and the set language type taken as the target language type. By way of example, when a participating user triggers the language-type conversion control on the client, a language selection list may pop up on the client for the participating user to choose from. The participating user may select any language type; for instance, if the user taps the Chinese language type in the language selection list and clicks the confirm button, the server or client may determine that the participating user has selected the Chinese language type and take Chinese as the target language type. That is, for the current client, the voice information of each participating user may be converted into Chinese translation data and displayed on the display interface. In another possible implementation, the user may set the language type on the client in advance, for example at user registration. When the participating user triggers the language-type conversion control on the client, the client determines the target language type from the user's preset language type.
The second way may be as follows. When it is detected that a participating user has triggered the language conversion control, the login address of the client, that is, the client's IP address, may be acquired, the region to which the client belongs determined from the login address, and the language type used in that region taken as the target language type. For example, when the user triggers the language conversion control, the login address of the client is acquired; if the region to which the client belongs is determined from the login address to be China, the target language type is Chinese.
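The two approaches might be combined as in the sketch below; region_from_ip and REGION_LANGUAGE are illustrative stand-ins for a geolocation lookup and a region-to-language table.

    from typing import Optional

    REGION_LANGUAGE = {"CN": "zh", "US": "en", "JP": "ja"}  # assumed mapping
    DEFAULT_TARGET = "en"

    def region_from_ip(ip_address: str) -> Optional[str]:
        # Hypothetical geolocation lookup returning a region code such as "CN".
        raise NotImplementedError

    def resolve_target_language(preset_lang: Optional[str], ip_address: str) -> str:
        # Prefer the language type the user set on the client; otherwise infer
        # it from the region of the login (IP) address.
        if preset_lang:
            return preset_lang
        return REGION_LANGUAGE.get(region_from_ip(ip_address), DEFAULT_TARGET)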
In this embodiment, by determining the target language type corresponding to each participating user and displaying the voice information of the other participating users on the client as translation data in that target language type, the translation data better matches each participating user's reading habits, making it easier for participating users to quickly understand the interactive information of the other participating users, thereby improving interaction efficiency.
S250: Display the translation data on the target client.
Optionally, the participating users and the translation data corresponding to their voice data are displayed in association on the display interface of the client.
In other words, after the voice information is converted into the target language type, the translation data corresponding to the target language type may be associated with the participating user and pushed to the client, so that the translation data is displayed on the client.
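The associated display of S250 might be carried by a payload such as the following; all field names are illustrative rather than prescribed by the embodiments.

    from dataclasses import dataclass

    @dataclass
    class TranslationEntry:
        participant: str      # e.g. "clientID-userA"
        translated_text: str  # translation of the participant's voice data
        target_lang: str

    def build_display_payload(entries: list[TranslationEntry]) -> list[dict]:
        # Keep each participant associated with the corresponding translation,
        # so the client can render "who said what" in the target region.
        return [{"participant": e.participant,
                 "text": e.translated_text,
                 "lang": e.target_lang} for e in entries]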
In the technical solution of the embodiments of the present disclosure, voiceprint recognition is performed on the collected voice data to determine the user's identity information, and the candidate source language types associated with that identity information are then determined, so that the source language type is determined from among the candidate source language types, greatly improving the efficiency of determining the source language type.
Embodiment 3
FIG. 3 is a schematic flowchart of an interaction information processing method provided in Embodiment 3 of the present disclosure. On the basis of the foregoing embodiments, it is taken into account that during a video conference the source language type corresponding to a participating user may change, so that language-type conversion can no longer be performed correctly. The voice data of each participating user may therefore be collected periodically, and whether the corresponding source language type has changed determined from the voice data, with voice conversion then performed according to the updated source language type. Of course, whether the target language type set on each client has changed may also be checked, so that the collected voice information is converted into the updated target language type. As shown in FIG. 3, the method includes:
S310: When users interact on the basis of a real-time interactive interface, collect the voice data of at least one participating user.
S320: Determine the source language type of each participating user based on the voice data.
S330: Convert the voice data of the participating users from the source language type into the target language type to obtain translation data.
S340: Display the translation data on the target client.
S350: Periodically collect the voice data of the participating users, and update the source language types of the participating users based on the voice data.
Here, "periodically" may refer to relative points in time; for example, the source-language-type detection operation may be triggered once every set interval. If it is detected that a source language type has changed, the participating user's source language type may be updated based on the changed source language type.
By way of example, after the source language type of a participating user is determined to be Chinese, the collected voice data may be processed every ten minutes. If the processing result determines the participating user's source language type to be English, the source language type has changed; English may then be taken as the participating user's source language type, and the participating user's voice data converted from English into the target language type.
In practical application, when the source language type is determined from voice data, it is determined mainly by processing one to two seconds of voice data. It may happen that a participating user's source language type is Chinese but the interactive content includes English technical terms. Even though the participating user's source language type has not actually changed, the periodic collection may happen to capture exactly the voice data corresponding to an English term, and the processing result will then indicate that the participating user's source language type is English. To avoid this situation, optionally, when it is determined from the voice data that the participating user's source language type has changed, voice data of the participating user over a preset duration is acquired, and the source language type is recognized from the voice data over the preset duration so as to update the participating user's source language type.
In other words, if it is determined from the periodically collected voice data that a participating user's source language type has changed, voice data over a preset duration, optionally 5 s or 10 s, may be collected to further determine, based on that longer sample, whether the participating user's source language type has changed. If the voice data over the preset duration confirms the change, the source language type corresponding to the participating user is updated to the source language type determined at that point.
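This confirm-before-update logic might be sketched as follows; collect_audio and the 10 s confirmation window are assumptions, and detect_source_language is reused from the earlier sketch.

    CONFIRM_WINDOW_SECONDS = 10  # assumed preset duration for confirmation

    def collect_audio(participant: str, seconds: float) -> bytes:
        # Hypothetical helper capturing the participant's most recent audio.
        raise NotImplementedError

    def maybe_update_source_language(participant: str, current_lang: str) -> str:
        # Quick probe over a 1-2 s window, as in the periodic check.
        probe = collect_audio(participant, seconds=2)
        if detect_source_language(probe) == current_lang:
            return current_lang  # no apparent change
        # Apparent change: confirm over a longer window so that a stray
        # foreign-language term does not trigger a spurious update.
        confirmation = collect_audio(participant, seconds=CONFIRM_WINDOW_SECONDS)
        return detect_source_language(confirmation)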
This embodiment further includes: periodically detecting the current target language type to which the client is set, and, when the current target language type differs from the previously determined target language type, updating the target language type based on the current target language type.
Usually, the language type set on the client is the language type the participating user expects to be displayed, that is, the language type of the translation data. Periodically detecting the client's current target language type makes it possible to promptly determine the language type that the participating user corresponding to the client expects to see, so that the voice data of the other participating users is converted into that target language type to obtain the translation data, thereby achieving the technical effect of easy reading for the user.
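The corresponding check on the display side can be as small as the sketch below; read_client_language_setting is an assumed accessor for the language type currently set on the client.

    TARGET_LANGUAGES: dict[str, str] = {}  # target language type in effect per client

    def read_client_language_setting(client_id: str) -> str:
        # Hypothetical accessor for the language type currently set on the client.
        raise NotImplementedError

    def refresh_target_language(client_id: str) -> str:
        # Adopt the client's current setting whenever it differs from the value
        # in effect, so subsequent translations follow the new choice.
        current = read_client_language_setting(client_id)
        if TARGET_LANGUAGES.get(client_id) != current:
            TARGET_LANGUAGES[client_id] = current
        return TARGET_LANGUAGES[client_id]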
In the technical solution of the embodiments of the present disclosure, after the source language type of each participating user and the target language type of the client they belong to have been determined, the voice data and the target language type set on the client may additionally be collected periodically, so that when the source language type and/or the target language type changes, the voice data of each participating user can be promptly translated into translation data corresponding to the updated target language type, allowing users to quickly understand the interactive content of the other participating users based on the translation data and thereby improving interaction efficiency.
Embodiment 4
FIG. 4 is a schematic structural diagram of an interaction information processing apparatus provided in Embodiment 4 of the present disclosure. As shown in FIG. 4, the apparatus includes: a voice data collection module 410, a source language type determination module 420, a translation data conversion module 430 and a translation data display module 440.
The voice data collection module is configured to collect the voice data of at least two participating users when users interact on the basis of a real-time interactive interface; the source language type determination module is configured to determine the source language type of each participating user based on the voice data; the translation data conversion module is configured to convert the voice data of the participating users from the source language type into the target language type to obtain translation data; and the translation data display module is configured to display the translation data on the target client.
On the basis of the above technical solution, the source language type determination module further includes:
an identity information recognition unit, configured to perform voiceprint recognition on the voice data to determine the identity information of the participating user corresponding to the voice data; and
a candidate source language determination unit, configured to determine, from the identity information of the participating user, the candidate source language types corresponding to the identity information, so that the source language type is determined from the candidate source language types based on the voice data.
On the basis of the above technical solution, the apparatus further includes a target language type determination module, configured to: acquire the language type set on the target client as the target language type; or acquire the login address of the target client and determine, based on the login address, the target language type corresponding to the geographic location of the target client.
On the basis of the above technical solutions, the translation data display module is further configured to display, in association on the display interface of the client, the participating users and the translation data corresponding to the voice data of the participating users.
On the basis of the above technical solutions, the apparatus further includes: a periodic collection module, configured to periodically collect the voice data of the participating users and update the source language types of the participating users based on the voice data.
On the basis of the above technical solutions, the periodic collection module is further configured to periodically collect the voice data of each participating user; when it is determined from the voice data that the source language type of a participating user has changed, acquire the voice data of the participating user over a preset duration; and recognize the source language type from the voice data over the preset duration so as to update the source language type of the participating user.
On the basis of the above technical solutions, the translation data conversion module is further configured to translate the voice data of the participating users from the source language type into translation data in multiple languages corresponding to the target language types of one or more target clients.
On the basis of the above technical solutions, the real-time interactive interface is a video-conference interactive interface, a live-video interactive interface or a group-chat interactive interface.
In the technical solution of the embodiments of the present disclosure, after the collected voice data is converted into the target language type to obtain translation data, the translation data can be displayed intuitively on the client for the user to read. This solves the technical problem that, when the language type of other participating users differs greatly from the user's own, the user cannot understand the interactive content of the other participating users, cannot communicate with them effectively, and interaction efficiency is low; the voice data of the other participating users is converted into the target language type to obtain translation data, which is displayed on the client for the user to read, so that the user can determine the interactive content of the other participating users based on the translation data, thereby improving interaction efficiency and the user experience.
The interaction information processing apparatus provided in the embodiments of the present disclosure can execute the interaction information processing method provided in any embodiment of the present disclosure, and has the corresponding functional modules and beneficial effects for executing the method.
It is worth noting that the units and modules included in the above apparatus are divided only according to functional logic, but the division is not limited thereto, as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for ease of mutual distinction and are not intended to limit the protection scope of the embodiments of the present disclosure.
Embodiment 5
Reference is now made to FIG. 5, which shows a schematic structural diagram of an electronic device (for example, the terminal device or server in FIG. 5) 500 suitable for implementing the embodiments of the present disclosure. Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (Personal Digital Assistants), PADs (portable android devices, i.e. tablet computers), PMPs (Portable Media Players) and in-vehicle terminals (for example, in-vehicle navigation terminals), as well as fixed terminals such as digital TVs (televisions) and desktop computers. The electronic device shown in FIG. 5 is merely an example and should not impose any limitation on the functionality or scope of use of the embodiments of the present disclosure.
As shown in FIG. 5, the electronic device 500 may include a processing apparatus (for example, a central processing unit or graphics processor) 501, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage apparatus 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the electronic device 500. The processing apparatus 501, the ROM 502 and the RAM 503 are connected to one another via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
In general, the following apparatuses may be connected to the I/O interface 505: input apparatuses 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer and gyroscope; output apparatuses 507 including, for example, a liquid crystal display (LCD), speakers and vibrators; storage apparatuses 508 including, for example, magnetic tape and hard disks; and a communication apparatus 509. The communication apparatus 509 may allow the electronic device 500 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 5 shows the electronic device 500 with various apparatuses, it should be understood that it is not required to implement or possess all of the apparatuses shown; more or fewer apparatuses may alternatively be implemented or possessed.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network via the communication apparatus 509, or installed from the storage apparatus 508, or installed from the ROM 502. When the computer program is executed by the processing apparatus 501, the above functions defined in the method of the embodiments of the present disclosure are executed.
The electronic device provided in the embodiments of the present disclosure belongs to the same inventive concept as the interaction information processing method provided in the above embodiments. Technical details not described exhaustively in this embodiment can be found in the above embodiments, and this embodiment has the same beneficial effects as the above embodiments.
Embodiment 6
The embodiments of the present disclosure provide a computer storage medium on which a computer program is stored, the program, when executed by a processor, implementing the interaction information processing method provided in the above embodiments.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, the computer-readable storage medium may be any tangible medium containing or storing a program that can be used by, or in combination with, an instruction execution system, apparatus or device. In the present disclosure, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, which can send, propagate or transmit a program for use by, or in combination with, an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency) and so on, or any suitable combination of the above.
In some implementations, clients and servers may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication of any form or medium (for example, a communication network). Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (for example, the Internet) and peer-to-peer networks (for example, ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The above computer-readable medium may be contained in the above electronic device, or it may exist separately without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
collect the voice data of at least one participating user when users interact on the basis of a real-time interactive interface;
determine the source language type of each participating user based on the voice data;
convert the voice data of the participating users from the source language type into the target language type to obtain translation data; and
display the translation data on the target client.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or wide area network (WAN), or it may be connected to an external computer (for example, via the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, program segment or portion of code containing one or more executable instructions for implementing a specified logical function. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented with dedicated hardware-based systems that perform the specified functions or operations, or with combinations of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in software or in hardware. The name of a unit/module does not in some cases constitute a limitation on the unit itself; for example, the voice data collection module may also be described as a "data collection module".
The functions described herein above may be executed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs) and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by, or in combination with, an instruction execution system, apparatus or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared or semiconductor systems, apparatuses or devices, or any suitable combination of the above. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, [Example 1] provides an interaction information processing method, the method including:
collecting the voice data of at least one participating user when users interact on the basis of a real-time interactive interface;
determining the source language type of each participating user based on the voice data;
converting the voice data of the participating users from the source language type into a target language type to obtain translation data; and
displaying the translation data on the target client.
According to one or more embodiments of the present disclosure, [Example 2] provides an interaction information processing method, further including:
optionally, before the determining of the source language type of each participating user based on the voice data:
performing voiceprint recognition on the voice data to determine the identity information of the participating user corresponding to the voice data; and
determining, from the identity information of the participating user, the candidate source language types corresponding to the identity information, so that the source language type is determined from the candidate source language types based on the voice data.
According to one or more embodiments of the present disclosure, [Example 3] provides an interaction information processing method, further including:
optionally, determining the target language type through at least one of the following:
acquiring the language type set on the target client as the target language type; and
acquiring the login address of the target client, and determining, based on the login address, the target language type corresponding to the geographic location of the target client.
According to one or more embodiments of the present disclosure, [Example 4] provides an interaction information processing method, further including:
optionally, the displaying of the translation data on the target client includes:
displaying, in association on the display interface of the client, the participating users and the translation data corresponding to the voice data of the participating users.
According to one or more embodiments of the present disclosure, [Example 5] provides an interaction information processing method, further including:
optionally, periodically collecting the voice data of the participating users, and updating the source language types of the participating users based on the voice data.
According to one or more embodiments of the present disclosure, [Example 6] provides an interaction information processing method, further including:
optionally, the periodically collecting of the voice data of the participating users and updating of their source language types based on the voice data includes:
periodically collecting the voice data of each participating user, and, when it is determined from the voice data that the source language type of the participating user has changed, acquiring the voice data of the participating user over a preset duration; and
recognizing the source language type from the voice data over the preset duration so as to update the source language type of the participating user.
According to one or more embodiments of the present disclosure, [Example 7] provides an interaction information processing method, further including:
optionally, periodically detecting the current target language type to which the client is set, and, when the current target language type differs from the target language type, updating the target language type based on the current target language type.
According to one or more embodiments of the present disclosure, [Example 8] provides an interaction information processing method, further including:
optionally, the converting of the voice data of the participating users from the source language type into the target language type to obtain translation data includes:
translating the voice data of the participating users from the source language type into translation data in multiple languages corresponding to the target language types of one or more target clients.
According to one or more embodiments of the present disclosure, [Example 9] provides an interaction information processing method, further including:
optionally, the real-time interactive interface is a video-conference interactive interface, a live-video interactive interface or a group-chat interactive interface.
According to one or more embodiments of the present disclosure, [Example 10] provides an interaction information processing method, further including:
optionally, the participating users include speaking users.
According to one or more embodiments of the present disclosure, [Example 11] provides an interaction information processing apparatus, the apparatus including:
a voice data collection module, configured to collect the voice data of at least two participating users when users interact on the basis of a real-time interactive interface;
a source language type determination module, configured to determine the source language type of each participating user based on the voice data;
a translation data conversion module, configured to convert the voice data of the participating users from the source language type into a target language type to obtain translation data; and
a translation data display module, configured to display the translation data on the target client.
The above description is only of preferred embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by substituting the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Furthermore, although the operations are depicted in a particular order, this should not be understood as requiring that these operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments, separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims.

Claims (13)

  1. An interaction information processing method, characterized by comprising:
    collecting the voice data of at least one participating user when users interact on the basis of a real-time interactive interface;
    determining the source language type of each of the participating users based on the voice data;
    converting the voice data of the participating users from the source language type into a target language type to obtain translation data; and
    displaying the translation data on the target client.
  2. The method according to claim 1, characterized in that, before the determining of the source language type of each of the participating users based on the voice data, the method further comprises:
    performing voiceprint recognition on the voice data to determine identity information of the participating user corresponding to the voice data; and
    determining, from the identity information of the participating user, candidate source language types corresponding to the identity information, so that the source language type is determined from the candidate source language types based on the voice data.
  3. The method according to claim 1, characterized in that determining the target language type comprises at least one of the following:
    acquiring the language type set on the target client as the target language type; and
    acquiring a login address of the target client, and determining, based on the login address, the target language type corresponding to the geographic location of the target client.
  4. The method according to claim 1, characterized in that the displaying of the translation data on the target client comprises:
    displaying, in association on a display interface of the client, the participating users and the translation data corresponding to the voice data of the participating users.
  5. The method according to claim 1, characterized by further comprising:
    periodically collecting the voice data of the participating users, and updating the source language types of the participating users based on the voice data.
  6. The method according to claim 5, characterized in that the periodically collecting of the voice data of the participating users and updating of the source language types of the participating users based on the voice data comprises:
    periodically collecting the voice data of each participating user, and, when it is determined from the voice data that the source language type of the participating user has changed, acquiring the voice data of the participating user over a preset duration; and
    recognizing the source language type from the voice data over the preset duration so as to update the source language type of the participating user.
  7. The method according to claim 1, characterized by further comprising:
    periodically detecting the current target language type to which the client is set, and, when the current target language type differs from the target language type, updating the target language type based on the current target language type.
  8. The method according to claim 1, characterized in that the converting of the voice data of the participating users from the source language type into the target language type to obtain translation data comprises:
    translating the voice data of the participating users from the source language type into translation data in multiple languages corresponding to the target language types of one or more target clients.
  9. The method according to any one of claims 1-8, characterized in that the real-time interactive interface is a video-conference interactive interface, a live-video interactive interface or a group-chat interactive interface.
  10. The method according to any one of claims 1-8, characterized in that the participating users comprise speaking users.
  11. An interaction information processing apparatus, characterized by comprising:
    a voice data collection module, configured to collect the voice data of at least two participating users when users interact on the basis of a real-time interactive interface;
    a source language type determination module, configured to determine the source language type of each of the participating users based on the voice data;
    a translation data conversion module, configured to convert the voice data of the participating users from the source language type into a target language type to obtain translation data; and
    a translation data display module, configured to display the translation data on the target client.
  12. An electronic device, characterized in that the electronic device comprises:
    one or more processors; and
    a storage apparatus, configured to store one or more programs,
    which, when executed by the one or more processors, cause the one or more processors to implement the interaction information processing method according to any one of claims 1-10.
  13. A storage medium containing computer-executable instructions, the computer-executable instructions, when executed by a computer processor, being used to execute the interaction information processing method according to any one of claims 1-10.
PCT/CN2021/087097 2020-04-30 2021-04-14 Interaction information processing method and apparatus, device, and medium WO2021218631A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/882,032 US20220374618A1 (en) 2020-04-30 2022-08-05 Interaction information processing method and apparatus, device, and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010366967.7 2020-04-30
CN202010366967.7A CN113014986A (zh) Interaction information processing method and apparatus, device, and medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/882,032 Continuation US20220374618A1 (en) 2020-04-30 2022-08-05 Interaction information processing method and apparatus, device, and medium

Publications (1)

Publication Number Publication Date
WO2021218631A1 true WO2021218631A1 (zh) 2021-11-04

Family

ID=76383611

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/087097 WO2021218631A1 (zh) 2020-04-30 2021-04-14 互动信息处理方法、装置、设备及介质

Country Status (3)

Country Link
US (1) US20220374618A1 (zh)
CN (1) CN113014986A (zh)
WO (1) WO2021218631A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114915819B (zh) * 2022-03-30 2023-09-15 卡莱特云科技股份有限公司 Interactive-screen-based data interaction method, apparatus and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106682967A (zh) * 2017-01-05 2017-05-17 胡开标 Online translation chat system
US20180052831A1 (en) * 2016-08-18 2018-02-22 Hyperconnect, Inc. Language translation device and language translation method
CN108763231A (zh) * 2018-06-12 2018-11-06 深圳市合言信息科技有限公司 Method for implementing a chat room with simultaneous interpretation among multiple languages
CN108829688A (zh) * 2018-06-21 2018-11-16 北京密境和风科技有限公司 Method and apparatus for implementing cross-language interaction
CN109688363A (zh) * 2018-12-31 2019-04-26 深圳爱为移动科技有限公司 Method and system for multi-terminal multi-language private chat within a real-time video group
CN110519070A (zh) * 2018-05-21 2019-11-29 香港乐蜜有限公司 Method, apparatus and server for processing voice in a chat room

Family Cites Families (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6820055B2 (en) * 2001-04-26 2004-11-16 Speche Communications Systems and methods for automated audio transcription, translation, and transfer with text display software for manipulating the text
US7035804B2 (en) * 2001-04-26 2006-04-25 Stenograph, L.L.C. Systems and methods for automated audio transcription, translation, and transfer
US20020169592A1 (en) * 2001-05-11 2002-11-14 Aityan Sergey Khachatur Open environment for real-time multilingual communication
US7849144B2 (en) * 2006-01-13 2010-12-07 Cisco Technology, Inc. Server-initiated language translation of an instant message based on identifying language attributes of sending and receiving users
US9101279B2 (en) * 2006-02-15 2015-08-11 Virtual Video Reality By Ritchey, Llc Mobile user borne brain activity data and surrounding environment data correlation system
WO2007105193A1 (en) * 2006-03-12 2007-09-20 Nice Systems Ltd. Apparatus and method for target oriented law enforcement interception and analysis
US20080263132A1 (en) * 2007-04-23 2008-10-23 David Saintloth Apparatus and method for efficient real time web language translations
US7953590B2 (en) * 2007-10-02 2011-05-31 International Business Machines Corporation Using separate recording channels for speech-to-speech translation systems
WO2009073194A1 (en) * 2007-12-03 2009-06-11 Samuel Joseph Wald System and method for establishing a conference in tow or more different languages
US8473555B2 (en) * 2009-05-12 2013-06-25 International Business Machines Corporation Multilingual support for an improved messaging system
US9710429B1 (en) * 2010-11-12 2017-07-18 Google Inc. Providing text resources updated with translation input from multiple users
US8849628B2 (en) * 2011-04-15 2014-09-30 Andrew Nelthropp Lauder Software application for ranking language translations and methods of use thereof
US20140343994A1 (en) * 2011-07-21 2014-11-20 Parlant Technology, Inc. System and method for enhanced event participation
US8832301B2 (en) * 2011-07-21 2014-09-09 Parlant Technology System and method for enhanced event participation
EP2774053A4 (en) * 2011-09-09 2015-11-18 Google Inc USER INTERFACE FOR A TRANSLATION WEB PAGE
US9245254B2 (en) * 2011-12-01 2016-01-26 Elwha Llc Enhanced voice conferencing with history, language translation and identification
US9110891B2 (en) * 2011-12-12 2015-08-18 Google Inc. Auto-translation for multi user audio and video
US9280610B2 (en) * 2012-05-14 2016-03-08 Apple Inc. Crowd sourcing information to fulfill user requests
US9672209B2 (en) * 2012-06-21 2017-06-06 International Business Machines Corporation Dynamic translation substitution
US20140035823A1 (en) * 2012-08-01 2014-02-06 Apple Inc. Dynamic Context-Based Language Determination
US20130238311A1 (en) * 2013-04-21 2013-09-12 Sierra JY Lou Method and Implementation of Providing a Communication User Terminal with Adapting Language Translation
US9635392B2 (en) * 2014-04-16 2017-04-25 Sony Corporation Method and system for displaying information
WO2016020920A1 (en) * 2014-08-05 2016-02-11 Speakez Ltd. Computerized simultaneous interpretation system and network facilitating real-time calls and meetings
US10133740B2 (en) * 2015-08-07 2018-11-20 Samsung Electronics Co., Ltd. Translation apparatus and control method thereof
US10162308B2 (en) * 2016-08-01 2018-12-25 Integem Inc. Methods and systems for photorealistic human holographic augmented reality communication with interactive control in real-time
US10957083B2 (en) * 2016-08-11 2021-03-23 Integem Inc. Intelligent interactive and augmented reality based user interface platform
US9836458B1 (en) * 2016-09-23 2017-12-05 International Business Machines Corporation Web conference system providing multi-language support
US20200125643A1 (en) * 2017-03-24 2020-04-23 Jose Rito Gutierrez Mobile translation application and method
JP7197259B2 (ja) * 2017-08-25 2022-12-27 Panasonic Intellectual Property Corporation of America Information processing method, information processing apparatus and program
CN110730952B (zh) * 2017-11-03 2021-08-31 腾讯科技（深圳）有限公司 Method and system for processing audio communication over a network
US10757148B2 (en) * 2018-03-02 2020-08-25 Ricoh Company, Ltd. Conducting electronic meetings over computer networks using interactive whiteboard appliances and mobile devices
GB201804073D0 (en) * 2018-03-14 2018-04-25 Papercup Tech Limited A speech processing system and a method of processing a speech signal
US20210365641A1 (en) * 2018-06-12 2021-11-25 Langogo Technology Co., Ltd Speech recognition and translation method and translation apparatus
US20200137224A1 (en) * 2018-10-31 2020-04-30 International Business Machines Corporation Comprehensive log derivation using a cognitive system
GB2582910A (en) * 2019-04-02 2020-10-14 Nokia Technologies Oy Audio codec extension
US20220245574A1 (en) * 2019-11-05 2022-08-04 Strong Force Vcn Portfolio 2019, Llc Systems, Methods, Kits, and Apparatuses for Digital Product Network Systems and Biology-Based Value Chain Networks
US11783135B2 (en) * 2020-02-25 2023-10-10 Vonage Business, Inc. Systems and methods for providing and using translation-enabled multiparty communication sessions

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180052831A1 (en) * 2016-08-18 2018-02-22 Hyperconnect, Inc. Language translation device and language translation method
CN106682967A (zh) * 2017-01-05 2017-05-17 胡开标 在线翻译聊天系统
CN110519070A (zh) * 2018-05-21 2019-11-29 香港乐蜜有限公司 用于对聊天室内语音进行处理的方法、装置和服务器
CN108763231A (zh) * 2018-06-12 2018-11-06 深圳市合言信息科技有限公司 一种多国语言同声传译的聊天室实现方法
CN108829688A (zh) * 2018-06-21 2018-11-16 北京密境和风科技有限公司 跨语种交互的实现方法和装置
CN109688363A (zh) * 2018-12-31 2019-04-26 深圳爱为移动科技有限公司 多终端多语言实时视频群内私聊的方法及系统

Also Published As

Publication number Publication date
US20220374618A1 (en) 2022-11-24
CN113014986A (zh) 2021-06-22

Similar Documents

Publication Publication Date Title
WO2018010682A1 Live streaming method, live streaming data stream display method and terminal
US20160255399A1 (en) Tv program identification method, apparatus, terminal, server and system
WO2021218981A1 Method, apparatus, device and medium for generating interaction records
JP6990772B2 Information push method, storage medium, terminal device and server
US12001478B2 (en) Video-based interaction implementation method and apparatus, device and medium
WO2021143338A1 Method, apparatus, terminal, server and storage medium for loading a live-streaming-room page
WO2018095219A1 Media information processing method and apparatus
WO2015043547A1 (en) A method, device and system for message response cross-reference to related applications
WO2022042609A1 Method, apparatus, electronic device and medium for extracting hot words
WO2015062224A1 (en) Tv program identification method, apparatus, terminal, server and system
US11758087B2 (en) Multimedia conference data processing method and apparatus, and electronic device
WO2021218794A1 Information sharing method, apparatus, electronic device and storage medium
WO2021197161A1 Icon updating method, apparatus and electronic device
US20220391058A1 (en) Interaction information processing method and apparatus, electronic device and storage medium
WO2020124966A1 Program search method, apparatus, device and medium
WO2023124767A1 Document-sharing-based prompting method, apparatus, device and medium
WO2021218612A1 Information switching and sharing method, apparatus, electronic device and storage medium
US11818491B2 (en) Image special effect configuration method, image recognition method, apparatus and electronic device
CN111818383B Video data generation method, system, apparatus, electronic device and storage medium
WO2021218631A1 Interaction information processing method and apparatus, device, and medium
WO2021143317A1 Invitation link information processing method, apparatus, electronic device and readable medium
WO2020233171A1 Playlist switching method, apparatus, system, terminal and storage medium
CN116418711A Service gateway testing method, device, storage medium and product
WO2023088461A1 Image processing method, apparatus, electronic device and storage medium
WO2024032111A9 Data processing method, apparatus, device, medium and product for online conferences

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21795360

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09/03/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21795360

Country of ref document: EP

Kind code of ref document: A1