CN106790460B - Voice data interaction method and device and file server - Google Patents


Info

Publication number
CN106790460B
CN106790460B (application number CN201611123864.8A)
Authority
CN
China
Prior art keywords
client
voice data
file server
url
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611123864.8A
Other languages
Chinese (zh)
Other versions
CN106790460A (en)
Inventor
陈康明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN201611123864.8A priority Critical patent/CN106790460B/en
Publication of CN106790460A publication Critical patent/CN106790460A/en
Priority to PCT/CN2017/115208 priority patent/WO2018103735A1/en
Application granted granted Critical
Publication of CN106790460B publication Critical patent/CN106790460B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/131 Protocols for games, networked simulations or virtual reality
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/85 Providing additional services to players
    • A63F13/87 Communicating with other players during game play, e.g. by e-mail or chat
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]

Abstract

The invention provides a voice data interaction method, a voice data interaction device, and a file server, and relates to the field of mobile-terminal games. The method comprises the following steps: a first client records voice data and transmits the voice data to the file server for storage; the file server generates a URL matching the voice data and sends the URL to the first client; the first client generates an identification text from the URL and sends it, via the game server, to a second client; after recognizing the identification text, the second client sends a voice-data acquisition request to the file server and downloads the voice data from it. Because the game server and the file server are separate and the voice data never passes through the game server, the burden on the game server is reduced: sending voice data occupies none of its bandwidth and does not affect game quality. Moreover, the scheme requires no changes to the original game program, the user interface, or the game server, so it is easy to integrate and use.

Description

Voice data interaction method and device and file server
Technical Field
The invention relates to the field of mobile terminal games, in particular to a voice data interaction method and device and a file server.
Background
With the current boom in mobile-terminal games, a large number of massively multiplayer online role-playing games (MMORPGs) and multiplayer online battle arena (MOBA) games have appeared, and these games are favored by a large number of players.
However, in a mobile environment, such as while walking, riding a bus, or taking the subway, communicating by text input is very inconvenient; the exchange of information between friends is time-consuming and cannot meet real-time requirements.
Existing mobile-terminal games integrate a third-party SDK (such as iFlytek's) into the game and improve communication efficiency through voice input, which saves the trouble of typing and makes the game more engaging. However, when the game calls the third-party SDK to record, store, and send voice, the third-party SDK process may stay resident in the client system, occupying system resources and draining the battery quickly; the mobile phone's CPU and memory may even be consumed so fast that, in severe cases, the game client crashes, seriously affecting the player's experience.
Disclosure of Invention
One object of the present invention is to provide a voice data interaction method that alleviates the problems described above.
Another object of the present invention is to provide a voice data interaction apparatus that alleviates the problems described above.
Another object of the present invention is to provide a file server that alleviates the problems described above.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
A first embodiment of the present invention provides a voice data interaction method, applied to a file server communicatively connected to a first client and a second client, where the first client and the second client are also communicatively connected to a game server. The method includes: storing voice data sent by the first client; generating a URL matching the voice data; sending the URL to the first client; receiving a request sent by the second client, where the request is generated by the second client from an identification text, the identification text is generated by the first client from the URL and sent to the game server, and the game server forwards the identification text to the second client; and sending the voice data to the second client according to the request.
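The file-server side of this method can be sketched as follows. This is a minimal in-memory Python sketch: the class name, the base host `files.example.com`, and the storage scheme are illustrative assumptions, not details from the patent.

```python
import uuid


class FileServerSketch:
    """Minimal in-memory stand-in for the patent's file server."""

    BASE = "http://files.example.com"  # hypothetical host, for illustration only

    def __init__(self):
        self._store = {}  # maps URL path -> stored voice bytes

    def store_voice_data(self, voice_bytes):
        # Store the recording and generate a URL matching it,
        # then return that URL to the first client.
        path = "/voice/" + uuid.uuid4().hex
        self._store[path] = voice_bytes
        return self.BASE + path

    def handle_get(self, url):
        # Serve a second client's request for the voice data matching the URL;
        # returns None if no stored data matches.
        path = url[len(self.BASE):]
        return self._store.get(path)
```

A second client holding the returned URL can then fetch exactly the bytes the first client uploaded, without the game server ever touching the voice data.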
A second embodiment of the present invention provides a voice data interaction method, applied to a first client communicatively connected to a file server and a game server, where the file server and the game server are also communicatively connected to a second client. The method includes: recording voice data; sending the voice data to the file server; receiving a URL sent by the file server, where the URL is generated by the file server to match the voice data after the voice data is stored; generating an identification text from the URL; and sending the identification text to the game server.
A third embodiment of the present invention provides a voice data interaction method, applied to a second client communicatively connected to a file server and a game server, where the file server and the game server are also communicatively connected to a first client. The method includes: receiving an identification text sent by the game server, where the identification text is generated by the first client from a URL sent by the file server and then sent to the game server, the URL is generated by the file server to match voice data sent by the first client, and the voice data is recorded by the first client, sent to the file server, and stored there; and acquiring the voice data from the file server according to the identification text.
A fourth embodiment of the present invention provides a voice data interaction apparatus, applied to a file server communicatively connected to a first client and a second client, where the first client and the second client are also communicatively connected to a game server. The apparatus includes: a storage module for storing voice data sent by the first client; a first generation module for generating a URL (Uniform Resource Locator) matching the voice data; a first sending module for sending the URL to the first client; and a first receiving module for receiving a request sent by the second client, where the request is generated by the second client from an identification text, the identification text is generated by the first client from the URL and sent to the game server, and the game server forwards the identification text to the second client. The first sending module is further configured to send the voice data to the second client according to the request.
A fifth embodiment of the present invention provides a voice data interaction apparatus, applied to a first client communicatively connected to a file server and a game server, where the file server and the game server are also communicatively connected to a second client. The apparatus includes: a recording module for recording voice data; a second sending module for sending the voice data to the file server; a second receiving module for receiving a URL sent by the file server, where the URL is generated by the file server to match the voice data after the voice data is stored; and a second generation module for generating an identification text from the URL. The second sending module is further configured to send the identification text to the game server.
A sixth embodiment of the present invention provides a voice data interaction apparatus, applied to a second client communicatively connected to a file server and a game server, where the file server and the game server are also communicatively connected to a first client. The apparatus includes: a third receiving module for receiving an identification text sent by the game server, where the identification text is generated by the first client from a URL sent by the file server and then sent to the game server, the URL is generated by the file server to match voice data sent by the first client, and the voice data is recorded by the first client, sent to the file server, and stored there; and an acquisition module for acquiring the voice data from the file server according to the identification text.
An embodiment of the present invention further provides a file server, communicatively connected to a first client and a second client, where the first client and the second client are also communicatively connected to a game server. The file server includes: a first memory; a first processor; and a voice data interaction device installed in the first memory and comprising one or more software function modules executed by the first processor. The voice data interaction device comprises: a storage module for storing voice data sent by the first client; a first generation module for generating a URL (Uniform Resource Locator) matching the voice data; a first sending module for sending the URL to the first client; and a first receiving module for receiving a request sent by the second client, where the request is generated by the second client from an identification text, the identification text is generated by the first client from the URL and sent to the game server, and the game server forwards the identification text to the second client. The first sending module is further configured to send the voice data to the second client according to the request.
Compared with the prior art, in the voice data interaction method, apparatus, and file server provided by the invention, the first client records voice data and transmits it to the file server for storage. The file server generates a URL matching the voice data and sends the URL to the first client. The first client generates an identification text from the URL and sends it, via the game server, to the second client; after recognizing the identification text, the second client sends a request for the voice data to the file server and downloads it from there. In this way the game server is separated from the file server, and the voice data never passes through the game server, reducing its burden: sending voice data occupies none of its bandwidth and does not affect game quality. The scheme also requires no changes to the original game program, the user interface, or the game server, so it is easy to integrate and use.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 shows a schematic diagram of an application environment of the present invention.
Fig. 2 is a block diagram of a file server according to a preferred embodiment of the present invention.
Fig. 3 is a block diagram of a client according to a preferred embodiment of the present invention.
Fig. 4 is a flowchart of a voice data interaction method according to a first embodiment of the present invention.
Fig. 5 is a flowchart of a voice data interaction method according to a second embodiment of the present invention.
Fig. 6 is a flowchart of a voice data interaction method according to a third embodiment of the present invention.
Fig. 7 is a flowchart of the step, in the method of Fig. 6, of acquiring the voice data from the file server according to the identification text.
Fig. 8 is a block diagram of a voice data interaction apparatus according to a fourth embodiment of the present invention.
Fig. 9 is a block diagram of a voice data interaction apparatus according to a fifth embodiment of the present invention.
Fig. 10 is a block diagram of a voice data interaction apparatus according to a sixth embodiment of the present invention.
Fig. 11 is a block diagram of the acquisition module of fig. 10.
Reference numerals: 100: file server; 111: first memory; 112: first processor; 113: first communication unit; 200: client; 211: second memory; 212: storage controller; 213: second processor; 214: peripheral interface; 215: input/output unit; 216: audio unit; 217: display unit; 218: radio frequency unit; 219: second communication unit; 220: first client; 230: second client; 300: network; 400: game server; 500: voice data interaction device; 501: storage module; 502: first generation module; 503: first sending module; 504: first receiving module; 600: voice data interaction device; 601: recording module; 602: second sending module; 603: second receiving module; 604: second generation module; 605: third receiving module; 606: query module; 607: acquisition module; 6071: acquisition sub-module; 6072: generation sub-module; 6073: sending sub-module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Unless otherwise specified, the embodiments of the present invention described below apply to the environment shown in Fig. 1. As shown in Fig. 1, the file server 100 and the game server 400 are each communicatively connected to the client 200 through a wired or wireless network 300.
In this embodiment there are a plurality of file servers 100, which together with a location server form a distributed file-storage architecture that employs a Content Delivery Network (CDN). In this distributed architecture, the file servers 100 share the storage load and the location server locates stored information, which improves the reliability, availability, and access efficiency of the system and makes it easy to expand. The CDN avoids, as far as possible, the bottlenecks and links on the Internet that can affect transmission speed and stability, making content delivery faster and more stable. The CDN is a layer of intelligent virtual network built on the existing network 300 by placing file servers 100 at nodes throughout the network 300; based on comprehensive information such as network traffic, each node's connectivity and load, the distance to the user, and response time, it redirects a user's request or voice data in real time to the file server 100 node closest to that user. The aim is to let users store and retrieve content nearby, relieving Internet congestion and improving the response speed of access to the file server 100.
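The node-selection idea behind this CDN redirection can be sketched as a small scoring function. The metric names and the weighting below are illustrative assumptions; the patent only says that comprehensive information such as load, distance, and response time is considered.

```python
def pick_node(nodes):
    """Choose the CDN file-server node with the best combined score.

    `nodes` is a list of dicts with hypothetical metrics:
    "load" (0..1), "distance_km", and "response_ms".
    Lower score is better; the weights are arbitrary for illustration.
    """
    def score(n):
        return (0.5 * n["load"]
                + 0.3 * (n["distance_km"] / 100)
                + 0.2 * (n["response_ms"] / 10))
    return min(nodes, key=score)
```

A real CDN would measure these quantities continuously and re-route requests as conditions change; the sketch only shows the selection step.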
Fig. 2 is a block diagram of the file server 100. The file server 100 comprises a voice data interaction device 500, a first memory 111, a first processor 112 and a first communication unit 113.
The first memory 111, the first processor 112 and the first communication unit 113 are electrically connected to each other directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The voice data interaction device 500 includes at least one software function module which can be stored in the first memory 111 in the form of software or firmware (firmware) or solidified in an Operating System (OS) of the file server 100. The first processor 112 is used for executing executable modules stored in the first memory 111, such as software functional modules and computer programs included in the voice data interaction device 500.
The first memory 111 may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The first memory 111 is used for storing programs and voice data; the first processor 112 executes the programs after receiving an execution instruction. The first communication unit 113 is used to establish a communication connection between the file server 100 and the client 200 via the network 300 and to send and receive data through the network 300.
Referring to Fig. 3, Fig. 3 is a block diagram of the client 200 shown in Fig. 1. The client 200 may be, but is not limited to: a personal computer (PC), tablet computer, smartphone, e-reader, laptop computer, game console, in-vehicle terminal, and the like. The client 200 includes a voice data interaction apparatus 600, a second memory 211, a storage controller 212, a second processor 213, a peripheral interface 214, an input/output unit 215, an audio unit 216, a display unit 217, a radio frequency unit 218, and a second communication unit 219.
The second memory 211, the memory controller 212, the second processor 213, the peripheral interface 214, the input/output unit 215, the audio unit 216, the display unit 217, the radio frequency unit 218, and the second communication unit 219 are electrically connected to each other directly or indirectly to implement data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The voice data interaction device 600 includes at least one software function module which can be stored in the second memory 211 in the form of software or firmware (firmware) or solidified in an Operating System (OS) of the client 200. The second memory 211 stores a mobile game application client or a browser capable of logging in a game web page, which is downloaded and installed by the client 200 from the game server 400. The second processor 213 is used for executing executable modules stored in the second memory 211, such as software functional modules and computer programs included in the voice data interaction device 600.
The second memory 211 may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The second memory 211 is used for storing programs; the second processor 213 executes the programs after receiving execution instructions. Access to the second memory 211 by the second processor 213, and possibly other components, may be under the control of the storage controller 212.
The second processor 213 may be an integrated circuit chip having signal processing capabilities. The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The peripheral interface 214 couples various input/output devices (e.g., the input/output unit 215, the audio unit 216, the display unit 217, and the radio frequency unit 218) to the second processor 213 and the second memory 211. In some embodiments, the peripheral interface 214, the second processor 213, and the storage controller 212 may be implemented in a single chip. In other embodiments, they may each be implemented in a separate chip.
The input/output unit 215 is used to provide user input data so as to realize the user's interaction with the client 200. The input/output unit 215 may be, but is not limited to, a virtual keyboard, a voice input circuit, and the like.
The audio unit 216 provides an audio interface to the user, which may include one or more microphones, one or more speakers, and audio circuitry.
The display unit 217 provides an interactive interface (e.g., a user operation interface) between the client 200 and a user, or is used to display image data. In this embodiment, the display unit 217 may be a liquid crystal display or a touch display. A touch display may be a capacitive or resistive touch screen supporting single-point and multi-point touch operation, meaning that it can sense touch operations produced at one or more positions on the display and pass the sensed touch operations to the processor for computation and processing.
The radio frequency unit 218 is configured to receive and transmit radio wave signals (e.g., electromagnetic waves), performing the interconversion between radio waves and electrical signals so as to enable wireless communication between the client 200 and the network 300 or other communication devices.
The second communication unit 219 is configured to establish a connection with the first communication unit 113 of the file server 100 through the network 300, so as to implement a communication connection between the file server 100 and the client 200. For example, the second communication unit 219 may connect to the network 300 by using the radio frequency signal transmitted by the radio frequency unit 218, and further establish a communication connection with the first communication unit 113 of the file server 100 through the network 300.
In the embodiment of the present invention, the client 200 used to input voice data is the first client 220, and a client 200 granted permission by the first client 220 to receive the voice data is a second client 230. The first client 220 and the second client 230 may be any type of client 200 on which a mobile game application client, or a browser capable of logging in to the game web page, can be installed. The first client 220 is the client 200 used by a user who needs to send voice data to other players in the mobile-terminal game. The other players are the users authorized by the user of the first client 220 to receive the voice data, so each second client 230 is a client 200 used by one of those players. For example, the user of the first client 220 may determine who is authorized to receive the voice data by choosing a chat channel. The chat channels may include a private chat channel, a region channel, a team channel, and a guild channel. If the private chat channel is selected, the user chosen for the chat is authorized to receive the voice data; that user may be a friend whose friendship with the first client 220 is stored in the game server 400. If the region channel is selected, all users in the same game scene as the user of the first client 220 are authorized to receive the voice data. If the team channel is selected, the users in the same team as the user of the first client 220 are authorized to receive the voice data. If the guild channel is selected, the users in the same guild as the user of the first client 220 are authorized to receive the voice data.
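The permission rules above can be sketched as a small lookup. The `world` dictionary and its field names are hypothetical, introduced only for illustration; the patent does not specify how game state is stored.

```python
def recipients_for_channel(channel, sender, world):
    """Return the set of users authorized to receive the sender's voice data,
    based on the chat channel the sender chose.

    `world` is a hypothetical snapshot of game state:
      "chat_targets": sender -> list of private-chat users
      "scene_of" / "team_of" / "guild_of": user -> scene / team / guild id
    """
    if channel == "private":
        return set(world["chat_targets"][sender])
    if channel == "region":
        return {u for u, scene in world["scene_of"].items()
                if scene == world["scene_of"][sender] and u != sender}
    if channel == "team":
        return {u for u, t in world["team_of"].items()
                if t == world["team_of"][sender] and u != sender}
    if channel == "guild":
        return {u for u, g in world["guild_of"].items()
                if g == world["guild_of"][sender] and u != sender}
    raise ValueError("unknown channel: " + channel)
```

The game server would evaluate such a rule when deciding which second clients 230 should receive the identification text.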
In this embodiment of the present invention, the first client 220 and the second client 230 may each establish a communication connection with the file server 100, and likewise with the game server 400, over the network 300. Specifically, the network 300 may be Wi-Fi, a 2G network, a 3G network, a 4G network, or a local area network.
The game server 400 in this embodiment may include a plurality of servers such as a WEB server and an authentication server, or may be one server.
First embodiment
Fig. 4 is a flowchart of a voice data interaction method applied to the file server 100 shown in Fig. 1, according to a preferred embodiment of the present invention. The voice data interaction method comprises the following steps:
step S101, storing the voice data sent by the first client 220.
In this embodiment, the voice data stored in the file server 100 is the voice data that the user of the first client 220 needs to send to the user of the second client 230, and may be recorded by the audio unit 216 of the first client 220. The voice data may be audio information, such as a sentence spoken by the user and picked up by the audio unit 216 of the first client 220. After the voice data is recorded, the first client 220 transmits it to the file server 100 through the network 300, and the file server 100 stores it in the first memory 111.
Step S102, generating a Uniform Resource Locator (URL) matching the voice data.
The location where the voice data is stored in the file server 100 corresponds to a unique URL, so the file server 100 generates the corresponding URL after storing the voice data. A URL has the format protocol://server-address:port/path, e.g., http://www.ucly.net/.
Step S103, sending the URL to the first client 220.
In this embodiment, the file server 100 sends the URL to the first client 220. After receiving the URL, the first client 220 generates an identification text from it. The identification text is text data; since the original game program already supports transmitting text data between the first client 220 and the game server 400 and between the second client 230 and the game server 400, using the identification text requires no modification of the original game program or the game server 400. The identification text may be, but is not limited to, the URL itself or identification-code text data. The first client 220 generates the identification text and transmits it to the game server 400, and the game server 400 forwards it to the second clients 230 according to the voice-data-receiving permission specified by the first client 220.
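A minimal sketch of generating and recognizing such an identification text, assuming a simple marker-prefix scheme; the `[voice]` marker is an illustrative assumption, since the patent does not specify the encoding:

```python
VOICE_TAG = "[voice]"  # illustrative marker, not from the patent


def make_identification_text(url):
    # First client: embed the URL in an ordinary chat text message.
    return VOICE_TAG + url


def parse_identification_text(text):
    # Second client: detect the marker and recover the URL,
    # or return None if the message is ordinary chat text.
    if text.startswith(VOICE_TAG):
        return text[len(VOICE_TAG):]
    return None
```

Because the result is plain text, it travels over the existing text-chat path between the clients and the game server 400 unchanged.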
Step S104, receiving the request sent by the second client 230.
In this embodiment, the file server 100 receives the request sent by the second client 230. The request is generated by the second client 230 from the identification text: the second client 230 identifies the identification text, obtains the URL, and generates the request according to the URL. The request asks the file server 100 for the voice data stored there that matches the URL. The request comprises a GET request, i.e., a request asking a server for data. The parameters of a GET request are passed after the URL, so the request may be, but is not limited to, composed of a URL and a GET request.
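The request described above, a GET request whose parameters follow the URL, can be sketched as a plain HTTP/1.1 request line. The host and path below are made-up examples, not values from the patent.

```python
from urllib.parse import urlparse

# Sketch of the GET request the second client could issue; the host and
# path are made-up examples. Per HTTP, any GET parameters follow the URL.
def build_get_request(url: str) -> str:
    parts = urlparse(url)
    path = parts.path or "/"
    if parts.query:
        path += "?" + parts.query
    return f"GET {path} HTTP/1.1\r\nHost: {parts.netloc}\r\n\r\n"

req = build_get_request("http://files.example.com/voice/abc123.amr")
```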
Step S105, sending the voice data to the second client 230 according to the request.
In this embodiment, after receiving the request, the file server 100 matches the corresponding voice data stored in the file server 100 according to the URL, and sends the voice data to the second client 230.
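The matching in step S105 can be sketched as a lookup keyed on the URL path. The in-memory dictionary standing in for storage in the first memory 111, and all names here, are assumptions for illustration.

```python
from typing import Optional

# In-memory sketch of step S105: look up the voice data matching the URL
# path in the request and return it. The dict stands in for storage in
# the first memory 111; all names here are assumptions.
storage: dict = {}

def store(path: str, data: bytes) -> None:
    storage[path] = data

def handle_get(path: str) -> Optional[bytes]:
    # Returns the matching clip, or None if no stored data matches the URL.
    return storage.get(path)

store("/voice/abc123.amr", b"voice-bytes")
```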
Second embodiment
Referring to fig. 5, a flowchart of a voice data interaction method applied to the client 200 as the first client 220 shown in fig. 1 according to a preferred embodiment of the present invention is shown. The method comprises the following steps:
step S201 records voice data.
In this embodiment, the audio unit 216 of the first client 220 records the voice data.
Step S202, sending the voice data to the file server 100.
In this embodiment, the first client 220 transmits the recorded voice data to the file server 100 through the network 300. The voice data is stored in the first memory 111 by the file server 100.
Step S203, receiving the URL sent by the file server 100.
In this embodiment, when the voice data is stored in the file server 100, the file server 100 generates a corresponding, unique URL according to the storage location and sends the URL to the first client 220, which receives it.
And step S204, generating an identification text according to the URL.
In this embodiment, the first client 220 generates the identification text according to the URL. The identification text may be URL text or identification code text data, but is not limited thereto.
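One hypothetical form of the identification text is the URL wrapped in a marker tag so it passes through the game server as ordinary chat text. The [voice]...[/voice] tag is an assumption for illustration, not specified by the patent.

```python
from typing import Optional

# Hypothetical encoding of the identification text: the URL wrapped in a
# marker tag so it travels through the game server as plain chat text.
# The [voice]...[/voice] tag is an assumption for illustration.
def make_identification_text(url: str) -> str:
    return f"[voice]{url}[/voice]"

def parse_identification_text(text: str) -> Optional[str]:
    # The second client recovers the URL; ordinary chat yields None.
    if text.startswith("[voice]") and text.endswith("[/voice]"):
        return text[len("[voice]"):-len("[/voice]")]
    return None
```

Because the identification text is ordinary text data, a scheme like this needs no change to the existing text-chat path between the clients and the game server.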
Step S205, sending the identification text to the game server 400.
In this embodiment, the first client 220 sends the identification text, together with the user-specified permission for receiving the voice data, to the game server 400, so that the game server 400 transmits the identification text to the second client 230 according to that permission, causing the second client 230 to obtain the URL from the identification text, generate a request, and use the request to acquire the matching voice data from the file server 100.
Third embodiment
Referring to fig. 6, a flowchart of a voice data interaction method applied to the client 200 as the second client 230 shown in fig. 1 according to a preferred embodiment of the present invention is shown. The method comprises the following steps:
step S301, receiving the identification text sent by the game server 400.
In this embodiment, the identification text is generated by the first client 220 according to the URL sent by the file server 100 and then sent to the game server 400; the URL is generated by the file server 100 to match the voice data sent by the first client 220; and the voice data is recorded by the first client 220, sent to the file server 100, and stored by the file server 100. After receiving the identification text, the game server 400 sends it to the second client 230. When the second client 230 receives the identification text, it sends a prompt to the user. The prompt may be, but is not limited to, information shown to the user through the display unit 217. For example, the display unit 217 displays information such as "A voice message has been received."
Step S302, inquiring whether the second client 230 accesses Wi-Fi.
In this embodiment, after step S302 is completed, the process proceeds to step S303. Specifically, when the second client 230 has not accessed Wi-Fi, the flow proceeds to sub-step S3031 of step S303; when the second client 230 has accessed Wi-Fi, the flow proceeds to sub-step S3032 of step S303. This helps the user save mobile data traffic.
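The branch of steps S302/S303 can be sketched as follows: download immediately on Wi-Fi, otherwise ask the user first. The callables are hypothetical stand-ins for the client's UI prompt and download code.

```python
from typing import Callable, Optional

# Sketch of the S302/S303 branch: download immediately on Wi-Fi;
# otherwise ask the user first, to save mobile data. The callables are
# hypothetical stand-ins for the client's UI prompt and download code.
def fetch_voice(on_wifi: bool,
                confirm: Callable[[], bool],
                download: Callable[[], bytes]) -> Optional[bytes]:
    if on_wifi:
        return download()      # sub-step S3032: generate request and fetch
    if confirm():              # sub-step S3031: user confirms on mobile data
        return download()
    return None                # user declined; no traffic spent
```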
Step S303, acquiring the voice data from the file server 100 according to the identification text.
In this embodiment, as shown in fig. 7, step S303 further includes the following sub-steps:
in sub-step S3031, the second client 230 obtains a confirmation instruction triggered by the user.
In this embodiment, the second client 230 may receive, but is not limited to, a confirmation instruction triggered through the input and output unit 215 or a confirmation instruction triggered through the display unit 217.
Substep S3032, generating a request according to the recognition text.
In this embodiment, after the second client 230 identifies the identification text, it obtains the URL and generates a request according to the URL. The request may be, but is not limited to, composed of a URL and a GET request.
Substep S3033, sending said request to said file server 100.
In this embodiment, when the request is sent to the file server 100, the parameters of the GET request are passed after the URL, so that the file server 100 can accurately identify that the second client 230 needs to obtain the voice data matching the URL.
Substep S3034, downloading the voice data matched by the file server 100 according to the URL in the request.
Fourth embodiment
Referring to fig. 8, a voice data interaction apparatus 500 applied to the file server 100 shown in fig. 1 according to a preferred embodiment of the present invention is shown, where the voice data interaction apparatus 500 includes: the device comprises a storage module 501, a first generation module 502, a first sending module 503 and a first receiving module 504.
The storage module 501 is configured to store the voice data sent by the first client 220.
In the embodiment of the present invention, the step S101 may be executed by the storage module 501.
A first generating module 502, configured to generate a URL matching the voice data.
In the embodiment of the present invention, the step S102 may be performed by the first generating module 502.
A first sending module 503, configured to send the URL to the first client 220.
In the embodiment of the present invention, the step S103 may be performed by the first sending module 503.
A first receiving module 504, configured to receive the request sent by the second client 230. Specifically, the request is generated by the second client 230 according to an identification text, the identification text is generated by the first client 220 according to the URL and then sent to the game server 400, and the game server 400 sends the identification text to the second client 230.
In this embodiment of the present invention, the step S104 may be executed by the first receiving module 504.
The first sending module 503 may be further configured to send the voice data to the second client 230 according to the request.
In the embodiment of the present invention, the step S105 may be performed by the first sending module 503.
Fifth embodiment
Referring to fig. 9, a voice data interaction apparatus 600 applied to the client 200 shown in fig. 1 as the first client 220 is shown in an embodiment of the present invention, where the voice data interaction apparatus 600 includes a recording module 601, a second sending module 602, a second receiving module 603, and a second generating module 604.
The recording module 601 is configured to record voice data.
In this embodiment of the present invention, the step S201 may be executed by the recording module 601.
A second sending module 602, configured to send the voice data to the file server 100.
In this embodiment of the present invention, the step S202 may be executed by the second sending module 602.
A second receiving module 603, configured to receive a URL sent by the file server 100, where the URL is generated by the file server 100 to match the voice data after the voice data is stored in the file server 100.
In this embodiment of the present invention, the step S203 may be performed by the second receiving module 603.
A second generating module 604, configured to generate the identification text according to the URL.
In this embodiment of the present invention, the step S204 may be performed by the second generating module 604.
The second sending module 602 is further configured to send the identification text to the game server 400.
In this embodiment of the present invention, the step S205 may also be executed by the second sending module 602.
Sixth embodiment
Referring to fig. 10, a voice data interaction apparatus 600 applied to the client 200 shown in fig. 1 as the second client 230 is shown in the preferred embodiment of the present invention, and the voice data interaction apparatus 600 includes a third receiving module 605, a query module 606 and an obtaining module 607.
A third receiving module 605, configured to receive the identification text sent by the game server 400. The identification text is generated by the first client 220 according to the URL sent by the file server 100 and then sent to the game server 400; the URL is generated by the file server 100 to match the voice data sent by the first client 220; and the voice data is sent to the file server 100 and then stored by the file server 100.
In this embodiment of the present invention, the step S301 may also be executed by the third receiving module 605.
The query module 606 is configured to query whether the second client 230 accesses Wi-Fi.
In this embodiment of the present invention, the step S302 may also be executed by the query module 606.
An obtaining module 607, configured to obtain the voice data from the file server 100 according to the identification text.
In this embodiment of the present invention, the step S303 may also be executed by the obtaining module 607.
Further, as shown in fig. 11, the obtaining module 607 includes a generating sub-module 6072, a sending sub-module 6073, and an obtaining sub-module 6071.
An obtaining sub-module 6071, configured to obtain a confirmation instruction triggered by a user using the second client 230.
In this embodiment of the present invention, the step S3031 may be executed by the obtaining sub-module 6071. Specifically, when the query module 606 determines that the second client 230 has not accessed Wi-Fi, the obtaining sub-module 6071 obtains the confirmation instruction triggered by the user.
A generating submodule 6072 is configured to generate a request according to the identification text, where the request may include a GET request.
In this embodiment of the present invention, the step S3032 may be performed by the generating sub-module 6072. Specifically, when the query module 606 determines that the second client 230 has accessed Wi-Fi, the generating sub-module 6072 generates the request according to the identification text; when the query module 606 determines that the second client 230 has not accessed Wi-Fi, the generating sub-module 6072 generates the request according to the identification text after the obtaining sub-module 6071 has obtained the confirmation instruction triggered by the user.
A sending submodule 6073, configured to send the request to the file server 100.
In this embodiment of the present invention, the step S3033 may be performed by the sending submodule 6073.
The obtaining sub-module 6071 is further configured to download the voice data matched by the file server 100 according to the URL in the request.
In this embodiment of the present invention, the step S3034 may also be executed by the obtaining sub-module 6071.

In summary, in the voice data interaction method, the voice data interaction device, and the file server provided by the present invention, the first client records voice data and transmits it to the file server for storage. The file server generates a URL matching the voice data and sends the URL to the first client. The first client generates an identification text according to the URL and sends it, through the game server, to the second client; after identifying the identification text, the second client sends a request for the voice data to the file server and downloads the voice data from it. The game server is thus separated from the file server, so sending voice data occupies no game-server bandwidth and does not affect game quality; and because the scheme requires no changes to the original game program, user interface, or game server, it is easy to integrate and use.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (14)

1. A voice data interaction method is applied to a file server which is in communication connection with a first client and a second client respectively, the first client and the second client are also in communication connection with a game server respectively, and the voice data interaction method comprises the following steps:
storing voice data sent by a first client;
generating a URL matched with the voice data;
sending the URL to the first client;
receiving a request sent by the second client, wherein the request is generated by the second client according to an identification text of a text data type, the identification text is generated by the first client according to the URL and then sent to the game server, and the identification text is sent to the second client by the game server;
and sending the voice data to the second client according to the request.
2. The voice data interaction method of claim 1, wherein the recognition text comprises URL text or recognition code text data.
3. The voice data interaction method of claim 1, wherein the request comprises a GET request.
4. The voice data interaction method of claim 1, wherein the file server is a server of a distributed file storage architecture.
5. A voice data interaction method is applied to a first client which is respectively in communication connection with a file server and a game server, the file server and the game server are also respectively in communication connection with a second client, and the voice data interaction method comprises the following steps:
recording voice data;
sending the voice data to the file server;
receiving a URL sent by the file server, wherein the URL is generated by the file server after the voice data is stored in the file server according to the voice data matching;
generating an identification text of the text data type according to the URL;
and sending the identification text to the game server.
6. The voice data interaction method of claim 5, wherein the recognition text comprises URL text or recognition code text data.
7. A voice data interaction method is applied to a second client which is respectively in communication connection with a file server and a game server, the file server and the game server are also respectively in communication connection with a first client, and the voice data interaction method comprises the following steps:
receiving an identification text of a text data type sent by the game server, wherein the identification text is generated by the first client according to a URL sent by the file server and then sent to the game server, the URL is generated by the file server according to the matching of voice data sent by the first client, and the voice data is recorded by the first client, sent to the file server and then stored by the file server;
and acquiring the voice data from the file server according to the identification text.
8. The voice-data interaction method of claim 7, wherein the method further comprises:
before the voice data is acquired from the file server according to the identification text, whether the second client is accessed to Wi-Fi is inquired;
when the second client accesses Wi-Fi, the acquiring the voice data from the file server according to the identification text comprises: generating a request according to the identification text, wherein the request comprises a GET request; sending the request to the file server; downloading the voice data matched by the file server according to the URL in the request;
when the second client does not access Wi-Fi, the acquiring the voice data from the file server according to the identification text comprises: acquiring a confirmation instruction triggered by a user; generating the request according to the identification text; sending the request to the file server; and downloading the voice data matched by the file server according to the URL in the request.
9. A voice data interaction device is applied to a file server which is respectively in communication connection with a first client and a second client, the first client and the second client are also respectively in communication connection with a game server, and the voice data interaction device comprises:
the storage module is used for storing voice data sent by the first client;
the first generation module is used for generating a URL (uniform resource locator) matched with the voice data;
the first sending module is used for sending the URL to the first client;
the first receiving module is used for receiving a request sent by the second client, the request is generated by the second client according to an identification text of a text data type, the identification text is generated by the first client according to the URL and then sent to the game server, and the identification text is sent to the second client by the game server;
the first sending module is further configured to send the voice data to the second client according to the request.
10. A voice data interaction device is applied to a first client which is respectively in communication connection with a file server and a game server, the file server and the game server are also respectively in communication connection with a second client, and the voice data interaction device comprises:
the recording module is used for recording voice data;
the second sending module is used for sending the voice data to the file server;
the second receiving module is used for receiving the URL sent by the file server, wherein the URL is generated by the file server after the voice data is stored in the file server according to the voice data matching;
the second generation module is used for generating an identification text of the text data type according to the URL;
the second sending module is further configured to send the identification text to the game server.
11. A voice data interaction device is applied to a second client end which is respectively in communication connection with a file server and a game server, the file server and the game server are also respectively in communication connection with a first client end, and the voice data interaction device comprises:
the third receiving module is used for receiving an identification text of a text data type sent by the game server, wherein the identification text is generated by the first client according to a URL sent by the file server and then sent to the game server, the URL is generated by the file server according to matching of voice data sent and stored by the first client, and the voice data is sent to the file server and then stored by the file server;
and the acquisition module is used for acquiring the voice data from the file server according to the identification text.
12. The voice data interaction device of claim 11, wherein the obtaining module comprises:
the generation submodule is used for generating a request according to the identification text, wherein the request comprises a GET request;
the sending submodule is used for sending the request to the file server;
the obtaining submodule is used for obtaining a confirmation instruction triggered by a user using the second client side, and downloading the voice data matched by the file server according to the URL in the request.
13. The voice data interaction apparatus of claim 12, wherein the apparatus further comprises a query module configured to query whether the second client accesses Wi-Fi;
when the inquiry module inquires that the second client accesses Wi-Fi, the generation sub-module executes the request generated according to the identification text;
and when the inquiry module inquires that the second client does not access Wi-Fi, the acquisition sub-module executes a confirmation instruction triggered by the acquisition user.
14. A file server, wherein the file server is in communication connection with a first client and a second client, respectively, and the first client and the second client are also in communication connection with a game server, respectively, the file server comprising:
a first memory;
a first processor; and
a voice data interaction device installed in the first memory and including one or more software function modules executed by the first processor, the voice data interaction device comprising:
the storage module is used for storing voice data sent by the first client;
the first generation module is used for generating a URL (uniform resource locator) matched with the voice data;
the first sending module is used for sending the URL to the first client;
the first receiving module is used for receiving a request sent by the second client, the request is generated by the second client according to an identification text of a text data type, the identification text is generated by the first client according to the URL and then sent to the game server, and the identification text is sent to the second client by the game server;
the first sending module is further configured to send the voice data to the second client according to the request.
CN201611123864.8A 2016-12-08 2016-12-08 Voice data interaction method and device and file server Active CN106790460B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201611123864.8A CN106790460B (en) 2016-12-08 2016-12-08 Voice data interaction method and device and file server
PCT/CN2017/115208 WO2018103735A1 (en) 2016-12-08 2017-12-08 Method, device, and file server for voice data exchange

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611123864.8A CN106790460B (en) 2016-12-08 2016-12-08 Voice data interaction method and device and file server

Publications (2)

Publication Number Publication Date
CN106790460A CN106790460A (en) 2017-05-31
CN106790460B true CN106790460B (en) 2020-06-30

Family

ID=58881703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611123864.8A Active CN106790460B (en) 2016-12-08 2016-12-08 Voice data interaction method and device and file server

Country Status (2)

Country Link
CN (1) CN106790460B (en)
WO (1) WO2018103735A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106790460B (en) * 2016-12-08 2020-06-30 阿里巴巴(中国)有限公司 Voice data interaction method and device and file server
CN108854062B (en) * 2018-06-24 2019-08-09 广州银汉科技有限公司 A kind of voice-enabled chat module of moving game
CN109302473A (en) * 2018-09-28 2019-02-01 重庆赢者科技有限公司 A kind of voice SMS transmission system and method
CN109308893A (en) * 2018-10-25 2019-02-05 珠海格力电器股份有限公司 Method for sending information and device, storage medium, electronic device
CN111128184B (en) * 2019-12-25 2022-09-02 思必驰科技股份有限公司 Voice interaction method and device between devices
CN111885130B (en) * 2020-07-10 2023-06-30 深圳市瑞立视多媒体科技有限公司 Voice communication method, device, system, equipment and storage medium
CN114598679B (en) * 2022-02-17 2024-02-06 宏图智能物流股份有限公司 Single-bin in-platform voice transmission method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102185862A (en) * 2011-05-13 2011-09-14 北京朗玛数联科技有限公司 Communication method, device and system of online game system
CN104811911A (en) * 2015-03-25 2015-07-29 广州多益网络科技有限公司 Chatting method and system of mobile phone game
CN105554112A (en) * 2015-11-09 2016-05-04 广州多益网络科技有限公司 Chatting emoticon transmission method and system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080002380A (en) * 2006-06-30 2008-01-04 주식회사 케이티 Method for providing game service based on voice
TWI437503B (en) * 2010-11-26 2014-05-11 Inst Information Industry Figure and figure developing system
CN103501316B (en) * 2013-09-16 2017-02-08 天脉聚源(北京)传媒科技有限公司 Audio and video synchronization method, system and device between webpage game client terminals
CN104052846B (en) * 2014-06-30 2016-06-22 腾讯科技(深圳)有限公司 Game application in voice communication method and system
CN105049319B (en) * 2015-05-25 2018-09-18 腾讯科技(深圳)有限公司 Good friend's adding method and system, client and server
CN105743897A (en) * 2016-02-01 2016-07-06 上海龙游网络科技有限公司 Internet audio real-time synchronous transmission system and method
CN106790460B (en) * 2016-12-08 2020-06-30 阿里巴巴(中国)有限公司 Voice data interaction method and device and file server

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102185862A (en) * 2011-05-13 2011-09-14 北京朗玛数联科技有限公司 Communication method, device and system of online game system
CN104811911A (en) * 2015-03-25 2015-07-29 广州多益网络科技有限公司 Chatting method and system of mobile phone game
CN105554112A (en) * 2015-11-09 2016-05-04 广州多益网络科技有限公司 Chatting emoticon transmission method and system

Also Published As

Publication number Publication date
WO2018103735A1 (en) 2018-06-14
CN106790460A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN106790460B (en) Voice data interaction method and device and file server
EP3389230B1 (en) System for providing dialog content
US11213743B2 (en) Method, system and electronic device for achieving remote control of computer game by game controller
US11153620B2 (en) Media broadcasting method, server, terminal device, and storage medium
US10789614B2 (en) Method and system for issuing recommended information
US10264053B2 (en) Method, apparatus, and system for data transmission between multiple devices
US8905763B1 (en) Managing demonstration sessions by a network connected demonstration device and system
CN103563415A (en) Over-the-air device configuration
WO2020186928A1 (en) Data sharing method and apparatus, electronic device and computer-readable storage medium
KR20140123076A (en) A device control method and apparatus
CN106649446B (en) Information pushing method and device
WO2014187321A1 (en) Method and system for information push
US10333915B2 (en) Customization of user account authentication
JP2017045462A (en) System and method for authenticating user by using contact list
CN112312222A (en) Video sending method and device and electronic equipment
CN109428908B (en) Information display method, device and equipment
WO2018137528A1 (en) Method and device for accessing resource
US20140041054A1 (en) Attestation of possession of media content items using fingerprints
US20170374489A1 (en) Mobile ghosting
CN110138887B (en) Data processing method, device and storage medium
US20240031630A1 (en) Platform-agnostic media framework
CN110915187B (en) Information recommendation method and related equipment
CN106331887B (en) Calling method of webpage player, playing method and device of multimedia file
CN103634348A (en) Terminal device and method for releasing information
KR101170322B1 (en) Method and device for providing cloud computing service using personal computer based on web

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200604

Address after: 310051 room 508, floor 5, building 4, No. 699, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba (China) Co.,Ltd.

Address before: 510000 Guangdong city of Guangzhou province Whampoa Tianhe District Road No. 163 Xiping Yun Lu Yun Ping square B radio tower 13 layer self unit 02 (only for office use)

Applicant before: GUANGZHOU UCWEB COMPUTER TECHNOLOGY Co.,Ltd.

GR01 Patent grant