CN110215707B - Method and device for voice interaction in game, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110215707B
Authority
CN
China
Prior art keywords
voice
mark
voice interaction
response
user interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910631683.3A
Other languages
Chinese (zh)
Other versions
CN110215707A (en)
Inventor
崔晓菁
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN201910631683.3A
Publication of CN110215707A
Application granted
Publication of CN110215707B
Legal status: Active


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/424 Processing input control signals of video game devices by mapping the input signals into game commands, involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
    • A63F13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5372 Indicators for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A63F13/5375 Indicators for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
    • A63F13/54 Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • A63F13/87 Communicating with other players during game play, e.g. by e-mail or chat
    • A63F2300/308 Details of the user interface
    • A63F2300/6072 Methods for processing data by generating or executing the game program for sound processing of an input signal, e.g. pitch and rhythm extraction, voice recognition
    • A63F2300/6081 Methods for processing data by generating or executing the game program for sound processing generating an output signal, e.g. under timing constraints, for spatialization

Abstract

An embodiment of the invention provides a method and a device for voice interaction in a game. The method comprises: determining a mark position in response to a first touch operation of an operating medium on the graphical user interface; recording voice data to obtain a marked voice in response to a second touch operation of the operating medium on the graphical user interface; generating a voice interaction mark from the mark position, the marked voice, and a first game account corresponding to the first terminal; and synchronizing the voice interaction mark to a second terminal, so that the second terminal displays the voice interaction mark on its own graphical user interface and plays the marked voice in response to a third touch operation on the mark. Because clues can be marked with voice at any position, players can discuss the clues they have found in a targeted way, the efficiency of describing clue information is improved, and the game proceeds smoothly.

Description

Method and device for voice interaction in game, electronic equipment and storage medium
Technical Field
The present invention relates to the field of games, and in particular, to a method and apparatus for voice interaction in a game, an electronic device, and a storage medium.
Background
At present, in multiplayer tabletop-RPG-style ("running group") online games, several players often need to search for clues and then piece their respective findings together through communication in order to advance the game.
In existing games of this kind, however, a player can only find clues in pictures or text and discuss them with other players over group voice chat. This form of communication cannot point other players directly and purposefully at a specific clue: real-time voice alone is a very limited medium, clue information points are easily omitted or described inefficiently, and the smooth progress of the game suffers.
Disclosure of Invention
In view of the foregoing, the present invention provides a method, a device, an electronic device, and a storage medium for in-game voice interaction that overcome, or at least partially solve, the above problems.
To solve the above problems, the invention discloses a method of voice interaction in a game, applied to a first terminal. A graphical user interface is obtained by executing a software application on a processor of the first terminal and rendering it on a touch display of the first terminal. The method comprises:
determining a mark position in response to a first touch operation of an operating medium on the graphical user interface;
recording voice data to obtain a marked voice in response to a second touch operation of the operating medium on the graphical user interface;
generating a voice interaction mark from the mark position, the marked voice, and a first game account, wherein the first game account corresponds to the first terminal;
synchronizing the voice interaction mark to a second terminal, so that the second terminal can display the voice interaction mark on its graphical user interface and play the marked voice in response to a third touch operation on the voice interaction mark.
Preferably, a virtual object exists in the graphical user interface, and when the virtual object is text, the step of determining the mark position in response to the first touch operation of the operating medium on the graphical user interface comprises:
generating an underline mark at the text in response to a long-press operation of the operating medium on the text;
and box-selecting a target keyword or target sentence from the text in response to a drag operation of the operating medium on the underline mark, and determining the position of the target keyword or target sentence as the mark position.
Preferably, a virtual object exists in the graphical user interface, and when the virtual object is a picture, the step of determining the mark position in response to the first touch operation of the operating medium on the graphical user interface comprises:
determining a target picture area from the picture in response to a click operation of the operating medium on the picture;
and determining the position of the target picture area as the mark position.
Preferably, the method further comprises:
generating a recording icon corresponding to the mark position;
wherein the second touch operation is a touch operation acting on the recording icon.
Preferably, the recording icon has a corresponding record button, and the step of recording voice data to obtain the marked voice in response to the second touch operation of the operating medium on the graphical user interface comprises:
recording voice data in response to a long-press operation of the operating medium on the record button;
and generating the marked voice from the voice data in response to a release operation of the operating medium on the record button.
Preferably, the method further comprises:
and adjusting the position of the recording icon and/or the voice interaction mark in the graphical user interface in response to a fourth touch operation of the operating medium on the recording icon and/or the voice interaction mark.
Preferably, after the voice interaction mark is generated from the mark position, the marked voice, and the first game account, the method further comprises:
displaying the voice interaction mark, when it is hidden, in response to a click operation of the operating medium on the underlined target keyword or target sentence;
and hiding the voice interaction mark, when it is displayed, in response to a click operation of the operating medium on the underlined target keyword or target sentence.
Preferably, the method further comprises:
and converting the voice data into text and displaying the text in the graphical user interface in response to a fifth touch operation of the operating medium on the voice interaction mark.
Preferably, the voice interaction mark comprises one or more of the following:
a position indication object, a voice indication object, and a first game account indication object.
Preferably, the method further comprises:
providing an undo area above said record button;
and, while the record button is being long-pressed, withdrawing the recorded voice data in response to the operating medium sliding up into the undo area and being released.
Preferably, the method further comprises:
providing a deletion area in the graphical user interface;
moving the voice interaction mark to the deletion area in response to a sixth touch operation of the operating medium on the voice interaction mark;
and deleting the voice interaction mark in response to a release operation of the operating medium on the voice interaction mark.
Preferably, the method further comprises:
and playing the marked voice in response to a seventh touch operation of the operating medium on the voice interaction mark.
The embodiment of the invention also discloses a device for voice interaction in a game, applied to a first terminal. A graphical user interface is obtained by executing a software application on a processor of the first terminal and rendering it on a touch display of the first terminal. The device comprises:
a first response module, configured to determine a mark position in response to a first touch operation of an operating medium on the graphical user interface;
a second response module, configured to record voice data to obtain a marked voice in response to a second touch operation of the operating medium on the graphical user interface;
a voice interaction mark generation module, configured to generate a voice interaction mark from the mark position, the marked voice, and a first game account, wherein the first game account corresponds to the first terminal;
and a voice interaction mark synchronization module, configured to synchronize the voice interaction mark to a second terminal, so that the second terminal can display the voice interaction mark on its graphical user interface and play the marked voice in response to a third touch operation on the voice interaction mark.
The embodiment of the invention also provides an electronic device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the above method of voice interaction in a game.
The embodiment of the invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above method of voice interaction in a game.
The invention has the following advantages:
In the embodiment of the invention, a mark position is determined in response to a first touch operation of the operating medium on the graphical user interface, and voice data is recorded to obtain a marked voice in response to a second touch operation of the operating medium on the graphical user interface. A voice interaction mark is generated from the mark position, the marked voice, and the first game account and synchronized to the second terminal, so that the second terminal displays the voice interaction mark on its graphical user interface and plays the marked voice in response to a third touch operation on the voice interaction mark.
Drawings
To illustrate the technical solutions of the present invention more clearly, the drawings needed for the description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; a person skilled in the art may derive other drawings from them without inventive effort.
FIG. 1 is a flow chart of steps of a method for in-game voice interaction provided by an embodiment of the present invention;
FIG. 2 is a flow chart of steps of another method of in-game voice interaction provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of an embodiment of the present invention for recording voice data;
FIG. 4 is a schematic diagram of cancelling a voice recording in progress provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating a method for adjusting the position of a recording icon and/or a voice interactive mark according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a method for converting voice data into text according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of deleting a voice interaction mark according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of playing voice data according to an embodiment of the present invention;
fig. 9 is a block diagram of a device for voice interaction in a game according to an embodiment of the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description. It will be apparent that the described embodiments are some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a flowchart illustrates the steps of a method of voice interaction in a game provided by an embodiment of the present invention. The method is applied to a first terminal, and a graphical user interface is obtained by executing a software application on a processor of the first terminal and rendering it on a touch display of the first terminal. The method specifically includes the following steps:
Step 101, determining a mark position in response to a first touch operation of the operating medium on the graphical user interface;
As an example, the operating medium may be a finger, a stylus, gaze input in VR glasses, and so on, performing touch operations on a mobile terminal or a VR (virtual) interface, for example a click or slide operation on the touch display of the mobile terminal.
The game application runs on the mobile terminal, and a graphical user interface is rendered on its touch display. The displayed content of the graphical user interface at least partially includes a local or complete game scene, and the specific form of the game scene can be square or another shape (such as a circle).
In response to a first touch operation performed by the player with the operating medium on the graphical user interface, the position of the mark is determined from the position of that touch operation.
Step 102, recording voice data to obtain a marked voice in response to a second touch operation of the operating medium on the graphical user interface;
After the first touch operation, the player can control the operating medium to perform a second touch operation. In response to the second touch operation, the player records voice data, and once recording is finished the voice data is used to generate the marked voice.
Step 103, generating a voice interaction mark from the mark position, the marked voice, and a first game account, wherein the first game account corresponds to the first terminal;
The first terminal can log in to the server with the first game account. After the marked voice has been generated, a voice interaction mark is generated from the mark position determined earlier by the player, the marked voice, and the first game account.
Step 104, synchronizing the voice interaction mark to a second terminal, so that the second terminal can display the voice interaction mark on its graphical user interface and play the marked voice in response to a third touch operation on the voice interaction mark;
The second terminal is connected to the first terminal through the server. After the voice interaction mark is generated, it is synchronized to the second terminal and displayed on the second terminal's graphical user interface. A player can then control an operating medium to perform a third touch operation on that interface, and the marked voice is played on the second terminal in response.
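As a rough illustration of steps 101 to 104, the sketch below models a first terminal generating a voice interaction mark and a server relaying it to a second terminal. All names (`VoiceMark`, `GameServer`, `Terminal`) and the in-memory relay are assumptions for illustration, not an implementation given in the patent:

```python
from dataclasses import dataclass

@dataclass
class VoiceMark:
    """A voice interaction mark: mark position + marked voice + authoring account."""
    position: tuple   # mark position in the game scene, e.g. (x, y)
    audio: bytes      # the recorded marked voice
    account: str      # first game account that created the mark

class GameServer:
    """Relays marks from the first terminal to every other connected terminal."""
    def __init__(self):
        self.terminals = []

    def sync(self, mark, sender):
        for t in self.terminals:
            if t is not sender:
                t.received.append(mark)   # second terminal will display the mark

class Terminal:
    def __init__(self, account, server):
        self.account, self.server = account, server
        self.received = []
        server.terminals.append(self)

    def create_mark(self, position, audio):   # steps 101-103
        mark = VoiceMark(position, audio, self.account)
        self.server.sync(mark, self)          # step 104: synchronize
        return mark

    def play(self, mark):                     # third touch operation on the mark
        return mark.audio

server = GameServer()
first, second = Terminal("player-1", server), Terminal("player-2", server)
mark = first.create_mark(position=(120, 45), audio=b"clue-under-the-painting")
```

A second terminal receiving the mark can then play the marked voice by passing it to `play`, mirroring the third touch operation described above.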
In the embodiment of the invention, a mark position is determined in response to a first touch operation of the operating medium on the graphical user interface, and voice data is recorded to obtain a marked voice in response to a second touch operation of the operating medium on the recording icon of the graphical user interface. A voice interaction mark is generated from the mark position, the marked voice, and the first game account, and is synchronized to the second terminal, so that the second terminal can display the voice interaction mark on its graphical user interface and play the marked voice in response to a third touch operation on the mark. Because clues can be marked with voice at any position of a virtual object in the graphical user interface, players can discuss the clues they have found in a targeted way, the efficiency of describing clue information points is improved, and the game proceeds more smoothly.
Referring to fig. 2, a flowchart illustrates the steps of another method of voice interaction in a game provided by an embodiment of the present invention. The method is applied to a first terminal, and a graphical user interface is obtained by executing a software application on a processor of the first terminal and rendering it on a touch display of the first terminal. The method may specifically include the following steps:
Step 201, a virtual object exists in the graphical user interface, and a mark position is determined in response to a first touch operation of the operating medium on the graphical user interface;
In a preferred embodiment of the present invention, when the virtual object is text, the first touch operation of the operating medium on the graphical user interface is a long-press operation, and step 201 may include the following sub-steps:
generating an underline mark at the text in response to a long-press operation of the operating medium on the text;
and box-selecting a target keyword or target sentence from the text in response to a drag operation of the operating medium on the underline mark, and determining the position of the target keyword or target sentence as the mark position.
In the embodiment of the invention, the virtual object may be text, specifically information preset by the game developer that appears in text form in a given game scene, for example text on a wall in the game scene or text displayed on a book prop; the embodiment of the invention is not limited in this respect.
The first touch operation is a long-press operation, and the target keyword or target sentence is text the player considers a clue helpful for advancing the game. Specifically, an underline is generated at the text in response to a long-press operation of the operating medium on the target keyword or target sentence; the target keyword or target sentence is then selected by dragging the underline, and once the selection is complete, the position of the selected target keyword or target sentence is determined as the mark position. For example, when text from the game scene appears on the touch display of the mobile terminal and the player believes it contains a clue needed to play the game, i.e. a target keyword or target sentence, the player can use an operating medium such as a finger or a stylus to long-press the text to be marked on the touch display. An underline is generated at the long-pressed position in the game scene, the player box-selects a text range by dragging the underline, and the position of the selected text range is determined as the mark position.
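The underline-and-drag selection can be sketched as index arithmetic over the scene text. The function name and the inclusive drag-endpoint convention are assumptions for illustration:

```python
def select_mark_position(text, press_index, drag_to_index):
    """Sketch of underline-and-drag selection: a long press at press_index
    starts the underline, and dragging extends it to drag_to_index (either
    direction). Returns the selected target keyword/sentence and its
    character range (start, end) as the mark position."""
    start, end = sorted((press_index, drag_to_index))
    end += 1  # treat the drag endpoint as included in the selection
    return text[start:end], (start, end)

scene_text = "The key is hidden inside the old clock"
keyword, mark_position = select_mark_position(scene_text, 4, 6)
```

Here the player long-presses at index 4 and drags to index 6, selecting the word "key"; the returned range would serve as the mark position attached to the voice interaction mark.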
In another preferred embodiment of the present invention, when the virtual object is a picture, the first touch operation of the operating medium on the virtual object is a click operation, and step 201 may include the following sub-steps:
determining a target picture area from the picture in response to a click operation of the operating medium on the picture;
and determining the position of the target picture area as the mark position.
In the embodiment of the present invention, the virtual object may be a picture, specifically information preset by the game developer that appears in picture form in a given game scene, for example a picture on a wall or a picture on a desktop in the game scene; the embodiment of the invention is not limited in this respect.
The first touch operation is a click operation, and the target picture area is the area the player considers to contain a game clue. Specifically, in response to a click operation of the operating medium on the picture, the clicked area is determined as the target picture area, and the position of the target picture area is determined as the mark position. For example, when a picture from the game scene appears on the touch display of the first terminal and the player believes part of it contains a clue, that part is the target picture area; the player can use an operating medium such as a finger or a stylus to click the area of the picture to be marked, and the position of the clicked target picture area is determined as the mark position.
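One simple way to turn a click into a target picture area is to centre a box on the click and clamp it to the picture bounds. The fixed box size is an assumption; the patent does not specify how the area is sized:

```python
def target_region_from_click(click, picture_size, region=(80, 80)):
    """Sketch: derive a target picture area as a fixed-size box centred on
    the click, clamped so it stays inside the picture. The box's top-left
    corner plus its size serve as the mark position."""
    w, h = picture_size
    rw, rh = region
    x = min(max(click[0] - rw // 2, 0), w - rw)  # clamp horizontally
    y = min(max(click[1] - rh // 2, 0), h - rh)  # clamp vertically
    return (x, y, rw, rh)

# A click near the left edge of a 640x480 picture: the box is clamped to x=0.
area = target_region_from_click(click=(30, 400), picture_size=(640, 480))
```

The returned tuple `(x, y, width, height)` could then be stored as the mark position of the voice interaction mark.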
Step 202, generating a recording icon corresponding to the mark position;
after the marker location is determined, a sound recording icon is generated at the marker location.
Step 203, responding to the second contact operation of the operation medium for the recording icon, and recording voice data to obtain marked voice;
in the embodiment of the invention, the second contact operation is a long-press operation, a recording button corresponding to the recording icon appears in a game scene at the same time when the recording icon is generated, a player can control an operation medium to press the recording button displayed on the touch display for a long time, the recording function of the mobile terminal device is started, voice data is recorded, when the player wants to end voice data recording, the operation medium on the recording button is loosened, and in response to the loosening operation of the operation medium on the recording button, the recorded voice data is adopted to generate marked voice.
When the player wants to record voice data, recording starts by long-pressing the record button; when recording is finished, the player releases the finger from the record button, and the marked voice is generated, as shown on the right side of fig. 3. It should be noted that the position of the record button in the game scene can be set according to actual requirements, which is not limited in the embodiment of the present invention.
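The long-press recording lifecycle described above can be sketched as a small state machine: pressing the record button starts capture, releasing it finalizes the marked voice. All names below (`MarkRecorder`, `feed`) are illustrative assumptions, and actual audio capture is stubbed out.

```python
class MarkRecorder:
    """Minimal sketch of the long-press recording flow: press() starts
    capture, feed() stands in for microphone callbacks, release()
    finalizes the marked voice."""

    def __init__(self):
        self.recording = False
        self._frames = []

    def press(self):
        # long-press on the record button starts a fresh recording
        self.recording = True
        self._frames = []

    def feed(self, frame):
        # frames are only accepted while the button is held down
        if self.recording:
            self._frames.append(frame)

    def release(self):
        # releasing the button finalizes the "marked voice"
        self.recording = False
        return list(self._frames)
```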
During recording, the player may be dissatisfied with the voice data being recorded and need to cancel it, so the embodiment of the invention may further comprise the following steps:
providing a revocation area above said record button;
when the operation medium performs a long-press operation on the recording button, cancelling the voice data being recorded in response to the operation medium sliding upwards from the recording button to the revocation area and being released.
As shown in fig. 4, a revocation area is provided above the record button. While long-pressing the record button, the player can cancel the voice data being recorded by sliding the operation medium up to the revocation area and releasing it.
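The slide-up-to-cancel gesture reduces to comparing the release position with the revocation area. A minimal sketch, assuming screen coordinates where y grows downward (so sliding up means a smaller y); the function name and threshold convention are illustrative.

```python
def resolve_release(release_y, revoke_area_bottom_y):
    """Decide whether releasing the record button commits or revokes the
    recording: a release at or above the revocation area's bottom edge
    (smaller y, since y grows downward) cancels the voice data."""
    return "revoke" if release_y <= revoke_area_bottom_y else "commit"
```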
Step 204, generating a voice interaction mark according to the mark position, the mark voice and a first game account, wherein the first game account corresponds to the first terminal;
the voice interaction mark comprises a position indication object, a voice indication object, and a first game account indication object. For example, a line pointing to the mark position belongs to the position indication object and can be obtained from the mark position; duration information indicating the length of the marked voice belongs to the voice indication object and can be obtained from the marked voice; the player avatar belongs to the first game account indication object and can be obtained from the first game account.
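The three indication objects above could be modeled as a simple record derived from the mark position, the marked voice, and the account. The field names and the fixed sample rate below are assumptions for illustration only.

```python
from dataclasses import dataclass


@dataclass
class VoiceInteractionMark:
    """Sketch of the three indication objects of step 204."""
    position: tuple          # position indication object, e.g. anchor of the pointing line
    duration_seconds: float  # voice indication object, derived from the marked voice
    avatar_url: str          # first game account indication object


def build_mark(mark_position, voice_frames, account, sample_rate=16000):
    # duration is derived from the recorded frames; avatar from the account
    duration = len(voice_frames) / sample_rate
    return VoiceInteractionMark(mark_position, duration, account["avatar"])
```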
A voice interaction mark generated on the target keyword or the target sentence can sometimes occlude the text and hinder the player's reading, so the embodiment of the invention may further comprise the following steps:
when the voice interaction mark is in a hidden state, responding to clicking operation of an operation medium on the target keyword or the target sentence with the underline mark, and displaying the voice interaction mark;
and when the voice interaction mark is in a display state, hiding the voice interaction mark in response to clicking operation of an operation medium on the target keyword or the target sentence with the underline mark.
Specifically, after the voice interaction mark is generated, when the player considers that the voice interaction mark interferes with reading the text, the player clicks the underline mark through the operation medium to hide the voice interaction mark; when the player needs to operate on the voice interaction mark, clicking the underline mark displays it again.
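The hide/show behavior above is a plain toggle keyed to clicks on the underline mark, which might be sketched as:

```python
class MarkVisibility:
    """Toggle sketch for the two branches above: a click on the underline
    mark hides a shown voice interaction mark and shows a hidden one."""

    def __init__(self):
        self.visible = True  # the mark is shown once generated

    def on_underline_click(self):
        self.visible = not self.visible
        return self.visible
```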
Step 205, synchronizing the voice interaction mark to a second terminal, so that the second terminal can display the voice interaction mark on a graphical user interface of the second terminal, and playing the marked voice in response to a third contact operation on the voice interaction mark;
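Step 205 implies some wire format carrying the mark's constituents to the second terminal so it can rebuild and display the mark. The patent does not specify one; the JSON encoding and field names below are purely illustrative assumptions.

```python
import json


def serialize_mark_for_sync(mark_position, duration_seconds, account_id):
    """Hypothetical payload for synchronizing a voice interaction mark:
    the first terminal sends the mark position, the marked voice's
    duration, and the first game account identifier."""
    payload = {
        "position": list(mark_position),
        "duration": duration_seconds,
        "account": account_id,
    }
    return json.dumps(payload, sort_keys=True)


def deserialize_mark(raw):
    """The second terminal reconstructs the mark from the payload."""
    return json.loads(raw)
```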
step 206, adjusting the position of the recording icon and/or the voice interaction mark in the graphical user interface in response to the fourth contact operation of the operation medium on the recording icon and/or the voice interaction mark;
in the embodiment of the invention, when the record icon is generated, a position indication object, such as a mark point, is generated at the clue position on the virtual object, and the mark point is connected to the record icon.
When the virtual object is text, an underline is generated at the target keyword or target sentence to frame the selected text range, and the underline serves as the mark point of the selected text; when the virtual object is a picture, a mark point is generated at the target picture area.
Specifically, the fourth contact operation is a long-press-and-drag operation. For example, when the player finds that the generated record icon and/or voice interaction mark is not at the desired position, the player can control the operation medium to long-press the record icon and/or voice interaction mark on the touch display and then drag it, thereby adjusting its position in the graphical user interface.
As shown in fig. 5, the player holds down the mark point of the voice interaction mark with a finger and drags the mark point to move the voice interaction mark to the desired place.
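The drag adjustment of step 206 is, in essence, adding the drag delta to the mark's position. In the sketch below the clamping to screen bounds is an added assumption, not stated in the embodiment.

```python
def drag_mark(start_pos, drag_delta, screen_size):
    """Move a record icon or voice interaction mark by a drag delta,
    clamping the result so the mark stays inside the graphical user
    interface (the clamping is an illustrative assumption)."""
    x = min(max(start_pos[0] + drag_delta[0], 0), screen_size[0])
    y = min(max(start_pos[1] + drag_delta[1], 0), screen_size[1])
    return (x, y)
```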
Step 207, in response to a fifth contact operation of the operation medium on the voice interaction mark, converting the voice data into text and displaying the text in the graphical user interface.
Specifically, the fifth contact operation is a long-press operation. The player can control the operation medium to long-press the voice interaction mark on the touch display, which triggers the voice-to-text function. As shown in fig. 6, when it is inconvenient for the player to play the voice data, the player can long-press the voice interaction mark on the touch display with a finger; the recorded voice data is then converted into text, and the text is displayed in the graphical user interface.
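Step 207 delegates to whatever speech-recognition service the terminal uses. The sketch below injects the transcriber as a parameter rather than naming a concrete API, since the embodiment does not specify one; the return shape is likewise an assumption.

```python
def on_mark_long_press(voice_frames, transcribe):
    """Handle a long press on the voice interaction mark: convert the
    marked voice to text and request that the text be displayed.
    `transcribe` stands in for an unspecified speech-recognition service."""
    text = transcribe(voice_frames)
    return {"show_text": True, "text": text}
```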
In a preferred embodiment of the invention, the method may further comprise the steps of:
providing a deletion area in the graphical user interface;
moving the voice interactive mark to the deleting area in response to a sixth contact operation on the voice interactive mark of the operation medium;
and deleting the voice interaction mark in response to the loosening operation of the operation medium on the voice interaction mark.
As an example, as shown in fig. 7, the sixth contact operation is a long-press-and-move operation, and the deletion area is a trash-can icon in the lower part of the graphical user interface, normally in a hidden state. When the player wants to delete the voice interaction mark, the player long-presses the mark point of the voice interaction mark on the touch display and starts to move the finger; the deletion area is then displayed at the lower position in the game scene. The player moves the finger to the deletion area, and when the finger is released from the voice interaction mark, the voice interaction mark is deleted.
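The deletion flow above, revealing the trash area on long-press and deleting the mark when it is released inside that area, can be sketched as follows; the geometry and names are illustrative.

```python
class MarkBoard:
    """Sketch of drag-to-trash deletion: long-pressing a mark reveals the
    normally hidden trash area; releasing the mark inside it deletes the
    mark, releasing elsewhere just repositions it."""

    def __init__(self, trash_area):
        self.trash_area = trash_area  # (x, y, w, h)
        self.marks = {}
        self.trash_visible = False

    def add(self, mark_id, pos):
        self.marks[mark_id] = pos

    def begin_drag(self, mark_id):
        self.trash_visible = True  # the trash area appears during the drag

    def release(self, mark_id, pos):
        self.trash_visible = False
        x, y, w, h = self.trash_area
        if x <= pos[0] <= x + w and y <= pos[1] <= y + h:
            del self.marks[mark_id]   # dropped on the trash: delete
        else:
            self.marks[mark_id] = pos  # otherwise just reposition
```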
After the voice interaction mark is generated, the player can also play the marked voice by interacting with the voice interaction mark in the graphical user interface. To this end, the embodiment of the invention may further include the following step:
playing the marked voice in response to a seventh contact operation of the operation medium on the voice interaction mark;
in the embodiment of the invention, the seventh contact operation is a clicking operation on the first terminal. After the voice interaction mark is generated, the player can click the voice interaction mark on the first terminal with the operation medium, and the marked voice is played in response to the clicking operation.
Specifically, the upper right corner of the voice interaction mark displays the playing duration of the marked voice. When the player clicks the voice interaction mark with a finger, a countdown of the playing time starts in the upper right corner of the voice interaction mark in response to the clicking operation, prompting the player about the playback progress; when the countdown ends, playback of the marked voice stops.
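The playback countdown at the mark's upper-right corner can be sketched as a function of the marked voice's duration and the elapsed playback time. Rounding up is an assumption here, so the label shows "1" until playback actually ends.

```python
import math


def countdown_label(duration_seconds, elapsed_seconds):
    """Return the countdown text shown at the mark's upper-right corner:
    it starts at the marked voice's duration and reaches zero when
    playback stops."""
    remaining = max(duration_seconds - elapsed_seconds, 0.0)
    return str(math.ceil(remaining))
```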
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Referring to fig. 9, a block diagram of an apparatus for voice interaction in a game according to an embodiment of the present invention is shown. The apparatus is applied to a first terminal, where a graphical user interface is rendered on a touch display of the first terminal by executing a software application on a processor of the first terminal. The apparatus may include the following modules:
a first response module 301 for determining a marker position in response to a first touch operation of an operation medium on the graphical user interface;
a second response module 302, configured to, in response to a second contact operation of the operation medium on the graphical user interface, enter voice data to obtain a marked voice;
a voice interaction mark generation module 303, configured to generate a voice interaction mark according to the mark position, the mark voice, and a first game account, where the first game account corresponds to the first terminal;
and the voice interaction mark synchronizing module 304 is configured to synchronize the voice interaction mark to a second terminal, so that the second terminal displays the voice interaction mark on its graphical user interface, and plays the marked voice in response to a third contact operation on the voice interaction mark.
In an embodiment of the present invention, the first response module 301 includes:
the long-press response sub-module is used for, when the virtual object is detected to be text, responding to a long-press operation of an operation medium on the text and generating an underline mark at the text;
a text frame-selection sub-module, configured to respond to a drag operation of the operation medium on the underline mark and frame-select a target keyword or a target sentence from the text;
the first marking position determining sub-module is used for determining the position of the target keyword or the target sentence as a marking position;
and the first record icon generation sub-module is used for generating a record icon corresponding to the mark position.
In an embodiment of the present invention, the first response module 301 includes:
the click response sub-module is used for responding to click operation of an operation medium on the picture when the virtual object is detected as the picture, and determining a target picture area from the picture;
a second marking position determining sub-module, configured to determine a position of the target picture area as a marking position;
and the second record icon generation sub-module is used for generating a record icon corresponding to the mark position.
In one embodiment of the present invention, the second response module 302 includes:
the voice data input submodule is used for responding to long-press operation of the operation medium on the recording button and inputting voice data;
and the marking voice generation sub-module is used for responding to the loosening operation of the operation medium on the recording button and generating marking voice by adopting the voice data.
In an embodiment of the present invention, the second response module 302 further includes:
a revocation zone providing sub-module for providing a revocation zone above the record button;
and the recording revocation sub-module is used for, when the operation medium performs a long-press operation on the recording button, revoking the recorded voice data in response to the operation medium sliding upwards from the recording button to the revocation area and being released.
In an embodiment of the invention, the apparatus further comprises:
a voice interaction mark hiding module, configured to, when the voice interaction mark is in a display state, hide the voice interaction mark in response to a click operation of the operation medium on the target keyword or the target sentence with the underline mark;
and the voice interaction mark display module is used for responding to the click operation of the operation medium on the target keyword or the target sentence with the underline mark when the voice interaction mark is in a hidden state, and displaying the voice interaction mark.
In an embodiment of the invention, the apparatus further comprises:
and the fourth response module is used for responding to fourth contact operation of the operation medium on the sound recording icon and/or the voice interaction mark and adjusting the position of the sound recording icon and/or the voice interaction mark in the graphical user interface.
In an embodiment of the invention, the apparatus further comprises:
and a fifth response module, configured to respond to a fifth contact operation of the operation medium on the voice interaction mark, convert the voice data into text, and display the text in the graphical user interface.
In an embodiment of the invention, the apparatus further comprises:
a deleted region providing module for providing a deleted region in the graphical user interface;
a sixth response module, configured to move the voice interaction mark to the deletion area in response to a sixth contact operation of the operation medium on the voice interaction mark;
and the voice interaction mark deleting module is used for responding to the loosening operation of the operation medium on the voice interaction mark and deleting the voice interaction mark.
In an embodiment of the invention, the apparatus further comprises:
and the seventh response module is used for responding to a seventh contact operation of the operation medium on the voice interaction mark and playing the marked voice.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
An embodiment of the present invention also provides an electronic device that may include a processor, a memory, and a computer program stored on the memory and capable of running on the processor, the computer program implementing the steps of the method of voice interaction in a game as described above when executed by the processor.
An embodiment of the invention also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of voice interaction in a game as above.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described by differences from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the invention may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal device comprising the element.
The above detailed description of the method and apparatus for voice interaction in game, the electronic device and the storage medium provided by the present invention applies specific examples to illustrate the principles and embodiments of the present invention, and the above examples are only used to help understand the method and core ideas of the present invention; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in accordance with the ideas of the present invention, the present description should not be construed as limiting the present invention in view of the above.

Claims (15)

1. A method of in-game voice interaction applied to a first terminal, characterized in that a graphical user interface is rendered on a touch display of the first terminal by executing a software application on a processor of the first terminal, the method comprising:
determining a marker position in response to a first touch operation of an operation medium on the graphical user interface while the game is running;
responding to the second contact operation of the operation medium on the graphical user interface, and inputting voice data to obtain marked voice;
generating a voice interaction mark according to the mark position, the mark voice and a first game account, wherein the first game account corresponds to the first terminal;
synchronizing the voice interaction mark to a second terminal in real time so that the second terminal can display the voice interaction mark on a graphical user interface of the second terminal, and responding to a third contact operation on the voice interaction mark to play the marked voice;
wherein the marked voice is used for communicating clue information.
2. The method of claim 1, wherein a virtual object exists in the graphical user interface, and wherein the step of determining the marker location in response to a first touch operation of an operating medium on the graphical user interface when the virtual object is text comprises:
generating an underline mark at the text in response to a long press operation of an operation medium on the text;
and in response to a drag operation of the operation medium on the underlined mark, selecting a target keyword or a target sentence from the text in a frame mode, and determining the position of the target keyword or the target sentence as a mark position.
3. The method of claim 1, wherein a virtual object exists in the graphical user interface, and wherein the step of determining the marker location in response to a first touch operation of an operating medium on the graphical user interface when the virtual object is a picture comprises:
determining a target picture area from the picture in response to clicking operation of an operation medium on the picture;
and determining the position of the target picture area as a mark position.
4. A method according to claim 1 or 2 or 3, further comprising:
generating a recording icon corresponding to the mark position;
the second contact operation is a contact operation that acts on the sound recording icon.
5. The method of claim 4, wherein the record icon has a corresponding record button, and wherein the step of entering voice data to obtain a markup voice in response to a second touch-type operation of the operating medium on the graphical user interface comprises:
responding to the long-time pressing operation of the operation medium on the recording button, and recording voice data;
generating a marking voice using the voice data in response to a release operation of the operation medium on the record button.
6. The method as recited in claim 1, further comprising:
and adjusting the position of the sound recording icon and/or the voice interaction mark in the graphical user interface in response to a fourth contact operation of the operation medium on the sound recording icon and/or the voice interaction mark.
7. The method of claim 1, wherein generating the voice interaction indicia based on the indicia location, the indicia voice, and the first game account further comprises:
when the voice interaction mark is in a hidden state, responding to clicking operation of the operation medium on the target keyword or the target sentence with the underline mark, and displaying the voice interaction mark;
and when the voice interaction mark is in a display state, hiding the voice interaction mark in response to clicking operation of the operation medium on the target keyword or the target sentence with the underline mark.
8. The method as recited in claim 1, further comprising:
and converting the voice data into characters and displaying the characters in the graphical user interface in response to a fifth contact operation of the operation medium on the voice interaction mark.
9. The method of claim 1 or 6 or 7 or 8, wherein the voice interaction indicia comprises one or more of:
a location indicating object, a voice indicating object, a first game account indicating object.
10. The method as recited in claim 5, further comprising:
providing a revocation area above said record button;
and when the operation medium performs a long-press operation on the recording button, the recorded voice data is withdrawn in response to the operation medium sliding upwards on the recording button to the withdrawal area and being released.
11. The method as recited in claim 1, further comprising:
providing a deletion area in the graphical user interface;
moving the voice interactive mark to the deletion area in response to a sixth contact operation of the operation medium on the voice interactive mark;
and deleting the voice interaction mark in response to the loosening operation of the operation medium on the voice interaction mark.
12. The method as recited in claim 1, further comprising:
and playing the marked voice in response to a seventh contact operation of the operation medium on the voice interaction mark.
13. An apparatus for in-game voice interaction applied to a first terminal, wherein a graphical user interface is rendered on a touch display of the first terminal by executing a software application on a processor of the first terminal, the apparatus comprising:
the first response module is used for responding to the first contact operation of the operation medium on the graphical user interface when the game is running and determining the mark position;
the second response module is used for responding to the second contact operation of the operation medium on the graphical user interface and inputting voice data to obtain marked voice;
the voice interaction mark generation module is used for generating a voice interaction mark according to the mark position, the mark voice and a first game account, wherein the first game account corresponds to the first terminal;
the voice interaction mark synchronizing module is used for synchronizing the voice interaction mark to a second terminal in real time so that the second terminal can display the voice interaction mark on a graphical user interface of the second terminal and play the marked voice in response to a third contact operation on the voice interaction mark;
wherein the marked voice is used for communicating clue information.
14. An electronic device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program implementing the steps of the method of in-game voice interaction of any of claims 1 to 12 when executed by the processor.
15. A computer readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, implements the steps of the method of in-game voice interaction according to any of claims 1 to 12.
CN201910631683.3A 2019-07-12 2019-07-12 Method and device for voice interaction in game, electronic equipment and storage medium Active CN110215707B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910631683.3A CN110215707B (en) 2019-07-12 2019-07-12 Method and device for voice interaction in game, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910631683.3A CN110215707B (en) 2019-07-12 2019-07-12 Method and device for voice interaction in game, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110215707A CN110215707A (en) 2019-09-10
CN110215707B true CN110215707B (en) 2023-05-05

Family

ID=67812453

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910631683.3A Active CN110215707B (en) 2019-07-12 2019-07-12 Method and device for voice interaction in game, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110215707B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111013145A (en) * 2019-12-18 2020-04-17 北京智明星通科技股份有限公司 Game object marking method, device and server in group battle game
CN113392272A (en) * 2020-03-11 2021-09-14 阿里巴巴集团控股有限公司 Method and device for voice marking of pictures and videos
CN111773670A (en) * 2020-07-10 2020-10-16 网易(杭州)网络有限公司 Marking method, device, equipment and storage medium in game

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006236037A (en) * 2005-02-25 2006-09-07 Nippon Telegr & Teleph Corp <Ntt> Voice interaction content creation method, device, program and recording medium
CN103002960A (en) * 2010-05-11 2013-03-27 索尼电脑娱乐美国公司 Placement of user information in a game space
WO2015102082A1 (en) * 2014-01-06 2015-07-09 株式会社Nttドコモ Terminal device, program, and server device for providing information according to user data input
CN107888757A (en) * 2017-09-25 2018-04-06 努比亚技术有限公司 A kind of voice message processing method, terminal and computer-readable recording medium
CN108499106A (en) * 2018-04-10 2018-09-07 网易(杭州)网络有限公司 The treating method and apparatus of race games prompt message
CN109634501A (en) * 2018-12-20 2019-04-16 掌阅科技股份有限公司 E-book annotates adding method, electronic equipment and computer storage medium


Also Published As

Publication number Publication date
CN110215707A (en) 2019-09-10

Similar Documents

Publication Publication Date Title
US20210245049A1 (en) Method, non-transitory computer-readable recording medium, information processing system, and information processing device
CN110215707B (en) Method and device for voice interaction in game, electronic equipment and storage medium
RU2557762C2 (en) Method of moving object between pages and interface device
CN101073048B (en) A content-management interface
US10860345B2 (en) System for user sentiment tracking
US10622021B2 (en) Method and system for video editing
CN110090444B (en) Game behavior record creating method and device, storage medium and electronic equipment
US20130268826A1 (en) Synchronizing progress in audio and text versions of electronic books
CN107209756B (en) Supporting digital ink in markup language documents
CN109375865A (en) Jump, check mark and delete gesture
CN109939445B (en) Information processing method and device, electronic equipment and storage medium
CN103034395A (en) Techniques to facilitate asynchronous communication
Bouchardon Figures of gestural manipulation in digital fictions
CN110302535B (en) Game thread recording method, device, equipment and readable storage medium
KR20190138798A (en) Live Ink Presence for Real-Time Collaboration
JP6294035B2 (en) Information processing apparatus, system, method, and program
WO2015120072A1 (en) Collaborative group video production system
US20230054388A1 (en) Method and apparatus for presenting audiovisual work, device, and medium
JP6474728B2 (en) Enhanced information gathering environment
Jokela et al. Mobile video editor: design and evaluation
CN112951013B (en) Learning interaction method and device, electronic equipment and storage medium
CN114797102A (en) Information display method and device, computer readable storage medium and electronic equipment
CN106851330B (en) Web technology-based on-line on-demand micro-class video dotting playing method
JP6196569B2 (en) DATA GENERATION / EDITION DEVICE, PROGRAM, AND DATA GENERATION / EDITION METHOD
CN113362802A (en) Voice generation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant