CN112203134A - Method, apparatus, computer readable medium and electronic device for information processing - Google Patents

Method, apparatus, computer readable medium and electronic device for information processing

Info

Publication number
CN112203134A
Authority
CN
China
Prior art keywords
video information
audio
recording
instant
instant audio
Prior art date
Legal status
Pending
Application number
CN202011056741.3A
Other languages
Chinese (zh)
Inventor
程龙
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011056741.3A priority Critical patent/CN112203134A/en
Publication of CN112203134A publication Critical patent/CN112203134A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334 Recording operations
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services communicating with other users, e.g. chatting
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording

Abstract

Embodiments of the invention provide a method, an apparatus, a computer-readable medium, and an electronic device for information processing. The method for information processing includes: recording and storing first instant audio/video information in response to a first recording instruction; stopping recording the first instant audio/video information in response to a first stop-recording instruction; recording and storing second instant audio/video information in response to a second recording instruction; and, in response to recording of the second instant audio/video information stopping, synthesizing the first instant audio/video information and the second instant audio/video information into combined instant audio/video information. By storing the first and second instant audio/video information and combining them, the embodiments of the invention effectively improve communication efficiency, reduce unnecessary resource consumption, and lower cost.

Description

Method, apparatus, computer readable medium and electronic device for information processing
Technical Field
Embodiments of the present invention generally relate to the field of information processing technology, and in particular, to a method and apparatus for recording instant audio/video information, a computer-readable medium, and an electronic device.
Background
With the development of instant messaging technology, most instant messaging tools support voice messages, and a large number of users choose to communicate by voice. In addition, the number of users communicating with each other using video information is also increasing.
Fig. 1 illustrates a user interface for transmitting voice information from the perspective of a user in the related art, and Fig. 2 shows a schematic flow chart of transmitting voice information in the prior art. As shown in Figs. 1 and 2, in an existing instant messaging tool, recording starts when the user holds down the talk button. If the recording is shorter than 60s, the user can, after finishing recording, drag to convert the voice message to text or delete it, or release the button to send it. When the recording reaches 60s, the voice message is sent automatically.
However, as shown in Figs. 1 and 2, existing instant messaging tools record and transmit voice or video information only in a single pass: while recording, the user cannot handle other tasks, the recording cannot be saved, and it cannot be resumed from a breakpoint. In one scenario, if text information is to be sent, or another task completed, in the middle of recording a voice message, the message must be sent in two segments or re-recorded from the start. For example, when replying to A by voice while also needing to send a picture or video to B, the user must either finish the entire voice reply to A before sending to B, or send half the voice reply to A, send B's picture or video, and then continue replying to A in a second message. Thus, whenever instant voice or video information must be interrupted by text or other tasks, the information is sent in two segments or recorded again. This seriously affects communication efficiency and consumes a great deal of unnecessary resources and cost.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a method, an apparatus, a computer-readable medium, and an electronic device for information processing to alleviate, reduce, or even eliminate the above-mentioned problems.
According to a first aspect of embodiments of the present invention, there is provided a method for information processing, including: recording and storing first instant audio/video information in response to a first recording instruction; stopping recording the first instant audio/video information in response to a first stop-recording instruction; recording and storing second instant audio/video information in response to a second recording instruction; and, in response to recording of the second instant audio/video information stopping, synthesizing the first instant audio/video information and the second instant audio/video information into combined instant audio/video information.
According to a second aspect of embodiments of the present invention, there is provided an audio/video information recording method, including: recording the audio/video information in response to a first recording instruction; stopping recording the audio/video information in response to a first stop-recording instruction; in response to a second recording instruction, in the interface where recording was stopped, continuing to record on the basis of the existing audio/video information; and, in response to recording of the audio/video information stopping, performing any one of the following operations on the audio/video information: converting voice in the audio/video information to text and sending or deleting the text; deleting the audio/video information; or sending the audio/video information.
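The record / stop / resume / merge flow described in the first two aspects can be sketched as a small recorder object. This is an illustrative sketch only, not the patented implementation; the class and method names (`InstantRecorder`, `synthesize`, etc.) are hypothetical.

```python
class InstantRecorder:
    """Illustrative sketch of the record / stop / record-again / merge flow."""

    def __init__(self):
        self.segments = []   # stored segments awaiting synthesis
        self.current = None  # segment currently being recorded

    def start_recording(self):        # first or second recording instruction
        self.current = []

    def append_frames(self, frames):  # audio/video data arriving from the device
        self.current.extend(frames)

    def stop_recording(self):         # stop-recording instruction: keep the segment
        self.segments.append(self.current)
        self.current = None

    def synthesize(self):             # merge all stored segments into one message
        combined = [f for seg in self.segments for f in seg]
        self.segments = []
        return combined

rec = InstantRecorder()
rec.start_recording(); rec.append_frames(["a1", "a2"]); rec.stop_recording()
rec.start_recording(); rec.append_frames(["b1"]); rec.stop_recording()
assert rec.synthesize() == ["a1", "a2", "b1"]  # one combined message
```

Between `stop_recording` and the next `start_recording`, the user is free to send text or handle other tasks; the stored first segment is untouched.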
According to a third aspect of embodiments of the present invention, there is provided an apparatus for information processing, including an instant audio/video information generation module configured to: record and store first instant audio/video information in response to a first recording instruction; stop recording the first instant audio/video information in response to a first stop-recording instruction; and record and store second instant audio/video information in response to a second recording instruction; and a synthesis module configured to: in response to recording of the second instant audio/video information stopping, synthesize the first instant audio/video information and the second instant audio/video information into combined instant audio/video information.
In some embodiments, based on the foregoing, the instant audio/video information generation module is further configured to: creating an audio/video storage cartridge having a predefined storage space, wherein said first instant audio/video information is stored in said audio/video storage cartridge.
In some embodiments, based on the foregoing solution, the instant audio/video information generation module is further configured to: in response to stopping recording of said first instant audio/video information, calculating a remaining available audio/video information storage space of said audio/video storage cartridge.
In some embodiments, based on the foregoing, the instant audio/video information generation module is further configured to: storing the second instant audio/video information in the remaining available audio/video information storage space; if the second instant audio/video information does not occupy the remaining available audio/video information storage space, continuing to record the second instant audio/video information until the recording of the second instant audio/video information is stopped in response to a second recording stopping instruction; and stopping recording the second instant audio/video information if the remaining available audio/video information storage space is occupied by the second instant audio/video information.
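The space check described above can be sketched as follows. This is an illustrative sketch with hypothetical names, treating one frame as one second of content for simplicity: the second segment keeps recording until a stop instruction arrives or the cartridge's remaining space is used up.

```python
def record_second_segment(cartridge, incoming, capacity):
    """Append frames of a second segment into the cartridge until a stop
    instruction (end of `incoming`) or until the remaining space is occupied.
    Sketch only; one frame stands for one second of recorded content."""
    for frame in incoming:
        if len(cartridge) >= capacity:
            break  # remaining space occupied: recording stops automatically
        cartridge.append(frame)
    return cartridge

cart = ["s"] * 55                                  # 55s already recorded
record_second_segment(cart, ["t"] * 10, capacity=60)
assert len(cart) == 60  # recording stopped when the 60s space filled up
```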
In some embodiments, based on the foregoing scheme, the first instruction to stop recording and the second instruction to stop recording are generated based on an operation instruction of a user or based on a response to an event.
In some embodiments, based on the foregoing scheme, the apparatus further comprises: a treatment module to: performing any one of the following operations on the combined instant audio/video information: converting voice information in the combined instant audio/video information to text and sending or deleting the text, deleting the combined instant audio/video information, and sending the combined instant audio/video information.
In some embodiments, based on the foregoing, the handling module is further configured to: removing remaining unrecorded space in said audio/video storage cartridge prior to sending said combined instant audio/video information.
In some embodiments, based on the foregoing, the processing module is further configured to: in response to the first and second instant audio/video information being synthesized into the combined instant audio/video information, automatically send the combined instant audio/video information if the remaining available audio/video information storage space is occupied by the second instant audio/video information.
In some embodiments, based on the foregoing, the instant audio/video information generation module is further configured to: in response to said first and second instant audio/video information being synthesized into said combined instant audio/video information, if said second instant audio/video information does not fill said remaining available audio/video information storage space, record additional instant audio/video information in response to an additional recording instruction and store it in said audio/video storage cartridge; and the synthesis module is further configured to: in response to recording of the additional instant audio/video information stopping, synthesize the additional instant audio/video information with the combined instant audio/video information.
In some embodiments, based on the foregoing scheme, the first instant audio/video information and the second instant audio/video information are stored in the audio/video storage cartridge at predefined intervals.
In some embodiments, based on the foregoing, the instant audio/video information generation module is further configured to: a plurality of audio/video storage cartridges having predefined storage spaces are created, wherein each audio/video storage cartridge of said plurality of audio/video storage cartridges is associated with a different session and/or with a different object in the same session, respectively.
According to a fourth aspect of embodiments of the present invention, there is provided a computer-readable medium having stored thereon a computer program which, when executed by a processor, causes the processor to execute the method for information processing as described in the above-described embodiments of the present invention.
According to a fifth aspect of embodiments of the present invention, there is provided an electronic device, including: a storage device for storing a program; and a processor configured to execute the program to perform the method for information processing described in the above embodiments.
Embodiments of the invention can have the following beneficial effects:
in the technical solutions provided by some embodiments of the present invention, by recording and storing the first and second instant audio/video information and synthesizing them into combined instant audio/video information, the instant audio/video function is kept independent and decoupled from other functions; while recording, the user can suspend recording to handle other tasks and resume it after returning; and the recording supports local storage. Embodiments of the invention therefore avoid the need to re-record audio/video information, or to send it in two segments, during instant communication, improving communication efficiency, reducing unnecessary resource consumption, and lowering cost.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, exemplary embodiments are described below with reference to the accompanying drawings. The figures described below illustrate only some embodiments of the invention:
FIG. 1 illustrates a user interface for transmitting voice information from a user's perspective in the prior art;
FIG. 2 shows a schematic flow diagram of transmitting voice information in the prior art;
FIG. 3 illustrates an exemplary system architecture according to an embodiment of the present invention;
fig. 4 schematically illustrates a flow chart of an instant audio/video information recording method according to an embodiment of the present invention;
fig. 5 shows a schematic flow chart of the steps of recording and storing second audio/video information in the instant audio/video information recording method shown in fig. 4;
FIG. 6 illustrates an example schematic diagram of processing instant audio/video information within an audio/video storage cartridge in accordance with an embodiment of the present invention;
fig. 7 is a block diagram illustrating an exemplary structure of an apparatus for instant audio/video information resume in accordance with an embodiment of the present invention;
FIG. 8 illustrates an example flow diagram of an application scenario in accordance with one embodiment of this disclosure;
FIGS. 9a-9h illustrate a user interface for instant audio/video information continuation from the perspective of a user according to one embodiment of the present invention;
FIGS. 10a-10j illustrate user interfaces for instant audio/video information continuation from the perspective of a user according to another embodiment of the present invention;
FIG. 11 shows a schematic block diagram of a computing device according to an embodiment of the invention.
It should be understood that those skilled in the art may also derive other figures from these figures.
Detailed Description
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described herein merely illustrate and explain the present invention and are not intended to limit it; features in the embodiments and examples of the present invention may be combined with each other as long as they do not conflict.
The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the inventive aspects may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations or operations have not been shown or described in detail to avoid obscuring aspects of the invention.
The block diagrams shown in the figures are only functional entities and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
In particular, according to an embodiment of the present invention, the processes described below with reference to the flowcharts may be implemented as computer programs. For example, embodiments of the present invention provide a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing at least one step of the method embodiments of the present disclosure.
Before describing embodiments of the present invention in detail, some relevant concepts are explained first:
1. instant messaging: instant Messaging (IM) refers to a service capable of instantly sending and receiving an internet message and the like. Two or more people are allowed to use the network to communicate text messages, files, voice and video in real time.
2. Instant audio/video information: instant recorded audio or video information delivered in an instant communication.
3. Audio/video storage box: a memory for storing instant audio/video information in instant communications.
Fig. 3 illustrates an exemplary system architecture 300 of an instant audio/video information recording method or apparatus to which embodiments of the present invention may be applied, in which various methods described herein may be implemented. As shown in fig. 3, the system architecture 300 includes a server 310, a network 340, and one or more terminal devices 350.
Server 310 stores and executes instructions that can perform the various methods described herein; it may be a single server or a cluster of servers. It should be understood that the servers referred to herein are typically server computers with ample memory and processor resources, but other embodiments are also possible.
Examples of network 340 include a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), and/or a combination of communication networks such as the Internet. Server 310 and one or more terminal devices 350 may each include at least one communication interface (not shown) capable of communicating over network 340. Such a communication interface may be one or more of the following: any type of network interface (e.g., a Network Interface Card (NIC)), a wired or wireless interface (such as an IEEE 802.11 wireless LAN (WLAN) interface), a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so forth. Further examples of communication interfaces are described elsewhere herein.
Terminal device 350 may be any type of mobile computing device, including a mobile computer or mobile computing device (e.g., a Microsoft Surface device, a Personal Digital Assistant (PDA), a laptop computer, a notebook computer, a tablet computer such as the Apple iPad, a netbook, etc.), a mobile phone (e.g., a cellular phone, a smartphone such as a Windows Phone device, an Apple iPhone, a phone running the Google Android operating system, a Palm device, a Blackberry device, etc.), a wearable computing device (e.g., a smart watch or a head-mounted device, including smart glasses such as Google Glass), or another type of mobile device. In some embodiments, terminal device 350 may also be a stationary computing device. Further, where the system includes multiple terminal devices 350, the multiple terminal devices 350 may be the same or different types of computing devices.
The terminal device 350 may include a display screen 351 and a terminal application 352 that interacts with the end user via the display screen 351. The terminal device 350 may interact with the server 310, e.g., send data to or receive data from it, via the network 340. The terminal application 352 may be a native application, a Web application, or an applet (LiteApp, a lightweight application). If the terminal application 352 is a native application that needs to be installed, it may be installed on the terminal device 350. If it is a Web application, it can be accessed through a browser. If it is an applet, it may be opened directly on the terminal device 350, without installation, by searching for related information about it (e.g., its name) or by scanning its graphic code (e.g., a barcode or two-dimensional code).
In one application scenario of the disclosed embodiments, a user may use a terminal device 350 to send instant messages to the server 310 over the network 340, and other terminal devices 350 may receive those instant messages from the server 310 over the network 340. In the case of group instant messaging, instant messages sent by different terminal devices 350 may be synchronized by the server 310 to ensure that the different terminal devices 350 in the same instant messaging group all receive them.
It should be understood that the number of terminal devices, networks, and servers in fig. 3 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 4 schematically illustrates a flow chart 400 of an instant audio/video information recording method according to an embodiment of the present invention. First, in step 410, first instant audio/video information is recorded and stored in response to a first recording instruction. In one embodiment, the first recording instruction may be a user action instruction, such as hold-to-speak, hold-to-shoot video, double-tap to start, or another defined user action. In another embodiment, the first recording instruction may also be an instruction input by the user, such as a voice command or a text command. In one embodiment, the first instant audio/video information may be stored locally, or remotely in a cloud or server.
In one embodiment, step 410 may include the step of creating an audio/video storage cartridge having a predefined storage space, wherein the first instant audio/video information is stored in the audio/video storage cartridge. In one embodiment, the predefined storage space is measured as a length of time; in alternative embodiments it may use other suitable measures, such as byte size. In one embodiment, the audio/video storage cartridge is created in a local background process or in a server or cloud; note that even if the cartridge is created in a local background process, closing that process does not affect the use of this functionality. In one example, the audio/video storage cartridge has a predefined space of 60s time length.
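A time-measured storage cartridge of this kind can be sketched as below. The 60s capacity matches the example in the text; the class and method names are hypothetical, and durations stand in for actual media payloads.

```python
class AudioVideoCartridge:
    """Sketch of an audio/video storage cartridge whose predefined
    storage space is measured as a length of time (seconds)."""

    def __init__(self, capacity_s=60):
        self.capacity_s = capacity_s
        self.segments = []  # (duration_s, payload) pairs

    def used_s(self):
        return sum(d for d, _ in self.segments)

    def remaining_s(self):
        # remaining available audio/video information storage space
        return self.capacity_s - self.used_s()

    def store(self, duration_s, payload):
        stored = min(duration_s, self.remaining_s())
        if stored > 0:
            self.segments.append((stored, payload))
        return stored  # how much of the segment actually fit

cart = AudioVideoCartridge()
cart.store(12, "first segment")
assert cart.remaining_s() == 48  # matches the 60s - 12s = 48s example below
```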
Instant messaging allows a user to use the network to exchange instant audio and/or video information with multiple other users. For example, different sessions may be used for instant messaging with different objects, or the user may message different objects in the same session (e.g., in an instant messaging group). In one embodiment, a session is a chat window of the instant messaging software. Thus, embodiments of the present invention may be used for instant messaging with a plurality of different objects.
As stated previously, embodiments of the present invention may be used for instant messaging with a plurality of different objects. Thus, in one embodiment, step 410 may comprise creating a plurality of audio/video storage cartridges having predefined storage spaces, wherein each audio/video storage cartridge is associated with a different session. For example, when a user is communicating instantly with other objects in different sessions A and B, an audio/video storage cartridge associated with session A may be created for session A while another associated with session B is created for session B. The audio/video storage cartridges for different sessions are independent of each other, so embodiments of the invention can record instant audio/video information for different sessions simultaneously. In one application scenario, a user may first record a voice message for object A in chat window A. Halfway through, the user can suspend the recording and open a new chat window B to record voice information for object B. Halfway through that recording, the user can return to chat window A, record the other half of the voice message for A on the basis of the previously recorded voice information, and send it; the user can then return to chat window B and, on the basis of the voice information previously recorded there, record and send the other half of the voice message for B.
In another embodiment, step 410 may include creating a plurality of audio/video storage cartridges having predefined storage spaces, wherein each audio/video storage cartridge is associated with a different object in the same session. For example, when a user is in instant communication with object A and object B simultaneously in the same session (e.g., in an instant communication group), an audio/video storage cartridge associated with object A may be created for object A while another associated with object B is created for object B. The audio/video storage cartridges for different objects are independent of each other, so embodiments of the invention can record instant audio/video information for different objects in the same session simultaneously. In one application scenario, a user may record voice information for object A in a chat group (e.g., "@A"). Halfway through, the user may pause recording the voice information for object A and record voice information for object B (e.g., "@B"). Halfway through the recording for B, the user may pause it as well, switch back to the dialog with object A (e.g., "@A" again), continue recording the other half of the voice information for object A on the basis of what was recorded previously, and, after the recording completes, synthesize the two pieces of voice information for object A into one piece and send it.
The user may then switch back to the conversation with object B (e.g., "@B" again), continue recording the other half of the voice information for object B on the basis of the previously recorded voice information, and, after recording completes, synthesize the two pieces of voice information for object B into one piece and send it. This embodiment is described in detail below with reference to Figs. 10a to 10j and is therefore not repeated here.
In yet another embodiment, step 410 may include creating a plurality of audio/video storage cartridges having predefined storage spaces, wherein each audio/video storage cartridge is associated with a different session and/or with a different object in the same session. For example, when a user is in instant communication with different objects A, B, and C in different sessions A and B (with object A in session A, and objects B and C in session B), an audio/video storage cartridge associated with session A may be created for session A, while audio/video storage cartridges associated with object B and object C are created for those objects in session B. The cartridges for different sessions, and for different objects in the same session, are independent of each other, so embodiments of the invention can resume recording instant audio/video information for different sessions and for different objects in the same session simultaneously. In one application scenario, a user may first record a voice message for object A in chat window A, suspend it halfway, open a new group chat window B, and record voice information for object B in group B (e.g., "@B"). Halfway through, the user suspends the recording for object B and records voice information for object C in group chat window B (e.g., "@C"); halfway through that, the recording for object C is suspended as well. Returning to chat window A, the user records and sends the other half of the voice message for A on the basis of the previously recorded voice information, then returns to group chat window B, switches to object B (e.g., "@B" again), and completes recording and sending the other half of the voice information for B on the basis of what was recorded previously.
The user finally switches back to object C in group chat window B (e.g., "@C" again), records the other half of the voice message for C on the basis of the previously recorded voice information, and sends it.
It should be understood that the above application scenarios are intended to illustrate embodiments of the present invention, not to limit them. For clarity and ease of understanding, the following steps illustrate embodiments of the present invention in terms of instant messaging with the same object in the same session. Based on the following description, one skilled in the art can implement instant messaging for multiple objects.
In step 420, recording of the first instant audio/video information is stopped in response to the first stop-recording instruction. In one embodiment, the first stop-recording instruction may be generated based on an operation of the user (e.g., a gesture or a voice instruction); for example, it may be generated when the user releases the press-and-hold control. In another embodiment, the first stop-recording instruction may be generated in response to an event, such as being generated automatically upon receiving an incoming call alert.
In one embodiment, the method 400 may further include the steps of: in response to stopping recording the first instant audio/video information, calculating the remaining available audio/video information storage space of the audio/video storage cartridge. In an example where the audio/video storage cartridge has a time length of 60s, when the first instant audio/video information is 12s, the remaining available audio/video information storage space is 60s - 12s = 48s.
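The remaining-space calculation can be illustrated with a short helper; the function name and the optional `gap_s` parameter are assumptions for illustration only.

```python
# Hypothetical helper for the remaining-space calculation; gap_s models the
# optional default interval inserted between stored segments.
def remaining_space(capacity_s, segment_lengths, gap_s=0.0):
    used = sum(segment_lengths) + gap_s * max(len(segment_lengths) - 1, 0)
    return capacity_s - used

# The example from the text: a 60s cartridge holding a 12s first segment.
print(remaining_space(60.0, [12.0]))   # 48.0
```

With a 0.5s gap between two stored segments of 12s and 17s, the same helper yields 30.5s, matching the gap example given later.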
In step 430, in response to the second recording instruction, the second instant audio/video information is recorded and stored. In one embodiment, the second recording instruction may also include various types of instructions, such as various gesture instructions, voice instructions, and the like. In one embodiment, the second recording instruction is the same instruction as the first recording instruction, e.g., both may be gesture instructions (such as a press-and-hold recording instruction) or the like. In another embodiment, the second recording instruction is a different instruction from the first recording instruction, for example, the first recording instruction is a gesture instruction (such as a press-and-hold recording instruction), and the second recording instruction may be a voice instruction, etc.
Fig. 5 shows a schematic flow chart of recording and storing the second audio/video information (i.e., step 430 of method 400) in the instant audio/video information recording method shown in fig. 4.
In one embodiment, in step 431, in response to the second recording instruction, the second instant audio/video information is recorded and stored in the remaining available audio/video information storage space. In one embodiment, the first instant audio/video information and the second instant audio/video information are stored in the audio/video storage box with a default interval (e.g., 0.5s) between them; the interval makes the later-combined instant audio/video information sound more natural and also prevents the two pieces of information from running together. In another embodiment, there is no interval between the first instant audio/video information and the second instant audio/video information; storing the information in this manner saves storage box space and allows more information to be recorded and stored. In one embodiment, the second instant audio/video information is stored after the first instant audio/video information.
In step 432, it is determined whether the second instant audio/video information occupies the remaining available audio/video information storage space. In the above example where the audio/video storage box has a time length of 60s, this amounts to determining whether the length of the second instant audio/video information has reached 48s.
If the second instant audio/video information occupies the remaining available audio/video information storage space (e.g., the length of the second instant audio/video information has reached 48s), the method proceeds to step 435, where recording of the second instant audio/video information is stopped.
If the remaining available audio/video information storage space is not occupied by the second instant audio/video information (e.g., the length of the second instant audio/video information has not reached 48s), recording of the second instant audio/video information continues until it is stopped in response to the second stop-recording instruction (steps 433, 434). Similar to the first stop-recording instruction, the second stop-recording instruction may be generated based on an operation of the user (e.g., a gesture or a voice instruction), for example, when the user releases the press-and-hold control. The second stop-recording instruction may also be generated in response to an event, such as being generated automatically upon receiving an incoming call alert. In one embodiment, the second stop-recording instruction may be the same as the first stop-recording instruction, e.g., both generated based on user instructions. In another embodiment, the second stop-recording instruction may be different from the first stop-recording instruction, e.g., one generated based on a user instruction and the other generated in response to an event.
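Steps 432-435 — stop when the segment fills the remaining space, otherwise keep recording until a stop instruction arrives — can be sketched as follows. The chunk-based simulation and the `stop_after_s` parameter are illustrative stand-ins for real audio capture and user input.

```python
# Illustrative simulation of steps 432-435. Incoming audio is modeled as chunk
# durations; stop_after_s stands in for the second stop-recording instruction.
def record_segment(remaining_s, chunk_durations, stop_after_s=None):
    recorded = 0.0
    for chunk in chunk_durations:
        if stop_after_s is not None and recorded >= stop_after_s:
            return recorded, "stopped_by_user"     # steps 433-434: user stops first
        recorded += chunk
        if recorded >= remaining_s:
            return remaining_s, "storage_full"     # step 435: space filled
    return recorded, "stopped_by_user"

# 48s remains; without a user stop the segment is cut off when space runs out.
assert record_segment(48.0, [1.0] * 60) == (48.0, "storage_full")
# With a stop at 17s, recording ends early, as in the 17s example above.
assert record_segment(48.0, [1.0] * 60, stop_after_s=17.0) == (17.0, "stopped_by_user")
```

Whichever condition fires first determines whether step 435 or step 434 ends the recording.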
Returning to fig. 4, in response to stopping recording the second instant audio/video information, the first instant audio/video information and the second instant audio/video information are synthesized into combined instant audio/video information in step 440.
In one embodiment, the method 400 may further include a step 450 of operating on the combined instant audio/video information. In one embodiment, step 450 may comprise: in response to synthesizing the first instant audio/video information and the second instant audio/video information into the combined instant audio/video information, automatically transmitting the combined instant audio/video information if the second instant audio/video information fills the remaining available audio/video information storage space. In another alternative embodiment, step 450 may comprise: in response to synthesizing the first instant audio/video information and the second instant audio/video information into the combined instant audio/video information, displaying a prompt message to prompt the user to process the combined instant audio/video information if the second instant audio/video information fills the remaining available audio/video information storage space. In response to the received user input, any of the following is performed on the combined instant audio/video information: converting the voice information in the combined instant audio/video information to text and then sending or deleting the text, deleting the combined instant audio/video information, or sending the combined instant audio/video information. Processing the combined instant audio/video information in this way lets the user handle information that may not have been fully recorded: the user can send or delete it as needed, avoiding sending incompletely recorded information and avoiding repeated recording and sending, thereby improving communication efficiency and saving cost. In this embodiment, the user input may include: user gesture instructions, such as drag, click, etc.; user voice input; user text input; and any other suitable user input.
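The three processing options can be sketched as a simple dispatch on the user's choice; the action names and the placeholder transcript below are hypothetical, not the patent's interface.

```python
# Hypothetical dispatch over the three options in step 450; the action names
# and the placeholder transcript are illustrative, not the patent's API.
def process_combined(action, message):
    if action == "convert_to_text":
        return ("text", f"[transcript of {message}]")  # then send or delete it
    if action == "delete":
        return ("deleted", None)
    if action == "send":
        return ("sent", message)
    raise ValueError(f"unknown action: {action}")

assert process_combined("send", "voice_29s") == ("sent", "voice_29s")
assert process_combined("delete", "voice_29s") == ("deleted", None)
```

The `action` value would be derived from the user input (a drag, click, voice, or text instruction) listed above.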
In another embodiment, a prompt message may also be displayed to notify the user that the audio/video storage cartridge has been filled. The prompt information may include, for example, "Reminder: the information has exceeded the maximum capacity". In another embodiment, a prompt may also be displayed to remind the user that recording is about to stop; for example, "stop recording after 5s" is displayed when the second instant audio/video information is about to occupy the remaining available audio/video information storage space.
In yet another embodiment, step 450 may comprise: in response to synthesizing the first instant audio/video information and the second instant audio/video information into the combined instant audio/video information, if the second instant audio/video information fills the remaining available audio/video information storage space, displaying a prompt to prompt the user to process the combined instant audio/video information. In response to the received user input, any of the following is performed on the combined instant audio/video information: converting the voice information in the combined instant audio/video information to text and then sending or deleting the text, deleting the combined instant audio/video information, or sending the combined instant audio/video information. In this embodiment, the remaining unrecorded space in the audio/video storage cartridge may be removed when the combined instant audio/video information is transmitted in step 450. Continuing with the example above where the audio/video storage cartridge has a time length of 60s, if the second instant audio/video information is 17s long, the unrecorded space of 60s - 12s - 17s = 31s is removed and instant audio/video information 29s in length is sent. Note that if there is a gap (e.g., 0.5s) between the first instant audio/video information and the second instant audio/video information, the unrecorded space is 60s - 12s - 17s - 0.5s = 30.5s, and instant audio/video information 29.5s in length is transmitted.
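The trimming arithmetic in this example can be captured in a short helper; `sent_length` is an illustrative name, and the gap handling follows the 0.5s example above.

```python
# Illustrative helper for the trimming arithmetic: the sent length is the sum
# of the recorded segments plus any gaps; the rest of the cartridge is removed.
def sent_length(segment_lengths, gap_s=0.0):
    if not segment_lengths:
        return 0.0
    return sum(segment_lengths) + gap_s * (len(segment_lengths) - 1)

# 60s cartridge, 12s + 17s segments: 31s is trimmed and 29s is sent.
assert sent_length([12.0, 17.0]) == 29.0
# With a 0.5s gap only 30.5s is trimmed and 29.5s is sent.
assert sent_length([12.0, 17.0], gap_s=0.5) == 29.5
```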
In one embodiment, the method 400 may further include the steps of: in response to synthesizing the first instant audio/video information and the second instant audio/video information into the combined instant audio/video information, if the second instant audio/video information does not fill the remaining available audio/video information storage space, recording additional instant audio/video information and storing it in the audio/video storage cartridge in response to an additional recording instruction, and synthesizing the additional instant audio/video information with the combined instant audio/video information in response to stopping recording the additional instant audio/video information. In this embodiment, the recording, storing, and synthesizing of the additional instant audio/video information are similar to those of the second instant audio/video information and are therefore not described in further detail herein. Note that if there is a gap (e.g., 0.5s) between the first instant audio/video information and the second instant audio/video information, the gap must be accounted for when calculating the remaining available audio/video information storage space, e.g., 60s - 12s - 17s - 0.5s = 30.5s in the example above where the audio/video storage cartridge has a time length of 60s. In an application scenario, after the first and second instant audio/video information are combined, if remaining available audio/video information storage space still exists, one or more pieces of additional instant audio/video information may be recorded, stored, and combined in sequence until a user instruction to send, delete, etc. is received, or until the storage space of the audio/video information box is full.
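The loop of appending additional segments while space remains can be sketched as follows, assuming the 60s cartridge and 0.5s inter-segment gap from the example; the helper name and the truncate-at-capacity behavior are illustrative assumptions.

```python
# Illustrative sketch of appending additional segments while space remains,
# assuming the 60s cartridge and 0.5s inter-segment gap from the example.
CAPACITY_S, GAP_S = 60.0, 0.5

def try_append(segments, new_len):
    gap = GAP_S if segments else 0.0
    used = sum(segments) + GAP_S * max(len(segments) - 1, 0)
    room = CAPACITY_S - used - gap
    if room <= 0:
        return False                     # cartridge full: no further recording
    segments.append(min(new_len, room))  # truncate at the capacity limit
    return True

segs = [12.0, 17.0]                      # first and second segments
assert try_append(segs, 8.0)             # an additional 8s segment still fits
assert segs == [12.0, 17.0, 8.0]
```

Repeated calls model recording additional segments in sequence until the cartridge is full or the user sends or deletes the combined information.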
The method steps of recording, storing, and incorporating the additional instant audio/video information are similar to the method steps of recording, storing, and incorporating the second instant audio/video information described in fig. 4.
It should be appreciated that when recording and storing the first instant audio/video information (i.e., step 410), an operation similar to that shown in fig. 5 may also be employed: whether to continue recording is determined by checking whether the first instant audio/video information fills the entire instant audio/video storage cartridge. Further, whether the second instant audio/video information can subsequently be recorded and stored is decided based on whether the first instant audio/video information occupies the entire instant audio/video storage box.
Fig. 6 shows an example schematic of processing instant audio/video information within an audio/video storage cartridge according to an embodiment of the present invention.
As shown in fig. 6, upon receiving an instruction from the user to start recording information (i.e., the first recording instruction), the first instant audio/video information starts to be stored in the created storage box. When a pause instruction (i.e., the first stop-recording instruction) is received, 12s of first instant audio/video information has been stored in the created 60s audio/video storage cartridge. When a continue-recording instruction (i.e., the second recording instruction) is received, the second instant audio/video information may be stored after an interval of 0.5s; when recording is paused again, the second instant audio/video information has a length of 17s (601). In response to the instruction to pause recording again (i.e., to stop recording the second instant audio/video information), the first instant audio/video information and the second instant audio/video information in the audio/video storage cartridge are combined into combined instant audio/video information having a length of 29.5s (602). Upon receiving a continue-recording instruction again (i.e., the third recording instruction), the third instant audio/video information may be stored after an interval of 0.5s; when recording is paused again, the third instant audio/video information has a length of 8s (603). In response to the instruction to pause recording again (i.e., to stop recording the third instant audio/video information), the previously combined audio/video information and the third instant audio/video information in the audio/video storage cartridge are combined into new combined instant audio/video information having a length of 38s (604). When the new combined instant audio/video information is transmitted, the unrecorded part of the audio/video storage box (i.e., the hatched part, 22s in length) is removed, and finally 38s of combined instant audio/video information is transmitted (605).
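The numbers in fig. 6 can be checked with a few lines of arithmetic, assuming a 0.5s gap is inserted before each continued segment (per the 0.5s interval mentioned above).

```python
# Arithmetic check of the fig. 6 timeline, assuming a 0.5s gap before each
# continued segment.
GAP_S = 0.5
segments = []

def combined_length():
    return sum(segments) + GAP_S * max(len(segments) - 1, 0)

segments.append(12.0)                          # first segment, paused at 12s
segments.append(17.0)                          # second segment (601)
assert combined_length() == 29.5               # combined information (602)
segments.append(8.0)                           # third segment (603)
assert combined_length() == 38.0               # new combined information (604)
assert 60.0 - combined_length() == 22.0        # hatched part removed (605)
```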
Fig. 7 shows an exemplary block diagram of an apparatus 700 for instant audio/video information continuation according to an embodiment of the present invention. As shown in fig. 7, the apparatus 700 includes an instant audio/video information generation module 701 and a synthesis module 702.
The instant audio/video information generation module 701 is configured to: record and store first instant audio/video information in response to a first recording instruction; stop recording the first instant audio/video information in response to a first stop-recording instruction; and record and store second instant audio/video information in response to a second recording instruction. The first recording instruction and the second recording instruction may include user action instructions (e.g., hold-to-talk, drag) and user input instructions (e.g., a voice input instruction), etc. The first recording instruction and the second recording instruction may be the same instruction or different instructions. In one embodiment, the first stop-recording instruction may be generated based on an operation of the user (e.g., a gesture or a voice instruction); for example, it may be generated when the user releases the press-and-hold control. In another embodiment, the first stop-recording instruction may be generated in response to an event, such as being generated automatically upon receiving an incoming call alert.
The synthesis module 702 is configured to: in response to stopping recording the second instant audio/video information, the first instant audio/video information and the second instant audio/video information are synthesized into combined instant audio/video information. In one embodiment, recording of the second instant audio/video information may be stopped in response to a user instruction. In another embodiment, certain events may also trigger the stopping of the recording of the second instant audio/video information.
In one embodiment, the instant audio/video information generation module 701 is further configured to create an audio/video storage cartridge having a predefined storage space, wherein the first instant audio/video information is stored in the audio/video storage cartridge. The measure of the predefined storage space may comprise a length of time, e.g. an audio/video storage cartridge may have a predefined space of 60s length of time. The predefined storage space may include other suitable metrics, such as byte size, etc.
In one embodiment, the instant audio/video information generation module 701 is further configured to, in response to stopping recording the first instant audio/video information, calculate the remaining available audio/video information storage space of the audio/video storage cartridge and store the second instant audio/video information in the remaining available audio/video information storage space, wherein there may or may not be a gap between the first instant audio/video information and the second instant audio/video information. If the second instant audio/video information occupies the remaining available audio/video information storage space, recording of the second instant audio/video information is stopped. If the second instant audio/video information does not occupy the remaining available audio/video information storage space, recording of the second instant audio/video information continues until it is stopped in response to the second stop-recording instruction. The second stop-recording instruction may be generated based on an operation of the user (e.g., a gesture or a voice instruction), for example, when the user releases the press-and-hold control. The second stop-recording instruction may also be generated in response to an event, such as being generated automatically upon receiving an incoming call alert. The second stop-recording instruction may be the same as or different from the first stop-recording instruction.
In one embodiment, the instant audio/video information generation module 701 is further configured to, in response to synthesizing the first instant audio/video information and the second instant audio/video information into combined instant audio/video information, record additional instant audio/video information and store the additional instant audio/video information in the audio/video storage cartridge in response to additional recording instructions if the remaining available audio/video information storage space is not occupied by the second instant audio/video information. The compositing module 702 is also operable to composite the additional instant audio/video information with the combined instant audio/video information in response to stopping recording the additional instant audio/video information.
In one embodiment, the instant audio/video information generation module 701 is further configured to create a plurality of audio/video storage cartridges having predefined storage spaces, wherein each audio/video storage cartridge of the plurality of audio/video storage cartridges is associated with a different instant messaging session and/or with a different object in the same instant messaging session, respectively.
In one embodiment, the apparatus 700 may further comprise a processing module 703, the processing module 703 being configured to perform any of the following operations on the combined instant audio/video information: converting the voice information in the combined instant audio/video information to text and then sending or deleting the text, deleting the combined instant audio/video information, or sending the combined instant audio/video information. In one embodiment, the remaining unrecorded space in the audio/video storage cartridge may be removed when the combined instant audio/video information is transmitted.
In one embodiment, the processing module 703 is further configured to, in response to synthesizing the first instant audio/video information and the second instant audio/video information into the combined instant audio/video information, automatically send the combined instant audio/video information if the second instant audio/video information fills the remaining available audio/video information storage space.
The various modules described above with respect to fig. 7 may be implemented in hardware or in hardware in combination with software and/or firmware. For example, the modules may be implemented as computer program code/instructions configured to be executed in one or more processors and stored in a computer-readable storage medium. Alternatively, the modules may be implemented as hardware logic/circuitry. For example, in an embodiment, one or more of the instant audio/video information generation module 701, the synthesis module 702, and the processing module 703 may be implemented together in a system on a chip (SoC). The SoC may include an integrated circuit chip including one or more components of a processor (e.g., a Central Processing Unit (CPU), microcontroller, microprocessor, Digital Signal Processor (DSP), etc.), memory, one or more communication interfaces, and/or other circuitry, and may optionally execute received program code and/or include embedded firmware to perform functions. The features of the techniques described herein are carrier-independent, meaning that the techniques may be implemented on a variety of computing platforms having a variety of processors.
Although specific functionality is discussed above with reference to particular modules, it should be noted that the functionality of the various modules discussed herein may be divided into multiple modules and/or at least some of the functionality of multiple modules may be combined into a single module. Additionally, a particular module performing an action discussed herein includes the particular module itself performing the action, or alternatively the particular module invoking or otherwise accessing another component or module that performs the action (or performs the action in conjunction with the particular module). Thus, a particular module that performs an action can include the particular module that performs the action itself and/or another module that the particular module that performs the action calls or otherwise accesses.
FIG. 8 illustrates an example flow diagram of an application scenario in accordance with one embodiment of this disclosure. The application scenario shown in fig. 8 is merely exemplary, and not limiting. As shown in fig. 8, in step 801, a user turns on the voice message function in an instant messaging tool. In step 802, the user starts to hold down to talk. In step 803, the instant messaging tool begins recording speech in response to the user holding down to talk. The recorded voice is stored directly in the created voice storage box, which can store at most 60s of voice. After recording a 15s voice message, the user releases his hand to stop recording in step 804, and proceeds to other functional operations in step 805. In step 805, the user may perform other functional operations in the current chat window, such as sending text messages or pictures, or, if the current chat window is a group chat window, recording information for other users, etc. The user can also switch to other chat windows to perform other functional operations, such as chatting with others or transferring files. The user may also close the instant messaging tool and handle operations in other applications, such as making and receiving phone calls or browsing news. After the user has completed the other tasks, in step 806, the user returns to the voice message function of the instant messaging tool and continues to hold down to talk in step 807. In step 808, the instant messaging tool resumes recording speech in response to the user continuing to speak. The newly recorded information continues to be stored after the first piece of information in the voice storage box. During this period, if the length of the voice message reaches 60s, the recording is stopped, and the process jumps to step 809 to automatically send the voice message.
As previously described, the user may also be presented with a choice, and the voice information may be processed based on the user input. If the voice message length is less than 60s, the voice message continues to be recorded until the user releases his hand to stop recording (step 810). After the user releases his hand, the information in the voice storage box is combined; the user can drag the voice information to the convert-to-text option to convert the combined voice information to text and send it, can click confirm to send the voice information, or can drag the voice information to the delete option to delete it.
Figs. 9a-9h illustrate a user interface for instant audio/video information continuation from the perspective of a user according to one embodiment of the present invention. A method according to an embodiment of the invention will now be described with reference to figs. 9a-9h. As shown in fig. 9a, the user turns on the voice message function in the session, at which time a microphone icon 901 is displayed along with a "hold-and-talk" prompt. Then, in one embodiment, audio/video information is recorded in response to the first recording instruction. For example, in fig. 9b, the user presses the microphone icon 901 to speak (i.e., the first recording instruction), and recording of the audio information begins. In one embodiment, when recording audio/video information, operation options may be displayed to prompt the user to perform an operation. In one embodiment, the length of the audio/video information may be displayed while it is being recorded. For example, in fig. 9b, three icons 902 appear after the user holds down to speak, representing text conversion, deletion, and sending, respectively, while the length of the recorded audio is displayed. Then, in one embodiment, recording of the audio/video information is stopped in response to the first stop-recording instruction. As shown in fig. 9c, the user releases the microphone icon and pauses the recording. In one embodiment, a prompt may be displayed to indicate that audio/video information has been stored. For example, in fig. 9d, a reminder mark may appear at the voice message function icon 903 to indicate that voice has been recorded previously. The reminder mark may include any suitable mark, such as a red dot, a box, etc. After the recording is paused, other tasks may be performed.
For example, in one application scenario, while a user is recording audio/video information, the user may need to talk to others; at this point the user may release the microphone icon to pause recording the audio/video information and begin the conversation. In another application scenario, as shown in figs. 9d-9e, the user may send information from the input box after pausing the recording. The reminder mark may still be present after other tasks are completed (e.g., after a conversation with another person is finished or a text message has been sent). After completing the other tasks, the user may return to the interface where recording of the audio/video information was previously stopped and operate on the audio/video information. In one embodiment, the audio/video information may be deleted or transmitted, and the voice information in the audio/video information may be converted to text and then transmitted or deleted. In another embodiment, recording of the audio/video information may be continued on the basis of the previous audio/video information in response to the second recording instruction. For example, as shown in figs. 9f and 9g, the user may return to the voice message function; clicking delete removes the previous voice content, while pressing and holding the microphone icon 901 to speak continues recording audio information on the basis of the previous audio information. In one embodiment, in response to stopping recording the audio/video information, any one of the following operations is performed on the audio/video information upon receipt of a user input: converting the voice information in the audio/video information to text and then transmitting or deleting the text, deleting the audio/video information, or transmitting the audio/video information. For example, as shown in fig. 9h, the user releases the microphone icon to end recording the voice message, as shown by icon 904. The user then clicks send to send the voice message, as shown in fig. 9h.
Fig. 10a-10j illustrate user interfaces for instant audio/video information continuation from the perspective of a user according to another embodiment of the present invention. As described above, the embodiments of the present invention can be used for instant messaging with different objects in the same session (e.g., in an instant messaging group). This embodiment is described in detail below in conjunction with fig. 10a-10 j. As shown in fig. 10a, a user opens a multi-person communication session (e.g., a communication group), inputs a first object selection instruction to select a communication object in the session, for example, the user inputs "@ nigelcheng" in an input box 1001 of the communication group to select a user with a user name "nigelcheng" as the first object to communicate. In response to the first object selection instruction, the device may create an instant audio/video storage cartridge associated with the first object for the first object. Then in fig. 10b, the user turns on the voice message function, now displaying the microphone icon 1002, and displaying the "press-and-talk" prompt. Additionally, a "@ nigelcheng" indicia 1003 may also be displayed in the interface to prompt that an instant audio message is to be recorded for the first object. In fig. 10c, first instant audio information is recorded for a first object (i.e., "nigelcheng") in response to a first recording instruction for the first object. The interface diagram of FIG. 10c is similar to that of FIG. 9b and will not be described herein, except that the interface of FIG. 10c displays a "@ nigelcheng" label to indicate that the first instant audio message is being recorded for the first object "nigelcheng". After recording a segment of instant audio information, recording of the first instant audio information for the first object may be suspended in response to a pause recording instruction for the first object. 
Then, in response to a second object selection instruction (e.g., entering "@dobbycheng" in the input box of the communication group), an instant audio/video storage box associated with the second object ("dobbycheng") is created. First instant audio information for the second object in the same session may then be recorded. Figs. 10d to 10f show interface diagrams for selecting the second object in the same session and recording first instant audio information for the second object. Figs. 10d to 10f differ from figs. 10a to 10c only in that the user name of the communication object is changed to "dobbycheng"; thus, figs. 10d to 10f are not described in detail herein. Recording of the first instant audio information for the second object may then be suspended in response to a pause recording instruction for the second object. The communication object may then be switched back to the first object "nigelcheng" in response to the first object selection instruction. For example, the user may re-enter "@nigelcheng" in the input box 1001 in the same session to switch back to communication with the first object "nigelcheng", as shown in fig. 10g. In one embodiment, when switching back to the first object, a reminder mark may appear to indicate that audio information was previously recorded for the first object. Figs. 10h-10i show interface diagrams for continuing to record the second instant audio information after the first instant audio information for the first object "nigelcheng". Figs. 10h-10i are similar to figs. 9f-9g, except that a "@nigelcheng" label is displayed in the interface of figs. 10h-10i to indicate that instant audio information is currently being recorded for the first object "nigelcheng".
The first instant audio information and the second instant audio information for the first object "nigelcheng" are then synthesized into combined instant audio information in response to the user releasing the microphone icon, as shown in fig. 10j. The combined instant audio information is then transmitted in response to the user's selection. The steps and user interface for recording audio information for the second object "dobbycheng" are similar to those for the first object "nigelcheng" and thus will not be described in detail here.
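The per-object flow described above can be sketched as follows. This is a minimal illustration only: the names (`StorageBox`, `Session`, `select`), the byte-based capacity model, and the truncate-on-full behavior are assumptions for the sketch, not the disclosed implementation.

```python
class StorageBox:
    """Fixed-capacity store for one communication object's audio segments."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.segments = []  # one entry per record/pause span

    def remaining(self):
        # Remaining available storage space, recomputed after each pause.
        return self.capacity - sum(len(s) for s in self.segments)

    def append(self, segment):
        # Truncate a segment that would overflow the box; a real client
        # might instead stop recording or auto-send at this point.
        segment = segment[: self.remaining()]
        if segment:
            self.segments.append(segment)
        return self.remaining() == 0  # True => the box is now full

    def combine(self):
        # Synthesize all recorded segments into one combined message.
        return b"".join(self.segments)


class Session:
    """One multi-person session; selecting '@name' creates or reuses
    the storage box associated with that object."""

    def __init__(self, capacity_bytes=60_000):
        self.capacity = capacity_bytes
        self.boxes = {}

    def select(self, object_name):
        return self.boxes.setdefault(object_name, StorageBox(self.capacity))
```

In this sketch, selecting "@nigelcheng", recording, switching to "@dobbycheng", and then switching back appends to the original box, so both segments recorded for "nigelcheng" end up in a single combined message.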
It should be understood that although the application scenario of recording voice information has mainly been described above, video information may also be recorded using the technical solution of the embodiments of the present invention. For example, in one application scenario, video information may be recorded in a video message function. While recording video information, the user may need to talk to others or handle other tasks. At this point the user may pause recording the video information and talk to others or perform the other tasks. After completing the other tasks (e.g., having spoken with others or having sent a text message), the user may return to the interface in which recording of the video information was previously stopped and continue recording on the basis of the previous video information. After the recording is completed, the video information may be sent. It should be understood that any of the other embodiments and application scenarios described previously with respect to recording of audio information may also be used for recording of video information and are therefore not described in detail here.
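The pause-and-continue behavior, which applies equally to audio and video, can be sketched as a small state machine. Again, this is a hypothetical illustration: the `SegmentedRecorder` name and the raw byte-stream model are assumptions, not the patented implementation.

```python
from enum import Enum, auto


class State(Enum):
    IDLE = auto()
    RECORDING = auto()
    PAUSED = auto()


class SegmentedRecorder:
    """Each start..pause span becomes one segment; stop() synthesizes
    all segments into a single combined message."""

    def __init__(self):
        self.state = State.IDLE
        self.segments = []
        self._current = bytearray()

    def start(self):
        # First recording instruction, or a later "continue" instruction.
        self.state = State.RECORDING

    def feed(self, chunk):
        # Incoming microphone/camera data; dropped unless recording.
        if self.state is State.RECORDING:
            self._current.extend(chunk)

    def pause(self):
        # Stop-recording instruction: close off the current segment.
        if self.state is State.RECORDING:
            self.segments.append(bytes(self._current))
            self._current = bytearray()
            self.state = State.PAUSED

    def stop(self):
        # Final stop: synthesize every recorded segment into one message.
        self.pause()
        self.state = State.IDLE
        return b"".join(self.segments)
```

Data fed while paused (e.g., while the user talks to someone else) is simply discarded, and calling `start()` again continues recording on the basis of the earlier segments.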
FIG. 11 shows a schematic block diagram of a computing device 1100 in accordance with embodiments of the invention. Computing device 1100 is a device for performing an instant audio/video recording method in accordance with an embodiment of the present invention.
Computing device 1100 can be a variety of different types of devices, such as server computers, devices associated with clients (e.g., client devices), systems on a chip, and/or any other suitable computing device or computing system.
Computing device 1100 may include at least one processor 1102, memory 1104, communication interface(s) 1106, display device 1108, other input/output (I/O) devices 1110, and one or more mass storage devices 1112, which may be connected in communication with each other, such as by system bus 1114 or other appropriate means.
The processor 1102 may be a single processing unit or multiple processing units, all of which may include single or multiple computing units or multiple cores. The processor 1102 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitry, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor 1102 may be configured to retrieve and execute computer readable instructions, such as program code for an operating system 1116, program code for an application 1118, program code for other programs 1120, and the like, stored in the memory 1104, the mass storage device 1112, or other computer readable media to implement the instant audio/video information recording methods provided by embodiments of the present invention.
Memory 1104 and mass storage device 1112 are examples of computer storage media for storing instructions that are executed by processor 1102 to carry out the various functions described above. By way of example, memory 1104 may generally include both volatile and nonvolatile memory (e.g., RAM, ROM, and the like). In addition, mass storage device 1112 may generally include a hard disk drive, solid state drive, removable media including external and removable drives, memory cards, flash memory, floppy disks, optical disks (e.g., CD, DVD), storage arrays, network attached storage, storage area networks, and the like. Memory 1104 and mass storage device 1112 may both be referred to herein collectively as memory or computer storage media and may be non-transitory media capable of storing computer-readable, processor-executable program instructions as computer program code that may be executed by processor 1102 as a particular machine configured to implement the operations and functions described in the examples herein.
A number of program modules can be stored on the mass storage device 1112. These programs include an operating system 1116, one or more application programs 1118, other programs 1120, and program data 1122, and they can be loaded into memory 1104 for execution. Examples of such applications or program modules may include, for instance, computer program logic (e.g., computer program code or instructions) for implementing the following components/functions: instant audio/video information generation module 701, synthesis module 702, processing module 703, and/or further embodiments described herein. In some embodiments, these program modules may be distributed over different physical locations.
Although illustrated in fig. 11 as being stored in memory 1104 of computing device 1100, modules 1116, 1118, 1120, and 1122, or portions thereof, may be implemented using any form of computer-readable media that is accessible by computing device 1100. As used herein, "computer-readable media" includes at least two types of computer-readable media, namely computer storage media and communication media.
Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computing device.
In contrast, communication media may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism. Computer storage media, as defined herein, does not include communication media.
Computing device 1100 may also include one or more communication interfaces 1106 for exchanging data with other devices, such as over a network, direct connection, or the like. Communication interface 1106 may facilitate communication within a variety of networks and protocol types, including wired networks (e.g., LAN, cable, etc.) and wireless networks (e.g., WLAN, cellular, satellite, etc.), the Internet, etc. The communication interface 1106 may also provide for communication with external storage devices (not shown), such as in a storage array, network attached storage, storage area network, or the like.
In some examples, a display device 1108, such as a monitor, may be included for displaying information and images. Other I/O devices 1110 may be devices that receive various inputs from a user and provide various outputs to the user, and may include touch input devices, gesture input devices, cameras, keyboards, remote controls, mice, printers, audio input/output devices, and so forth.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various devices, elements, or components, these devices, elements, or components should not be limited by these terms. These terms are only used to distinguish one device, element, or component from another device, element, or component.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrase "in one embodiment" appearing in various places throughout the specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, or apparatus.
Although the present invention has been described in connection with some embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of embodiments of the invention is limited only by the accompanying claims. Additionally, although individual features may be included in different claims, these may possibly advantageously be combined, and the inclusion in different claims does not imply that a combination of features is not feasible and/or advantageous. The order of features in the claims does not imply any specific order in which the features must be performed. Furthermore, in the claims, the word "comprising" does not exclude other elements, and the indefinite article "a" or "an" does not exclude a plurality. Reference signs in the claims are provided merely as a clarifying example and shall not be construed as limiting the scope of the claims in any way.

Claims (15)

1. A method for information processing, the method comprising:
in response to a first recording instruction, recording and storing first instant audio/video information;
in response to a first stop recording instruction, stopping recording the first instant audio/video information;
in response to a second recording instruction, recording and storing second instant audio/video information; and
in response to stopping recording the second instant audio/video information, synthesizing the first instant audio/video information and the second instant audio/video information into combined instant audio/video information.
2. The method of claim 1, wherein recording and storing the first instant audio/video information in response to the first recording instruction comprises:
creating an audio/video storage cartridge having a predefined storage space, wherein said first instant audio/video information is stored in said audio/video storage cartridge.
3. The method of claim 2, wherein the method further comprises: in response to stopping recording of said first instant audio/video information, calculating a remaining available audio/video information storage space of said audio/video storage cartridge.
4. The method of claim 3, wherein recording and storing the second instant audio/video information in response to the second recording instruction comprises:
storing the second instant audio/video information in the remaining available audio/video information storage space;
if the second instant audio/video information does not fill the remaining available audio/video information storage space, continuing to record the second instant audio/video information until the recording of the second instant audio/video information is stopped in response to a second stop recording instruction; and
if the second instant audio/video information fills the remaining available audio/video information storage space, stopping recording the second instant audio/video information.
5. The method of claim 4, wherein the first stop recording instruction and the second stop recording instruction are generated based on an operation instruction of a user or based on a response to an event.
6. The method of any of claims 1 to 5, further comprising:
performing any one of the following operations on the combined instant audio/video information:
converting the speech information in the combined instant audio/video information into text, and sending or deleting the text,
deleting said combined instant audio/video information, and
sending the combined instant audio/video information.
7. The method of claim 6, wherein the method further comprises:
removing remaining unrecorded space in said audio/video storage cartridge prior to sending said combined instant audio/video information.
8. The method of claim 4 or 5, wherein the method further comprises:
in response to synthesizing the first and second instant audio/video information into the combined instant audio/video information, automatically sending the combined instant audio/video information if the remaining available audio/video information storage space is filled by the second instant audio/video information.
9. The method of claim 4 or 5, wherein the method further comprises:
in response to synthesizing the first instant audio/video information and the second instant audio/video information into the combined instant audio/video information, if the remaining available audio/video information storage space is not filled by the second instant audio/video information:
recording additional instant audio/video information and storing said additional instant audio/video information in said audio/video storage cartridge in response to additional recording instructions, and
in response to stopping recording the additional instant audio/video information, synthesizing the additional instant audio/video information with the combined instant audio/video information.
10. The method of any of claims 2 to 5, wherein the first instant audio/video information and the second instant audio/video information are stored in the audio/video storage cartridge at predefined intervals.
11. The method of claim 1, wherein recording and storing the first instant audio/video information in response to the first recording instruction comprises:
a plurality of audio/video storage cartridges having predefined storage spaces are created, wherein each audio/video storage cartridge of said plurality of audio/video storage cartridges is associated with a different session and/or with a different object in the same session, respectively.
12. A method for information processing, comprising:
recording audio/video information in response to a first recording instruction;
stopping recording the audio/video information in response to a first stop recording instruction;
in response to a second recording instruction, continuing, in the interface in which recording of the audio/video information was stopped, to record the audio/video information on the basis of the previously recorded audio/video information;
in response to stopping recording the audio/video information, performing any one of the following operations on the audio/video information:
converting voice information in the audio/video information into text, and transmitting or deleting the text,
deleting said audio/video information, and
transmitting the audio/video information.
13. An apparatus for information processing, comprising:
an instant audio/video information generation module for:
recording and storing first instant audio/video information in response to a first recording instruction;
stopping recording the first instant audio/video information in response to a first stop recording instruction; and
recording and storing second instant audio/video information in response to a second recording instruction; and
a synthesis module for:
synthesizing the first instant audio/video information and the second instant audio/video information into combined instant audio/video information in response to stopping recording the second instant audio/video information.
14. A computer-readable medium having a computer program stored thereon which, when executed by a processor, causes the processor to carry out the method according to any one of claims 1-12.
15. An electronic device, comprising:
a storage device for storing a program;
a processor configured to execute the program to perform the method of any one of claims 1-12.
CN202011056741.3A 2020-09-30 2020-09-30 Method, apparatus, computer readable medium and electronic device for information processing Pending CN112203134A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011056741.3A CN112203134A (en) 2020-09-30 2020-09-30 Method, apparatus, computer readable medium and electronic device for information processing


Publications (1)

Publication Number Publication Date
CN112203134A 2021-01-08

Family

ID=74007120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011056741.3A Pending CN112203134A (en) 2020-09-30 2020-09-30 Method, apparatus, computer readable medium and electronic device for information processing

Country Status (1)

Country Link
CN (1) CN112203134A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024078236A1 (en) * 2022-10-11 2024-04-18 华为技术有限公司 Recording control method, electronic device, and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8180382B1 (en) * 2006-07-14 2012-05-15 At&T Mobility Ii Llc Direct and immediate transmittal of voice messages and handset storage thereof
CN108900791A (en) * 2018-07-19 2018-11-27 北京微播视界科技有限公司 A kind of video distribution method, apparatus, equipment and storage medium
CN109360588A (en) * 2018-09-11 2019-02-19 广州荔支网络技术有限公司 A kind of mobile device-based audio-frequency processing method and device
CN109801648A (en) * 2018-12-11 2019-05-24 平安科技(深圳)有限公司 Message pop-up voice edition method, device, computer equipment and storage medium
CN110111793A (en) * 2018-02-01 2019-08-09 腾讯科技(深圳)有限公司 Processing method, device, storage medium and the electronic device of audio-frequency information


Similar Documents

Publication Publication Date Title
CN101867487B (en) With the system and method for figure call connection symbol management association centre
CN102811184B (en) Sharing method, terminal, server and system for custom emoticons
EP3068070B1 (en) Method and device for initiating network conference
WO2011085248A1 (en) Methods and apparatus for modifying a multimedia object within an instant messaging session at a mobile communication device
CN104035565A (en) Input method, input device, auxiliary input method and auxiliary input system
CN110417641A (en) A kind of method and apparatus sending conversation message
CN110287473A (en) Electrical form edit methods and device
CN107509051A (en) Long-range control method, device, terminal and computer-readable recording medium
CN104679239B (en) A kind of terminal input method
CN109688051A (en) Session list display methods, device and electronic equipment
US11956531B2 (en) Video sharing method and apparatus, electronic device, and storage medium
CN114500432A (en) Session message transceiving method and device, electronic equipment and readable storage medium
CN103116483A (en) Method, device and terminal for invoking microblog
CN104158719A (en) Information processing method and system, IM application device, and terminal
CN110019058B (en) Sharing method and device for file operation
CN109951400A (en) Instruction sending method, device, electronic equipment and the readable storage medium storing program for executing of terminal
CN110865870B (en) Application calling method and device based on hook technology
CN112203134A (en) Method, apparatus, computer readable medium and electronic device for information processing
CN102655531A (en) Data sharing method and electronic terminal based on internet
CN109582187A (en) Document sending method, device, computer equipment and storage medium
CN107665465A (en) Obtain history sharing information method, mobile terminal and the device with store function
CN105278833B (en) The processing method and terminal of information
CN114374761A (en) Information interaction method and device, electronic equipment and medium
CN107835117A (en) A kind of instant communicating method and system
CN114265714A (en) Drive control method and device based on cloud mobile phone and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination