CN111107407A - Audio and video playing control method, device and equipment and computer readable storage medium - Google Patents
- Publication number
- CN111107407A CN111107407A CN201911401545.2A CN201911401545A CN111107407A CN 111107407 A CN111107407 A CN 111107407A CN 201911401545 A CN201911401545 A CN 201911401545A CN 111107407 A CN111107407 A CN 111107407A
- Authority
- CN
- China
- Prior art keywords
- audio
- video
- playing
- shared
- instruction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42204—User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/441—Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
- H04N21/4415—Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Abstract
An audio and video playing control method, apparatus, device, and computer-readable storage medium are provided. According to the technical scheme of the embodiments of the invention, a user controls audio and video playback with voice, body movements, gestures, and the like over a 5G network, so that people can play and adjust music and video in public places hands-free.
Description
Technical Field
The invention relates to the technical field of electronic equipment and Internet of things, in particular to an audio and video playing control method, device, equipment and a computer readable storage medium.
Background
The digital audio and video industry has developed through a progression from traditional offline channels, to the internet, to wireless.
In the traditional audio and video industry, the carriers were mainly early vinyl records and, later, tapes, videotapes, CDs, and DVDs. Record companies and movie companies were the pillars and dominant players of the early industry and the core links of its value chain, and audio and video goods took physical form.
In the internet era, audio and video carriers shifted from physical records to intangible digital files: MP3 and MPEG-2 replaced CDs, tapes, and the like as the main forms of audio and video goods. Other popular computer audio and video formats on the market include WAV, WMA, RAM, AIFF, AVJ, etc. These files are generally stored in computer storage such as hard disks and network storage, where the physical space they occupy is almost negligible, and the playback carriers of the internet era became the PC, the mobile phone, the iPod, the tablet computer, and the like.
In the 5G era of everything interconnection, audio and video playing carriers are no longer limited to various private hardware, and various public output devices such as a shared loudspeaker, a shared LED screen, a shared display and the like can also be used as audio and video playing carriers.
Disclosure of Invention
In order to realize an audio and video playing control function based on common shared output devices, the embodiments of the present invention provide an audio and video playing control method, apparatus, and device, and a computer-readable storage medium.
In a first aspect of the embodiments of the present invention, an audio and video playing control method is provided, where the audio and video playing control method includes:
receiving a voice signal, a limb action signal or/and a gesture signal with an operation instruction, wherein the operation instruction comprises: an audio and video playing instruction;
identifying the identity of a user, wherein the identification method is any one or any combination of voiceprint identification, face identification and gait identification;
sending an audio and video playing control instruction to a shared output device, wherein the shared output device comprises: shared speakers, shared LED screens, shared displays, etc.;
and the sharing output equipment responds to the audio and video playing control instruction to play the audio and video content.
With reference to the first aspect, in a first implementation manner of the first aspect, the receiving the operation instruction further includes: and adjusting the relevant parameters of the audio and video content according to the operation instruction.
With reference to the first aspect, in a second implementation manner of the first aspect, the receiving the operation instruction further includes: and stopping or pausing playing the audio and video content according to the operation instruction.
In a second aspect of the embodiments of the present invention, there is provided an audio/video playback control apparatus, including:
the receiving unit is used for receiving a voice signal, a limb action signal or/and a gesture signal with an operation instruction, wherein the operation instruction comprises the following steps: an audio and video playing instruction;
the identification unit is used for identifying the identity of the user;
the control unit is used for sending a corresponding audio and video playing control instruction to the shared output equipment;
and the sharing output equipment is used for responding to the audio and video playing control instruction and playing the audio and video contents.
With reference to the second aspect, in a first implementation manner of the second aspect, the operation instruction further includes: adjusting relevant parameters of the audio and video contents according to the operation instruction;
the control unit is also used for sending a corresponding control instruction for adjusting the related parameters of the audio and video content to the shared output equipment;
and the sharing output equipment is also used for responding to a control instruction for adjusting the related parameters of the audio and video contents and adjusting the related parameters of the audio and video contents.
With reference to the second aspect, in a second implementation manner of the second aspect, the operation instruction further includes: stopping or pausing playing of the audio and video content according to the operation instruction;
the control unit is also used for sending a corresponding control instruction for stopping or pausing the playing of the audio and video contents to the shared output equipment;
the shared output device is also used for responding to a control instruction for stopping or pausing the playing of the audio and video contents and stopping or pausing the playing of the audio and video contents.
In a third aspect, an embodiment of the present invention provides an audio/video playback control device, where the functions of the device may be implemented by hardware, or by hardware executing corresponding software. The hardware and software include one or more units corresponding to the functions described above.
In one possible design, the structure of the audio/video playback control device includes a processor, a memory, and a shared input/output device, where the memory is used to store a program that supports the audio/video playback control device to execute the audio/video playback control method, and the processor is configured to execute the program stored in the memory. The audio and video playing control device can also comprise a communication interface used for communicating with other devices or a communication network.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, configured to store computer software instructions for the apparatus for controlling audio/video playing, where the computer software instructions include a program for executing the method for controlling audio/video playing.
According to the technical scheme provided by the embodiments of the invention, a user controls audio and video playback with voice, body movements, gestures, and the like over a 5G network, so that people can play and adjust music and video in public places hands-free.
Drawings
Fig. 1 is a schematic flow chart of an audio/video playing control method according to a first aspect of an embodiment of the present invention;
fig. 2 is a schematic flowchart of an audio/video playing control method according to a first implementation manner of the first aspect of the embodiment of the present invention;
fig. 3 is a schematic flowchart of an audio/video playing control method according to a second implementation manner of the first aspect of the embodiment of the present invention;
fig. 4 is a schematic structural diagram of an audio/video playback control apparatus according to a second aspect of the embodiment of the present invention;
fig. 5 is a schematic structural diagram of an audio/video playback control apparatus according to a first implementation manner of a second aspect of the embodiment of the present invention;
fig. 6 is a schematic structural diagram of an audio/video playback control apparatus according to a second implementation manner of the second aspect of the embodiment of the present invention.
Detailed description of the invention
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiments of the invention aim to provide an audio and video playing control method and apparatus, so that a user need not carry a smart, networked (cellular) device and can control audio and video playback using only the shared input/output devices available in public places.
In a private setting (such as the home) or on a private device (such as a vehicle), a user has many intelligent terminals that, alone or in combination, provide audio and video playback control. In a public setting (such as public transport, a building, an elevator, or outdoors), the intelligent devices serving the user become public electronic devices; the invention therefore provides a method for controlling audio and video playback with public input/output devices in public settings.
Referring to fig. 1, fig. 1 is a schematic flow chart of an audio/video playing control method according to a first aspect of the embodiment of the present invention, including the following four steps from S101 to S104:
S101: receiving a voice signal, a limb action signal, or/and a gesture signal carrying an operation instruction, where the operation instruction includes an audio and video playing instruction.
The voice signal may be received by a shared microphone;
the limb action signal or/and the gesture signal may be received by a shared camera.
S102: the user identity is identified.
Methods for identifying the user's identity include voiceprint recognition, face recognition, and gait recognition.
Voiceprint recognition, also known as speaker recognition, is one of the biometric technologies. It comes in two types: speaker identification and speaker verification. Different tasks and applications use different voiceprint techniques; for example, identification may be used to narrow the field of suspects in a criminal investigation, while verification may be used to confirm identity in banking transactions. Voiceprint recognition converts an acoustic signal into an electrical signal, which a computer then recognizes.
Face recognition is a biometric technology that identifies a person from facial feature information. It comprises a series of related technologies, commonly also called portrait recognition or facial recognition, in which a camera or video camera collects images or video streams containing faces, the faces are automatically detected and tracked in the images, and recognition is then performed on the detected faces.
Gait recognition is a newer biometric technology that aims to identify a person by his or her walking posture. Compared with other biometric technologies, it works without contact and at long range and is difficult to disguise, which gives it advantages over image recognition in the field of intelligent video surveillance.
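The three recognition methods above can be used individually or in combination ("any one or any combination"). The sketch below illustrates one way such a fusion step might look; the recognizer function names and the majority-vote rule are illustrative assumptions, not details disclosed in the patent.

```python
# Hypothetical sketch of step S102: identifying a user from whichever
# biometric signals the shared input devices happened to capture.

def recognize_voiceprint(audio):
    """Placeholder: return a candidate user ID from a voice sample."""
    return "user-17" if audio else None

def recognize_face(frame):
    """Placeholder: return a candidate user ID from a camera frame."""
    return "user-17" if frame else None

def recognize_gait(frames):
    """Placeholder: return a candidate user ID from a walking sequence."""
    return "user-17" if frames else None

def identify_user(audio=None, frame=None, gait_frames=None):
    """Run any available recognizers and accept the majority candidate."""
    candidates = [
        recognize_voiceprint(audio),
        recognize_face(frame),
        recognize_gait(gait_frames),
    ]
    candidates = [c for c in candidates if c is not None]
    if not candidates:
        return None
    # Simple fusion rule: pick the most frequently proposed ID.
    return max(set(candidates), key=candidates.count)
```

In a real deployment each placeholder would wrap an actual biometric model, and the fusion rule could weight modalities by confidence rather than counting votes.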
S103: and sending an audio and video playing control instruction to the sharing output equipment.
Here, a suitable shared output device is selected according to the user's ID, location, and environmental factors, and the audio and video playing control instruction is sent to it.
S104: and the sharing output equipment responds to the audio and video playing control instruction to play the audio and video content.
A shared speaker may be used to play audio content;
a shared speaker combined with a shared display may be used to play video content.
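Taken together, steps S101 to S104 amount to a small dispatch pipeline: receive the instruction, confirm the user, pick a shared output device by location and content type (as S103 describes), and play. The sketch below is a hypothetical illustration; the device names, the selection rule, and the `handle_play_instruction` helper are assumptions, not part of the patent's disclosure.

```python
# Illustrative S101-S104 pipeline over a registry of shared output devices.
from dataclasses import dataclass

@dataclass
class SharedDevice:
    name: str
    kind: str        # "speaker" or "display"
    location: str

    def play(self, content):
        return f"{self.name} playing {content}"

DEVICES = [
    SharedDevice("speaker-lobby", "speaker", "lobby"),
    SharedDevice("display-lobby", "display", "lobby"),
    SharedDevice("speaker-elevator", "speaker", "elevator"),
]

def select_device(location, needs_video):
    """S103 helper: pick a nearby device matching the content type."""
    kind = "display" if needs_video else "speaker"
    for dev in DEVICES:
        if dev.location == location and dev.kind == kind:
            return dev
    return None

def handle_play_instruction(user_id, location, content, needs_video=False):
    """S101-S104: reject unidentified users, then dispatch to a device."""
    if user_id is None:                           # S102 failed
        return None
    dev = select_device(location, needs_video)    # S103
    return dev.play(content) if dev else None     # S104
```

A production system would also weigh the "environmental factors" S103 mentions (ambient noise, whether a screen is already occupied) when choosing a device.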
Fig. 2 is a schematic flow chart of an audio/video playing control method according to a first implementation manner of the first aspect of the embodiment of the present invention, and includes the following four steps from S201 to S204:
S201: receiving a voice signal, a limb action signal, or/and a gesture signal for adjusting relevant parameters of the audio and video content.
Adjusting the relevant parameters of the audio and video content includes volume adjustment, video brightness adjustment, song switching, and the like. For example, the user raising both hands upward may serve as the gesture signal for turning up the volume.
S202: the user identity is identified (as in S102).
S203: and sending a control instruction for adjusting the related parameters of the audio and video content to the shared output equipment.
S204: and the sharing output equipment adjusts the related parameters of the audio and video contents according to the control instruction.
Such as turning down the volume, switching songs, etc.
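A gesture-to-adjustment mapping such as the one described above (both hands raised = volume up) can be represented as a simple lookup table. The gesture names, step sizes, and clamping range below are illustrative assumptions, not values given in the patent.

```python
# Hypothetical mapping from recognized gestures to parameter adjustments
# for the S201-S204 flow.
GESTURE_ACTIONS = {
    "both_hands_up":   ("volume", +10),
    "both_hands_down": ("volume", -10),
    "swipe_right":     ("track", +1),        # next song
    "palm_up":         ("brightness", +10),
}

def apply_gesture(state, gesture):
    """Adjust playback parameters in place; unknown gestures are ignored."""
    action = GESTURE_ACTIONS.get(gesture)
    if action is None:
        return state
    param, delta = action
    state[param] = state.get(param, 0) + delta
    if param == "volume":
        state[param] = max(0, min(100, state[param]))  # clamp to 0-100
    return state
```

Keeping the mapping in data rather than code makes it easy to add or retrain gestures without touching the control logic.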
Fig. 3 is a schematic flow chart of an audio/video playing control method according to a second implementation manner of the first aspect of the embodiment of the present invention, which includes the following four steps from S301 to S304:
S301: receiving a voice signal, a limb action signal, or/and a gesture signal for stopping or pausing playback of the audio and video content.
For example, a clenched fist may indicate the gesture to stop playing.
S302: the user identity is identified (as in S102).
S303: and sending a control instruction of stopping or pausing the playing of the audio and video content to the shared output equipment.
S304: and the shared output equipment stops or suspends playing the audio and video contents according to the control instruction.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an audio/video playback control apparatus according to a second aspect of the present invention, where a1 is a user, B1 is a receiving unit, C1 is an identification unit, D1 is a control unit, and E1 is a shared output device.
User A1 expresses an audio and video playing instruction to the receiving unit B1, and the identification unit C1 identifies user A1. Once identification is complete, the control unit D1 sends an audio and video playing control instruction to the shared output device E1, and the shared output device E1 plays the audio and video.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an audio/video playback control apparatus according to a first implementation manner of the second aspect of the present invention, as shown in the figure, where a2 is a user, B2 is a shared microphone, C2 is a shared camera, D2 is a control unit, and E2 is a shared speaker.
User A2 speaks a voice signal for turning down the volume to the shared microphone B2, and the shared camera C2 performs face recognition and gait recognition on user A2 to determine A2's ID. After identification is complete, the control unit D2 sends a volume-down control signal to the shared speaker E2, and the shared speaker E2 lowers the volume of the audio and video being played.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an audio/video playback control apparatus according to a second implementation manner of the second aspect of the present invention, as shown in the drawing, where a3 is a user, B3 is a shared camera, D3 is a control unit, and E3 is a shared display.
User A3 expresses, via a gesture signal to the shared camera B3, a requirement to stop playing the audio and video. The shared camera B3 performs face recognition on user A3, and once A3's identity is determined, the control unit D3 sends a stop-playing control instruction to the shared display E3, which stops playing the audio and video.
The embodiments of the invention also provide an audio and video playing control device, which may be a server or an edge computing device, for implementing any one of the methods above.
An embodiment of the present invention provides a computer-readable storage medium, which stores a computer program, and the computer program is used for implementing the method of any one of the above embodiments when being executed by a processor.
In the description herein, references to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. The particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples, and those skilled in the art may combine features of different embodiments or examples described in this specification provided there is no contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means two or more unless specifically limited otherwise.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
All or part of the steps of the above method embodiments may be implemented by program instructions directing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one of or a combination of the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module, and the integrated module may be implemented in a form of hardware, or may be implemented in a form of software functional module.
The above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Within the spirit of the invention, the technical features of the above embodiments may be combined and the steps performed in any order; many other variations of the different aspects of the invention exist and are not presented in detail for the sake of brevity. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (8)
1. An audio and video playing control method is characterized by comprising the following steps:
receiving a voice signal, a limb action signal or/and a gesture signal with an operation instruction, wherein the operation instruction comprises: an audio and video playing instruction;
identifying the identity of a user, wherein the identification method is any one or any combination of voiceprint identification, face identification and gait identification;
sending an audio and video playing control instruction to a shared output device, wherein the shared output device comprises: shared speakers, shared LED screens, shared displays, etc.;
and the sharing output equipment responds to the audio and video playing control instruction to play the audio and video content.
2. The method of claim 1, wherein in some embodiments, receiving the operational instructions further comprises: and adjusting the relevant parameters of the audio and video content according to the operation instruction.
3. The method of claim 1, wherein in some embodiments, receiving the operational instructions further comprises: and stopping or pausing playing the audio and video content according to the operation instruction.
4. An audio/video playback control apparatus, characterized in that the apparatus comprises:
the receiving unit is used for receiving a voice signal, a limb action signal or/and a gesture signal with an operation instruction, wherein the operation instruction comprises the following steps: an audio and video playing instruction;
the identification unit is used for identifying the identity of the user;
the control unit is used for sending a corresponding audio and video playing control instruction to the shared output equipment;
and the sharing output equipment is used for responding to the audio and video playing control instruction and playing the audio and video contents.
5. The apparatus of claim 4, wherein in some embodiments, the operating instructions further comprise: adjusting relevant parameters of the audio and video contents according to the operation instruction;
the control unit is also used for sending a corresponding control instruction for adjusting the related parameters of the audio and video content to the shared output equipment;
and the sharing output equipment is also used for responding to a control instruction for adjusting the related parameters of the audio and video contents and adjusting the related parameters of the audio and video contents.
6. The apparatus of claim 4, wherein in some embodiments, the operating instructions further comprise: stopping or pausing playing of the audio and video content according to the operation instruction;
the control unit is also used for sending a corresponding control instruction for stopping or pausing the playing of the audio and video contents to the shared output equipment;
the shared output device is also used for responding to a control instruction for stopping or pausing the playing of the audio and video contents and stopping or pausing the playing of the audio and video contents.
7. An audio-video playing control device, characterized in that the functions of the device can be realized by hardware, and also by hardware executing corresponding software, and the hardware and software include one or more units corresponding to the functions of claims 4 to 6.
8. A computer-readable storage medium for storing computer software instructions for the av playback control apparatus, which includes a program for executing the av playback control method of claims 1 to 3.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN2019100178897 | 2019-01-08 | |
CN201910017889 | 2019-01-08 | |
Publications (1)
Publication Number | Publication Date
---|---
CN111107407A | 2020-05-05
Family
ID=70425201
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN201911401545.2A (CN111107407A, Pending) | Audio and video playing control method, device and equipment and computer readable storage medium | 2019-01-08 | 2019-12-30
Country Status (1)
Country | Link
---|---
CN | CN111107407A (en)
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101232535A (en) * | 2008-02-15 | 2008-07-30 | 宇龙计算机通信科技(深圳)有限公司 | Method for sharing broadcast multimedia document and multimedia player |
CN105045122A (en) * | 2015-06-24 | 2015-11-11 | 张子兴 | Intelligent household natural interaction system based on audios and videos |
US20170155872A1 (en) * | 2015-11-30 | 2017-06-01 | Le Holdings (Beijing) Co., Ltd. | Method and device for audio/video sharing |
CN106850847A (en) * | 2017-03-10 | 2017-06-13 | 上海斐讯数据通信技术有限公司 | Voice messaging sharing method and its intelligent earphone based on cloud platform |
CN108052079A (en) * | 2017-12-12 | 2018-05-18 | 北京小米移动软件有限公司 | Apparatus control method, device, plant control unit and storage medium |
CN108846054A (en) * | 2018-05-31 | 2018-11-20 | 出门问问信息科技有限公司 | A kind of audio data continuous playing method and device |
- 2019-12-30: CN application CN201911401545.2A, published as CN111107407A, status Pending
Similar Documents
Publication | Title
---|---
JP6811758B2 | Voice interaction methods, devices, devices and storage media
JP7324313B2 | Voice interaction method and device, terminal, and storage medium
JP6544991B2 | Tactile Design Authoring Tool
CN104951335B | The processing method and processing device of application program installation kit
CN111177453B | Method, apparatus, device and computer readable storage medium for controlling audio playing
JP6783339B2 | Methods and devices for processing audio
CN104899610A | Picture classification method and device
CN103024630A | Volume regulating method of first electronic equipment and first electronic equipment
JP2015517709A | A system for adaptive distribution of context-based media
US20210142792A1 | Systems and Methods for Identifying and Providing Information About Semantic Entities in Audio Signals
CN110769280A | Method and device for continuously playing files
CN107330391A | Product information reminding method and device
KR20230118164A | Combining device or assistant-specific hotwords into a single utterance
US20210397991A1 | Predictively setting information handling system (IHS) parameters using learned remote meeting attributes
CN115113751A | Method and device for adjusting numerical range of recognition parameter of touch gesture
CN105430449B | Media file playing method, apparatus and system
CN111107407A | Audio and video playing control method, device and equipment and computer readable storage medium
CN103577060A | Data processing method and electronic equipment
US10489192B2 | Method and controlling apparatus for automatically terminating an application of an electronic apparatus based on audio volume level being adjusted lower than a threshold audio volume level by a user
CN105468196A | Photographing device and method
CN111031354B | Multimedia playing method, device and storage medium
WO2020154916A1 | Video subtitle synthesis method and apparatus, storage medium, and electronic device
TWI581626B | System and method for processing media files automatically
CN104683550A | Information processing method and electronic equipment
CN108089837A | A kind of switching method of microphone, device, equipment and storage medium
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination