WO2018000953A1 - Audio and video processing method, apparatus, and microphone - Google Patents
Audio and video processing method, apparatus, and microphone
- Publication number
- WO2018000953A1 (PCT/CN2017/083816, CN2017083816W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- audio
- microphone
- channels
- channel
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
Definitions
- the present invention relates to the field of communications, and in particular to an audio and video processing method, apparatus, and microphone.
- A split-screen device is a simple input device that outputs a selected channel to a display device.
- In practice, however, the input is not just a single channel: multiple channels need to be combined and also offered for selection. A traditional split-screen device cannot accept mobile devices or data files as sources, while a conference television terminal has many inputs, including audio sources in addition to video sources. There is currently no product on the market that combines audio with multi-channel video.
- Video sources may be mobile devices, computers, data sources, and so on.
- The auxiliary stream of a traditional conference television terminal can carry only one channel, so in a multi-person discussion where several people need access, switching the auxiliary stream is very cumbersome.
- In the related art, no effective solution has been proposed for the problem that a conventional video access device, with its limited input source interfaces, cannot meet these requirements.
- Embodiments of the present invention provide an audio and video processing method, an apparatus, and a microphone, so as to at least solve the problem that a conventional video access device in the related art cannot meet requirements due to its limited input source interfaces.
- An audio and video processing method is provided, including: a microphone receives one or more channels of audio and video; the microphone combines the one or more channels of video into one channel of video and encodes one channel of audio, or an audio channel selected from multiple channels of audio; and the microphone transmits the combined video and the encoded audio to a video and audio device.
- The method further includes: the microphone externally broadcasts its audio and video access capability by using a universal protocol, where the universal protocol includes the Digital Living Network Alliance (DLNA) protocol, AirPlay wireless transmission, and Wi-Fi Display wireless display.
- The receiving, by the microphone, of the one or more channels of audio and video includes: receiving them through a physical port, a wireless local area network (WLAN), Bluetooth, or near field communication (NFC).
- The method further includes: the microphone decodes the received one or more channels of video, and encodes the decoded video according to an encoding format negotiated in advance with the video and audio device, where the encoding format includes H263, H264, H265, Moving Picture Experts Group (MPEG), MP4, VP8, or VP9.
- The combining, by the microphone, of the one or more channels of video into one channel of video includes: the microphone receives information on the selected input sources and the composition mode sent by the video and audio device, selects the corresponding one or more channels of video according to that information, and combines the selected channels into one channel of video in the corresponding composition mode.
- The composition mode includes one of the following: a "品"-shaped (one-over-two) layout and a left-right symmetric layout.
- The method further includes: under the control of the video and audio device, the microphone selects, from the one or more channels of video, the video to be played.
- An audio and video processing apparatus is also provided, including: a receiving module configured to receive one or more channels of audio and video; a combining module configured to combine the one or more channels of video into one channel of video and encode one channel of audio, or an audio channel selected from multiple channels of audio; and a sending module configured to send the combined video and the encoded audio to the video and audio device.
- The apparatus further includes: a broadcast module configured to externally broadcast audio and video access capability by using a universal protocol, where the universal protocol includes the Digital Living Network Alliance (DLNA) protocol, AirPlay wireless transmission, and Wi-Fi Display wireless display.
- The receiving module includes: a receiving unit configured to receive the one or more channels of audio and video through a physical port, a wireless local area network (Wi-Fi), Bluetooth, or NFC.
- The apparatus further includes: a decoding module configured to decode the received one or more channels of video; and an encoding module configured to encode the decoded video according to an encoding format negotiated in advance with the video and audio device, where the encoding format includes H263, H264, H265, MPEG, MP4, VP8, or VP9.
- A microphone is also provided, which includes the above apparatus.
- A computer storage medium is further provided, which may store execution instructions for performing the audio and video processing method in the foregoing embodiments.
- In the embodiments of the present invention, the microphone receives one or more channels of audio and video, combines the one or more channels of video into one channel of video, encodes one channel of audio or an audio channel selected from multiple channels of audio, and sends the combined video and the encoded audio to the video and audio device. This solves the problem that a traditional video access device in the related art cannot meet requirements due to its limited input source interfaces, and improves the convenience of collaborative interaction.
- FIG. 1 is a flowchart of an audio and video processing method according to an embodiment of the present invention.
- FIG. 2 is a block diagram of an audio and video processing apparatus according to an embodiment of the present invention.
- FIG. 3 is a first block diagram of an audio and video processing apparatus in accordance with a preferred embodiment of the present invention.
- FIG. 4 is a second block diagram of an audio and video processing apparatus in accordance with a preferred embodiment of the present invention.
- FIG. 5 is a structural block diagram of a novel microphone in accordance with a preferred embodiment of the present invention.
- FIG. 6 is a first schematic diagram of an audio and video access process in accordance with a preferred embodiment of the present invention.
- FIG. 7 is a second schematic diagram of an audio and video access process in accordance with a preferred embodiment of the present invention.
- FIG. 8 is a third schematic diagram of an audio and video access process in accordance with a preferred embodiment of the present invention.
- FIG. 9 is a fourth schematic diagram of an audio and video access process in accordance with a preferred embodiment of the present invention.
- FIG. 10 is a fifth schematic diagram of an audio and video access process in accordance with a preferred embodiment of the present invention.
- FIG. 11 is a sixth schematic diagram of an audio and video access process in accordance with a preferred embodiment of the present invention.
- FIG. 12 is a seventh schematic diagram of an audio and video access process in accordance with a preferred embodiment of the present invention.
- FIG. 13 is an eighth schematic diagram of an audio and video access process in accordance with a preferred embodiment of the present invention.
- FIG. 1 is a flowchart of an audio and video processing method according to an embodiment of the present invention. As shown in FIG. 1, the process includes the following steps:
- Step S102: the microphone receives one or more channels of audio and video;
- Step S104: the microphone combines the one or more channels of video into one channel of video, and encodes one channel of audio or an audio channel selected from the multiple channels of audio;
- Step S106: the microphone sends the combined video and the encoded audio to the video and audio device.
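For readers who prefer pseudo-code, the three steps can be pictured as a short pipeline. The sketch below is purely illustrative: the helper names (receive_inputs, compose, select_audio, encode_video, encode_audio, send) and the device interface are assumptions made for the example, not part of the embodiment.

```python
# Illustrative sketch of the three-step flow (S102-S106); all helper
# functions and the av_device interface are hypothetical placeholders.

def process(microphone, av_device, layout="left_right"):
    # S102: receive one or more audio/video inputs (wired or wireless)
    streams = microphone.receive_inputs()            # list of sources with .video / .audio

    # S104: compose the selected video inputs into one picture and encode
    # one audio channel (or one selected from several)
    video_frames = [s.video for s in streams if s.video is not None]
    composed = microphone.compose(video_frames, layout=layout)
    audio = microphone.select_audio([s.audio for s in streams if s.audio is not None])
    encoded_video = microphone.encode_video(composed)   # format negotiated beforehand
    encoded_audio = microphone.encode_audio(audio)

    # S106: send the result to the video/audio device (e.g. a conference terminal)
    av_device.send(encoded_video, encoded_audio)
```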
- Through the above steps, the microphone receives one or more channels of audio and video; the microphone combines the one or more channels of video into one channel of video and encodes one channel of audio, or an audio channel selected from multiple channels of audio; and the microphone sends the combined video and the encoded audio to the video and audio device. This solves the problem that a traditional video access device in the related art cannot meet requirements due to its limited input source interfaces, and improves the convenience of collaborative interaction.
- Before receiving the one or more channels of audio and video, the microphone broadcasts its audio and video access capability through a universal protocol.
- The universal protocol includes the Digital Living Network Alliance (DLNA) protocol, AirPlay wireless transmission, and Wi-Fi Display wireless display; it should be noted that the protocol is not limited to the above.
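As a rough illustration of what an access-capability broadcast might look like, the sketch below periodically announces a small JSON description over UDP. This is not an implementation of DLNA, AirPlay, or Wi-Fi Display themselves; the message fields and port number are invented for the example.

```python
import json
import socket
import time

# Hypothetical capability announcement: broadcast a short JSON description
# on the LAN so that phones and PCs can discover the microphone. Real
# discovery would use DLNA/SSDP, AirPlay, or Wi-Fi Display instead.
ANNOUNCE_PORT = 50505          # arbitrary port chosen for this sketch

def broadcast_capabilities(name="new-microphone"):
    message = json.dumps({
        "device": name,
        "accepts": ["video", "audio", "file"],
        "protocols": ["DLNA", "AirPlay", "WiFi Display"],
        "transports": ["WLAN", "Bluetooth", "NFC", "HDMI", "VGA", "DVI"],
    }).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    try:
        while True:
            sock.sendto(message, ("255.255.255.255", ANNOUNCE_PORT))
            time.sleep(5)      # re-announce every few seconds
    finally:
        sock.close()
```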
- The receiving, by the microphone, of the one or more channels of audio and video may include: receiving them through a physical port, a wireless local area network (Wi-Fi), Bluetooth, or near field communication (NFC).
- The microphone decodes the received one or more channels of video and encodes the decoded video according to an encoding format negotiated in advance with the video and audio device, where the encoding format includes H263, H264, H265, MPEG, MP4, VP8, VP9, and the like.
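The decode-then-re-encode step can be pictured as a transcoding pass. A minimal sketch, assuming ffmpeg is available as the transcoder and using an illustrative mapping from a few negotiated format names to encoder names (the mapping itself is an assumption, not part of the embodiment):

```python
import subprocess

# Map a negotiated format name to an ffmpeg encoder (illustrative only).
ENCODERS = {"H264": "libx264", "H265": "libx265", "VP8": "libvpx", "VP9": "libvpx-vp9"}

def transcode(input_path, output_path, negotiated_format="H264"):
    """Decode the captured video and re-encode it in the negotiated format."""
    encoder = ENCODERS[negotiated_format]
    subprocess.run(
        ["ffmpeg", "-y", "-i", input_path, "-c:v", encoder, output_path],
        check=True,
    )
```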
- The combining of the one or more channels of video into one channel of video may include: the microphone receives the information on the selected input sources and the composition mode sent by the video and audio device, selects the corresponding one or more channels of video according to that information, and combines the selected channels into one channel of video in the corresponding composition mode.
- The above-mentioned composition mode includes one of the following: a "品"-shaped (one-over-two) layout and a left-right symmetric layout; it should be noted that the mode is not limited to these two implementations.
- Under the control of the video and audio device, the microphone can select the video to be played from the one or more channels of video, compose the selected video, and transmit it to the video and audio device for playing.
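The combination of several selected pictures into one channel can be sketched as pasting scaled frames onto a single canvas. The example below treats frames as NumPy arrays and implements only a simple left-right split; it illustrates the idea rather than the embodiment's media processing module.

```python
import numpy as np

def compose_side_by_side(frames, height=720, width=1280):
    """Paste the selected frames left-to-right onto one canvas (sketch only)."""
    canvas = np.zeros((height, width, 3), dtype=np.uint8)
    if not frames:
        return canvas
    slot_w = width // len(frames)
    for i, frame in enumerate(frames):
        h, w = frame.shape[:2]
        # Nearest-neighbour resize into the slot, kept simple for the sketch.
        ys = np.arange(height) * h // height
        xs = np.arange(slot_w) * w // slot_w
        canvas[:, i * slot_w:(i + 1) * slot_w] = frame[ys][:, xs]
    return canvas
```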
- FIG. 2 is a block diagram of an audio and video processing apparatus according to an embodiment of the present invention. As shown in FIG. 2, the apparatus includes:
- the receiving module 22 is configured to receive one or more channels of audio and video;
- the combining module 24 is configured to combine the one or more channels of video into one channel of video, and encode one channel of audio or an audio channel selected from the multiple channels of audio;
- the sending module 26 is configured to send the combined video and the encoded audio to the video and audio device.
- FIG. 3 is a first block diagram of an audio and video processing apparatus according to a preferred embodiment of the present invention. As shown in FIG. 3, the apparatus further includes:
- the broadcast module 32 is configured to externally broadcast audio and video access capability through a universal protocol, where the universal protocol includes the Digital Living Network Alliance (DLNA) protocol, AirPlay wireless transmission, and Wi-Fi Display wireless display.
- The receiving module includes: a receiving unit configured to receive the one or more channels of audio and video through a physical port, Wi-Fi, Bluetooth, or NFC.
- FIG. 4 is a second block diagram of an audio and video processing apparatus according to a preferred embodiment of the present invention. As shown in FIG. 4, the apparatus further includes:
- the decoding module 42 is configured to decode the received one or more channels of video
- the encoding module 44 is configured to encode the decoded one or more channels of video according to an encoding format negotiated in advance with the video and audio device, where the encoding format includes H263, H264, H265, MPEG, MP4, VP8, VP9, and the like.
- Embodiments of the present invention also provide a microphone including the above device.
- Embodiments of the present invention also provide a storage medium.
- the storage medium may be configured to store program code set to perform the following steps:
- Step S1: the microphone receives one or more channels of audio and video;
- Step S2: the microphone combines the one or more channels of video into one channel of video, and encodes one channel of audio or an audio channel selected from the multiple channels of audio;
- Step S3: the microphone sends the combined video and the encoded audio to the video and audio device.
- The foregoing storage medium may include, but is not limited to, a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, and a magnetic disk.
- A processor performs the above steps S1, S2, and S3 according to the program code stored in the storage medium.
- The embodiments of the present invention center on the audio collection device, such as a microphone, and add video access to the microphone so as to remedy the drawbacks in the current video and audio field. To this end, the present invention adopts the following technical solutions:
- FIG. 5 is a structural block diagram of a novel microphone according to a preferred embodiment of the present invention. As shown in FIG. 5, the following modules are mainly included:
- the capability notification module 52 is configured to report the microphone's access capability to the outside, so that external sources can access the device;
- the video capture module 54 is configured to collect video data from physical inputs, such as common physical interfaces like VGA, HDMI, or DVI;
- the audio collection module 56 is the module through which the microphone picks up sound;
- the data receiving module 58 is configured to receive video and audio data through non-physical interfaces, in addition to the inputs received through physical interfaces. The received data may arrive over interconnection protocols such as Wi-Fi, Miracast, Wi-Fi Display, AirPlay, or DLNA, or over NFC, Bluetooth, and the like, and may include video, audio, or other data;
- the media negotiation module 510 is configured to negotiate with the remote device the media capabilities to be used between the two parties;
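Conceptually, such a negotiation can be reduced to intersecting the two sides' codec lists and picking the most preferred common entry. The preference order and function shape below are assumptions made for the sketch.

```python
# Hypothetical negotiation helper: choose the first format in our preference
# order that the remote video/audio device also supports.
LOCAL_VIDEO_FORMATS = ["H265", "H264", "VP9", "VP8", "MPEG", "MP4", "H263"]

def negotiate_video_format(remote_formats):
    common = [f for f in LOCAL_VIDEO_FORMATS if f in set(remote_formats)]
    if not common:
        raise ValueError("no common video encoding format")
    return common[0]

# Example: a conference terminal that supports only H263 and H264.
# negotiate_video_format(["H263", "H264"]) -> "H264"
```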
- the media processing module 512 is configured to process the collected and received video and audio data, including superimposing or composing the video data from multiple accesses and generating data in the corresponding format according to the negotiated compression encoding;
- the media sending module 514 is configured to send the superimposed or composed data to an external video and audio device, such as a conference television terminal, as needed; the superimposed or composed data may be one of the multiple channels accessing the system, or a superposition or composition of several of all the accessing channels, determined as needed;
- the input source control module 516 is configured to receive control signaling sent by the video and audio device and, according to that control, select which of the video sources captured by the new microphone are to be viewed, which audio source to use, and which composition mode to apply before sending to the video and audio device.
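Taken together, the modules above form a small component diagram. The class skeleton below merely mirrors that structure to make the data flow explicit; the method names and bodies are placeholders, not the embodiment's implementation.

```python
class NewMicrophone:
    """Structural sketch of the modules 52-516 described above (placeholders only)."""

    def __init__(self, capability, video_capture, audio_capture,
                 data_receiver, negotiator, processor, sender, source_control):
        self.capability = capability          # 52: announces access capability
        self.video_capture = video_capture    # 54: VGA/HDMI/DVI capture
        self.audio_capture = audio_capture    # 56: microphone pickup
        self.data_receiver = data_receiver    # 58: WLAN/Bluetooth/NFC input
        self.negotiator = negotiator          # 510: codec negotiation
        self.processor = processor            # 512: decode, compose, encode
        self.sender = sender                  # 514: push to the AV device
        self.source_control = source_control  # 516: source/layout selection

    def run_once(self, av_device):
        fmt = self.negotiator.negotiate(av_device)
        sources = self.video_capture.frames() + self.data_receiver.frames()
        selection, layout = self.source_control.current_selection()
        picture = self.processor.compose([sources[i] for i in selection], layout)
        audio = self.processor.encode_audio(self.audio_capture.read(), fmt)
        self.sender.send(av_device, self.processor.encode_video(picture, fmt), audio)
```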
- The method by which the new type of microphone supports video input and output includes the following: the new microphone exposes its own video and audio access capability through the capability notification module 52 using a general protocol. General protocols include, but are not limited to, DLNA, AirPlay, Wi-Fi Display, and so on.
- The communication carrier includes, but is not limited to, Wi-Fi, Bluetooth, NFC, and the like. If the external video source is a physical video signal, it is connected directly to the new microphone and processed by the video capture module 54. If the external source is a wireless video input source, such as a mobile phone or tablet, the external video source searches for the new microphone through a universal protocol, and the new microphone accesses the wireless video source through the data receiving module 58; the general protocol includes, but is not limited to, DLNA, AirPlay, Wi-Fi Display, and the like.
- The wireless method includes, but is not limited to, Wi-Fi, Bluetooth, NFC, and other communication methods. If the external source is a wireless audio input source, such as music from a mobile phone, the external audio source searches for the new microphone through a universal protocol, and the new microphone accesses the wireless audio source through the data receiving module 58; the general protocol includes, but is not limited to, DLNA, AirPlay, Wi-Fi Display, and the like.
- The wireless method likewise includes, but is not limited to, Wi-Fi, Bluetooth, NFC, and other communication methods.
- The processing, by the media processing module 512, of the video and audio data collected and received by the system includes: decoding the collected physical video signal and then encoding it according to the capability negotiated by the negotiation module; the format includes, but is not limited to, H264, Moving Picture Experts Group (MPEG), MP4, and so on.
- Received data that is neither video nor audio, such as file data, is presented in the form of a folder or file and then encoded according to the negotiated capability.
- The encoding format includes, but is not limited to, H264, MPEG, MP4, and the like.
- The video information collected through physical and non-physical means is superimposed or composed into a single video.
- The composition method includes, but is not limited to, a variety of layouts such as a "品"-shaped (one-over-two) arrangement and a left-right symmetric arrangement.
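The two layout families differ only in how the output canvas is partitioned. A minimal sketch of that partitioning, assuming a fixed canvas size and only the two layouts named above:

```python
def layout_rects(mode, n, width=1280, height=720):
    """Return (x, y, w, h) tiles for n pictures under a given layout (sketch only)."""
    if mode == "left_right":                       # n equal vertical strips
        w = width // n
        return [(i * w, 0, w, height) for i in range(n)]
    if mode == "pin_shape" and n == 3:             # 品: one picture above, two below
        half_h, half_w = height // 2, width // 2
        return [(width // 4, 0, half_w, half_h),   # top, centred
                (0, half_h, half_w, half_h),       # bottom-left
                (half_w, half_h, half_w, half_h)]  # bottom-right
    raise ValueError("unsupported layout for this sketch")
```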
- The physically acquired audio, as well as audio arriving over NFC or Bluetooth, is encoded as needed.
- The superimposed or composed video and the encoded audio data are transmitted to an external audio and video device through the sending module.
- The new microphone communicates with the video and audio device through the input source control module 516, receives the information on the selected input sources and the composition mode sent by the video and audio device, selects the corresponding input sources according to that information, applies the corresponding composition mode, and sends the video and audio data to the video and audio device through the media sending module 514.
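The control path from the video and audio device back to the microphone can be pictured as a small message handler: the device names the sources it wants and the layout, and the microphone re-composes accordingly. The message fields and the source_control.select call are invented for this illustration.

```python
# Hypothetical control message from the AV device, e.g.:
#   {"sources": ["notebook_A", "notebook_B"], "layout": "left_right"}
def handle_control_message(microphone, message):
    selection = message.get("sources", [])
    layout = message.get("layout", "left_right")
    microphone.source_control.select(selection, layout)   # 516 updates the selection
    # The next composed frame pushed by the media sending module (514) will
    # reflect the newly selected sources and layout.
```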
- FIG. 6 is a first schematic diagram of the audio and video access process according to a preferred embodiment of the present invention. As shown in FIG. 6, the process includes the following steps:
- Step 1: the new microphone broadcasts its own video and audio access capability through the capability notification module 52;
- Step 2: notebook A accesses the new microphone. Either it accesses the microphone physically, for example over HDMI or VGA, and the new microphone collects the notebook's media signal through the video capture module 54; or notebook A searches for the new microphone through a protocol such as Wi-Fi Display, DLNA, or AirPlay, communicates with the data receiving module 58 of the new microphone, and transmits its media data to complete the access;
- Step 3: the new microphone negotiates with the conference television terminal, through the media negotiation module 510, the video and audio format to be used for encoding;
- Step 4: the input source control module 516 obtains from the external conference television terminal which video source is to be selected and which composition mode to use; since only one video source is available, notebook A is selected;
- Step 5: the media processing module 512 encodes according to the composition mode, the selected video source, and the negotiated encoding format;
- Step 6: the media sending module 514 sends the encoded media data to the conference television terminal;
- Step 7: the user can see the processed video of the notebook through the output of the audio and video processing device;
- Step 8: when the video selected by the user changes, the corresponding video source and composition mode selected through the input source control module 516 are sent to the conference television terminal.
- In a second example, two notebooks access the new microphone. Step 1: the new microphone broadcasts its own video and audio access capability through the capability notification module 52;
- Step 2: notebook A accesses the new microphone. Either it accesses the microphone physically, for example over HDMI or VGA, and the new microphone collects the notebook's media signal through the video capture module 54; or notebook A searches for the new microphone through a protocol such as Wi-Fi Display, DLNA, or AirPlay, communicates with the data receiving module 58 of the new microphone, and transmits its media data to complete the access;
- Step 3: notebook B accesses the new microphone in the same way, either physically over HDMI, VGA, or another signal interface, with the new microphone collecting the media signal through the video capture module 54, or by searching for the new microphone through a protocol such as Wi-Fi Display, DLNA, or AirPlay, communicating with the data receiving module 58, and transmitting its media data to complete the access;
- Step 4: the new microphone negotiates with the conference television terminal, through the media negotiation module 510, the video and audio format to be used for encoding;
- Step 5: the input source control module 516 obtains from the external conference television terminal which video source is to be selected and which composition mode to use.
- FIG. 7 is a second schematic diagram of the audio and video access process according to a preferred embodiment of the present invention. As shown in FIG. 7, notebook A and notebook B are selected simultaneously.
- The composition mode may place notebook A and notebook B side by side on the left and right, or arrange them symmetrically top and bottom, and is not limited to a specific screen layout.
- FIG. 8 is a third schematic diagram of the audio and video access process according to a preferred embodiment of the present invention. As shown in FIG. 8, only notebook A is selected; since a single video source is selected, the composed picture is simply the content of notebook A.
- FIG. 9 is a fourth schematic diagram of the audio and video access process according to a preferred embodiment of the present invention. As shown in FIG. 9, only notebook B is selected; since a single video source is selected, the composed picture is simply the content of notebook B.
- Step 5 (continued): the media processing module 512 encodes according to the composition mode, the selected video source, and the negotiated encoding format;
- Step 6: the media sending module 514 sends the encoded media data to the conference television terminal;
- Step 7: the user can see the processed video through the output of the audio and video processing device;
- Step 8: when the video selected by the user changes, the corresponding video source and composition mode selected through the input source control module 516 are sent to the conference television terminal.
- In a third example, three notebooks access the new microphone. Step 1: the new microphone broadcasts its own video and audio access capability through the capability notification module 52;
- Step 2: notebook A accesses the new microphone, either physically over HDMI, VGA, or another interface, with the new microphone collecting the media signal through the video capture module 54, or by searching for the new microphone through a protocol such as Wi-Fi Display, DLNA, or AirPlay, communicating with the data receiving module 58, and transmitting its media data to complete the access;
- Step 3: notebook B accesses the new microphone in the same way, either physically over HDMI, VGA, or another signal interface, or wirelessly through a protocol such as Wi-Fi Display, DLNA, or AirPlay via the data receiving module 58;
- In the same step, notebook C also accesses the new microphone, again either physically over HDMI, VGA, or another signal interface, with the media signal collected through the video capture module 54, or by searching for the new microphone through a protocol such as Wi-Fi Display, DLNA, or AirPlay and transmitting its media data to the data receiving module 58 to complete the access;
- Step 4: the new microphone negotiates with the conference television terminal, through the media negotiation module 510, the video and audio format to be used for encoding;
- Step 5: the input source control module 516 obtains from the external conference television terminal which video sources are to be selected and which composition mode to use.
- FIG. 10 is a fifth schematic diagram of the audio and video access process according to a preferred embodiment of the present invention. As shown in FIG. 10, notebook A, notebook B, and notebook C are selected simultaneously.
- The composition mode may give notebook A, notebook B, and notebook C one third of the picture each, and is not limited to a specific screen layout.
- FIG. 11 is a sixth schematic diagram of the audio and video access process according to a preferred embodiment of the present invention. As shown in FIG. 11, notebook A and notebook B are selected.
- The composition mode may give the contents of notebook A and notebook B half of the picture each, and is not limited to a specific screen layout.
- FIG. 12 is a seventh schematic diagram of the audio and video access process according to a preferred embodiment of the present invention. As shown in FIG. 12, only notebook C is selected; since a single video source is selected, the composed picture is simply the content of notebook C. Any one of the input sources may be chosen in this way.
- Step 6: the media processing module 512 encodes according to the composition mode, the selected video sources, and the negotiated encoding format;
- Step 7: the media sending module 514 sends the encoded media data to the conference television terminal;
- Step 8: the user can see the processed video through the output of the audio and video processing device;
- Step 9: when the video selected by the user changes, the corresponding video sources and composition mode selected through the input source control module 516 are sent to the conference television terminal.
- FIG. 13 is an eighth schematic diagram of the audio and video access process according to a preferred embodiment of the present invention. As shown in FIG. 13, the process includes the following steps:
- Step 1: notebook A, notebook B, and notebook C access the new microphone as in the first, second, and third steps of the previous example;
- Step 2: the media processing module 512 processes and encodes the video signals of notebook A, notebook B, and notebook C; in addition, an NFC/Bluetooth device transmits a file to the new microphone over NFC/Bluetooth, and the new microphone displays the contents of the folder;
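One way to picture "displaying the contents of the folder" as a video source is to render the file listing onto an image frame that can then be composed with the notebook pictures. The sketch below uses Pillow for the rendering; the library choice, frame size, and function name are assumptions made for the illustration.

```python
import os
from PIL import Image, ImageDraw

def folder_listing_frame(folder, width=640, height=360):
    """Render a received folder's file names onto a frame (illustrative only)."""
    frame = Image.new("RGB", (width, height), color=(20, 20, 20))
    draw = ImageDraw.Draw(frame)
    y = 10
    for name in sorted(os.listdir(folder)):
        draw.text((10, y), name, fill=(230, 230, 230))   # default bitmap font
        y += 18
        if y > height - 18:                              # stop when the frame is full
            break
    return frame                                         # can be composed like any other picture
```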
- Step 3: the new microphone and the conference television terminal negotiate the encoding capability and the composition mode;
- Step 4: the video content of notebooks A, B, and C to be displayed and the received file content are superimposed or composed and encoded according to the result of the negotiation in Step 3;
- Step 5: the new microphone negotiates with the conference television terminal, through the media negotiation module 510, the video and audio format to be used for encoding;
- Step 6: the media sending module 514 sends all the processed data to the conference television terminal;
- Step 7: according to the user's needs, the user can, through the input source control module 516 of the new microphone, choose to view the content of notebook A, notebook B, notebook C, or the NFC/Bluetooth information individually, or watch the video content of notebook A, notebook B, notebook C, and the NFC/Bluetooth information simultaneously;
- The devices accessing the new microphone and outputting to the conference television device are not limited to three, and the output is not limited to a conference television device: any video and audio device capable of output may be used;
- Step 8: when the video selected by the user changes, the corresponding video sources and composition mode selected through the input source control module 516 are sent to the conference television terminal.
- The modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device, and in some cases the steps shown or described may be performed in an order different from that described herein; alternatively, they may be separately fabricated as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module.
- Thus, the invention is not limited to any specific combination of hardware and software.
- In summary, the microphone receives one or more channels of audio and video, combines the one or more channels of video into one channel of video, encodes one channel of audio or an audio channel selected from multiple channels of audio, and sends the combined video and the encoded audio to the video and audio device. This solves the problem that a traditional video access device in the related art cannot meet requirements due to its limited input source interfaces, and improves the convenience of collaborative interaction.
Abstract
Disclosed are an audio and video processing method, an apparatus, and a microphone. The method includes the following steps: the microphone receives one or more channels of audio and video; the microphone combines the one or more channels of video into a single channel of video, and encodes one channel of audio or an audio channel selected from the multiple channels of audio; and the microphone sends the combined video and the encoded audio to a video and audio device. The present invention solves the problem in the related art that a conventional video access device cannot meet requirements because of its limited input source interfaces, and thereby improves the convenience of collaborative interaction.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610495723.2A (published as CN107547824A) | 2016-06-29 | 2016-06-29 | Audio and video processing method, apparatus, and microphone (音视频处理方法、装置及麦克) |
CN201610495723.2 | 2016-06-29 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018000953A1 (fr) | 2018-01-04 |
Family
ID=60785831
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/083816 (WO2018000953A1) | Audio and video processing method, apparatus, and microphone | 2016-06-29 | 2017-05-10 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107547824A (fr) |
WO (1) | WO2018000953A1 (fr) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116962790A (zh) * | 2023-08-02 | 2023-10-27 | 深圳市辉宏科技有限公司 | 一种视频交互系统、方法及存储介质 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100505864C (zh) * | 2005-02-06 | 2009-06-24 | 中兴通讯股份有限公司 | 一种多点视频会议系统及其媒体处理方法 |
CN103888488A (zh) * | 2012-12-20 | 2014-06-25 | 三星电子(中国)研发中心 | 一种基于wifi进行数据共享的方法 |
CN104010155B (zh) * | 2013-02-27 | 2017-12-22 | 联芯科技有限公司 | 视频电话的实现方法及移动终端 |
CN103426431B (zh) * | 2013-07-24 | 2016-08-10 | 阳光凯讯(北京)科技有限公司 | 卫星网络与地面网系的融合通信系统及动态声码转换方法 |
CN104994247A (zh) * | 2015-05-19 | 2015-10-21 | 苏州方位通讯科技有限公司 | 一种将SIP终端作为VoIP热点接入通信的方法 |
- 2016-06-29: CN application CN201610495723.2A filed (published as CN107547824A, status: pending)
- 2017-05-10: PCT application PCT/CN2017/083816 filed (published as WO2018000953A1, active application filing)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101309390A (zh) * | 2007-05-17 | 2008-11-19 | 华为技术有限公司 | 视讯通信系统、装置及其字幕显示方法 |
US20100157016A1 (en) * | 2008-12-23 | 2010-06-24 | Nortel Networks Limited | Scalable video encoding in a multi-view camera system |
CN102404547A (zh) * | 2011-11-24 | 2012-04-04 | 中兴通讯股份有限公司 | 一种实现视频会议级联的方法及终端 |
CN103841360A (zh) * | 2013-12-11 | 2014-06-04 | 三亚中兴软件有限责任公司 | 分布式视频会议的实现方法及系统、终端、音视频一体化设备 |
Also Published As
Publication number | Publication date |
---|---|
CN107547824A (zh) | 2018-01-05 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17818950; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 17818950; Country of ref document: EP; Kind code of ref document: A1