CN110381197A - Method, apparatus and system for processing audio data in many-to-one screen projection - Google Patents
Method, apparatus and system for processing audio data in many-to-one screen projection
- Publication number
- CN110381197A CN110381197A CN201910567806.1A CN201910567806A CN110381197A CN 110381197 A CN110381197 A CN 110381197A CN 201910567806 A CN201910567806 A CN 201910567806A CN 110381197 A CN110381197 A CN 110381197A
- Authority
- CN
- China
- Prior art keywords
- audio
- electronic device
- message
- screen display
- electronic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/1423—Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
- G06F3/1431—Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display using a single graphics controller
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/61—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/72409—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
- H04M1/72412—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/72409—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
- H04M1/72415—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories for remote control of appliances
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/72427—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting games or graphical animations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72439—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/72442—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for playing music files
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Human Computer Interaction (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
This application discloses a method and apparatus for processing audio data in a many-to-one screen projection system. During many-to-one screen projection, each electronic device can play its own audio and/or the audio of other electronic devices as the user requires, so as to meet the needs of different users in a many-to-one projection scenario. When an electronic device mixes its own audio with the audio of other electronic devices before playback, the user can hear the audio of all electronic devices simultaneously. In a game scenario, a user can experience not only his own game scene but also the game scenes of other users, which improves the user experience.
Description
Technical Field
The present application relates to the field of screen projection technologies, and in particular, to a method, an apparatus, and a system for processing audio data in many-to-one screen projection.
Background
With the development of terminal technology, more and more terminals have a screen projection function. For example, one or more mobile phones may project their currently displayed screens onto a large-screen display device such as a television. When one terminal projects its screen onto a large-screen display device, this can be called one-to-one screen projection; when multiple terminals project their screens onto the large-screen display device at the same time, this can be called many-to-one screen projection.
In many-to-one screen projection, the pictures from the terminals can be displayed in different areas of the large-screen display device, so that multiple users can see the pictures of all the terminals at the same time, which provides a good user experience.
However, in current many-to-one screen projection scenarios, the audio data of each terminal is simply sent to the large-screen display device and played simultaneously, so the audio played by the large-screen display device is a jumble and users cannot pick out the audio they want, resulting in a poor screen projection effect and poor user experience.
Disclosure of Invention
The present application provides a method, an apparatus, and a system for processing audio data in many-to-one screen projection, whereby during many-to-one screen projection each electronic device can play its own audio and/or the audio of other electronic devices according to the user's needs.
In a first aspect, the present application provides a method for processing audio data in many-to-one screen projection. The method is applied to a many-to-one screen projection system comprising a first electronic device, a second electronic device, and a large-screen display device. The method comprises the following steps:
the first electronic device sends first screen projection content to the large-screen display device, where the first screen projection content includes a first image and a first audio;
the second electronic device sends second screen projection content to the large-screen display device, where the second screen projection content includes a second image and a second audio;
the large-screen display device displays the first image and the second image, and plays the first audio and the second audio;
the first electronic device detects a first operation, and in response to the first operation, stops sending the first audio to the large-screen display device;
after the first operation is detected, the first electronic device receives the second audio sent by the second electronic device, and plays the first audio and the second audio after mixing them.
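The steps above can be sketched as a toy run, assuming audio is modeled as a plain list of samples and ignoring images, connections, and timing; all names here are illustrative, not the patent's wording.

```python
def run_first_aspect(first_audio, second_audio):
    """Toy model of the first-aspect steps for the first device's audio path."""
    # Steps 1-3: both devices cast audio; the display plays both streams.
    display_plays = {"first": first_audio, "second": second_audio}

    # Step 4: the first device detects the first operation and stops
    # sending its audio to the large-screen display device.
    del display_plays["first"]

    # Step 5: the first device receives the second audio and plays a
    # local mix of both streams (sample-wise sum on the same time axis).
    local_mix = [a + b for a, b in zip(first_audio, second_audio)]

    return display_plays, local_mix

display_plays, local_mix = run_first_aspect([1, 2, 3], [10, 20, 30])
print(display_plays)  # {'second': [10, 20, 30]}
print(local_mix)      # [11, 22, 33]
```

Note that the display keeps playing the second audio: only the first device's behaviour changes.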
By implementing the method of the first aspect, in a many-to-one screen projection scenario, after receiving the first operation, the first electronic device may mix the first audio and the second audio and then play them. In a game scenario, this allows the user of the first electronic device to experience the game scenes of other users. Furthermore, the audio processing of the second electronic device is not affected, that is, the experience of other users is not affected. Therefore, the needs of different users in a many-to-one screen projection scenario can be met.
With reference to the first aspect, the large-screen display device is connected to and communicates with the first electronic device and with the second electronic device through a wireless communication technology. Wireless communication technologies here may include, but are not limited to: wireless local area network (WLAN) technology, Bluetooth, infrared, near field communication (NFC), ZigBee, and other wireless communication technologies that may emerge in the future.
In the method of the first aspect, the first electronic device and the second electronic device do not establish a direct connection and do not communicate directly; instead, they can communicate through the large-screen display device. In some embodiments, the first electronic device stopping sending the first audio to the large-screen display device does not prevent the first electronic device from sending the first audio to the second electronic device through the large-screen display device.
In the method of the first aspect, the first electronic device and the second electronic device both send their screen projection content to the large-screen display device, thereby completing many-to-one screen projection. The screen projection content may come from the local end of the electronic device or from a cloud server. The screen projection content includes an image and audio. The image may be a video or a picture. The audio may be referred to as the audio of the electronic device, and may include audio acquired by the electronic device from a network or audio stored locally on the electronic device. In some embodiments of the present application, the audio of the electronic device may exclude audio collected by the electronic device through a microphone or other audio collection device; that is, the electronic device may not collect the user's voice or environmental sounds, so that the user does not repeatedly hear the same voice, providing a relatively clean auditory environment for the user.
With reference to the first aspect, in some embodiments, after the large-screen display device displays the first image and the second image and plays the first audio and the second audio, the first electronic device may further display a first user interface. The first user interface may include a first control, and the first operation may include an operation acting on the first control, such as a touch operation or a click operation. The first operation is not limited to an operation acting on the first control; it may also be a voice instruction, a gesture instruction, or the like, which is not limited in this application.
With reference to the first aspect, in some embodiments, after the electronic device mixes multiple audio streams and plays the result, the user can listen to the multiple audio streams at the same time. That is, the electronic device superimposes the multiple audio streams according to the same time axis and plays the superimposed result.
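A minimal sketch of this superposition step, assuming 16-bit PCM samples represented as Python integers (an assumption; the patent does not specify a sample format): sample pairs on the same time axis are summed and clipped to the signed 16-bit range.

```python
def mix_pcm(first_audio, second_audio):
    """Superimpose two 16-bit PCM sample sequences on the same time axis."""
    length = max(len(first_audio), len(second_audio))
    mixed = []
    for i in range(length):
        # Treat a stream that has ended as silence.
        a = first_audio[i] if i < len(first_audio) else 0
        b = second_audio[i] if i < len(second_audio) else 0
        # Clip the sum to the signed 16-bit range to avoid overflow.
        mixed.append(max(-32768, min(32767, a + b)))
    return mixed

print(mix_pcm([1000, -2000, 30000], [500, -500, 10000]))
# [1500, -2500, 32767]  (the third sample clips)
```

A real implementation would more likely mix fixed-size buffers from two ring buffers, but the per-sample arithmetic is the same.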
In conjunction with the first aspect, in some embodiments, the first electronic device may obtain the second audio in one of the following two ways:
1. Passive acquisition.
Specifically, the first electronic device may send a first message to the large-screen display device in response to the first operation, where the first message is used to request the second audio; the large-screen display device may, in response to the first message, send an identifier of the first electronic device to the second electronic device; and the second electronic device sends the second audio to the first electronic device according to the identifier of the first electronic device.
Acquiring the second audio in mode 1 reduces the interaction between the first electronic device and the other devices in the many-to-one screen projection system.
2. Active request.
Specifically, the large-screen display device may send, in response to the first message, an identifier of the second electronic device to the first electronic device; the first electronic device sends a second message to the second electronic device according to the identifier of the second electronic device, where the second message is used to request the second audio; and the second electronic device sends the second audio to the first electronic device in response to the second message.
Acquiring the second audio in mode 2 reduces the interaction between the large-screen display device and the other devices in the many-to-one screen projection system.
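The two acquisition modes can be contrasted with a toy in-memory model. The class names, the mode flag, and the message tuples below are illustrative assumptions, not the patent's wording; the only point is who initiates the audio transfer after the display routes an identifier.

```python
class Device:
    def __init__(self, ident, audio):
        self.ident, self.audio = ident, audio
        self.received = {}   # peer identifier -> peer audio
        self.log = []        # extra messages this device had to send

    def push_audio_to(self, peer):
        peer.received[self.ident] = self.audio

class Display:
    def __init__(self, *devices):
        self.devices = {d.ident: d for d in devices}

    def on_first_message(self, requester_id, provider_id, mode):
        requester = self.devices[requester_id]
        provider = self.devices[provider_id]
        if mode == "passive":
            # Mode 1: the display forwards the requester's identifier to
            # the provider, which pushes its audio unprompted.
            provider.push_audio_to(requester)
        else:
            # Mode 2: the display forwards the provider's identifier to
            # the requester, which sends a second message to request it.
            requester.log.append(("second_message", provider_id))
            provider.push_audio_to(requester)

a, b = Device("A", [1, 2]), Device("B", [3, 4])
display = Display(a, b)
display.on_first_message("A", "B", mode="passive")
print(a.received)  # {'B': [3, 4]}
print(a.log)       # []  (passive mode costs the requester no extra message)
```

In passive mode the requester sends nothing beyond the first message; in active mode it sends one more message, but the display needs to route only one identifier per request.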
With reference to the first aspect, in some embodiments, after the first electronic device receives the first operation, the second electronic device in the many-to-one screen projection system may also mix the first audio and the second audio and play them. Specifically, in response to the first message, the large-screen display device sends a third message to the second electronic device; in response to the third message, the second electronic device stops sending the second audio to the large-screen display device; and after receiving the third message, the second electronic device receives the first audio sent by the first electronic device, and plays the first audio and the second audio after mixing them.
In this embodiment, when one electronic device in the many-to-one screen projection system receives the first operation, every electronic device mixes its own audio with the audio of the other electronic devices and then plays the result. Therefore, during many-to-one screen projection, each user can experience his own game scene and, through the audio of the other electronic devices, the game scenes of other users as well, which improves the user experience.
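A minimal sketch of this embodiment, assuming an in-memory model in which the first and third messages have already propagated: no audio reaches the display any more, and every device plays the same superimposed mix. The function name and the list-of-samples audio model are assumptions for illustration.

```python
def propagate_first_operation(audios):
    """audios: dict of device id -> audio stream currently cast to the display.
    Returns (audio still cast to the display, local playback per device)."""
    # After the first message and the third message, every device has
    # stopped casting audio to the large-screen display device.
    cast_to_display = {}

    # Each device receives the other streams and superimposes all of
    # them on the same time axis (truncating to the shortest stream).
    length = min(len(a) for a in audios.values())
    mix = [sum(a[i] for a in audios.values()) for i in range(length)]
    local_playback = {ident: mix for ident in audios}
    return cast_to_display, local_playback

cast, local = propagate_first_operation({"dev1": [1, 2], "dev2": [10, 20]})
print(cast)   # {}
print(local)  # {'dev1': [11, 22], 'dev2': [11, 22]}
```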
In the above embodiment, the second electronic device may acquire the first audio in either of two ways:
1. Passive acquisition.
Specifically, the large-screen display device may send, in response to the first message, an identifier of the second electronic device to the first electronic device; the first electronic device then sends the first audio to the second electronic device according to the identifier of the second electronic device.
Acquiring the first audio in mode 1 reduces the interaction between the second electronic device and the other devices in the many-to-one screen projection system.
2. Active request.
Specifically, the large-screen display device may send, in response to the first message, an identifier of the first electronic device to the second electronic device; the second electronic device sends a fourth message to the first electronic device according to the identifier of the first electronic device, where the fourth message is used to request the first audio; and the first electronic device sends the first audio to the second electronic device in response to the fourth message.
Acquiring the first audio in mode 2 reduces the interaction between the large-screen display device and the other devices in the many-to-one screen projection system.
In combination with the first aspect, in some embodiments, the method may further include: in response to the first message, the large-screen display device sends a fifth message to the second electronic device; and in response to the fifth message, the second electronic device outputs first prompt information, where the first prompt information is used to ask the user whether to stop sending the second audio to the large-screen display device and to play the first audio and the second audio after mixing.
Here, the first prompt information may be a visual element output on the display screen, a voice, vibration feedback, flash feedback, or the like.
In this embodiment, when one electronic device in the many-to-one screen projection system receives the first operation, the other electronic devices can prompt their users whether to stop sending their own audio to the large-screen display device and instead play their own audio mixed with the audio of the other electronic devices. After the user makes a selection, each electronic device executes the corresponding operation. Thus the needs of different users in a many-to-one screen projection scenario can be met: each user can choose whether to experience the game scenes of other users, which improves the user experience.
With reference to the foregoing embodiments, in some embodiments, after the second electronic device outputs the first prompt message, the method further includes: the second electronic device detects a second operation, responds to the second operation, stops sending the second audio to the large-screen display device, receives the first audio sent by the first electronic device, and plays the first audio and the second audio after sound mixing processing. Here, the second operation may be an operation (e.g., a touch operation), a voice instruction, a gesture instruction, or the like that acts on a visual element displayed on the display screen.
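The fifth-message prompt and the second operation above can be sketched as a single decision function. The function name and the list-of-samples audio model are illustrative assumptions; the point is that the user's answer to the prompt decides whether the device keeps casting or switches to a local mix.

```python
def on_fifth_message(user_accepts, own_audio, peer_audio):
    """Return (audio sent to the display, audio played locally) after the
    prompt triggered by the fifth message."""
    if not user_accepts:
        # User declines: keep casting own audio to the display.
        return own_audio, []
    # Second operation: stop casting and play a local mix of both
    # streams, superimposed on the same time axis.
    return [], [a + b for a, b in zip(own_audio, peer_audio)]

print(on_fifth_message(True, [1, 2], [3, 4]))   # ([], [4, 6])
print(on_fifth_message(False, [1, 2], [3, 4]))  # ([1, 2], [])
```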
In combination with the first aspect, in some embodiments, the method further comprises: the first electronic device detects a third operation, responds to the third operation, stops sending the first audio to the large-screen display device, and plays the first audio.
By implementing the method in the above embodiment, in a many-to-one screen projection scenario, after receiving the third operation, the first electronic device may stop sending the first audio to the large-screen display device and play the first audio locally, so that the user of the first electronic device can focus on his own game scene. Furthermore, the audio processing of the second electronic device is not affected, that is, the experience of other users is not affected. Therefore, the needs of different users in a many-to-one screen projection scenario can be met.
In combination with the above embodiments, in some implementations, the method further includes: in response to the third operation, the first electronic device sends a sixth message to the large-screen display device.
In a specific embodiment, in response to the sixth message, the large-screen display device sends a seventh message to the second electronic device; in response to the seventh message, the second electronic device stops sending the second audio to the large-screen display device and plays the second audio. In this embodiment, when one electronic device in the many-to-one screen projection system receives the third operation, all the electronic devices stop sending their own audio to the large-screen display device and play it locally. Therefore, during many-to-one screen projection, each user can experience his own game scene, which improves the user experience.
In another specific embodiment, in response to the sixth message, the large-screen display device sends an eighth message to the second electronic device; in response to the eighth message, the second electronic device outputs second prompt information, which asks the user whether to stop sending the second audio to the large-screen display device and play the second audio locally. In this embodiment, when one electronic device in the many-to-one screen projection system receives the third operation, the other electronic devices can prompt their users whether to stop sending their own audio to the large-screen display device and play it locally. After the user makes a selection, each electronic device executes the corresponding operation, so the needs of different users in a many-to-one screen projection scenario can be met and the user experience is improved.
Here, the second prompt information may be a visual element output on the display screen, a voice, vibration feedback, flash feedback, or the like.
In some embodiments, after the second electronic device outputs the second prompt message, the method further includes: the second electronic device detects a fourth operation, and in response to the fourth operation, the second electronic device stops sending the second audio to the large-screen display device and plays the second audio.
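Taken together, the behaviours of the first aspect amount to three audio modes on each casting device: keep casting to the display, stop casting and play a local mix (first operation), or stop casting and play only the device's own audio (third operation). The sketch below is an assumed way of organising them, not the patent's code.

```python
from enum import Enum

class AudioMode(Enum):
    CAST = "cast"          # send own audio to the large-screen display
    MIX_LOCAL = "mix"      # play own audio mixed with a peer's audio locally
    LOCAL_ONLY = "local"   # play only own audio locally (third operation)

def playback_plan(mode, own_audio, peer_audio):
    """Return (audio sent to the display, audio played locally) for a mode."""
    if mode is AudioMode.CAST:
        return own_audio, []
    if mode is AudioMode.MIX_LOCAL:
        # Superimpose the two streams on the same time axis.
        return [], [a + b for a, b in zip(own_audio, peer_audio)]
    return [], own_audio  # AudioMode.LOCAL_ONLY

print(playback_plan(AudioMode.MIX_LOCAL, [1, 2], [3, 4]))  # ([], [4, 6])
```

The first, second, and third operations then reduce to mode transitions on the device that detects them (plus the messages that propagate the change to the peers).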
In a second aspect, the present application provides a method for processing audio data in many-to-one screen projection, applied to the first electronic device in a many-to-one screen projection system. The method may include: the first electronic device sends first screen projection content to the large-screen display device, where the first screen projection content includes a first image and a first audio; the first electronic device detects a first operation and, in response to the first operation, stops sending the first audio to the large-screen display device; and after the first operation is detected, the first electronic device receives the second audio sent by the second electronic device, and plays the first audio and the second audio after mixing them.
With reference to the second aspect, in some embodiments, the first electronic device may actively acquire the second audio, or may passively acquire the second audio, which may specifically refer to the related description of the first aspect.
With reference to the second aspect, in some embodiments, after receiving the first operation, the first electronic device may receive the identifier of the second electronic device sent by the large-screen display device, and send the first audio to the second electronic device according to the identifier of the second electronic device. This may enable the second electronic device to passively acquire the first audio.
With reference to the second aspect, in some embodiments, after receiving the first operation, the first electronic device may receive a fourth message sent by the second electronic device, where the fourth message is used to request to obtain the first audio; in response to the fourth message, the first electronic device sends the first audio to the second electronic device. This may enable the second electronic device to actively request the acquisition of the first audio.
In combination with the second aspect, in some embodiments, the method of the second aspect may further include: the first electronic device detects a third operation, and in response to the third operation, the first electronic device stops sending the first audio to the large-screen display device and plays the first audio.
In the above embodiment, the method of the second aspect may further include: in response to the third operation, the first electronic device sends a sixth message to the large-screen display device, where the sixth message indicates that the first electronic device stops sending the first audio to the large-screen display device and plays the first audio locally.
Based on the same inventive concept, for the method for processing audio data in the many-to-one screen projection in the second aspect and its beneficial effects, reference may be made to the steps executed by the first electronic device, and their beneficial effects, in the first aspect and each possible implementation of the first aspect; repeated parts are not described again.
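The control flow of the second aspect can be sketched in code. The following Python sketch is purely illustrative: the class name `FirstDevice`, the method names, and the representation of audio frames as plain lists are assumptions, not part of the claimed method.

```python
# Illustrative sketch only: models the first electronic device of the second
# aspect. Class and method names are hypothetical; audio frames are modeled
# as plain Python lists of samples.

class FirstDevice:
    def __init__(self, display_link):
        self.display_link = display_link  # payloads sent to the large-screen display device
        self.casting_audio = True         # first audio is cast to the display by default
        self.local_playback = None

    def send_screen_content(self, image, audio):
        # First screen projection content: a first image and first audio.
        self.display_link.append(("image", image))
        if self.casting_audio:
            self.display_link.append(("audio", audio))

    def on_first_operation(self):
        # In response to the first operation, stop sending the first audio
        # to the large-screen display device (images continue).
        self.casting_audio = False

    def on_receive_second_audio(self, first_audio, second_audio):
        # After the first operation, receive the second audio from the second
        # electronic device, mix it with the first audio, and play locally.
        self.local_playback = [a + b for a, b in zip(first_audio, second_audio)]
        return self.local_playback
```

In this sketch, after `on_first_operation()` only images continue to reach the display, while the mixed first and second audio is played at the local end.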
In a third aspect, an embodiment of the present application provides a method for processing audio data in a many-to-one screen projection system. The method is applied to the large-screen display device in the many-to-one screen projection system. The method may include the following steps: the large-screen display device receives first screen projection content sent by the first electronic device, where the first screen projection content includes a first image and first audio; receives second screen projection content sent by the second electronic device, where the second screen projection content includes a second image and second audio; displays the first image and the second image, and plays the first audio and the second audio; stops receiving the first audio sent by the first electronic device, and receives a first message sent by the first electronic device, where the first message is used to request the second audio; in response to the first message, sends an identifier of the first electronic device to the second electronic device; or, in response to the first message, sends an identifier of the second electronic device to the first electronic device.
Based on the same inventive concept, for the method for processing audio data in the many-to-one screen projection in the third aspect and its beneficial effects, reference may be made to the steps executed by the large-screen display device, and their beneficial effects, in the first aspect and each possible implementation of the first aspect; repeated parts are not described again.
In a fourth aspect, an embodiment of the present application provides a method for processing audio data in a many-to-one screen projection system. The method is applied to the second electronic device in the many-to-one screen projection system. The method may include the following steps: the second electronic device sends second screen projection content to the large-screen display device, where the second screen projection content includes a second image and second audio; receives the identifier of the first electronic device sent by the large-screen display device, and sends the second audio to the first electronic device according to that identifier; or receives a second message sent by the first electronic device, where the second message is used to request the second audio; in response to the second message, the second electronic device sends the second audio to the first electronic device.
In combination with the fourth aspect, in some embodiments, the method of the fourth aspect may further include: the second electronic device receives a third message sent by the large-screen display device, where the third message instructs the second electronic device to stop sending the second audio to the large-screen display device and to play the first audio and the second audio after mixing processing; in response to the third message, the second electronic device stops sending the second audio to the large-screen display device.
In a specific implementation, after receiving the third message, the second electronic device receives the first audio sent by the first electronic device, performs mixing processing on the first audio and the second audio, and plays the mixed audio.
In another specific implementation, after receiving the third message, the second electronic device receives the identifier of the first electronic device sent by the large-screen display device, and sends a fourth message to the first electronic device according to that identifier, where the fourth message is used to request the first audio; the second electronic device then receives the first audio sent by the first electronic device, and plays the first audio and the second audio after mixing processing.
In combination with the fourth aspect, in some embodiments, the method of the fourth aspect may further include: the second electronic device receives a fifth message sent by the large-screen display device, where the fifth message indicates that the first electronic device has stopped sending the first audio to the large-screen display device and is playing the first audio and the second audio after mixing processing; in response to the fifth message, the second electronic device outputs first prompt information, which asks the user whether to stop sending the second audio to the large-screen display device and to play the first audio and the second audio after mixing processing.
In combination with the fourth aspect, in some embodiments, the method of the fourth aspect may further include: the second electronic device receives a seventh message sent by the large-screen display device, where the seventh message instructs the second electronic device to stop sending the second audio to the large-screen display device and to play the second audio; in response to the seventh message, the second electronic device stops sending the second audio to the large-screen display device and plays the second audio.
In combination with the fourth aspect, in some embodiments, the method of the fourth aspect may further include: the second electronic device receives an eighth message sent by the large-screen display device, where the eighth message indicates that the first electronic device has stopped sending the first audio to the large-screen display device and is playing the first audio; in response to the eighth message, the second electronic device outputs second prompt information, which asks the user whether to stop sending the second audio to the large-screen display device and to play the second audio.
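The third, fifth, seventh, and eighth messages described above can be viewed as a small dispatch table on the second electronic device. The sketch below is an assumption for illustration; the message encodings and the handler structure are not specified by the application.

```python
# Illustrative dispatch of the large-screen display device's messages on the
# second electronic device. Message names mirror the text above; the handler
# structure itself is a hypothetical sketch.

class SecondDevice:
    def __init__(self):
        self.sending_to_display = True  # second audio is cast by default
        self.prompts = []               # prompt information shown to the user
        self.playing = None             # what the local end is playing

    def handle(self, message, first_audio=None, second_audio=None):
        if message == "third":
            # Instructed to stop casting the second audio and to play the
            # first and second audio after mixing processing.
            self.sending_to_display = False
            self.playing = ("mixed", first_audio, second_audio)
        elif message == "fifth":
            # The first device stopped casting; ask the user whether to do the
            # same and play the mixed audio (first prompt information).
            self.prompts.append("stop casting and play mixed audio?")
        elif message == "seventh":
            # Instructed to stop casting the second audio and play it locally.
            self.sending_to_display = False
            self.playing = ("local", second_audio)
        elif message == "eighth":
            # The first device plays only its own audio; ask the user whether
            # to stop casting and play the second audio (second prompt
            # information).
            self.prompts.append("stop casting and play second audio locally?")
```

The two "instruction" messages change state directly, while the two "indication" messages only prompt the user, matching the distinction drawn in the embodiments above.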
Based on the same inventive concept, for the method for processing audio data in the many-to-one screen projection in the fourth aspect and its beneficial effects, reference may be made to the steps executed by the second electronic device, and the beneficial effects brought by those steps, in the first aspect and each possible implementation of the first aspect; repeated parts are not described again.
In a fifth aspect, an embodiment of the present application provides a many-to-one screen projection system. The many-to-one screen projection system comprises: the device comprises a first electronic device, a second electronic device and a large-screen display device. Wherein the first electronic device is configured to perform the method as described in any one of the possible embodiments of the second aspect or the second aspect, the large-screen display device is configured to perform the method as described in any one of the possible embodiments of the third aspect or the third aspect, and the second electronic device is configured to perform the method as described in any one of the possible embodiments of the fourth aspect or the fourth aspect.
In a sixth aspect, an embodiment of the present application provides an electronic device, which includes one or more processors and a memory; the memory is coupled to the one or more processors and is configured to store computer program code comprising computer instructions that are invoked by the one or more processors to cause the electronic device to perform the method described in the second aspect or any possible implementation of the second aspect.
In a seventh aspect, an embodiment of the present application provides a large-screen display device, where the large-screen display device includes one or more processors and a memory; the memory is coupled to the one or more processors and is configured to store computer program code comprising computer instructions that are invoked by the one or more processors to cause the large-screen display device to perform the method described in the third aspect or any possible implementation of the third aspect.
In an eighth aspect, an embodiment of the present application provides an electronic device, which includes one or more processors and a memory; the memory is coupled to the one or more processors and is configured to store computer program code comprising computer instructions that are invoked by the one or more processors to cause the electronic device to perform the method described in the fourth aspect or any possible implementation of the fourth aspect.
In a ninth aspect, the present application provides a computer program product containing instructions which, when run on an electronic device, cause the electronic device to perform the method described in the second aspect or any possible implementation manner of the second aspect.
In a tenth aspect, an embodiment of the present application provides a computer-readable storage medium, which includes instructions that, when executed on an electronic device, cause the electronic device to perform the method described in the second aspect or any possible implementation manner of the second aspect.
In an eleventh aspect, the present application provides a computer program product containing instructions which, when run on an electronic device, cause the electronic device to perform the method described in the third aspect or any possible implementation manner of the third aspect.
In a twelfth aspect, an embodiment of the present application provides a computer-readable storage medium, which includes instructions that, when executed on an electronic device, cause the electronic device to perform the method described in the third aspect or any possible implementation manner of the third aspect.
In a thirteenth aspect, the present application provides a computer program product containing instructions which, when run on an electronic device, cause the electronic device to perform the method described in the fourth aspect or any possible implementation manner of the fourth aspect.
In a fourteenth aspect, an embodiment of the present application provides a computer-readable storage medium, which includes instructions that, when executed on an electronic device, cause the electronic device to perform the method described in the fourth aspect or any possible implementation manner of the fourth aspect.
According to the above technical solutions, during many-to-one screen projection, each electronic device can play its own audio and/or the audio of other electronic devices according to the needs of its user, thereby meeting the requirements of different users in a many-to-one screen projection scenario. When an electronic device mixes its own audio with the audio of other electronic devices and plays the result, the user can hear the audio of all the electronic devices at the same time. In a game scenario, a user can experience not only his or her own game scene but also the game scenes of other users, which improves the user experience.
Drawings
FIG. 1 is a schematic structural diagram of a many-to-one screen projection system provided by an embodiment of the present application;
fig. 2A is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
FIG. 2B is a diagram of a software architecture of an electronic device according to an embodiment of the present application;
fig. 3 is a schematic hardware structure diagram of a large-screen display device provided in an embodiment of the present application;
FIGS. 4A-4D, 5, 6A-6C, 7, 8, 9B, 10B, 11B, 12, 13B, 14B, and 15B are schematic diagrams of human-computer interaction provided by embodiments of the present application;
fig. 9A is a schematic flowchart of a method for processing audio data in a many-to-one screen projection system according to an embodiment of the present application;
fig. 10A is a schematic flowchart of another method for processing audio data in a many-to-one screen projection system according to an embodiment of the present application;
fig. 11A is a schematic flowchart of another method for processing audio data in a many-to-one screen projection system according to an embodiment of the present application;
fig. 13A is a schematic flowchart of another method for processing audio data in a many-to-one screen projection system according to an embodiment of the present application;
fig. 14A is a schematic flowchart of another method for processing audio data in a many-to-one screen projection system according to an embodiment of the present application;
fig. 15A is a schematic flowchart of another method for processing audio data in a many-to-one screen projection system according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
In the description of the embodiments herein, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" herein merely describes an association between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, "a plurality" means two or more unless otherwise specified.
The following embodiments of the application provide a method, an apparatus, and a system for processing audio data in many-to-one screen projection. During many-to-one screen projection, each electronic device can play its own audio and/or the audio of other electronic devices according to the needs of the user.
The audio of an electronic device may include audio retrieved by the electronic device from a network or audio stored locally on the electronic device. For example, when a user plays a game using an electronic device, the server providing the game may send audio to the electronic device, such as prompt voices ("request support", "retreat") or sound effects (gunshots, running). For another example, when a user watches a video online using an electronic device, the video server providing the video sends the audio corresponding to the video to the electronic device. As another example, when a user views a locally stored video using an electronic device, the electronic device stores the audio corresponding to the video. It can be understood that in the present application, during many-to-one screen projection, multiple users are generally in the same area and can talk directly in the real scene. Therefore, in some embodiments of the present application, the audio of the electronic device may not include audio collected by the electronic device through a microphone or other audio collection device; that is, the electronic device may not collect the user's voice or environmental sounds, so as to prevent users from repeatedly hearing the same sound and thereby provide a relatively clean auditory environment.
In some embodiments, when the electronic device turns on "reverberation" during many-to-one screen projection, the electronic device may mix its own audio with the audio of the other electronic devices and play the result. "Reverberation" may be a service or function provided by the electronic device, which supports the electronic device, in a many-to-one screen projection scenario, in acquiring the audio of other electronic devices in the system and playing that audio together with its own audio after mixing processing. The "reverberation" function lets the user experience not only his or her own screen projection scene but also, through the audio of the other electronic devices, the screen projection scenes of other users, which improves the user experience.
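The mixing processing itself is not detailed in the application. A minimal sketch, assuming 16-bit signed PCM frames of equal length and simple additive mixing with clipping, might look as follows; the algorithm is an illustrative assumption, not the claimed implementation.

```python
# Hypothetical mixing step for the "reverberation" function: additively mix
# the device's own audio with another device's audio, clipping each sample
# to the 16-bit signed range.

def mix_pcm(own_frame, other_frame):
    """Mix two equal-length frames of 16-bit signed PCM samples."""
    mixed = []
    for a, b in zip(own_frame, other_frame):
        s = a + b
        mixed.append(max(-32768, min(32767, s)))  # clip to avoid overflow
    return mixed
```

A real implementation would typically also resample and time-align the two streams before mixing.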
In some embodiments, when the electronic device turns on "private listening" during many-to-one screen projection, the electronic device may play its audio at the local end without sending it to the large-screen display device. "Private listening" may be a service or function provided by the electronic device, which supports the electronic device, in a many-to-one screen projection scenario, in playing its own audio at the local end instead of sending it to the large-screen display device. In a screen projection scenario, the "private listening" function enables the electronic device to play its own audio locally, which protects the user's privacy, gives the user a better listening experience, and avoids disturbing others.
It is understood that "reverberation" and "private listening" are only terms used in the embodiments; their meanings are described herein, and their names do not limit the embodiments in any way. In some other embodiments of the present application, "reverberation" and "private listening" may also be referred to by other names.
In the many-to-one screen projection scenario provided by the embodiments of the application, the electronic device projects screen projection content to the large-screen display device through wireless screen projection, so that the large-screen display device displays and/or plays the screen projection content. Wireless screen projection is a service or function provided by the electronic device that supports projecting screen projection content to the large-screen display device wirelessly.
The following first describes a many-to-one screen projection system provided in an embodiment of the present application. Referring to fig. 1, the many-to-one screen projection system may include a plurality of electronic devices 100 and a large screen display device 200.
In the embodiments of the present application, in the many-to-one screen projection system, the large-screen display device 200 and the plurality of electronic devices 100 may be connected to and communicate with each other through wireless communication technologies. The wireless communication technologies here may include, but are not limited to: wireless local area network (WLAN) technology, Bluetooth, infrared, near field communication (NFC), ZigBee, and other wireless communication technologies that emerge in later development. For convenience of description, the following embodiments take communication between the large-screen display device 200 and the plurality of electronic devices 100 through wireless fidelity direct (Wi-Fi Direct, also called wireless fidelity peer-to-peer, Wi-Fi P2P) technology as an example.
The electronic device 100 may be a portable electronic device such as a mobile phone, a tablet computer, a personal digital assistant (PDA), a wearable device, or a laptop computer. Exemplary embodiments of portable electronic devices include, but are not limited to, portable electronic devices running iOS, Android, Microsoft, or other operating systems. The portable electronic device may also be another portable electronic device, such as a laptop computer with a touch-sensitive surface (e.g., a touch panel). It should also be understood that in some other embodiments of the present application, the electronic device may not be a portable electronic device, but a desktop computer with a touch-sensitive surface (e.g., a touch panel), a smart television, or the like.
The large-screen display device 200 refers to an electronic device with a large display screen, such as a television, a desktop computer, or an electronic billboard. The display screen of the large-screen display device 200 is large relative to that of the electronic device 100; that is, the display screen of the large-screen display device 200 is larger than the display screen of the electronic device 100. In some embodiments, the television may be used with a television box (not shown in FIG. 1) that converts received digital signals into analog signals and sends them to the television for display. In the following embodiments, the large-screen display device 200 may be a television with a digital-to-analog conversion function, or a television equipped with a television box.
The large-screen display device 200 and the electronic device 100 may each provide a wireless fidelity (Wi-Fi) network solution that supports Wi-Fi P2P technology. With Wi-Fi P2P technology, two devices can connect and communicate directly without an access point (AP).
The large-screen display device 200 may construct a Wi-Fi P2P group through Wi-Fi P2P technology. A Wi-Fi P2P group includes a group owner (GO) and a plurality of group clients (GCs). The GO can be connected to multiple GCs simultaneously and can manage and monitor the GCs in real time. In a Wi-Fi P2P group, the GO is equivalent to an access point (AP) in a wireless local area network and can support communication between any two GCs. In the Wi-Fi P2P group constructed by the large-screen display device 200, the large-screen display device 200 serves as the GO and the electronic devices 100 serve as GCs.
The electronic device 100 may join the Wi-Fi P2P group constructed by the large-screen display device 200 through Wi-Fi P2P technology, connect to the large-screen display device 200, and send screen projection content to the large-screen display device 200 over the established Wi-Fi P2P connection, so that the large-screen display device 200 displays and/or plays the screen projection content, completing the screen projection. The screen projection content may include, but is not limited to, images (e.g., video, pictures) and audio. The images in the projected content may come from the local end of the electronic device or from a network (such as a cloud server), and likewise for the audio. When a plurality of electronic devices 100 project screen projection content onto the large-screen display device 200, this is many-to-one screen projection.
In the embodiments of the application, both the electronic device 100 and the large-screen display device 200 support the Miracast wireless display standard based on Wi-Fi Direct, so that they can project screens wirelessly through Wi-Fi P2P. The wireless screen projection function of the electronic device 100 and the large-screen display device 200 may be referred to as "Wi-Fi Display".
It can be understood that when the many-to-one screen projection system performs wireless screen projection through Wi-Fi P2P, the electronic devices are not directly connected to each other and cannot communicate directly. Communication between electronic devices mentioned in the following embodiments means that the electronic devices communicate through the large-screen display device 200 in the many-to-one projection system. For example, the first electronic device sending its own audio to the second electronic device means that the first electronic device forwards its audio to the second electronic device through the large-screen display device 200 according to the identifier of the second electronic device; that is, the transmission path of the first electronic device's audio is: first electronic device-large-screen display device 200-second electronic device. In the following embodiments of the present application, the protocol used for communication between electronic devices may be the real-time transport protocol (RTP).
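The forwarding path described above can be sketched as a relay keyed by device identifier. The class below is a hypothetical illustration, not the claimed implementation; in practice the payloads would be RTP packets rather than plain lists.

```python
# Illustrative relay: the large-screen display device, which has connections
# to every electronic device in the group, forwards audio between electronic
# devices that have no direct connection. Names and structure are assumptions.

class DisplayRelay:
    def __init__(self):
        self.inboxes = {}  # device identifier -> payloads delivered to it

    def register(self, device_id):
        # Each electronic device joins the group with an identifier.
        self.inboxes[device_id] = []

    def forward_audio(self, src_id, dst_id, audio):
        # Path: source device -> large-screen display device -> destination.
        if dst_id in self.inboxes:
            self.inboxes[dst_id].append((src_id, audio))
```

Because all traffic passes through the relay, each device-to-device path is independent of each device-to-display path, matching the description of the two unrelated communication paths below.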
In the following embodiments of the present application, the communication path between the electronic device 100 and another electronic device, and the communication path between the electronic device 100 and the large-screen display device 200 are two unrelated communication paths. The communication paths between the electronic device 100 and other electronic devices are: electronic device 100-large screen display device 200-other electronic devices. The communication path between the electronic apparatus 100 and the large screen display apparatus 200 is: electronic device 100-large screen display device 200. The two communication paths do not affect each other.
For example, the electronic device 100 stopping sending its own audio to the large-screen display device 200 means that the electronic device 100 does not send its own audio to the large-screen display device 200 through the communication path between itself and the large-screen display device 200. It can be understood that when the electronic device 100 stops sending its own audio to the large-screen display device 200, this does not affect whether the electronic device 100 sends its own audio to other electronic devices through the communication paths between itself and those devices. That is, stopping sending its own audio to the large-screen display device 200 does not prevent the electronic device 100 from sending audio to other electronic devices through the large-screen display device 200. At this time, if the electronic device 100 needs to send audio to other electronic devices, it may still do so through the large-screen display device 200. For whether the electronic device 100 sends its own audio to other electronic devices while it has stopped sending audio to the large-screen display device 200, refer to the related descriptions in subsequent embodiments.
That is, when the following embodiments mention "A sends audio to B", "A stops sending audio to B", and the like, A is the source device of the audio and B is the destination device of the audio.
Referring to fig. 2A, fig. 2A shows a schematic structural diagram of the electronic device 100 provided in the present application.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal asynchronous serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus that converts the data to be transmitted between serial and parallel forms. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit audio signals to the wireless communication module 160 through a UART interface, so as to implement the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a display screen serial interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. It can also be used to connect a headset and play audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices and the like.
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an illustration, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160 so that electronic device 100 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), General Packet Radio Service (GPRS), code division multiple access (CDMA), Wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (TD-SCDMA), Long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, with N being a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithmic optimization on the noise, brightness, and skin color of the image. The ISP can also optimize parameters such as exposure and color temperature of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then passed to the ISP, where it is converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform a Fourier transform or the like on the frequency bin energy.
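As an illustrative sketch of the kind of Fourier analysis described (not the device's actual DSP code), the following computes the energy of each frequency bin of a sampled frame with a naive discrete Fourier transform:

```python
import cmath
import math

def bin_energies(samples):
    """Naive DFT: return the energy |X[k]|^2 of each frequency bin."""
    n = len(samples)
    energies = []
    for k in range(n):
        x_k = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                  for t in range(n))
        energies.append(abs(x_k) ** 2)
    return energies

# A pure tone at bin 1 of an 8-sample frame concentrates its energy
# in bins 1 and 7 (the conjugate pair for a real signal).
tone = [math.sin(2 * math.pi * t / 8) for t in range(8)]
e = bin_energies(tone)
print(round(e[1], 6), round(e[0], 6))  # energy 16.0 at bin 1, ~0 at DC
```

A production DSP would use an FFT (O(n log n)) rather than this O(n^2) direct form, but the bin-energy result is the same.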
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons of the human brain, it processes input information quickly and can also continuously learn by itself. Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
In some embodiments of the present application, after the electronic device 100 detects the operation for enabling the "reverberation" function, the audio module 170 may mix the audio of the electronic device 100 and the audio of other electronic devices in the many-to-one screen projection system. The audio of the electronic device 100 is the audio in the screen-projected content of the electronic device 100.
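The mixing step described above can be sketched as follows. This is a minimal illustration under the assumption of signed 16-bit PCM frames of equal length; the patent does not specify the mixing algorithm used by the audio module 170:

```python
def mix_pcm16(local, remote):
    """Sum two signed 16-bit PCM frames sample by sample, clamping to range."""
    mixed = []
    for a, b in zip(local, remote):
        s = a + b
        s = max(-32768, min(32767, s))  # prevent integer overflow/wrap-around
        mixed.append(s)
    return mixed

# Mixing the local device's audio frame with a frame received from
# another projecting device; loud samples saturate instead of wrapping.
print(mix_pcm16([1000, 30000, -20000], [500, 10000, -20000]))
```

Real mixers often attenuate or apply soft clipping instead of hard clamping to avoid audible distortion; hard clamping is shown here only for simplicity.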
The speaker 170A, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal. The electronic device 100 can play music or conduct a hands-free call through the speaker 170A.
In some embodiments of the present application, upon the electronic device 100 detecting an operation for enabling the "reverberation" function, the speaker 170A may play the mixed audio that the audio module 170 produces from the audio of the electronic device 100 and the audio of the other electronic devices in the many-to-one screen projection system.
In other embodiments of the present application, audio module 170 may play audio from electronic device 100 upon electronic device 100 detecting an operation to enable the "listen to" function.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal into the microphone 170C by speaking with the mouth close to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The headphone interface 170D is used to connect a wired headphone. The headset interface 170D may be the USB interface 130, or may be a 3.5mm open mobile electronic device platform (OMTP) standard interface, a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used for sensing a pressure signal and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic device 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations applied to the same touch position but with different intensities may correspond to different operation instructions. For example: when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
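The threshold behavior on the short message icon can be sketched as a simple dispatch. The threshold value and instruction names below are illustrative assumptions, not values from the patent:

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # illustrative normalized pressure value

def sms_icon_action(pressure):
    """Map touch pressure on the SMS application icon to an operation instruction."""
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_sms"   # light press: execute the view-short-message instruction
    return "new_sms"        # firm press: execute the new-short-message instruction

print(sms_icon_action(0.2), sms_icon_action(0.8))
```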
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 100, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180C.
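One common way to derive altitude from barometric pressure is the international barometric formula; the sketch below uses it for illustration, since the patent does not specify which formula the electronic device 100 applies:

```python
def altitude_from_pressure(p_hpa, p0_hpa=1013.25):
    """Estimate altitude in meters from barometric pressure in hPa,
    using the international barometric formula with standard sea-level
    pressure p0 = 1013.25 hPa as the reference."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

print(round(altitude_from_pressure(1013.25), 1))  # sea-level pressure -> 0.0 m
print(round(altitude_from_pressure(900.0), 1))    # lower pressure -> higher altitude
```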
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor 180D. Features such as automatic unlocking upon flipping open can then be set according to the detected open or closed state of the holster or the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes), and may detect the magnitude and direction of gravity when the electronic device 100 is stationary. It can also be used to recognize the orientation of the electronic device, for applications such as landscape/portrait switching and pedometers.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, taking a picture of a scene, electronic device 100 may utilize range sensor 180F to range for fast focus.
The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light to the outside through the light emitting diode. The electronic device 100 detects infrared reflected light from nearby objects using the photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there are no objects near the electronic device 100. The electronic device 100 can utilize the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear for talking, so as to automatically turn off the screen for the purpose of saving power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 implements a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold, to avoid an abnormal shutdown of the electronic device 100 caused by low temperature. In other embodiments, when the temperature is lower than a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
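The multi-threshold strategy above can be sketched as follows. The specific threshold values and action names are illustrative assumptions; the patent names the behaviors but not the numbers:

```python
def thermal_policy(temp_c):
    """Return the device actions for a reported temperature (illustrative thresholds)."""
    actions = []
    if temp_c > 45:
        actions.append("throttle_nearby_processor")  # reduce power, thermal protection
    if temp_c < 0:
        actions.append("heat_battery")               # avoid cold-induced shutdown
    if temp_c < -10:
        actions.append("boost_battery_voltage")      # keep output stable in deep cold
    return actions

print(thermal_policy(50), thermal_policy(-15), thermal_policy(20))
```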
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and together they form a touchscreen, also called a "touch-controlled screen". The touch sensor 180K is used to detect a touch operation applied on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, at a position different from that of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the human vocal part vibrating the bone mass. The bone conduction sensor 180M may also contact the human pulse to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may analyze a voice signal based on the vibration signal of the bone mass vibrated by the sound part acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key input and generate key signal input related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also produce different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenarios (such as time reminders, receiving information, alarm clocks, games, etc.) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be connected to or disconnected from the electronic device 100 by inserting it into or removing it from the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a standard SIM card, etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time; the types of the cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards, as well as with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from it.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application takes an Android system with a layered architecture as an example, and exemplarily illustrates a software structure of the electronic device 100.
Fig. 2B is a block diagram of a software structure of the electronic device 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 2B, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2B, the application framework layers may include a window manager (window manager), content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages, which may disappear automatically after a short stay without requiring user interaction. For example, the notification manager is used to notify of download completion, message alerts, and so on. The notification manager may also present notifications that appear in the top status bar of the system in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone sounds, the electronic device vibrates, or an indicator light flashes.
The Android runtime comprises a core library and a virtual machine, and is responsible for scheduling and managing the Android system.
The core library comprises two parts: one part contains the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, among others. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, and the like.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer comprises at least a display driver, a camera driver, an audio driver, and a sensor driver.
The following describes exemplary workflow of the software and hardware of the electronic device 100 in connection with capturing a photo scene.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into an original input event (including touch coordinates, a time stamp of the touch operation, and other information). The raw input events are stored at the kernel layer. And the application program framework layer acquires the original input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation as a touch click operation, and taking a control corresponding to the click operation as a control of a camera application icon as an example, the camera application calls an interface of an application framework layer, starts the camera application, further starts a camera drive by calling a kernel layer, and captures a still image or a video through the camera 193.
Referring to fig. 3, fig. 3 shows a schematic structural diagram of a large-screen display device 200 provided in an embodiment of the present application. As shown in fig. 3, the large screen display apparatus 200 may include: a processor 202, a memory 203, a wireless communication processing module 204, a power switch 205, a wired LAN communication processing module 206, an HDMI communication processing module 207, a USB communication processing module 208, and a display screen 209. Wherein:
processor 202 may be used to read and execute computer-readable instructions. In a particular implementation, the processor 202 may mainly include a controller, an arithmetic unit, and registers. The controller is mainly responsible for instruction decoding and for sending out control signals for the operations corresponding to the instructions. The arithmetic unit is mainly responsible for performing fixed-point or floating-point arithmetic operations, shift operations, logical operations, and the like, and may also perform address operations and conversions. The registers are mainly responsible for temporarily storing register operands, intermediate operation results, and the like during instruction execution. In a particular implementation, the hardware architecture of the processor 202 may be an application-specific integrated circuit (ASIC) architecture, a MIPS architecture, an ARM architecture, an NP architecture, or the like.
In some embodiments, the processor 202 may be configured to parse signals received by the wireless communication processing module 204 and/or the wired LAN communication processing module 206, such as probe requests (e.g., probe request frames) broadcast by the electronic device 100, display requests sent by the electronic device 100, and so on. The processor 202 may be configured to perform corresponding processing operations according to the parsing result, such as generating a probe response, driving the display screen 209 to perform display according to the display request or the display instruction, and so on.
In some embodiments, the processor 202 may also be configured to generate signals, such as bluetooth broadcast signals, beacon signals, and signals for feeding back display status (e.g., display success, display failure, etc.), which are sent out by the wireless communication processing module 204 and/or the wired LAN communication processing module 206, for example, and are sent to the electronic device 100.
A memory 203 is coupled to the processor 202 for storing various software programs and/or sets of instructions. In particular implementations, memory 203 may include high speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid state storage devices. The memory 203 may store an operating system, such as an embedded operating system like uCOS, VxWorks, RTLinux, etc. Memory 203 may also store communication programs that may be used to communicate with electronic device 100, one or more servers, or additional devices.
The wireless communication processing module 204 may include one or more of a Bluetooth (BT) communication processing module 204A, WLAN communication processing module 204B.
In some embodiments, one or more of the Bluetooth (BT) communication processing module and the WLAN communication processing module may listen for signals transmitted by other devices (e.g., the electronic device 100), such as probe request frames and scan signals, and may transmit response signals, such as probe response frames and scan responses, so that the other devices (e.g., the electronic device 100) can discover the large-screen display device 200, establish a wireless communication connection with the large-screen display device 200, and communicate with it through one or more wireless communication technologies such as Bluetooth or WLAN.
In the present application, the large-screen display device 200 may establish a Wi-Fi P2P connection with the electronic device 100 via Wi-Fi P2P technology. After the large-screen display device 200 and the electronic device 100 establish the Wi-Fi P2P connection, the WLAN communication processing module 204B may receive the screen projection content sent by the electronic device 100. The screen projection content may include, but is not limited to, images (e.g., video, pictures, etc.) and audio.
The wireless communication processing module 204 may also include a cellular mobile communication processing module (not shown). The cellular mobile communication processing module may communicate with other devices, such as servers, via cellular mobile communication technology.
The power switch 205 may be used to control the supply of power from the power source to the large-screen display device 200.
The wired LAN communication processing module 206 is operable to communicate with other devices in the same LAN through a wired LAN, and is also operable to connect to a WAN through a wired LAN, and to communicate with devices in the WAN.
The HDMI communication processing module 207 can be used to communicate with other devices through an HDMI interface (not shown).
The USB communication processing module 208 may be used to communicate with other devices through a USB interface (not shown).
The display screen 209 may be used to display images, video, and the like. The display screen 209 may be a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED) display screen, an active-matrix organic light-emitting diode (AMOLED) display screen, a flexible light-emitting diode (FLED) display screen, a quantum dot light-emitting diode (QLED) display screen, or the like.
In some embodiments, the large screen display device 200 may also include an audio module 210. The audio module 210 may be used to output an audio signal through the audio output interface, which may enable the large-screen display device 200 to support audio playback. The audio module may also be configured to receive audio data via the audio input interface. The audio module 210 may include, but is not limited to: microphones, speakers, receivers, and the like.
In some embodiments, the large-screen display device 200 may also include a serial interface such as an RS-232 interface. The serial interface can be connected to other devices, such as audio playback devices like a speaker, so that the display and the audio playback device can cooperatively play audio and video.
The large-screen display device 200 may be a television or other media playing device.
It is to be understood that the structure illustrated in fig. 3 does not constitute a specific limitation of the large-screen display device 200. In other embodiments of the present application, the large screen display device 200 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The software system of the large-screen display device 200 is similar to the software system of the electronic device 100, and may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. When the software system of the large-screen display device 200 is an Android system adopting a layered architecture, the structure and function of each layer in the Android system may refer to the related description in fig. 2B, and are not repeated here.
In this embodiment of the present application, the electronic device 100 shown in fig. 2A and the large screen display device 200 shown in fig. 3 may cooperate to perform the method for processing audio data in the many-to-one screen projection system provided in this embodiment of the present application. The number of the electronic devices 100 may be multiple.
The plurality of electronic devices 100 may transmit their screen projection contents to the large-screen display device 200, and the large-screen display device 200 may display the projected screens and play the audio of the electronic devices 100. Here, the operations performed by the electronic devices 100 and the large-screen display device 200 may be specifically described with reference to the following embodiments of fig. 4A to 4D, fig. 5, fig. 6A to 6C, and fig. 7.
In some embodiments, if an electronic device turns on a "reverberation" function, the electronic device may mix its own audio with the audio of other electronic devices in the many-to-one screen projection system and play the audio. Here, operations performed by the electronic device turning on the "reverberation" function, the remaining electronic devices, and the large-screen display device in the many-to-one projection system may be described with reference to the following embodiments of fig. 9A and 9B.
In some embodiments, if an electronic device of the plurality of electronic devices 100 turns on a "reverberation" function, all electronic devices in the many-to-one screen projection system play audio of the electronic device and audio of other electronic devices after mixing processing. Here, operations performed by each electronic device and the large-screen display device in the many-to-one screen projection system may refer to the following description related to the embodiment of fig. 10A and 10B.
In some embodiments, if an electronic device of the plurality of electronic devices 100 turns on the "reverberation" function, the electronic device may perform mixing processing on its own audio and the audio of other electronic devices in the many-to-one screen projection system and then play the audio, and the other electronic devices in the many-to-one screen projection system may prompt the user to enable the "reverberation" function. Here, operations performed by the electronic device turning on the "reverberation" function, the remaining electronic devices, and the large-screen display device in the many-to-one projection system may be described with reference to the following embodiments of fig. 11A and 11B.
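The "reverberation" behavior above centers on mixing a device's own audio with the audio of other devices. A minimal mixing sketch for 16-bit PCM is shown below: sum the aligned samples and clamp to the valid range. This is a generic mixing technique offered for illustration, not the patent's specific mixing algorithm.

```python
# Mix several 16-bit PCM sample streams by summing aligned samples and
# clamping the result to the signed 16-bit range [-32768, 32767].
def mix_pcm16(*streams):
    mixed = []
    for samples in zip(*streams):
        s = sum(samples)
        mixed.append(max(-32768, min(32767, s)))  # clamp to 16-bit range
    return mixed

own_audio   = [1000, -2000, 30000]
other_audio = [500,  -1500, 10000]
# 30000 + 10000 = 40000 overflows and is clamped to 32767.
assert mix_pcm16(own_audio, other_audio) == [1500, -3500, 32767]
```

Clamping (rather than letting the sum wrap around) is the usual choice because integer overflow in mixed audio produces loud, audible artifacts.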
In some embodiments, if an electronic device of the plurality of electronic devices 100 turns on the "private listening" function, the electronic device stops sending its own audio to the large-screen display device and plays its own audio at the local end. Here, operations performed by the electronic device that activates the "private listening" function, the remaining electronic devices, and the large-screen display device in the many-to-one screen projection system may refer to the following description of the embodiment of fig. 13A and 13B.
In some embodiments, if any electronic device in the plurality of electronic devices 100 starts the "private listening" function, all electronic devices in the many-to-one screen projection system stop sending their own audio to the large-screen display device and play their own audio at the local end. Here, operations performed by each electronic device and the large-screen display device in the many-to-one screen projection system may refer to the following description of the embodiment of fig. 14A and 14B.
In some embodiments, if an electronic device of the plurality of electronic devices 100 starts the "private listening" function, the electronic device may stop sending its own audio to the large-screen display device and play its own audio at the local end, and the other electronic devices in the many-to-one screen projection system may prompt the user to start the "private listening" function. Here, operations performed by the electronic device that activates the "private listening" function, the remaining electronic devices, and the large-screen display device in the many-to-one screen projection system may refer to the following description of the embodiment of fig. 15A and 15B.
Based on the many-to-one screen projection system shown in fig. 1, the electronic device 100 shown in fig. 2A, and the large-screen display device 200 shown in fig. 3, some exemplary graphical user interfaces provided by the embodiments of the present application and implemented on the electronic device 100 and the large-screen display device 200 are described below.
Fig. 4A-4D illustrate exemplary graphical user interfaces provided by the electronic device 100 and the large screen display device 200 during initiation of screen projection by the many-to-one screen projection system.
FIG. 4A illustrates an exemplary user interface 41 on the electronic device 100 for exposing installed applications. The user interface 41 may include: status bars, calendar indicators, weather indicators, trays with frequently used application icons, navigation bars, and other application icons, etc. In some embodiments, the user interface 41 illustratively shown in FIG. 4A may be a Home screen (Home Screen).
As shown in fig. 4A and 4B, when the electronic device detects a slide-down gesture on the status bar, the electronic device 100 may display a window 401 on the user interface 41 in response to the gesture. As shown in fig. 4B, a control 401A may be displayed in the window 401, and the control 401A may receive an operation (e.g., a touch operation) of turning on/off a screen projecting function of the electronic device. The representation of control 401A may include icons and/or text (e.g., text "wireless projection screen", "multi-screen interaction", or "screen mirror", etc.). Other switch controls for functions such as Wi-Fi, bluetooth, flashlight, etc. may also be displayed in window 401.
When an operation (e.g., a touch operation) on control 401A in window 401 is detected, in response to the operation, electronic device 100 starts searching for other devices in the vicinity through Wi-Fi P2P technology. The electronic device 100 searches for other devices by scanning probe requests (e.g., probe request frames) sent by other devices. In the embodiment of the present application, the large-screen display device 200 in the many-to-one screen projection system may broadcast a probe request (e.g., a probe request frame). The large-screen display device 200 may continuously broadcast the probe request through the Wi-Fi P2P technology after being powered on, or may broadcast the probe request under the trigger of the user (for example, after the user turns on the miracast function of the large-screen display device 200). The electronic apparatus 100 may scan a probe request (e.g., a probe request frame) sent by the large-screen display apparatus 200, that is, the electronic apparatus 100 may discover the large-screen display apparatus 200. It can be understood that when the large-screen display device 200 and the electronic device 100 are connected and communicate through other wireless communication technologies, in response to the operation on the control 401A in the window 401, the electronic device 100 may search for other nearby devices through other wireless communication technologies, which is not limited by the embodiment of the present application.
Fig. 4C illustrates a window 402 displayed on the user interface 41 after the electronic device 100 discovers the large screen display device 200. As shown in fig. 4C, a window 402 may have displayed therein: an interface indicator 402A, an icon 402B, an image 402C of the large screen display device 200 as discovered by the electronic device, an identification 402D (e.g., the string "Android _545 f"), and screen-shot information 402E.
In window 402, an interface indicator 402A is used to indicate that the content displayed in window 402 is information of a device discovered when wirelessly projected. Icon 402B is used to indicate that electronic device 100 is also continuously discovering other devices in the vicinity. The screen projection information 402E of the large-screen display device 200 may be used to indicate the screen projection modes supported by the large-screen display device, such as whether the large-screen display device supports a computer mode, a mobile phone mode, and the like. Here, the image 402C, the logo 402D, and the screen-projection information 402E of the large-screen display apparatus 200 discovered by the electronic apparatus may be carried in a probe request (e.g., a probe request frame).
The image 402C and/or the logo 402D may receive a user operation (e.g., a touch operation), and in response to the user operation, the electronic device may feed back a probe response (e.g., a probe response frame) to the large-screen display device 200 corresponding to the image 402C and/or the logo 402D.
Fig. 4D exemplarily shows the user interface displayed after the large-screen display device 200 receives the probe response transmitted by the electronic device 100. The user interface may include a window 403, the window 403 including: an image and/or identification 403A, a control 403B, and a control 403C of the electronic device 100. The image and/or identification 403A of the electronic device 100 may be carried in a probe response (e.g., probe response frame). Control 403B may receive a user operation in response to which large screen display device 200 refuses to establish a Wi-Fi P2P connection with electronic device 100. Control 403C may receive a user operation in response to which large screen display device 200 agrees to establish a Wi-Fi P2P connection with electronic device 100. Here, the user operations received on the controls 403B, 403C displayed in the large-screen display apparatus 200 described above may include, but are not limited to: 1. when the large-screen display device 200 is configured with a touch screen, the user operation may be a touch operation, a click operation, or the like that acts on the controls 403B, 403C. 2. The user operation may be a pressing operation input on a confirmation key of the remote controller after the remote controller of the large-screen display apparatus 200 selects the controls 403B and 403C. The remote controller and the large screen display device 200 are connected by infrared or other wireless communication means. The remote controller can be an entity remote controller or a remote controller simulated by a mobile phone.
As shown in fig. 4D, when an operation on a control 403C in a window 403 is detected, in response to the operation, the large-screen display device 200 and the electronic device 100 establish a connection through Wi-Fi P2P. In the embodiment of the present application, the process of establishing the connection between the large-screen display device 200 and the electronic device 100 through the Wi-Fi P2P includes:
Step 1: the large-screen display device 200 and the electronic device 100 negotiate that the large-screen display device 200 serves as the group owner (GO) and the electronic device serves as the group client (GC), jointly forming a Wi-Fi P2P group. The large-screen display device 200 assigns the electronic device 100 an identifier that is specific to the Wi-Fi P2P group and that can be used by the large-screen display device 200 or other electronic devices to communicate with the electronic device 100. The identifier may be an Internet Protocol (IP) address.
Step 2: the large-screen display device 200 and the electronic device 100 negotiate screen projection parameters. The screen projection parameters may include the audio/video compression rate, the audio/video format, the resolution/frame rate supported by both sides, and other contents. By negotiating the screen projection parameters to be used when projecting the screen, the large-screen display device 200 and the electronic device 100 ensure the execution of the subsequent screen projection work.
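Step 2 amounts to both sides exchanging their capabilities and keeping a common subset. A hedged sketch of one plausible negotiation strategy follows (intersect the supported formats and resolutions and pick, e.g., the highest common resolution); the field names and the selection policy are assumptions for illustration, not the patent's specified procedure.

```python
# Capability negotiation sketch: keep the intersection of both sides'
# supported formats and resolutions, preferring the highest common resolution.
def negotiate(device_caps: dict, display_caps: dict) -> dict:
    formats = [f for f in device_caps["formats"] if f in display_caps["formats"]]
    resolutions = sorted(
        set(device_caps["resolutions"]) & set(display_caps["resolutions"]),
        reverse=True,
    )
    if not formats or not resolutions:
        raise ValueError("no common screen projection parameters")
    return {"format": formats[0], "resolution": resolutions[0]}

phone = {"formats": ["H.264", "MPEG4"], "resolutions": [(1920, 1080), (1280, 720)]}
tv    = {"formats": ["H.264"],          "resolutions": [(3840, 2160), (1920, 1080)]}
assert negotiate(phone, tv) == {"format": "H.264", "resolution": (1920, 1080)}
```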
The Wi-Fi P2P connection established by the large-screen display device 200 and the electronic device 100 is a Transmission Control Protocol (TCP) connection. After the connection is established, the large-screen display device 200 and the electronic device 100 may transmit audio and video data through the Real-time Transport Protocol (RTP), thereby completing screen projection. In some embodiments, after the large-screen display device 200 and the electronic device 100 successfully establish the Wi-Fi P2P connection, if another electronic device requests to establish a Wi-Fi P2P connection with the large-screen display device 200, the large-screen display device 200 and the electronic device 100 need to renegotiate the screen projection parameters (e.g., resolution) to adapt to a possible subsequent many-to-one screen projection scenario.
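To illustrate the RTP transport mentioned above, the sketch below packs a minimal 12-byte RTP header (per RFC 3550) in front of an audio/video payload. The SSRC and payload type values are placeholders, and real screen projection stacks add codec-specific payload formats on top of this.

```python
# Minimal RTP (RFC 3550) packetizer: 12-byte fixed header + payload.
import struct

def rtp_packet(payload: bytes, seq: int, timestamp: int,
               ssrc: int = 0x1234, payload_type: int = 96) -> bytes:
    byte0 = 0x80                 # version 2, no padding/extension, 0 CSRCs
    byte1 = payload_type & 0x7F  # marker bit 0, dynamic payload type
    header = struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc)
    return header + payload

pkt = rtp_packet(b"\x01\x02", seq=7, timestamp=90000)
assert len(pkt) == 14                                # 12-byte header + 2 bytes
assert pkt[0] == 0x80 and pkt[1] == 96               # version/flags, payload type
assert struct.unpack("!H", pkt[2:4])[0] == 7         # sequence number
```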
Not limited to the manner of launching screen projection by pulling down the window 401 on the main interface (Home screen) shown in figs. 4A-4D, the user may also launch screen projection by pulling down the window 401 on other interfaces. For example, the user may also start screen projection by pulling down the window 401 on a game interface or a video interface provided by the electronic device 100, which is not limited in this embodiment of the application.
After a Wi-Fi P2P connection is established between the plurality of electronic devices 100 and the large-screen display device 200, the many-to-one screen projection system may initiate many-to-one screen projection. The procedure for establishing a Wi-Fi P2P connection between the other electronic devices and the large-screen display device 200 is the same as the procedure exemplarily shown in fig. 4A to 4D, and reference may be made to the related description.
Fig. 5 illustrates an exemplary graphical user interface provided by the electronic device 100 in another manner in which the many-to-one screen projection system initiates screen projection.
As shown in fig. 5, the large-screen display device 200 may generate and display a two-dimensional code for constructing a Wi-Fi P2P group. The two-dimensional code carries a Media Access Control (MAC) address of the large screen display device 200. The large-screen display device 200 may generate the two-dimensional code under the trigger of the user, for example, when the user turns on the "wireless screen projection" function of the large-screen display device 200.
The plurality of electronic devices 100 (e.g., the first electronic device and the second electronic device) may start the camera, scan the two-dimensional code provided by the large-screen display device 200, acquire the MAC address of the large-screen display device 200, and thereby establish the Wi-Fi P2P connection with the large-screen display device 200. The process of establishing connection between the large-screen display device 200 and the electronic device 100 through the Wi-Fi P2P is the same as the process of establishing connection between the large-screen display device 200 and the electronic device 100 through the Wi-Fi P2P in the embodiments of fig. 4A to 4D, and reference may be made to the related description.
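Since the two-dimensional code is described as carrying the display's MAC address, the scanning step can be sketched as parsing and validating that address before initiating the Wi-Fi P2P connection. The payload format here (a bare MAC string) is an assumption for illustration; the patent does not specify the encoding.

```python
# Sketch of decoding the two-dimensional code payload: validate that it
# carries a MAC address, which the phone then uses to connect to the display.
import re

MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")

def parse_qr_payload(payload: str) -> str:
    """Return the MAC address carried by the scanned two-dimensional code."""
    mac = payload.strip()
    if not MAC_RE.match(mac):
        raise ValueError("payload does not contain a valid MAC address")
    return mac.upper()

assert parse_qr_payload("a4:50:46:12:34:56") == "A4:50:46:12:34:56"
```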
Figs. 6A-6C illustrate exemplary graphical user interfaces provided by the electronic device 100 in yet another manner in which the many-to-one screen projection system initiates screen projection.
The user interface 61 exemplarily shown in fig. 6A may be a settings homepage provided for settings (settings) application. The "setting" application is an application program installed on an electronic device such as a smart phone or a tablet computer and used for setting various functions of the electronic device, and the name of the application program is not limited in the embodiment of the application. The user interface 61 may be a user interface opened by a user clicking a setting icon in a main interface (e.g., the user interface 41 shown in fig. 4A) of the electronic device.
As shown in fig. 6A, the user interface 61 may include: a status bar 601, a current page indicator 602, a search box 603, an icon 604, an area 605 including one or more settings items.
Status bar 601 may include one or more signal strength indicators for mobile communication signals (which may also be referred to as cellular signals), operator name (e.g., "china mobile"), one or more signal strength indicators for wireless fidelity (Wi-Fi) signals, battery status indicators, and time indicators.
The current page indicator 602 may be used to indicate the current page, e.g., the textual information "settings" may be used to indicate that the current page is used to present one or more settings. Not limited to text information, the current page indicator 602 may also be an icon.
The search box 603 may receive an operation (e.g., a touch operation) of searching for a setting item through text. In response to this operation, the electronic device may display a text input box so that the user can enter the setting item to be searched in the input box.
The icon 604 may receive an operation (e.g., a touch operation) of searching the setting item by voice. In response to the operation, the electronic device may display a voice input interface so that the user inputs a voice in the voice input interface to search for the setting item.
The area 605 includes one or more setting items including a device connection setting item 605A and the like. The representation of each setting item may include an icon and/or text, which is not limited in this application. Each setting item may receive an operation (e.g., a touch operation) that triggers display of the setting content of the corresponding setting item, and in response to the operation, the electronic device may open a user interface for displaying the setting content of the corresponding setting item.
In some embodiments, upon detecting an operation (e.g., a touch operation) on the device connection setting item 605A as in fig. 6A, the electronic device may display the user interface 62 as shown in fig. 6B.
As shown in fig. 6B, the user interface 62 is used to display the corresponding contents of the device connection setting item 605A. The user interface 62 may include: a status bar 606, a return key 607, a current page indicator 608, a Bluetooth switch control, an NFC switch control, and a screen projection entry 609.
The status bar 606 can refer to the status bar 601 in the user interface 61 shown in fig. 6A, and is not described in detail here.
The return key 607 is an APP-level return key and may be used to return to the upper-level menu. The upper-level page of the user interface 62 may be the user interface 61 shown in fig. 6A.
The current page indicator 608 may be used to indicate the current page, e.g., the textual information "device connected" may be used to indicate that the current page is used to present the corresponding content of the device connection settings. Not limited to text information, the current page indicator 608 may also be an icon.
The screen-cast entry 609 may receive an operation (e.g., a touch operation) that triggers display of the related content of the screen cast, in response to which the electronic device may open a user interface for displaying the related content of the screen cast.
Fig. 6C illustrates a user interface 63 for displaying the relevant content of the screen projection. As shown in fig. 6C, the user interface 63 may include: a status bar 610, a return key 611, a current page indicator 612, a screen projection mode entry 613, a switch control 614 for "wireless screen projection," a switch control 615 for "private listening," and a switch control 616 for "reverberation."
The status bar 610 can refer to the status bar 601 in the user interface 61 shown in fig. 6A, and is not described in detail here.
The return key 611 is an APP-level return key and may be used to return to the upper-level menu. The upper-level page of the user interface 63 may be the user interface 62 shown in fig. 6B.
The current page indicator 612 may be used to indicate the current page, e.g., the textual information "screen shot" may be used to indicate that the current page is used to present the relevant content for the screen shot. Not limited to text information, current page indicator 612 may also be an icon.
The screen-projection mode entry 613 may receive a user operation, in response to which the electronic device 100 may display a screen-projection mode, such as a cell phone mode or a computer mode, currently supportable in screen-projection interaction with the large-screen display device 200, for selection by the user. As shown in fig. 6C, the mode currently used in the screen projection interaction of the electronic device 100 and the large-screen display device 200 is a computer mode. In the mobile phone mode, the projected image of the electronic device 100 onto the large-screen display device 200 is the same as the image displayed on the local side of the electronic device 100. In the computer mode, the projected image of the electronic device 100 onto the large screen display device 200 and the image displayed on the local side of the electronic device 100 may be different, for example, the electronic device 100 may project a video onto the large screen display device 200 but display an instant chat interface on the local side.
The "wireless screen projection" switch control 614 may receive an operation (e.g., a touch operation) to turn on/off the wireless screen projection function of the electronic device. The representation of the control 614 may include an icon and/or text. As shown in fig. 6C, the "wireless screen projection" switch control 614 indicates that the electronic device currently has the screen projection function turned on. After the electronic device turns on the screen projection function, it searches for other nearby devices through Wi-Fi P2P technology. Here, the process of the electronic device searching for other devices may refer to the related description of the embodiments of figs. 4A-4D.
After the electronic device 100 discovers other devices, the following may be displayed on the user interface 63: an icon 617, an area 618 for presenting information of the discovered devices, and a control 619. The icon 617 is used to indicate that the electronic device 100 is still continuously discovering other devices nearby. The area 618 for presenting information of discovered devices includes: the image 618A of the large-screen display device 200 discovered by the electronic device, the identification 618B (e.g., the character string "Android _545 f"), and the screen projection information 618C. Here, the image 618A may refer to the image 402C in the window 402 shown in fig. 4C, the identification 618B may refer to the identification 402D in the window 402 shown in fig. 4C, and the screen projection information 618C may refer to the screen projection information 402E in the window 402 shown in fig. 4C, which are not described here again. The control 619 can receive a user operation (e.g., a touch operation), in response to which the electronic device 100 can stop discovering other devices.
The image 618A and/or the identifier 618B may receive a user operation (e.g., a touch operation), and in response to the user operation, the electronic device may feed back a probe response (e.g., a probe response frame) to the large-screen display device 200 corresponding to the image 618A and/or the identifier 618B.
The large-screen display device 200 may receive the probe response sent by the electronic device 100, and after receiving the probe response, the large-screen display device may display a user interface as shown in fig. 4D. As shown in fig. 4D, when an operation (e.g., a touch operation) on a control 403C in a window 403 is detected, in response to the operation, the large-screen display device 200 and the electronic device 100 establish a connection through Wi-Fi P2P. The process of establishing the connection between the large-screen display device 200 and the electronic device 100 through the Wi-Fi P2P may refer to the related description of the embodiment of fig. 4A-4D, and is not repeated here.
Without being limited to the manners shown in the above embodiments of fig. 4A to 4D, fig. 5, and fig. 6A to 6C, the electronic device 100 may also establish the Wi-Fi P2P connection with the large-screen display device 200 through other manners, for example, the electronic device may also establish the Wi-Fi P2P connection with the large-screen display device in response to a voice instruction of a user, and the like, which is not limited in this application.
After the plurality of electronic devices 100 and the large-screen display device 200 establish the Wi-Fi P2P connections, that is, after the plurality of electronic devices 100 and the large-screen display device 200 form one Wi-Fi P2P group, each electronic device 100 may send its screen projection content to the large-screen display device 200 through its Wi-Fi P2P connection, thereby implementing many-to-one screen projection. The screen projection content of an electronic device 100 may include, but is not limited to, images and/or audio. It can be understood that, during many-to-one screen projection, each electronic device 100 can operate in the mobile phone mode or the computer mode according to the user's needs; that is, the content displayed at the local end of the electronic device 100 can be the same as or different from its projected content.
The "reverberation" function and the "private listening" function provided by the embodiments of the present application are described in detail below through a specific application scenario.
In this specific application scenario, users may experience a game together through the electronic device 100, and project game content (including a game screen and game audio) provided by a game application executed by each electronic device as screen projection content into the large-screen display device 200. Therefore, the user can fully utilize the larger-size screen configured by the large-screen display device 200 to view the game picture, and the user experience is improved.
Fig. 7a-7c exemplarily show schematic diagrams in which a plurality of electronic devices 100 in a many-to-one screen projection system project game content as screen projection content onto the large-screen display device 200 after establishing Wi-Fi P2P connections with the large-screen display device 200. In the many-to-one screen projection system shown in fig. 7a-7c, the plurality of electronic devices 100 includes a first electronic device and a second electronic device. The screen projection modes of the first electronic device and the second electronic device are both the mobile phone mode; that is, the screen displayed at the local end of each electronic device is the same as its projected image.
As shown in a and b of fig. 7, the user interface 71 displayed on the first electronic device and the user interface 72 displayed on the second electronic device are both provided by a racing game. The racing game may be, for example, Asphalt, Need for Speed, or the like. The user of the first electronic device and the user of the second electronic device enter the same race in the racing game, and may be opponents or teammates, which is not limited in the present application.
As shown, the first electronic device sends the projected content to the large screen display device 200 via a Wi-Fi P2P connection. The screen-casting content of the first electronic device may include: a projected image of the first electronic device (e.g., a game view provided by a racing game run by the first electronic device), audio of the first electronic device (e.g., audio provided by a racing game run by the first electronic device, such as the alert tones "speed up," "turn," "slow down," etc.).
Similarly, the second electronic device sends the projected content to the large screen display device 200 via a Wi-Fi P2P connection. The screen-casting content of the second electronic device may include: a projected image of the second electronic device (e.g., a game view provided by a racing game run by the second electronic device), audio of the second electronic device (e.g., audio provided by a racing game run by the second electronic device, such as the alert tones "speed up," "turn," "slow down," etc.).
After receiving the screen projection contents sent by the first electronic device and the second electronic device, the large-screen display device 200 may display screen projection images in the screen projection contents in different regions. Referring to c of fig. 7, the large screen display device 200 may display a projected image of the first electronic device in a first area on the display screen and a projected image of the second electronic device in a second area on the display screen. The first region and the second region do not overlap. The positions of the first area and the second area in the display screen may be determined by the large-screen display device 200, or may be negotiated between the large-screen display device 200 and each electronic device 100, which is not limited in this application. In some embodiments, the first area is a left area of the display screen and the second area is a right area of the display screen; or the first area is an upper area of the display screen, and the second area is a lower area of the display screen.
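As an illustration of the region layout just described, the following is a minimal sketch (all names are hypothetical and not part of this application) of how a large-screen display device might assign non-overlapping display regions, one per projecting device, here as equal-width columns so that two devices get the left half and the right half:

```python
def assign_regions(screen_w, screen_h, device_ids):
    """Split the screen into equal-width, non-overlapping columns,
    one per projecting device, ordered by arrival in the group."""
    n = len(device_ids)
    col_w = screen_w // n
    regions = {}
    for i, dev in enumerate(device_ids):
        # (x, y, width, height) of the region assigned to this device
        regions[dev] = (i * col_w, 0, col_w, screen_h)
    return regions

layout = assign_regions(1920, 1080, ["first_device", "second_device"])
print(layout["first_device"])   # (0, 0, 960, 1080)  -> left half
print(layout["second_device"])  # (960, 0, 960, 1080) -> right half
```

The upper/lower layout mentioned above would split rows instead of columns; which split is used may, as stated, be decided by the large-screen display device or negotiated with each electronic device.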
Referring to c of fig. 7, the large screen display device 200 may also play the audio of the first electronic device and the audio of the second electronic device through an audio device (e.g., a speaker in the audio module 210). Specifically, the large-screen display device 200 may perform audio mixing processing on the audio of the first electronic device and the audio of the second electronic device, and then play the audio.
In the embodiment of the application, when the many-to-one screen projection system projects the screen, the user can enable the 'reverberation' function of the electronic equipment as required. After the user starts the 'reverberation' function of the electronic device, the electronic device can perform sound mixing processing on the audio of the electronic device and the audio of other electronic devices in the many-to-one screen projection system and then play the audio, namely the electronic device can perform sound mixing processing on the audio in the screen projection content of the electronic device and the audio in the screen projection content of other electronic devices and then play the audio.
Fig. 8 illustrates one way in which a first electronic device enables the "reverberation" function; the first electronic device may be the first electronic device in the many-to-one screen projection system shown in a-c of fig. 7. As shown in fig. 8, the first electronic device may provide a user interface 81. The user interface 81 may be provided by a "settings" application, displayed after the user exits the game interface shown in a of fig. 7 and enters the "settings" application while the screen projection is ongoing. The user interface 81 is the same as the user interface 63 shown in fig. 6C, and reference may be made to the description of the embodiment of fig. 6C.
As shown in fig. 8, the user interface 81 may include a switch control 616 for a "reverberation" function. The control 616 may receive an operation (e.g., a touch operation) to turn on/off a "reverberation" function of the electronic device. As shown in fig. 8, in response to an operation (e.g., a touch operation) acting on control 616, the first electronic device may enable a "reverberation" function.
It will be appreciated that, without limitation to the operation on the control 616 shown in fig. 8, the "reverberation" function of the electronic device may also be enabled in other ways in the embodiment of the present application, for example, the user may also enable the "reverberation" function of the electronic device through a voice instruction or a gesture.
In the embodiment of the present application, after the first electronic device enables the "reverberation" function, the first electronic device may play the audio of the first electronic device after mixing the audio of the first electronic device with the audio of other electronic devices (e.g., a second electronic device) in the many-to-one projection system. Several possible message interaction processes between the devices in the many-to-one screen projection system after the first electronic device enables the "reverberation" function during the many-to-one screen projection are described below.
In the first case, the user enables the "reverberation" function of the first electronic device, and the operation of the user enabling the "reverberation" function of the first electronic device is not used for enabling the "reverberation" function of other electronic devices in the many-to-one screen projection system.
Referring to fig. 9A, fig. 9A shows a schematic diagram of the interaction between the devices in the many-to-one projection system in the first case. As shown in fig. 9A, the interaction process may include the following steps:
step S110, the first electronic device detects an operation for enabling the "reverberation" function.
The operation for enabling the "reverberation" function may be an operation (for example, a touch operation) performed by the user on the control 616 in the user interface 81 shown in fig. 8, or may be a voice instruction or a gesture instruction input by the user. Without being limited thereto, in some embodiments, the user may also input a long-press gesture on the control 401 in the user interface shown in fig. 4B; in response to the long-press gesture, the first electronic device displays a switch control of the "reverberation" function, through which the "reverberation" function of the first electronic device can be enabled.
Step S120, in response to the operation, the first electronic device stops sending the audio in the screen projection content to the large-screen display device, and sends a first message to the large-screen display device 200, where the first message is used to request to acquire the audio of the other electronic devices except the first electronic device in the many-to-one screen projection system.
Step S130, the large-screen display device 200 receives the first message, and sends the identifier of the first electronic device to other electronic devices except the first electronic device in response to the first message.
Specifically, after the large-screen display device 200 creates the Wi-Fi P2P group, it may store and maintain the identifier of each electronic device in the Wi-Fi P2P group. Here, the identifier of an electronic device may be one that the large-screen display device 200 assigned to the electronic device when creating the Wi-Fi P2P group, for example an IP address assigned to the electronic device.
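A minimal sketch of such a registry follows, under the assumption of an IP-style identifier space (the class name, subnet, and device names are illustrative only, not part of this application):

```python
class P2PGroup:
    """Identifier registry the large-screen display device might keep
    for the members of the Wi-Fi P2P group it created."""

    def __init__(self, subnet="192.168.49."):
        self.subnet = subnet
        self.members = {}      # device name -> assigned identifier
        self.next_host = 2     # assume .1 is reserved for the group owner

    def join(self, device_name):
        """Assign the joining device an identifier and record it."""
        ident = self.subnet + str(self.next_host)
        self.members[device_name] = ident
        self.next_host += 1
        return ident

    def others(self, device_name):
        """Identifiers of every group member except the given one."""
        return {name: ident for name, ident in self.members.items()
                if name != device_name}

group = P2PGroup()
print(group.join("first_device"))    # 192.168.49.2
print(group.join("second_device"))   # 192.168.49.3
print(group.others("first_device"))  # {'second_device': '192.168.49.3'}
```

With such a registry, answering the first message amounts to looking up `others(...)` for the requesting device.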
Step S140, other electronic devices except the first electronic device in the many-to-one screen projection system receive the identifier of the first electronic device, and send their own audio to the first electronic device according to the identifier. That is, the other electronic device transmits its own audio to the first electronic device.
Step S150, the first electronic device receives the audio of other electronic devices except the first electronic device in the many-to-one screen projection system, and plays the audio of the first electronic device and the audio of the other electronic devices after performing audio mixing processing.
For example, a second electronic device in the many-to-one screen projection system may receive the identifier of the first electronic device sent by the large-screen display device 200, and send the audio of the second electronic device to the first electronic device according to the identifier. After the first electronic device receives the audio of the second electronic device, the audio of the first electronic device and the audio of the second electronic device may be mixed and played.
In this application, after the electronic device mixes multiple pieces of audio and plays the result, the user can listen to the multiple pieces of audio at the same time. That is, the electronic device superimposes the multiple pieces of audio according to the same time axis and plays them together. For example, if, after the game starts, the first electronic device receives the sound of vehicle acceleration sent by the game server and the second electronic device receives the alert sound "ready to accelerate" sent by the game server, then after the first electronic device turns on the "reverberation" function it may mix the sound of vehicle acceleration with the alert sound "ready to accelerate" and play the result. The user hears both sounds at the same time and can thus perceive the game scene of the other user through the alert sound "ready to accelerate". In some embodiments, when the electronic device plays the superimposed pieces of audio, different channels may be used for different pieces of audio; for example, the audio of the first electronic device is played through the left channel, the audio of the second electronic device through the right channel, and so on.
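The two playback behaviours described above can be sketched as follows, assuming 16-bit PCM samples held in plain integer lists (the function names are hypothetical): `mix` superimposes the streams sample by sample on the same time axis, with clipping to the valid sample range, while `to_stereo` instead routes one device per channel by interleaving left/right frames.

```python
def mix(audio_a, audio_b):
    """Superimpose two mono streams on the same time axis."""
    n = max(len(audio_a), len(audio_b))
    a = audio_a + [0] * (n - len(audio_a))   # pad the shorter stream
    b = audio_b + [0] * (n - len(audio_b))
    # Clip the sum to the 16-bit PCM range to avoid overflow.
    return [max(-32768, min(32767, x + y)) for x, y in zip(a, b)]

def to_stereo(left_audio, right_audio):
    """Route one device per channel: interleave (L, R) sample frames."""
    n = max(len(left_audio), len(right_audio))
    l = left_audio + [0] * (n - len(left_audio))
    r = right_audio + [0] * (n - len(right_audio))
    return [s for frame in zip(l, r) for s in frame]

first = [1000, 2000, 3000]        # e.g. vehicle-acceleration sound
second = [500, -500]              # e.g. "ready to accelerate" alert
print(mix(first, second))         # [1500, 1500, 3000]
print(to_stereo(first, second))   # [1000, 500, 2000, -500, 3000, 0]
```

A real implementation would operate on the platform's audio buffers rather than Python lists, but the time-axis alignment and channel routing are the same.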
In the above step S130 and step S140, the other electronic devices in the many-to-one screen projection system send their own audio to the first electronic device, that is, the first electronic device passively receives the audio of the other electronic devices. In some other embodiments, the first electronic device may also actively request audio from other electronic devices. That is, in the first case, the above-described step S130 may be replaced with the step S130-1, and the step S140 may be replaced with the steps S140-1 and S140-2.
Step S130-1, the large-screen display device 200 receives the first message, and sends, in response to the first message, an identifier of another electronic device other than the first electronic device in the many-to-one screen projection system to the first electronic device.
Step S140-1, the first electronic device receives an identifier of another electronic device (e.g., a second electronic device), and sends a second message to the other electronic device according to the identifier, where the second message carries the identifier and is used to request to acquire audio of the other electronic device in the many-to-one screen-casting system.
In step S140-2, the other electronic device (e.g., the second electronic device) receives the second message, and sends its own audio to the first electronic device in response to the second message.
For example, through the above steps S130-1, S140-1 and S140-2, the first electronic device may receive the identifier of the second electronic device sent by the large-screen display device 200, and send the second message to the second electronic device according to the identifier. And after receiving the second message, the second electronic equipment sends the audio of the second electronic equipment to the first electronic equipment.
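The active-request variant (steps S130-1, S140-1, and S140-2) can be sketched as follows; the function names and dict-based transport are illustrative stand-ins for the actual Wi-Fi P2P messaging, not an API defined by this application:

```python
def handle_first_message(group, requester):
    """Large-screen display device (S130-1): on receiving the first
    message, reply with the identifiers of the other group members."""
    return [dev for dev in group if dev != requester]

def handle_second_message(audio_store, target):
    """Other electronic device (S140-2): on receiving the second
    message, reply with its own audio."""
    return audio_store[target]

group = ["first_device", "second_device"]
audio_store = {"second_device": [500, -500]}   # second device's audio

others = handle_first_message(group, "first_device")       # S130-1
received = {dev: handle_second_message(audio_store, dev)   # S140-1/S140-2
            for dev in others}
print(received)   # {'second_device': [500, -500]}
```

The first electronic device would then mix `received` with its own audio and play the result, as in step S150.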
As can be seen from the embodiment of fig. 9A, in the first case, after the user enables the "reverberation" function of the first electronic device, the screen projection operations of the other electronic devices in the many-to-one screen projection system are not affected, that is, the other electronic devices (e.g., the second electronic device) in the many-to-one screen projection system still send the screen projection contents to the large-screen display device 200 for displaying or playing.
Referring to fig. 9B, fig. 9B is a diagram illustrating the audio playing of each device in the many-to-one projection system after the user enables the "reverberation" function of the first electronic device in the first case. As shown in fig. 9B, the first electronic device performs audio mixing processing on its own audio and the audio of the second electronic device, and then plays the audio, and the second electronic device still sends its own audio to the large-screen display device for playing.
Through the method shown in fig. 9A, in a many-to-one screen projection scene, the electronic device with the "reverberation" function performs audio mixing processing on its own audio and the audio of other electronic devices, and then plays the audio, and the other electronic devices without the "reverberation" function still send their own audio to the large screen display device for playing. Therefore, the requirements of different users in a many-to-one screen projection scene can be met, so that the users can select whether to feel the game scenes of other users according to the requirements, and the user experience is improved.
In the second case, the user enables the "reverberation" function of the first electronic device, and the operation of the user enabling the "reverberation" function of the first electronic device can be used to enable the "reverberation" function of other electronic devices in the many-to-one projection screen system.
Referring to fig. 10A, fig. 10A shows a schematic diagram of the interaction between the devices in the many-to-one projection system in the second case. As shown in fig. 10A, the interaction process may include the following steps:
step S210, the first electronic device detects an operation for enabling the "reverberation" function.
Step S220, in response to the operation, the first electronic device stops sending the audio in the screen projection content to the large-screen display device, and sends a first message to the large-screen display device 200, where the first message is used to request to acquire the audio of the other electronic devices except the first electronic device in the many-to-one screen projection system.
Step S210 may refer to step S110 shown in fig. 9A, and step S220 may refer to step S120 shown in fig. 9A, which are not described herein again.
Step S230, the large-screen display device 200 receives the first message, sends the identifier of the other electronic device to each electronic device in the many-to-one screen projection system in response to the first message, and sends a third message to other electronic devices other than the first electronic device in the many-to-one screen projection system. The third message is used to instruct other electronic devices in the many-to-one projection system other than the first electronic device to enable a "reverberation" function.
Here, the identifier of an electronic device may be one that the large-screen display device 200 assigned to the electronic device when creating the Wi-Fi P2P group, for example an IP address assigned to the electronic device.
Step S240, each electronic device in the many-to-one screen projection system receives the identifier of the other electronic device, and sends its own audio to the other electronic device according to the identifier.
For example, the large-screen display device 200 may send the identification of the first electronic device in the many-to-one projection system to the second electronic device. And after receiving the identifier of the first electronic equipment, the second electronic equipment sends the audio of the second electronic equipment to the first electronic equipment according to the identifier.
For another example, the large-screen display device 200 may send the identifier of the second electronic device in the many-to-one projection system to the first electronic device. And after receiving the identifier of the second electronic equipment, the first electronic equipment sends the audio of the first electronic equipment to the second electronic equipment according to the identifier.
Step S250, the first electronic device receives the audio of the other electronic devices in the many-to-one screen projection system, mixes its own audio with the audio of the other electronic devices, and plays the result. The other electronic devices other than the first electronic device in the many-to-one screen projection system receive the third message, stop sending their own audio to the large-screen display device in response to the third message, and each plays its own audio after mixing it with the audio received from the other electronic devices.
For example, after receiving the audio of the second electronic device, the first electronic device performs audio mixing processing on the audio of the first electronic device and the audio of the second electronic device and plays the audio.
For example, after receiving the third message, the second electronic device stops sending its own audio to the large-screen display device 200 in response to the third message. And after receiving the audio of the first electronic equipment, the second electronic equipment performs sound mixing processing on the audio of the second electronic equipment and the audio of the first electronic equipment and plays the audio.
In step S240, after each electronic device in the many-to-one screen projection system receives the identifier of the other electronic device, the electronic device sends its own audio to the other electronic device according to the identifier. In some other embodiments, each electronic device in the many-to-one screen-casting system may actively request the other electronic devices for audio after receiving the identification of the other electronic devices. For example, the first electronic device may actively request the second electronic device for audio, and the second electronic device may also actively request the first electronic device for audio.
Here, when the first electronic device actively requests the second electronic device to acquire audio, the step S240 may be replaced with the following steps S240-1A and S240-2A:
step S240-1A, after receiving the identifier of the second electronic device, the first electronic device sends a second message to the second electronic device according to the identifier. The second message carries an identifier of the second electronic device, and is used for requesting to acquire the audio of the second electronic device.
Step S240-2A, the second electronic device receives the second message, and sends the audio of the second electronic device to the first electronic device in response to the second message.
Step S240-1A and step S240-2A can refer to step S140-1 and step S140-2 in the first case described above.
Similarly, when the second electronic device actively requests the first electronic device to acquire audio, the step S240 may be replaced by the following steps S240-1B and S240-2B:
step S240-1B, after receiving the identifier of the first electronic device, the second electronic device sends a fourth message to the first electronic device according to the identifier. The fourth message carries an identifier of the first electronic device, and is used for requesting to acquire the audio of the first electronic device.
Step S240-2B, the first electronic device receives the fourth message, and sends the audio of the first electronic device to the second electronic device in response to the fourth message.
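The large-screen display device's handling of the first message in this second case (step S230) might be sketched as follows; the message names follow the text above, while the dict-based representation is illustrative only:

```python
def on_first_message(group, requester):
    """On receiving the first message, build the per-device messages:
    every member gets the identifiers of the other members, and every
    member other than the requester additionally gets a third message
    instructing it to enable the "reverberation" function."""
    out = {}
    for dev in group:
        msg = {"peer_ids": [d for d in group if d != dev]}
        if dev != requester:
            msg["third_message"] = "enable_reverberation"
        out[dev] = msg
    return out

msgs = on_first_message(["first_device", "second_device"], "first_device")
print(msgs["first_device"])
# {'peer_ids': ['second_device']}
print(msgs["second_device"])
# {'peer_ids': ['first_device'], 'third_message': 'enable_reverberation'}
```

After receiving its message, each device sends its audio to the listed peers (step S240) and mixes what it receives with its own audio (step S250).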
As can be seen from the embodiment of fig. 10A, in the second case, after the user enables the "reverberation" function of the first electronic device, the screen-projecting operations of the other electronic devices in the many-to-one screen-projecting system are affected, and the other electronic devices (e.g., the second electronic device) in the many-to-one screen-projecting system stop sending the audio in the screen-projecting content to the large-screen display device 200 for playing.
Referring to fig. 10B, fig. 10B is a diagram illustrating the audio playing of each device in the many-to-one screen projection system after the user enables the "reverberation" function of the first electronic device in the second case. As shown in fig. 10B, the first electronic device mixes its own audio with the audio of the second electronic device and plays the result, the second electronic device likewise mixes its own audio with the audio of the first electronic device and plays the result, and the large-screen display device does not play any audio.
With the method shown in fig. 10A, when an electronic device in the many-to-one screen projection system enables the "reverberation" function, other electronic devices in the many-to-one screen projection system also automatically enable the "reverberation" function. That is to say, when electronic devices in the many-to-one screen projection system enable the "reverberation" function, all electronic devices in the many-to-one screen projection system play audio of their own and audio of other electronic devices after mixing processing. Therefore, each user can feel the game scene of the user and can also feel the game scenes of other users through the audio of other electronic equipment, and the user experience can be improved.
And (iii) in a third case, the user enables the 'reverberation' function of the first electronic device, and the operation of the user enabling the 'reverberation' function of the first electronic device can be used for triggering other electronic devices in the many-to-one screen projection system to prompt the user to enable the 'reverberation' function.
Referring to fig. 11A, fig. 11A shows a schematic diagram of interaction between devices in a many-to-one projection system in a third scenario. As shown in fig. 11A, the interaction process may include the following steps:
step S310, the first electronic device detects an operation for enabling the "reverberation" function.
Step S320, in response to the operation, the first electronic device stops sending the audio in the screen projection content to the large-screen display device, and sends a first message to the large-screen display device 200, where the first message is used to request to acquire the audio of the other electronic devices except the first electronic device in the many-to-one screen projection system.
Step S310 may refer to step S110 shown in fig. 9A, and step S320 may refer to step S120 shown in fig. 9A.
Step S330, the large-screen display device 200 receives the first message, sends the identifier of the first electronic device to the other electronic devices (for example, the second electronic device) other than the first electronic device in response to the first message, and sends a fifth message to the other electronic devices other than the first electronic device, where the fifth message is used to indicate that the first electronic device has enabled the "reverberation" function.
Step S340, other electronic devices except the first electronic device in the many-to-one screen projection system receive the identifier of the first electronic device, and send their own audio to the first electronic device according to the identifier.
Through the above steps S330 and S340, the other electronic devices in the many-to-one screen projection system actively send their own audio to the first electronic device. In some other embodiments, the first electronic device may also request audio from other electronic devices. The process that the first electronic device may also request to obtain the audio from other electronic devices is the same as the process that the first electronic device requests to obtain the audio from other electronic devices in the first case, and reference may be made to related descriptions, which are not described herein again.
Step S350, the first electronic device receives the audio of the other electronic devices (for example, the second electronic device) other than the first electronic device in the many-to-one screen projection system, mixes its own audio with the audio of the other electronic devices, and plays the result.
Step S360, the other electronic devices (for example, the second electronic device) other than the first electronic device in the many-to-one screen projection system receive the fifth message, and in response to the fifth message, prompt the user that the first electronic device has enabled the "reverberation" function and ask the user whether to enable the "reverberation" function.
Specifically, after another electronic device (for example, the second electronic device) in the many-to-one screen projection system receives the fifth message, it learns that the first electronic device has enabled the "reverberation" function. The electronic device may then prompt the user accordingly and ask whether to enable the "reverberation" function. In this embodiment of the application, the manners in which the other electronic devices other than the first electronic device prompt the user may include the following:
1. The other electronic devices other than the first electronic device display prompt information on the user interface.
Referring to fig. 11B, the user interface 111 shown in fig. 11B illustrates one way in which the second electronic device prompts the user that the first electronic device enables the "reverb" function and asks the user whether the "reverb" function is enabled.
As shown in fig. 11B, the second electronic device may display a window 1101 on the currently displayed screen after receiving a fifth message transmitted by the large-screen display device. The currently displayed picture of the second electronic device may be a screen-casting picture displayed on the local side by the second electronic device, for example, a picture provided by a racing game. As shown in fig. 11B, window 1101 may include: prompt 1101A, control 1101B, and control 1101C.
The prompt 1101A is used to prompt the user that the first electronic device has enabled the "reverberation" function, and the presentation of the prompt 1101A may include text information (e.g., the text "Huawei Mate20 initiates reverberation control. Accept?") or an icon, etc.
The control 1101B may receive a user operation (e.g., a touch operation) in response to which the second electronic device does not enable the "reverberation" function.
The control 1101C may receive a user operation (e.g., a touch operation), in response to which the second electronic device enables a "reverberation" function.
2. The other electronic devices other than the first electronic device prompt the user by voice or vibration.
After another electronic device other than the first electronic device prompts the user that the first electronic device has enabled the "reverberation" function and asks whether to enable it, the user may choose to enable the "reverberation" function of that device (e.g., the second electronic device), for example by inputting a user operation on the control 1101C in the user interface 111 shown in fig. 11B. In response to the user operation, the device (e.g., the second electronic device) acquires the audio of each other electronic device in the many-to-one screen projection system, mixes it with its own audio, and plays the result. Here, the manner in which the other electronic device (e.g., the second electronic device) acquires the audio of each electronic device in the many-to-one screen projection system is the same as the manner in which the first electronic device does so, and reference may be made to the related description.
It is understood that steps S340 to S350 and step S360 need not be performed in the order described; steps S340 to S350 and step S360 may also be executed simultaneously, which is not limited in this application.
In the third case, after the user starts the "reverberation" function of the first electronic device, if other electronic devices in the many-to-one screen projection system start the "reverberation" function according to the user requirement, the audio playing situation of each device in the many-to-one screen projection system may refer to fig. 10B. If other electronic devices in the many-to-one screen projection system refuse to enable the "reverberation" function according to the user's needs, the audio playing situation of each device in the many-to-one screen projection system can refer to fig. 9B.
Through the method shown in fig. 11A, in the many-to-one screen projection system, after one electronic device enables the "reverberation" function, the other electronic devices may prompt the user and enable the "reverberation" function according to the user's needs. An electronic device that has enabled the "reverberation" function mixes its own audio with the audio of the other electronic devices and plays the result, while an electronic device that has not enabled the "reverberation" function still sends its own audio to the large-screen display device for playback. Therefore, the requirements of different users in a many-to-one screen projection scene can be met: each user can choose whether to experience the game scenes of the other users as needed, which improves the user experience.
In the embodiment of the application, during screen projection by the many-to-one screen projection system, a user can enable the "private listening" function as needed. After the user enables the "private listening" function of an electronic device, the electronic device can play the audio in its screen-casting content at the local end.
Fig. 12 illustrates one way in which a first electronic device enables the "private listening" function; the first electronic device may be the first electronic device in the many-to-one screen projection system shown in a-c of fig. 7. As shown in fig. 12, the first electronic device may provide a user interface 121. The user interface 121 may be the user interface that is displayed after the user, while casting the screen, exits the game interface shown in a of fig. 7 and opens the "settings" application. The user interface 121 is the same as the user interface 63 shown in fig. 6C, and reference may be made to the related description of the embodiment of fig. 6C.
As shown in fig. 12, the user interface 121 may include a switch control 615 for the "private listening" function. The control 615 can receive an operation (e.g., a touch operation) for turning on/off the "private listening" function of the electronic device. As shown in fig. 12, the first electronic device can enable the "private listening" function in response to an operation (e.g., a touch operation) acting on the control 615.
It can be understood that, without being limited to the operation on the control 615 shown in fig. 12, the "private listening" function of the electronic device may also be enabled in other manners in the embodiment of the present application; for example, the user may also enable the "private listening" function of the electronic device through a voice instruction or a gesture.
In the embodiment of the application, after the first electronic device enables the "private listening" function, the first electronic device plays the audio in the screen-casting content at the local end. Several possible message interaction processes between the devices in the many-to-one screen projection system after the first electronic device enables the "private listening" function during the many-to-one screen projection process are described below.
In the first case, the user enables the "private listening" function of the first electronic device, and the user's operation of enabling the "private listening" function of the first electronic device is not used to enable the "private listening" function of the other electronic devices in the many-to-one screen projection system.
Referring to fig. 13A, fig. 13A shows a schematic diagram of interaction between devices in a many-to-one projection system in a first case. As shown in fig. 13A, the interaction process may include the following steps:
Step S410, the first electronic device detects an operation for enabling the "private listening" function.
The operation for enabling the "private listening" function may be an operation (e.g., a touch operation) performed by the user on the control 615 in the user interface 121 shown in fig. 12, or may be a voice instruction or a gesture instruction input by the user. Without being limited thereto, in some embodiments, the user may also input a long-press gesture on the control 401 in the user interface shown in fig. 4B; after the first electronic device displays a switch control for the "private listening" function in response to the long-press gesture, the "private listening" function of the first electronic device is enabled through the switch control.
Step S420, in response to the operation, the first electronic device stops sending the audio in the screen-casting content to the large-screen display device, and plays its own audio at the local end.
As can be seen from the embodiment of fig. 13A, in the first case, after the user enables the "private listening" function of the first electronic device, the screen-projection operations of the other electronic devices in the many-to-one screen projection system are not affected; that is, the other electronic devices (e.g., the second electronic device) in the many-to-one screen projection system still send their screen-casting content to the large-screen display device 200 for displaying or playing.
Referring to fig. 13B, fig. 13B is a diagram illustrating the audio playing situation of each device in the many-to-one screen projection system after the user enables the "private listening" function of the first electronic device in the first case. As shown in fig. 13B, the first electronic device stops sending its own audio to the large-screen display device and plays its own audio at the local end, while the second electronic device still sends its own audio to the large-screen display device for playing.
With the method shown in fig. 13A, in a many-to-one screen projection scene, the electronic device with the "private listening" function stops sending its own audio to the large-screen display device and plays its own audio at the local end, and other electronic devices without the "private listening" function still send their own audio to the large-screen display device for playing. Therefore, the requirements of different users in a many-to-one screen projection scene can be met, and the user experience is improved.
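The behavior in this first case can be summarized in a small sketch (class and attribute names are illustrative assumptions, not part of the patent): enabling the function on one device only changes where that device routes its own audio, and the other devices are untouched.

```python
# Illustrative sketch of case 1: enabling "private listening" on one device
# reroutes only that device's audio; other devices keep casting to the
# large-screen display device. Names are assumptions for illustration.

class CastingDevice:
    def __init__(self, name):
        self.name = name
        self.private_listening = False

    def audio_destination(self):
        # With "private listening" on, audio is played at the local end;
        # otherwise it is sent to the large-screen display device.
        return "local" if self.private_listening else "large-screen"

first = CastingDevice("first")
second = CastingDevice("second")
first.private_listening = True      # user operation on the first device only

print(first.audio_destination())    # local
print(second.audio_destination())   # large-screen
```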
In the second case, the user enables the "private listening" function of the first electronic device, and the user's operation of enabling the "private listening" function of the first electronic device can also be used to enable the "private listening" function of the other electronic devices in the many-to-one screen projection system.
Referring to fig. 14A, fig. 14A shows a schematic diagram of interaction between devices in a many-to-one projection system in a second case. As shown in fig. 14A, the interactive process may include the steps of:
Step S510, the first electronic device detects an operation for enabling the "private listening" function.
Step S510 may refer to step S410 shown in fig. 13A, and is not described herein again.
Step S520, in response to the operation, the first electronic device stops sending the audio in the screen-casting content to the large-screen display device, plays the audio at the local end, and sends a sixth message to the large-screen display device 200. The sixth message is used to indicate that the first electronic device has enabled the "private listening" function.
Step S530, the large-screen display device 200 receives the sixth message, and in response to the sixth message sends a seventh message to the other electronic devices in the many-to-one screen projection system, where the seventh message is used to instruct the other electronic devices in the many-to-one screen projection system to enable the "private listening" function.
Specifically, after the large-screen display device 200 receives the sixth message sent by the first electronic device, it learns that the first electronic device has enabled the "private listening" function. In response to the sixth message, the large-screen display device 200 may send a seventh message to the other electronic devices (e.g., the second electronic device) in the many-to-one screen projection system, instructing the other electronic devices (e.g., the second electronic device) in the many-to-one screen projection system to enable the "private listening" function.
Step S540, the other electronic devices in the many-to-one screen projection system receive the seventh message, and in response to the seventh message, stop sending the audio in the screen-casting content to the large-screen display device and play the audio at the local end.
Referring to fig. 14B, fig. 14B is a diagram illustrating the audio playing situation of each device in the many-to-one screen projection system after the user enables the "private listening" function of the first electronic device in the second case. As shown in fig. 14B, the first electronic device stops sending its own audio to the large-screen display device and plays its own audio at the local end, the second electronic device also stops sending its own audio to the large-screen display device and plays its own audio at the local end, and the large-screen display device does not play any audio.
Through the method shown in fig. 14A, when one electronic device in the many-to-one screen projection system enables the "private listening" function, the other electronic devices in the many-to-one screen projection system also automatically enable the "private listening" function. That is, when any electronic device in the many-to-one screen projection system enables the "private listening" function, all the electronic devices in the many-to-one screen projection system stop sending their own audio to the large-screen display device and play their own audio at the local end. Therefore, each user can fully experience his or her own game scene without being affected by other users, interference with other users can be avoided, and the user experience is improved.
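The sixth/seventh message relay of fig. 14A can be sketched as a simple dispatch step. The message names follow the text above; the classes, methods, and data structures are illustrative assumptions, not the patent's implementation.

```python
# Illustrative sketch of case 2: the large-screen display device receives a
# "sixth message" and relays a "seventh message" to every other casting
# device, which then switches to local playback. Names are assumptions.

class Device:
    def __init__(self, name):
        self.name = name
        self.plays_locally = False

    def on_seventh_message(self):
        # In response to the seventh message, stop sending audio to the
        # large-screen display device and play audio at the local end.
        self.plays_locally = True

def handle_sixth_message(sender, all_devices):
    """Large-screen display device side: relay the seventh message to the others."""
    for device in all_devices:
        if device is not sender:
            device.on_seventh_message()

first, second = Device("first"), Device("second")
first.plays_locally = True            # the first device enabled "private listening"
handle_sixth_message(first, [first, second])
print(second.plays_locally)           # True
```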
In the third case, the user enables the "private listening" function of the first electronic device, and the user's operation of enabling the "private listening" function of the first electronic device can be used to trigger the other electronic devices in the many-to-one screen projection system to prompt the user to enable the "private listening" function.
Referring to fig. 15A, fig. 15A shows a schematic diagram of interaction between devices in a many-to-one projection system in a third scenario. As shown in fig. 15A, the interaction process may include the following steps:
Step S610, the first electronic device detects an operation for enabling the "private listening" function.
Step S610 can refer to step S410 shown in fig. 13A, and is not described herein again.
Step S620, in response to the operation, the first electronic device stops sending the audio in the screen-casting content to the large-screen display device and plays the audio at the local end, and the first electronic device sends a sixth message to the large-screen display device 200, where the sixth message is used to indicate that the first electronic device has enabled the "private listening" function.
Step S630, after receiving the sixth message, the large-screen display device 200 sends, in response to the sixth message, an eighth message to the electronic devices other than the first electronic device in the many-to-one screen projection system, where the eighth message is used to indicate that the first electronic device has enabled the "private listening" function.
Step S640, the electronic devices other than the first electronic device in the many-to-one screen projection system (e.g., the second electronic device) receive the eighth message, and in response to the eighth message, prompt the user that the first electronic device has enabled the "private listening" function and ask the user whether to enable the "private listening" function.
Specifically, after an electronic device other than the first electronic device in the many-to-one screen projection system (e.g., the second electronic device) receives the eighth message, it learns that the first electronic device has enabled the "private listening" function. The electronic devices other than the first electronic device may prompt the user that the first electronic device has enabled the "private listening" function and ask the user whether to enable the "private listening" function. In this embodiment of the application, the manner in which the electronic devices other than the first electronic device prompt the user may include the following:
1. The electronic devices other than the first electronic device display prompt information on the user interface.
Referring to fig. 15B, fig. 15B shows a user interface 151 illustrating one way in which the second electronic device prompts the user that the first electronic device has enabled the "private listening" function and asks the user whether to enable the "private listening" function.
As shown in fig. 15B, the second electronic device may display a window 1501 on the currently displayed screen after receiving the eighth message transmitted by the large-screen display device. The currently displayed picture of the second electronic device may be a screen-casting picture displayed on the local side by the second electronic device, for example, a picture provided by a racing game. As shown in fig. 15B, window 1501 may include: hint 1501A, control 1501B, and control 1501C.
The prompt information 1501A is used to prompt the user that the first electronic device has enabled the "private listening" function. The presentation form of the prompt information 1501A may include text information (e.g., the text information "HUAWEI Mate20 initiates private-listening control. Accept?"), an icon, or the like.
The control 1501B can receive a user operation (e.g., a touch operation), in response to which the second electronic device does not enable the "private listening" function.
The control 1501C can receive a user operation (e.g., a touch operation), in response to which the second electronic device enables the "private listening" function.
2. The electronic devices other than the first electronic device prompt the user by means of voice or vibration.
After the electronic devices other than the first electronic device prompt the user that the first electronic device has enabled the "private listening" function and ask the user whether to enable the "private listening" function, the user may choose to enable the "private listening" function of the other electronic device (e.g., the second electronic device). For example, if the user inputs a user operation on the control 1501C in the user interface 151 shown in fig. 15B, then in response to the user operation, the other electronic device (e.g., the second electronic device) stops sending its own audio to the large-screen display device 200 and plays its own audio at the local end.
In the third case, after the first electronic device starts the "private listening" function, if other electronic devices in the many-to-one screen projection system start the "private listening" function according to the user requirement, the audio playing situation of each device in the many-to-one screen projection system may refer to fig. 14B. If other electronic devices in the many-to-one screen projection system refuse to enable the "private listening" function according to the user's needs, the audio playing situation of each device in the many-to-one screen projection system can refer to fig. 13B.
Through the method shown in fig. 15A, in the many-to-one screen projection system, after one electronic device enables the "private listening" function, the other electronic devices can prompt the user and enable the "private listening" function according to the user's needs. An electronic device that has enabled the "private listening" function stops sending its own audio to the large-screen display device 200 and plays its own audio at the local end, while an electronic device that has not enabled the "private listening" function still sends its own audio to the large-screen display device for playing. Therefore, the requirements of different users in a many-to-one screen projection scene can be met, and the user experience is improved.
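The prompt-and-confirm flow of this third case can be sketched as follows. The accept/decline branches mirror the controls 1501C and 1501B described above; the function name and return values are illustrative assumptions.

```python
# Illustrative sketch of case 3: on receiving the eighth message, the second
# device prompts the user; only if the user accepts does it switch to local
# playback. Names are assumptions for illustration.

def on_eighth_message(user_accepts):
    """Return where the second device's audio goes after the user's choice."""
    # Prompt such as: "HUAWEI Mate20 initiates private-listening control. Accept?"
    if user_accepts:          # user taps the accept control (cf. control 1501C)
        return "local"        # stop sending audio; play at the local end
    return "large-screen"     # decline (cf. control 1501B): keep casting audio

print(on_eighth_message(True))   # local
print(on_eighth_message(False))  # large-screen
```

Declining thus leaves the system in the fig. 13B state, while accepting moves it to the fig. 14B state, matching the text above.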
It is understood that the number of the first electronic device and the second electronic device mentioned in the above embodiments may be one or more, and the application is not limited thereto.
In the embodiment of the present application, the operation received by the first electronic device for turning on the "reverberation" function may be referred to as a first operation. The first operation may include, but is not limited to: operations acting on the control 615 in the user interface 81 shown in fig. 8 (e.g., a touch operation or a click operation), a voice instruction, a gesture instruction, and the like.
In the embodiment of the present application, the operation received by the first electronic device for turning on the "private listening" function may be referred to as a third operation. The third operation may include, but is not limited to: operations acting on the control 616 in the user interface 81 shown in fig. 8 (e.g., a touch operation or a click operation), a voice instruction, a gesture instruction, and the like.
In the embodiment of the present application, the user interface shown in fig. 8 may be referred to as a first user interface, and the control 615 may be referred to as a first control.
In this embodiment of the application, the information output by the second electronic device to ask the user whether to stop sending the second audio to the large-screen display device and to play the first audio and the second audio after mixing processing may be referred to as first prompt information. The first prompt information may include, but is not limited to: visual elements displayed on the display screen (e.g., the text information 1101A in the user interface 111 shown in fig. 11B), voice, vibration feedback, or flashing-light feedback.
In this embodiment of the application, the operation received by the second electronic device that causes the second electronic device to stop sending the second audio to the large-screen display device and to play the first audio and the second audio after mixing processing may be referred to as a second operation. The second operation may include, but is not limited to: an operation (e.g., a touch operation) acting on the control 1101C in the user interface 111 shown in fig. 11B, a voice instruction, a gesture instruction, and the like.
In this embodiment, the information output by the second electronic device to ask the user whether to stop sending the second audio to the large-screen display device and to play the second audio at the local end may be referred to as second prompt information. The second prompt information may include, but is not limited to: visual elements displayed on the display screen (e.g., the text information 1501A in the user interface 151 shown in fig. 15B), voice, vibration feedback, or flashing-light feedback.
In this embodiment, the operation received by the second electronic device that causes the second electronic device to stop sending the second audio to the large-screen display device and to play the second audio at the local end may be referred to as a fourth operation. The fourth operation may include, but is not limited to: an operation (e.g., a touch operation) acting on the control 1501C in the user interface 151 shown in fig. 15B, a voice instruction, a gesture instruction, and the like.
The embodiments of the present application can be combined arbitrarily to achieve different technical effects.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in accordance with the present application are generated, in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a solid-state drive), among others.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments. And the aforementioned storage medium includes: various media capable of storing program codes, such as ROM or RAM, magnetic or optical disks, etc.
Claims (21)
1. A method for processing audio data in a many-to-one screen projection system, the method being applied to the many-to-one screen projection system, the many-to-one screen projection system comprising: the system comprises a first electronic device, a second electronic device and a large-screen display device; characterized in that the method comprises:
the first electronic equipment sends first screen projection content to the large-screen display equipment, wherein the first screen projection content comprises a first image and a first audio;
the second electronic equipment sends second screen projection content to the large-screen display equipment, wherein the second screen projection content comprises a second image and a second audio;
the large-screen display equipment displays the first image and the second image and plays the first audio and the second audio;
the first electronic equipment detects a first operation, and in response to the first operation, the first electronic equipment stops sending the first audio to the large-screen display equipment;
after the first operation is detected, the first electronic device receives the second audio sent by the second electronic device, and plays the first audio and the second audio after sound mixing processing.
2. The method according to claim 1, wherein after detecting the first operation, the first electronic device receives the second audio sent by the second electronic device, and specifically includes:
responding to the first operation, the first electronic equipment sends a first message to the large-screen display equipment, wherein the first message is used for requesting to acquire the second audio;
responding to the first message, and sending the identification of the first electronic equipment to the second electronic equipment by the large-screen display equipment; the second electronic equipment sends the second audio to the first electronic equipment according to the identification of the first electronic equipment; or,
responding to the first message, and sending the identification of the second electronic equipment to the first electronic equipment by the large-screen display equipment; the first electronic equipment sends a second message to the second electronic equipment according to the identifier of the second electronic equipment, wherein the second message is used for requesting to acquire the second audio; the second electronic device sends the second audio to the first electronic device in response to the second message.
3. The method of claim 2, further comprising:
responding to the first message, and sending a third message to the second electronic equipment by the large-screen display equipment;
in response to the third message, the second electronic device stops sending the second audio to the large-screen display device;
after receiving the third message, the second electronic device receives the first audio sent by the first electronic device, and plays the first audio and the second audio after performing audio mixing processing.
4. The method of claim 3, further comprising:
in response to the first message, the large-screen display device sending an identification of the second electronic device to the first electronic device; the first electronic equipment sends the first audio to the second electronic equipment according to the identification of the second electronic equipment;
or,
in response to the first message, the large-screen display device sending an identification of the first electronic device to the second electronic device; the second electronic equipment sends a fourth message to the first electronic equipment according to the identifier of the first electronic equipment, wherein the fourth message is used for requesting to acquire the first audio; the first electronic device sends the first audio to the second electronic device in response to the fourth message.
5. The method of claim 2, further comprising:
responding to the first message, and sending a fifth message to the second electronic equipment by the large-screen display equipment;
and responding to the fifth message, the second electronic equipment outputs first prompt information, and the first prompt information is used for inquiring whether the user stops sending the second audio to the large-screen display equipment or not and playing the first audio and the second audio after audio mixing processing.
6. The method of claim 5, wherein after the second electronic device outputs the first prompt message, the method further comprises:
and the second electronic equipment detects a second operation, responds to the second operation, stops sending the second audio to the large-screen display equipment, receives the first audio sent by the first electronic equipment, and plays the first audio and the second audio after audio mixing processing.
7. The method according to any one of claims 1-6, further comprising:
and the first electronic equipment detects a third operation, responds to the third operation, stops sending the first audio to the large-screen display equipment and plays the first audio.
8. The method of claim 7, further comprising:
responding to the third operation, and sending a sixth message to the large-screen display device by the first electronic device;
in response to the sixth message, the large-screen display device sends a seventh message to the second electronic device; in response to the seventh message, the second electronic device stops sending the second audio to the large-screen display device and plays the second audio;
or,
in response to the sixth message, the large-screen display device sends an eighth message to the second electronic device; and responding to the eighth message, the second electronic equipment outputs second prompt information, and the second prompt information is used for inquiring whether the user stops sending the second audio to the large-screen display equipment and playing the second audio.
9. The method of claim 8, wherein after the second electronic device outputs the second prompt message, the method further comprises:
and the second electronic equipment detects a fourth operation, responds to the fourth operation, and stops sending the second audio to the large-screen display equipment and plays the second audio.
10. The method according to any one of claims 1-9, further comprising:
the first electronic device displaying a first user interface, the first user interface comprising a first control; the first operation comprises an operation acting on the first control.
11. A method for processing audio data in a many-to-one projection screen, the method comprising:
the method comprises the steps that first electronic equipment sends first screen projection content to large-screen display equipment, wherein the first screen projection content comprises a first image and a first audio;
the first electronic equipment detects a first operation, and in response to the first operation, the first electronic equipment stops sending the first audio to the large-screen display equipment;
after the first operation is detected, the first electronic device receives second audio sent by a second electronic device, and plays the first audio and the second audio after sound mixing processing.
12. The method according to claim 11, wherein after detecting the first operation, the first electronic device receives the second audio sent by the second electronic device, and specifically includes:
responding to the first operation, the first electronic equipment sends a first message to the large-screen display equipment, wherein the first message is used for requesting to acquire the second audio;
the first electronic device receives the second audio sent by the second electronic device in response to the first message; or,
the first electronic equipment receives the identification of the second electronic equipment sent by the large-screen display equipment in response to the first message; sending a second message to the second electronic device according to the identifier of the second electronic device, wherein the second message is used for requesting to acquire the second audio; receiving the second audio sent by the second electronic device in response to the second message.
13. The method of claim 11, further comprising:
the first electronic equipment receives the identification of the second electronic equipment sent by the large-screen display equipment; sending the first audio to the second electronic equipment according to the identification of the second electronic equipment; or,
the first electronic equipment receives a fourth message sent by the second electronic equipment, wherein the fourth message is used for requesting to acquire the first audio; in response to the fourth message, the first electronic device sends the first audio to the second electronic device.
14. The method according to any one of claims 11-13, further comprising:
and the first electronic equipment detects a third operation, responds to the third operation, stops sending the first audio to the large-screen display equipment and plays the first audio.
15. The method of claim 14, further comprising:
and responding to the third operation, the first electronic equipment sends a sixth message to the large-screen display equipment, wherein the sixth message is used for indicating that the first electronic equipment stops sending the first audio to the large-screen display equipment and playing the first audio.
16. The method according to any one of claims 11-15, further comprising:
the first electronic device displays a first user interface, the first user interface comprising a first control; wherein the first operation comprises an operation acting on the first control.
17. A method for processing audio data in many-to-one screen projection, the method comprising:
a large-screen display device receives first screen-projection content sent by a first electronic device, wherein the first screen-projection content comprises a first image and a first audio;
the large-screen display device receives second screen-projection content sent by a second electronic device, wherein the second screen-projection content comprises a second image and a second audio;
the large-screen display device displays the first image and the second image and plays the first audio and the second audio;
the large-screen display device stops receiving the first audio sent by the first electronic device and receives a first message sent by the first electronic device, wherein the first message is used to request acquisition of the second audio;
in response to the first message, the large-screen display device sends an identifier of the first electronic device to the second electronic device; or, in response to the first message, the large-screen display device sends an identifier of the second electronic device to the first electronic device.
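Claim 17 views the same protocol from the large-screen display device's side: it aggregates image and audio from both source devices, stops receiving one audio stream on request, and answers the first message with an identifier. A sketch of that receiver role (the class, registry, and method names are invented for illustration and do not appear in the patent):

```python
# Hypothetical model of the large-screen display device (claim 17):
# it collects each source's image and audio, can drop one audio
# stream, and relays device identifiers on request.

class LargeScreenDevice:
    def __init__(self):
        self.images = {}   # device id -> latest image
        self.audios = {}   # device id -> audio stream currently received
        self.ids = {}      # known device identifiers

    def receive_content(self, device_id, image, audio):
        # Each source device contributes screen-projection content.
        self.images[device_id] = image
        self.audios[device_id] = audio
        self.ids[device_id] = device_id

    def stop_audio_from(self, device_id):
        # Stop receiving that device's audio; its image stays displayed.
        self.audios.pop(device_id, None)

    def handle_first_message(self, other_id):
        # Reply with the other device's identifier so the requester
        # can obtain that device's audio directly (one claim alternative).
        return self.ids[other_id]

screen = LargeScreenDevice()
screen.receive_content("dev-1", image="img-1", audio="audio-1")
screen.receive_content("dev-2", image="img-2", audio="audio-2")
assert sorted(screen.images.values()) == ["img-1", "img-2"]

screen.stop_audio_from("dev-1")
assert list(screen.audios) == ["dev-2"]
assert screen.handle_first_message("dev-2") == "dev-2"
```

Note that after `stop_audio_from`, both images are still present: only the audio path changes, which matches the claim's split between displaying both images and renegotiating audio.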
18. The method of claim 17, further comprising:
in response to the first message, the large-screen display device sends a third message to the second electronic device, wherein the third message is used to instruct the second electronic device to stop sending the second audio to the large-screen display device and to play the first audio and the second audio after performing audio mixing on them;
in response to the first message, the large-screen display device sends an identifier of the second electronic device to the first electronic device; or, in response to the first message, the large-screen display device sends an identifier of the first electronic device to the second electronic device.
19. The method of claim 17, further comprising:
in response to the first message, the large-screen display device sends a fifth message to the second electronic device, wherein the fifth message is used to indicate that the first electronic device has stopped sending the first audio to the large-screen display device, and that the first audio and the second audio are to be played after audio mixing is performed on the first audio and the second audio.
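The "audio mixing processing" referred to in claims 18 and 19 is not specified further in the claims. A common minimal approach for two equal-length PCM streams, offered here only as an assumed example and not as the patent's method, is sample-wise addition with clipping to the 16-bit range:

```python
def mix_pcm(first_audio, second_audio, sample_min=-32768, sample_max=32767):
    """Mix two equal-length 16-bit PCM sample sequences by summing
    corresponding samples and clipping to the valid range.
    A minimal sketch: the patent does not prescribe a particular
    mixing algorithm, and production mixers often scale or use
    saturation arithmetic instead."""
    return [
        max(sample_min, min(sample_max, a + b))
        for a, b in zip(first_audio, second_audio)
    ]

# Middle samples overflow the 16-bit range and are clipped.
mixed = mix_pcm([1000, -20000, 30000], [500, -20000, 10000])
assert mixed == [1500, -32768, 32767]
```

Whichever device performs the mixing (the second electronic device in claims 18-19), the result is a single stream that represents both sources.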
20. The method according to any one of claims 17-19, further comprising:
the large-screen display device receives a sixth message sent by the first electronic device, wherein the sixth message is used to indicate that the first electronic device has stopped sending the first audio to the large-screen display device and is playing the first audio;
in response to the sixth message, the large-screen display device sends a seventh message to the second electronic device, wherein the seventh message is used to instruct the second electronic device to stop sending the second audio to the large-screen display device and to play the second audio; or,
in response to the sixth message, the large-screen display device sends an eighth message to the second electronic device, wherein the eighth message is used to indicate that the first electronic device has stopped sending the first audio to the large-screen display device and is playing the first audio.
21. A many-to-one screen projection system, comprising: a first electronic device, a second electronic device, and a large-screen display device, wherein the many-to-one screen projection system is configured to perform the method of any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910567806.1A CN110381197B (en) | 2019-06-27 | 2019-06-27 | Method, device and system for processing audio data in many-to-one screen projection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110381197A true CN110381197A (en) | 2019-10-25 |
CN110381197B CN110381197B (en) | 2021-06-15 |
Family
ID=68251012
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910567806.1A Active CN110381197B (en) | 2019-06-27 | 2019-06-27 | Method, device and system for processing audio data in many-to-one screen projection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110381197B (en) |
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110825709A (en) * | 2019-10-30 | 2020-02-21 | 维沃移动通信有限公司 | Interaction method and electronic equipment |
CN111031368A (en) * | 2019-11-25 | 2020-04-17 | 腾讯科技(深圳)有限公司 | Multimedia playing method, device, equipment and storage medium |
CN111131866A (en) * | 2019-11-25 | 2020-05-08 | 华为技术有限公司 | Screen-projecting audio and video playing method and electronic equipment |
CN111404808A (en) * | 2020-06-02 | 2020-07-10 | 腾讯科技(深圳)有限公司 | Song processing method |
CN111711843A (en) * | 2020-06-09 | 2020-09-25 | 海信视像科技股份有限公司 | Multimedia equipment and screen projection playing method |
CN111796787A (en) * | 2020-06-30 | 2020-10-20 | 联想(北京)有限公司 | Display method and display device |
CN111988653A (en) * | 2020-08-25 | 2020-11-24 | 京东方科技集团股份有限公司 | Interaction method, device, equipment and storage medium for multi-video screen projection information |
CN112153457A (en) * | 2020-09-10 | 2020-12-29 | Oppo(重庆)智能科技有限公司 | Wireless screen projection connection method and device, computer storage medium and electronic equipment |
CN112162718A (en) * | 2020-10-16 | 2021-01-01 | 深圳乐播科技有限公司 | Reverse interaction method, device, equipment and storage medium |
CN112181353A (en) * | 2020-10-15 | 2021-01-05 | Oppo广东移动通信有限公司 | Audio playing method and device, electronic equipment and storage medium |
CN112331202A (en) * | 2020-11-04 | 2021-02-05 | 北京奇艺世纪科技有限公司 | Voice screen projection method and device, electronic equipment and computer readable storage medium |
CN112565876A (en) * | 2020-11-30 | 2021-03-26 | 深圳乐播科技有限公司 | Screen projection method, device, equipment, system and storage medium |
CN112835549A (en) * | 2019-11-25 | 2021-05-25 | 华为技术有限公司 | Method and device for switching audio output device |
CN112911383A (en) * | 2021-01-19 | 2021-06-04 | 深圳乐播科技有限公司 | Multipath screen projection method, device and system under local area network |
CN112988102A (en) * | 2021-05-11 | 2021-06-18 | 荣耀终端有限公司 | Screen projection method and device |
CN113050916A (en) * | 2021-04-09 | 2021-06-29 | 深圳Tcl新技术有限公司 | Audio playing method, device and storage medium |
CN113225592A (en) * | 2020-01-21 | 2021-08-06 | 华为技术有限公司 | Screen projection method and device based on Wi-Fi P2P |
CN113542841A (en) * | 2021-06-16 | 2021-10-22 | 杭州当贝网络科技有限公司 | Screen projection method and screen projection system |
CN113747047A (en) * | 2020-05-30 | 2021-12-03 | 华为技术有限公司 | Video playing method and device |
CN113766305A (en) * | 2021-09-27 | 2021-12-07 | 海信视像科技股份有限公司 | Display device and mirror image screen projection audio output control method |
CN113992963A (en) * | 2021-10-28 | 2022-01-28 | 海信视像科技股份有限公司 | Display device and screen projection method |
WO2022057485A1 (en) * | 2020-09-15 | 2022-03-24 | 华为技术有限公司 | Data sharing method, electronic devices, and system |
CN114237531A (en) * | 2021-09-26 | 2022-03-25 | 浪潮软件股份有限公司 | Remote screen projection control method and system |
CN114296670A (en) * | 2021-04-30 | 2022-04-08 | 海信视像科技股份有限公司 | Display equipment and control method for multi-equipment screen projection on same screen display |
CN114428597A (en) * | 2022-01-30 | 2022-05-03 | 深圳创维-Rgb电子有限公司 | Multi-channel terminal screen projection control method and device, screen projector and storage medium |
CN115278377A (en) * | 2020-03-26 | 2022-11-01 | 华为技术有限公司 | Method for continuously playing multimedia content between devices |
WO2023024630A1 (en) * | 2021-08-27 | 2023-03-02 | 海信视像科技股份有限公司 | Display device, terminal device, and content display method |
WO2024001362A1 (en) * | 2022-06-30 | 2024-01-04 | 海信视像科技股份有限公司 | Display device, bluetooth device, and data processing method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107801088A (en) * | 2016-08-31 | 2018-03-13 | 南京极域信息科技有限公司 | One kind throws screen and receives share system and implementation method |
CN108124172A (en) * | 2017-12-08 | 2018-06-05 | 北京奇艺世纪科技有限公司 | The method, apparatus and system of cloud projection |
CN108536410A (en) * | 2018-04-03 | 2018-09-14 | 广州视源电子科技股份有限公司 | Wireless screen transmission method and system |
CN109032555A (en) * | 2018-07-06 | 2018-12-18 | 广州视源电子科技股份有限公司 | Method and device for processing audio data in screen projection, storage medium and electronic equipment |
Cited By (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110825709B (en) * | 2019-10-30 | 2022-08-02 | 维沃移动通信有限公司 | Interaction method and electronic equipment |
CN110825709A (en) * | 2019-10-30 | 2020-02-21 | 维沃移动通信有限公司 | Interaction method and electronic equipment |
JP7408803B2 (en) | 2019-11-25 | 2024-01-05 | 華為技術有限公司 | Projected audio and video playback methods and electronic devices |
EP4050900A4 (en) * | 2019-11-25 | 2022-12-28 | Huawei Technologies Co., Ltd. | Screen projection audio and video playback method and electronic device |
JP2023503956A (en) * | 2019-11-25 | 2023-02-01 | 華為技術有限公司 | Projected audio and video reproduction method and electronic device |
CN111131866A (en) * | 2019-11-25 | 2020-05-08 | 华为技术有限公司 | Screen-projecting audio and video playing method and electronic equipment |
US11812098B2 (en) | 2019-11-25 | 2023-11-07 | Huawei Technologies Co., Ltd. | Projected audio and video playing method and electronic device |
CN112835549B (en) * | 2019-11-25 | 2024-03-26 | 华为技术有限公司 | Method and device for switching audio output device |
CN111031368A (en) * | 2019-11-25 | 2020-04-17 | 腾讯科技(深圳)有限公司 | Multimedia playing method, device, equipment and storage medium |
CN112835549A (en) * | 2019-11-25 | 2021-05-25 | 华为技术有限公司 | Method and device for switching audio output device |
WO2021103920A1 (en) * | 2019-11-25 | 2021-06-03 | 华为技术有限公司 | Audio output device switching method and device |
CN113225592B (en) * | 2020-01-21 | 2022-08-09 | 华为技术有限公司 | Screen projection method and device based on Wi-Fi P2P |
CN113225592A (en) * | 2020-01-21 | 2021-08-06 | 华为技术有限公司 | Screen projection method and device based on Wi-Fi P2P |
CN115278377A (en) * | 2020-03-26 | 2022-11-01 | 华为技术有限公司 | Method for continuously playing multimedia content between devices |
CN113747047B (en) * | 2020-05-30 | 2023-10-13 | 华为技术有限公司 | Video playing method and device |
CN113747047A (en) * | 2020-05-30 | 2021-12-03 | 华为技术有限公司 | Video playing method and device |
CN111404808A (en) * | 2020-06-02 | 2020-07-10 | 腾讯科技(深圳)有限公司 | Song processing method |
CN111711843A (en) * | 2020-06-09 | 2020-09-25 | 海信视像科技股份有限公司 | Multimedia equipment and screen projection playing method |
CN111796787A (en) * | 2020-06-30 | 2020-10-20 | 联想(北京)有限公司 | Display method and display device |
CN111796787B (en) * | 2020-06-30 | 2022-07-26 | 联想(北京)有限公司 | Display method and display device |
US11924617B2 (en) * | 2020-08-25 | 2024-03-05 | Boe Technology Group Co., Ltd. | Method for projecting screen, display device, screen projection terminal, and storage medium |
CN111988653A (en) * | 2020-08-25 | 2020-11-24 | 京东方科技集团股份有限公司 | Interaction method, device, equipment and storage medium for multi-video screen projection information |
US20220070599A1 (en) * | 2020-08-25 | 2022-03-03 | Boe Technology Group Co., Ltd. | Method for projecting screen, display device, screen projection terminal, and storage medium |
CN112153457A (en) * | 2020-09-10 | 2020-12-29 | Oppo(重庆)智能科技有限公司 | Wireless screen projection connection method and device, computer storage medium and electronic equipment |
WO2022057485A1 (en) * | 2020-09-15 | 2022-03-24 | 华为技术有限公司 | Data sharing method, electronic devices, and system |
WO2022078056A1 (en) * | 2020-10-15 | 2022-04-21 | Oppo广东移动通信有限公司 | Audio playback method and apparatus, electronic device, and storage medium |
CN112181353A (en) * | 2020-10-15 | 2021-01-05 | Oppo广东移动通信有限公司 | Audio playing method and device, electronic equipment and storage medium |
CN112181353B (en) * | 2020-10-15 | 2022-05-20 | Oppo广东移动通信有限公司 | Audio playing method and device, electronic equipment and storage medium |
CN112162718A (en) * | 2020-10-16 | 2021-01-01 | 深圳乐播科技有限公司 | Reverse interaction method, device, equipment and storage medium |
CN112331202A (en) * | 2020-11-04 | 2021-02-05 | 北京奇艺世纪科技有限公司 | Voice screen projection method and device, electronic equipment and computer readable storage medium |
CN112331202B (en) * | 2020-11-04 | 2024-03-01 | 北京奇艺世纪科技有限公司 | Voice screen projection method and device, electronic equipment and computer readable storage medium |
CN112565876A (en) * | 2020-11-30 | 2021-03-26 | 深圳乐播科技有限公司 | Screen projection method, device, equipment, system and storage medium |
CN112911383A (en) * | 2021-01-19 | 2021-06-04 | 深圳乐播科技有限公司 | Multipath screen projection method, device and system under local area network |
CN113050916A (en) * | 2021-04-09 | 2021-06-29 | 深圳Tcl新技术有限公司 | Audio playing method, device and storage medium |
CN114296670A (en) * | 2021-04-30 | 2022-04-08 | 海信视像科技股份有限公司 | Display equipment and control method for multi-equipment screen projection on same screen display |
WO2022228021A1 (en) * | 2021-04-30 | 2022-11-03 | 海信视像科技股份有限公司 | Display device and method for controlling multi-device screen projection same-screen display |
CN114296670B (en) * | 2021-04-30 | 2023-09-15 | 海信视像科技股份有限公司 | Display device and control method for same-screen display of multi-device screen throwing |
CN112988102B (en) * | 2021-05-11 | 2021-09-14 | 荣耀终端有限公司 | Screen projection method and device |
CN112988102A (en) * | 2021-05-11 | 2021-06-18 | 荣耀终端有限公司 | Screen projection method and device |
CN113542841A (en) * | 2021-06-16 | 2021-10-22 | 杭州当贝网络科技有限公司 | Screen projection method and screen projection system |
WO2023024630A1 (en) * | 2021-08-27 | 2023-03-02 | 海信视像科技股份有限公司 | Display device, terminal device, and content display method |
CN114237531A (en) * | 2021-09-26 | 2022-03-25 | 浪潮软件股份有限公司 | Remote screen projection control method and system |
CN113766305A (en) * | 2021-09-27 | 2021-12-07 | 海信视像科技股份有限公司 | Display device and mirror image screen projection audio output control method |
CN113992963A (en) * | 2021-10-28 | 2022-01-28 | 海信视像科技股份有限公司 | Display device and screen projection method |
CN114428597A (en) * | 2022-01-30 | 2022-05-03 | 深圳创维-Rgb电子有限公司 | Multi-channel terminal screen projection control method and device, screen projector and storage medium |
WO2024001362A1 (en) * | 2022-06-30 | 2024-01-04 | 海信视像科技股份有限公司 | Display device, bluetooth device, and data processing method |
Also Published As
Publication number | Publication date |
---|---|
CN110381197B (en) | 2021-06-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110381197B (en) | Method, device and system for processing audio data in many-to-one screen projection | |
CN110138937B (en) | Call method, device and system | |
CN111345010B (en) | Multimedia content synchronization method, electronic equipment and storage medium | |
CN113542839B (en) | Screen projection method of electronic equipment and electronic equipment | |
CN113691842B (en) | Cross-device content projection method and electronic device | |
CN110381195A (en) | A kind of throwing screen display methods and electronic equipment | |
CN113497909B (en) | Equipment interaction method and electronic equipment | |
CN113923230B (en) | Data synchronization method, electronic device, and computer-readable storage medium | |
JP7416519B2 (en) | Multi-terminal multimedia data communication method and system | |
CN112119641B (en) | Method and device for realizing automatic translation through multiple TWS (time and frequency) earphones connected in forwarding mode | |
CN114115770B (en) | Display control method and related device | |
CN113170279B (en) | Communication method based on low-power Bluetooth and related device | |
CN114185503B (en) | Multi-screen interaction system, method, device and medium | |
CN114040242A (en) | Screen projection method and electronic equipment | |
CN112335294B (en) | Emergency call method and user terminal | |
CN112543447A (en) | Device discovery method based on address list, audio and video communication method and electronic device | |
WO2023125847A1 (en) | Audio processing method and system, and related apparatuses | |
CN114827581A (en) | Synchronization delay measuring method, content synchronization method, terminal device, and storage medium | |
CN114124980A (en) | Method, device and system for starting application | |
CN113141483B (en) | Screen sharing method based on video call and mobile device | |
CN114489876A (en) | Text input method, electronic equipment and system | |
CN112532508B (en) | Video communication method and video communication device | |
WO2022152167A1 (en) | Network selection method and device | |
CN115708059A (en) | Data communication method between devices, electronic device and readable storage medium | |
CN114860178A (en) | Screen projection method and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||