WO2022105445A1 - Browser-based application screen projection method and related apparatus - Google Patents
Browser-based application screen projection method and related apparatus
- Publication number
- WO2022105445A1 (PCT application PCT/CN2021/121331, CN2021121331W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- browser
- data stream
- data
- content
- playback content
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/958—Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/1454—Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
- G06F9/452—Remote windowing, e.g. X-Window System, desktop virtualisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/485—End-user interface for client configuration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/8166—Monomedia components thereof involving executable data, e.g. software
- H04N21/8173—End-user applications, e.g. Web browser, game
Definitions
- the present application relates to the field of electronic technologies, and in particular, to a browser-based application screen projection method and related devices.
- Embodiments of the present application provide a browser-based application screen projection method and related apparatus.
- an embodiment of the present application provides a browser-based application screen projection method, which is applied to a first device, and the method includes:
- an embodiment of the present application provides a browser-based application screen projection device, which is applied to a first device.
- the device includes: an acquiring unit and a playing unit, wherein,
- the obtaining unit is configured to receive an operation instruction of the user operating in the browser, and obtain the second playback content corresponding to the target application in the second device;
- the playing unit is configured to play the first playing content in the browser based on the second playing content, where the first playing content is the same as the second playing content.
- the arranging unit is configured to arrange the adaptation controls in response to a first operation instruction by which the user operates the adaptation controls in the control editing interface, where the adaptation controls correspond to the selected native controls;
- the selected native control is a control on the first application interface displayed when the second device runs the target application;
- the property changing unit is configured to change the first property information corresponding to the adaptation control in response to a second operation instruction by which the user operates the adaptation control;
- the generating unit is configured to generate a second application interface based on the adaptation control, where the second application interface and the first application interface correspond to the same functional interface of the target application.
- embodiments of the present application provide an electronic device, including a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the above-mentioned processor.
- the above program includes instructions for executing steps in any method of the first aspect of the embodiments of the present application.
- an embodiment of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps described in any method of the first aspect of the embodiments of the present application.
- an embodiment of the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps described in any method of the first aspect of the embodiments of the present application.
- the computer program product may be a software installation package.
- the first device receives the operation instruction of the user operating in the browser, and obtains the second playback content corresponding to the target application in the second device; based on the second playback content, the first playback content is played in the browser, and the first playback content is the same as the second playback content.
- in this way, the user can adapt the target application on another device to the currently used device without re-downloading a new application, which is conducive to improving the user experience; and the screen projection of the content displayed by the target application can be completed without the participation of third-party applications, which is conducive to improving the security of the device.
- FIG. 1 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- FIG. 2 is a schematic diagram of a software structure of an electronic device provided by an embodiment of the present application.
- FIG. 3A is a schematic diagram of a network architecture of a browser-based application screen projection method provided by an embodiment of the present application;
- FIG. 3B is a hierarchical model architecture diagram of a browser-based application screen projection method provided by an embodiment of the present application;
- FIG. 4A is a schematic flowchart of a browser-based application screen projection method provided by an embodiment of the present application;
- FIG. 4B is a schematic flowchart of a decoding process in a decoding module provided by an embodiment of the present application;
- FIG. 4C is a schematic flowchart of a browser-based application screen projection method provided by an embodiment of the present application.
- FIG. 5 is a block diagram of functional units of a first device provided by an embodiment of the present application.
- FIG. 6 is a block diagram of functional units of a browser-based application screen projection device provided by an embodiment of the present application.
- the electronic device may be a portable electronic device that also includes other functions such as personal digital assistant and/or music player functions, such as mobile phones, tablet computers, wearable electronic devices (such as smart watches) with wireless communication functions, etc.
- portable electronic devices include, but are not limited to, portable electronic devices running iOS, Android, Microsoft, or other operating systems.
- the above-mentioned portable electronic device may also be other portable electronic devices, such as a laptop computer (Laptop) or the like. It should also be understood that, in some other embodiments, the above-mentioned electronic device may not be a portable electronic device, but a desktop computer.
- the APP can be the application currently displayed on the mobile phone or the application running in the background.
- WebAssembly is a new format that is portable, compact, fast to load, and compatible with the Web. It runs in a sandboxed execution environment and can make full use of hardware capabilities to achieve native execution efficiency.
- FFmpeg is a set of open source computer programs that can be used to record, convert digital audio and video, and convert them into streams. Licensed under LGPL or GPL. It provides a complete solution for recording, converting and streaming audio and video.
- WebGL is a 3D drawing protocol that allows JavaScript to be combined with OpenGL ES 2.0 to provide hardware-accelerated 3D rendering for the HTML5 Canvas.
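- For illustration only, a minimal TypeScript sketch of obtaining a WebGL rendering context on an HTML5 Canvas is given below; the canvas element id is an assumed name and not part of the present disclosure.

```typescript
// Minimal sketch: obtain a WebGL context on an HTML5 Canvas for
// hardware-accelerated rendering. The canvas id "projection-canvas" is an
// illustrative assumption.
const canvas = document.getElementById("projection-canvas") as HTMLCanvasElement;
const gl = canvas.getContext("webgl");
if (!gl) {
  throw new Error("WebGL is not supported by this browser");
}
gl.clearColor(0.0, 0.0, 0.0, 1.0); // clear to black before any frame is drawn
gl.clear(gl.COLOR_BUFFER_BIT);
```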
- Application definition means that when the current device displays an application from another device, because both the display and the layout have changed, the presentation form of the application on the current device needs to be redefined.
- FIG. 1 shows a schematic structural diagram of an electronic device 100 .
- the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, a compass 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identity module (SIM) card interface 195, and so on.
- the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the electronic device 100 .
- the electronic device 100 may include more or fewer components than shown, or combine some components, or split some components, or use a different arrangement of components.
- the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
- the processor 110 may include one or more processing units, for example, the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural-network processing unit (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent components, or may be integrated in one or more processors.
- electronic device 100 may also include one or more processors 110 .
- the controller can generate an operation control signal according to the instruction operation code and the timing signal, and complete the control of fetching and executing instructions.
- a memory may also be provided in the processor 110 for storing instructions and data.
- the memory in the processor 110 may be a cache memory. This memory may hold instructions or data that the processor 110 has just used or reused. If the processor 110 needs to use the instruction or data again, it can be fetched directly from this memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency with which the electronic device 100 processes data or executes instructions.
- the processor 110 may include one or more interfaces.
- the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM card interface, and/or a USB interface, etc.
- the USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
- the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices.
- the USB interface 130 can also be used to connect an earphone, and play audio through the earphone.
- the interface connection relationship between the modules illustrated in the embodiments of the present application is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
- the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
- the charging management module 140 is used to receive charging input from the charger.
- the charger may be a wireless charger or a wired charger.
- the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
- the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 charges the battery 142 , it can also supply power to the electronic device through the power management module 141 .
- the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
- the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, the wireless communication module 160, and the like.
- the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
- the power management module 141 may also be provided in the processor 110 .
- the power management module 141 and the charging management module 140 may also be provided in the same device.
- the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
- Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
- Each antenna in the electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization.
- the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
- the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
- the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
- the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
- the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
- at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
- at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
- the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area network (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), UWB, and the like.
- the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
- the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
- the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation and amplification on it, and convert it into an electromagnetic wave for radiation through the antenna 2.
- the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
- the GPU is a microprocessor for browser-based application screen projection, and connects the display screen 194 and the application processor.
- the GPU is used to perform mathematical and geometric calculations for graphics rendering.
- Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
- the display screen 194 is used to display images, videos, and the like.
- Display screen 194 includes a display panel.
- the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini light-emitting diode (Mini-LED), a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
- electronic device 100 may include one or more display screens 194 .
- the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
- the ISP is used to process the data fed back by the camera 193 .
- when shooting, the shutter is opened and light is transmitted through the lens to the camera photosensitive element; the photosensitive element converts the optical signal into an electrical signal and transmits it to the ISP for processing, which converts it into an image visible to the naked eye.
- ISP can also perform algorithm optimization on image noise, brightness, and skin tone. ISP can also optimize parameters such as exposure and color temperature of the shooting scene.
- the ISP may be provided in the camera 193 .
- Camera 193 is used to capture still images or video.
- the object is projected through the lens to generate an optical image onto the photosensitive element.
- the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
- the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
- the ISP outputs the digital image signal to the DSP for processing.
- DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
- the electronic device 100 may include one or more cameras 193 .
- a digital signal processor is used to process digital signals, in addition to processing digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy and so on.
- Video codecs are used to compress or decompress digital video.
- the electronic device 100 may support one or more video codecs.
- the electronic device 100 can play or record videos in various encoding formats, such as: Moving Picture Experts Group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
- the NPU is a neural-network (NN) computing processor.
- Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
- the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100 .
- the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function, for example, to save files such as music and videos in the external memory card.
- Internal memory 121 may be used to store one or more computer programs including instructions.
- the processor 110 may execute the above-mentioned instructions stored in the internal memory 121, thereby causing the electronic device 100 to execute the method for displaying page elements, various applications and data processing provided in some embodiments of the present application.
- the internal memory 121 may include a storage program area and a storage data area.
- the stored program area may store the operating system; the stored program area may also store one or more applications (such as gallery, contacts, etc.) and the like.
- the storage data area may store data (such as photos, contacts, etc.) created during the use of the electronic device 100 and the like.
- the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage components, flash memory components, universal flash storage (UFS), and the like.
- the processor 110 may, by executing the instructions stored in the internal memory 121 and/or the instructions stored in the memory provided in the processor 110, cause the electronic device 100 to execute the methods for displaying page elements provided in the embodiments of the present application, as well as other applications and data processing.
- the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
- the sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, and an ambient light sensor 180L, bone conduction sensor 180M, etc.
- the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
- the pressure sensor 180A may be provided on the display screen 194 .
- the capacitive pressure sensor may be comprised of at least two parallel plates of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
- the electronic device 100 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
- the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
- touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than the first pressure threshold acts on the short message application icon, the instruction for viewing the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, the instruction to create a new short message is executed.
- the gyro sensor 180B may be used to determine the motion attitude of the electronic device 100 .
- the angular velocity of the electronic device 100 about three axes (i.e., the X, Y, and Z axes) may be determined through the gyro sensor 180B.
- the gyro sensor 180B can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to offset the shaking of the electronic device 100 through reverse motion to achieve anti-shake.
- the gyro sensor 180B can also be used for navigation and somatosensory game scenarios.
- the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes).
- the magnitude and direction of gravity can be detected when the electronic device 100 is stationary. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
- the ambient light sensor 180L is used to sense ambient light brightness.
- the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
- the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
- the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket, so as to prevent accidental touch.
- the fingerprint sensor 180H is used to collect fingerprints.
- the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking pictures with fingerprints, answering incoming calls with fingerprints, and the like.
- the temperature sensor 180J is used to detect the temperature.
- the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold value, the electronic device 100 reduces the performance of the processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection.
- in some embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown of the electronic device 100 caused by the low temperature.
- in some other embodiments, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by the low temperature.
- The touch sensor 180K is also called a "touch panel".
- the touch sensor 180K may be disposed on the display screen 194 , and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
- the touch sensor 180K is used to detect a touch operation on or near it.
- the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
- Visual output related to touch operations may be provided through display screen 194 .
- the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the location where the display screen 194 is located.
- FIG. 2 shows a software structural block diagram of the electronic device 100 .
- the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
- the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, an Android runtime (Android runtime) and a system library, and a kernel layer.
- the application layer can include a series of application packages.
- the application package can include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message and so on.
- the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
- the application framework layer includes some predefined functions.
- the application framework layer may include window managers, content providers, view systems, telephony managers, resource managers, notification managers, and the like.
- a window manager is used to manage window programs.
- the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
- Content providers are used to store and retrieve data and make these data accessible to applications.
- the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
- the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
- a display interface can consist of one or more views.
- the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
- the phone manager is used to provide the communication function of the electronic device 100 .
- for example, the management of call status (including connecting, hanging up, etc.).
- the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
- the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a brief pause without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
- the notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll bar text, such as notifications of applications running in the background, and notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, and the indicator light flashes.
- Android Runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
- the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
- the application layer and the application framework layer run in virtual machines.
- the virtual machine executes the java files of the application layer and the application framework layer as binary files.
- the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
- a system library can include multiple functional modules, for example: a surface manager, a media library, a 3D graphics processing library (e.g., OpenGL ES), a 2D graphics engine (e.g., SGL), etc.
- the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
- the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
- the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
- the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
- 2D graphics engine is a drawing engine for 2D drawing.
- the kernel layer is the layer between hardware and software.
- the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
- FIG. 3A shows a schematic diagram of the network architecture of the browser-based application screen projection method to which the present application applies.
- the architecture includes multiple servers, which may include a cloud server 200a and a background server 200b, and a plurality of electronic devices; the electronic devices may be smart phones, tablet computers, desktop computers, wearable electronic devices with wireless communication functions, etc., which are not specifically limited herein.
- each electronic device can exchange information with the above-mentioned cloud server, the background server 200b can establish a connection with the cloud server 200a, and the electronic devices can communicate with each other.
- the electronic device 100b (the second device) can project the first playback content in the currently running target application to the browser of the electronic device 100a (the first device).
- in response to an operation instruction of the user operating in the browser, the first device obtains the data stream corresponding to the target application in the second device, where the data stream is the second playback content corresponding to the target application played by the second device; based on the data stream, the browser plays the first playback content, and the first playback content and the second playback content correspond to the same function in the target application.
- in this way, the user can adapt the target application on another device to the currently used device without re-downloading a new application, which is conducive to improving the user experience; and the screen projection of the content displayed by the target application can be completed without the participation of third-party applications, which is conducive to improving the security of the device.
- FIG. 3B shows a hierarchical model architecture diagram of a browser-based application screen projection method to which the present application applies.
- the architecture diagram may include: a main module, a decoding module, and an IO module.
- the main module can be divided into a control interaction layer and a rendering layer.
- the control interaction layer can be used for UI interaction with the front-end display and some event processing, and the control interaction layer can be built based on the JS framework.
- the rendering layer in the main module is used to render video data to the front-end display; the rendering layer can be built based on two players, WebGL Player and PCM Player.
- the decoding module may include a custom decoder, wherein the decoding algorithm is applicable to the H264 standard, the H265 standard, the AAC-LC standard, and so on.
- the above-mentioned custom decoder can be an FFmpeg decoder, which can control the buffer of the data stream during the decoding process, so that there is always data in the FFmpeg decoder for decoding, so as to ensure the smooth operation of video data and audio data.
- the IO module may include a transport layer, which may encapsulate some live broadcast protocols based on the websocket kernel, such as custom live broadcast protocols for application projection, FLV/RTMP live broadcast protocols, and ARTC/ARTP live broadcast protocols.
- the first device can combine WebWorker (a sub-thread opened on the basis of the single-threaded JavaScript execution provided by the HTML5 specification), the Web Audio API (an interface dedicated to audio processing), WebGL (Web Graphics Library, a 3D drawing protocol), and other browser technologies to download and display the data stream, and can use WebAssembly and FFmpeg to complete the decoding of the data stream, where WebAssembly serves as the operating environment in the browser of the first device.
- the decoding of audio data and video data in the data stream is completed by running a custom decoding algorithm based on FFmpeg.
- the above-mentioned second playback content is displayed in the browser of the first device (the currently used device), so as to realize application-level screen projection.
- FIG. 4A is a schematic flowchart of a browser-based application screen projection method provided by an embodiment of the present application, which is applied to a first device.
- the browser-based application screen projection method includes the following operations.
- S401 Receive an operation instruction of a user operating in a browser, and acquire second playback content corresponding to a target application in a second device.
- the first device and/or the second device may be electronic devices as shown in FIG. 1 and FIG. 2 , the first device may include a browser, and the first device may refer to the current device being operated and used by the user.
- the second device may refer to a remote device, and may refer to a device where a user needs to access an application in the remote device through the current device.
- the above-mentioned first device can establish a communication connection with the second device, and through the browser in the first device, the second playback content of the target application currently running on the second device can be played or displayed in the browser of the first device.
- the above target application may include at least one of the following: social applications, news applications, shopping applications, entertainment applications, financial applications, life applications, tool applications, etc., which are not limited here. The target application may be an application currently used in the second device, and may be a currently displayed application or an application running in the background.
- the first device can generate a two-dimensional code in the browser, and the user can establish a communication connection between the first device and the second device by scanning the two-dimensional code.
- the browser can obtain the address information corresponding to the target application, and establish a screen projection channel according to the address information, which is conducive to the subsequent screen projection of the second playback content of the target application in the browser.
- the above-mentioned first playback content and the above-mentioned second playback content both correspond to the target application, and based on the data stream, the first device can play the content currently displayed by the target application in the second device, that is, cross-end screen projection of the first playback content can be realized.
- playing the first playback content in the browser may include the following steps: performing a decoding operation on the data stream to obtain the first playback content, where the data stream is the second playback content corresponding to the target application played by the second device; and playing the first playback content in the browser.
- the above-mentioned data stream may include audio data, video data, game data, etc. being played or displayed in the target application, which is not limited herein.
- the above-mentioned data stream may refer to the data obtained after encrypting, compressing, etc. the second playback content corresponding to the target application.
- the user can trigger, through operations such as clicking or dragging in the browser interface, an instruction for the first device to use the target application in the second device across terminals, and establish a communication connection between the first device and the second device; the first device can then download a data stream corresponding to the target application in the second device, where the data stream corresponds to the second playback content currently played or displayed in the second device.
- the first device can receive the data stream from the server over WebSocket through the WebWorker in the independent IO module.
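- As a non-limiting sketch of how such an IO module might be wired up, the following TypeScript fragment spawns a WebWorker that downloads chunks over WebSocket and forwards them to the main thread. The worker file name, the server URL, and the ringBuffer object (sketched further below) are illustrative assumptions rather than elements of the present disclosure.

```typescript
// ---- main thread: spawn the IO worker and collect downloaded chunks ----
const ioWorker = new Worker("io-worker.js"); // assumed worker file name
ioWorker.onmessage = (e: MessageEvent<ArrayBuffer>) => {
  // hand each received chunk to the decoding module, e.g. a decoding memory ring
  ringBuffer.write(new Uint8Array(e.data));
};

// ---- io-worker.js: receive the data stream from the server over WebSocket ----
const socket = new WebSocket("wss://server.example/projection"); // assumed URL
socket.binaryType = "arraybuffer";
socket.onmessage = (e: MessageEvent<ArrayBuffer>) => {
  // forward the raw chunk to the main thread; the transfer list avoids a copy
  (self as any).postMessage(e.data, [e.data]);
};
```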
- the file information corresponding to the above data stream is downloaded at a rate that is a certain multiple of the code rate.
- the first device can download the program adapted to the current device environment from the cloud server, and load the program adapted to the first device, which is conducive to the subsequent smooth realization of the second playback content of the target application to be projected to the screen.
- the data stream corresponding to the target application in the second device can be obtained, and further, the data stream can be processed to obtain the playback content, so that the user does not need to re-download or install a new application program , the data stream can be processed directly through the browser to realize application-level screen projection.
- the operating environment of the browser can be initialized, that is, WebAssembly and the FFmpeg decoder can be initialized, and a decoding memory ring can be constructed. Since the memory of the browser is limited, the memory of the browser can be reused by means of the decoding memory ring, so as to prevent the data stream from occupying too much memory of the browser, which is beneficial to improving the decoding efficiency of the data stream.
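- The decoding memory ring is not specified in detail here; the following TypeScript sketch shows one plausible fixed-capacity circular buffer that the IO worker writes into and the decoder reads from, so that a single allocation is reused instead of letting memory grow with the stream. The capacity and method names are assumptions for illustration.

```typescript
// A fixed-capacity circular buffer ("decoding memory ring"): writes wrap around
// and reads consume, so browser memory stays bounded while data streams in.
class RingBuffer {
  private buf: Uint8Array;
  private readPos = 0;
  private writePos = 0;
  private size = 0;

  constructor(capacity: number = 4 * 1024 * 1024) { // e.g. 4 MB, an assumed value
    this.buf = new Uint8Array(capacity);
  }

  get available(): number {
    return this.size; // bytes currently buffered and ready for the decoder
  }

  // Copy as much of the chunk as fits; anything beyond capacity is left to
  // back-pressure handling by the caller.
  write(chunk: Uint8Array): number {
    const n = Math.min(chunk.length, this.buf.length - this.size);
    for (let i = 0; i < n; i++) {
      this.buf[(this.writePos + i) % this.buf.length] = chunk[i];
    }
    this.writePos = (this.writePos + n) % this.buf.length;
    this.size += n;
    return n;
  }

  // Copy up to out.length buffered bytes into out and release them from the ring.
  read(out: Uint8Array): number {
    const n = Math.min(out.length, this.size);
    for (let i = 0; i < n; i++) {
      out[i] = this.buf[(this.readPos + i) % this.buf.length];
    }
    this.readPos = (this.readPos + n) % this.buf.length;
    this.size -= n;
    return n;
  }
}

const ringBuffer = new RingBuffer();
```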
- the above data stream can be decoded by an FFmpeg decoder, wherein the above FFmpeg is a set of open source computer programs that can be used to record and convert digital audio and video, and convert them into streams.
- performing the decoding operation on the data stream to obtain the first playback content may include the following steps: acquiring a parameter set for decoding the data stream; starting a timing task, where the timing task corresponds to a time interval; and, within the time interval, decoding the data stream at a preset rate based on the parameter set to obtain the first playback content.
- the above-mentioned parameter set may include audio parameters and video parameters; the audio parameters may include at least one of the following: number of channels, sampling rate, sampling size, data format, etc., which are not limited here; the video parameters may include at least one of the following: resolution, video duration, color space, etc., which are not limited here.
- the above-mentioned parameter set may be obtained in the above-mentioned process of initializing the operating environment.
- the data stream can be decoded by the preset algorithm in the above-mentioned FFmpeg decoder based on the above-mentioned parameter set, and the preset algorithm can be set by the user or the system defaults, which is not limited here.
- the preset algorithm can be set according to the encoding method of the data stream.
- the time interval corresponding to the above-mentioned scheduled task can be set by the user or the system defaults, which is not limited here.
- the preset rate can be set by the user or the system defaults, which is not limited here; for example, the preset rate can be set according to the rate at which the user plays the video data in the data stream.
- the above method may further include the following steps: performing buffer processing on the data stream; determining the current buffered data amount; if the current buffered data amount is greater than or equal to a preset threshold, performing the step of starting the timing task; and if the current buffered data amount is less than the preset threshold, suspending the decoding of the data stream at the preset rate, and performing the step of starting the timing task when the current buffered data amount becomes greater than or equal to the preset threshold.
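- A hedged TypeScript sketch of this threshold-gated timing task follows; it reuses the ringBuffer from the sketch above, and the threshold value, timer interval, and the decodeChunk() stand-in for the WebAssembly/FFmpeg decoding call are illustrative assumptions not fixed by the present disclosure.

```typescript
// Stand-in for the call into the FFmpeg decoder compiled to WebAssembly,
// parameterised by the audio/video parameter set obtained at initialisation.
declare function decodeChunk(data: Uint8Array): void;

const DECODE_THRESHOLD = 64 * 1024; // e.g. start decoding once 64 KB is buffered
const DECODE_INTERVAL_MS = 20;      // time interval of the timing task
const scratch = new Uint8Array(64 * 1024);

let decodeTimer: ReturnType<typeof setInterval> | undefined;

function startDecodeTask(): void {
  if (decodeTimer !== undefined) return; // timing task already running
  decodeTimer = setInterval(() => {
    if (ringBuffer.available < DECODE_THRESHOLD) {
      pauseDecodeTask(); // not enough buffered data yet: suspend decoding
      return;
    }
    const n = ringBuffer.read(scratch);  // consume from the decoding memory ring
    decodeChunk(scratch.subarray(0, n)); // decode one step at the preset rate
  }, DECODE_INTERVAL_MS);
}

function pauseDecodeTask(): void {
  if (decodeTimer !== undefined) {
    clearInterval(decodeTimer);
    decodeTimer = undefined;
  }
}
```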
- the above-mentioned acquisition of the data stream is carried out in real time; since the memory of the browser is limited, the data stream is decoded while being cached during its continuous acquisition, so as to ensure the smooth progress of the subsequent rendering process of the data stream and the playing of the first playback content in the browser.
- the above-mentioned decoding memory ring can be used to control the data amount of the data stream in the process of decoding the above-mentioned data stream.
- the FFmpeg decoder can be initialized, and the data stream can be buffered into the buffer data in the decoder through the decoding memory ring.
- this keeps the amount of memory used in the browser within a reasonable range and ensures that the FFmpeg decoder can always read data when decoding the cached data; in this way, the size of the cached data can be controlled so that the decoding memory ring buffers the above cached data without excessive memory usage in the browser.
- the above-mentioned current buffered data amount may refer to the data amount used for rendering the video data in the data stream, that is, the data amount used for decoding.
- the above-mentioned preset threshold can be set by the user or the system defaults, which is not limited here.
- the video data is decoded frame by frame. For example, if the above preset algorithm decodes data frames of 64 KB, and only 32 KB of data has been downloaded so far, this is not enough to decode a frame for rendering; therefore, the above FFmpeg decoder is started to decode the data stream only after the buffered data amount reaches the amount required for decoding, and the decoding operation can be suspended when the data amount is insufficient.
- when the above-mentioned current buffered data amount is greater than or equal to the preset threshold, it indicates that the current buffered data amount is sufficient for decoding; the above-mentioned FFmpeg decoder can be started or called, and a timing task can be started to decode the data stream, that is, within the time interval corresponding to the timing task, the data stream is decoded at the preset rate based on the above parameter set.
- the IO module in FIG. 4B is still buffering the subsequent data stream; then, when the current amount of buffered data is greater than or equal to the above-mentioned preset threshold, the decoding of the data stream can be implemented based on the decoding memory ring.
- the data corresponding to the current buffered data volume in the decoding memory ring can be processed.
- the size of the cache in the browser memory can be controlled within a reasonable range to ensure that the FFmpeg decoder can always read the data when decoding, so as to ensure the accurate playback of the second playback content.
- the above method may further include the following steps: before the file meta information is obtained, executing the step of caching the data stream; and after the file meta information is obtained, executing the step of decoding the data stream to obtain the first playback content.
- the above-mentioned file meta information may include at least one of the following: the frame format of the data stream, the coding and decoding convention information, the audio sampling frequency, etc., which are not limited here; the acquisition time of different pieces of data may therefore be different; the above-mentioned file meta information can be used to help the FFmpeg decoder decode the audio data and the video data in the data stream.
- the first playback content includes: video data and audio data; the above method may further include the following step: synchronously playing the audio data and video data of the first playback content in the browser.
- the audio data and the video data can be synchronized and displayed correctly in the above browser, so as to complete the display of the first playback content.
- synchronously playing the audio data and the video data of the first playback content in the browser may include the following steps: receiving a playing operation instruction of the user in the interface corresponding to the browser, and determining a timestamp corresponding to the audio data; and, using the timestamp as a benchmark, adjusting the rendering time of the video data, so that the step of rendering the video data and the step of playing the audio data are performed synchronously in the browser.
- for the above audio data, the timestamp of the currently playing audio can be obtained through the Web Audio API, and the timestamp is used as the time base to synchronize the video frames: if the rendering time corresponding to the current video frame has fallen behind, the frame is rendered immediately; if the rendering would be relatively early, it needs to be delayed. In this way, the synchronization of the audio data and the video data can be ensured, which is beneficial to improving the user experience and to completing the display of the second playback content of the second device.
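- An illustrative TypeScript sketch of this audio-clock-driven synchronization follows; the frame object shape and the renderFrame() function (sketched with the Canvas drawing step further below) are assumptions for illustration.

```typescript
// Use the Web Audio clock as the time base and schedule each decoded video
// frame against it: late frames are rendered immediately, early frames are delayed.
declare function renderFrame(rgba: Uint8Array): void; // see the Canvas sketch below

const audioCtx = new AudioContext();
let audioStartTime = 0; // value of audioCtx.currentTime when playback started
let firstFramePts = 0;  // presentation timestamp (seconds) of the first video frame

function scheduleVideoFrame(frame: { pts: number; rgba: Uint8Array }): void {
  const audioClock = audioCtx.currentTime - audioStartTime; // audio played so far (s)
  const videoClock = frame.pts - firstFramePts;             // when this frame is due (s)
  const delta = videoClock - audioClock;

  if (delta <= 0) {
    renderFrame(frame.rgba); // rendering has fallen behind: render immediately
  } else {
    setTimeout(() => renderFrame(frame.rgba), delta * 1000); // too early: delay
  }
}
```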
- FIG. 4C is a schematic flowchart of a browser-based application screen projection method, which may involve a main module, a decoding module, and an IO module.
- the WebAssembly and FFmpeg decoders can be initialized through the main module.
- the acquisition or download of the data stream and the decoding of the data stream need to be completed in a separate thread, which can be achieved through the browser's WebWorker.
- the main module can use Canvas to draw to display the first playback content.
- the video data obtained by decoding the data stream with the FFmpeg decoder can undergo color space conversion to obtain the first playback content.
- control of the buffering of the data stream in the above method steps can be used in the decoding module and the IO module to ensure that the FFmpeg decoder can read the data at any time, and can return the data;
- a timing task is started to perform the decoding task, decoding at a certain rate, and decoding is suspended when the rendering buffer is full.
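- For illustration, the Canvas drawing step of the main module might look like the following TypeScript sketch, which assumes each decoded frame has already been converted from YUV to RGBA; the canvas id is an assumption, and the rendering layer described above may equally use a WebGL-based player for the conversion and upload.

```typescript
// Draw one RGBA frame (already colour-space converted from the decoder output)
// onto an HTML5 Canvas through the 2D context.
const videoCanvas = document.getElementById("projection-canvas") as HTMLCanvasElement;
const ctx2d = videoCanvas.getContext("2d")!;

function renderFrame(rgba: Uint8Array): void {
  const { width, height } = videoCanvas;
  // ImageData expects width * height * 4 bytes of RGBA pixel data
  const pixels = new Uint8ClampedArray(rgba.buffer, rgba.byteOffset, width * height * 4);
  ctx2d.putImageData(new ImageData(pixels, width, height), 0, 0);
}
```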
- the first device receives the operation instruction of the user operating in the browser, and obtains the second playback content corresponding to the target application in the second device; based on the second playback content, the first playback content is played in the browser.
- the first playing content is the same as the second playing content.
- FIG. 5 is a schematic structural diagram of a first device provided by an embodiment of the present application.
- the first device includes a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the following steps:
- the first device described in the embodiment of the present application can receive the operation instruction of the user operating in the browser, and obtain the second playback content corresponding to the target application in the second device; based on the second playback content, in the The first playing content is played in the browser, and the first playing content is the same as the second playing content.
- in this way, the user can adapt the target application on another device to the currently used device without re-downloading a new application, which is conducive to improving the user experience; and the screen projection of the content displayed by the target application can be completed without the participation of third-party applications, which is conducive to improving the security of the device.
- in one possible example, with respect to playing the first playback content in the browser, the above program further includes instructions for performing the following steps: decoding a data stream to obtain the first playback content, where the data stream is the second playback content corresponding to the target application in the second device; and playing the first playback content in the browser.
- in one possible example, with respect to decoding the data stream to obtain the first playback content, the above program further includes instructions for performing the following steps: obtaining a parameter set for decoding the data stream; starting a timed task, the timed task corresponding to a time interval; and, within the time interval, decoding the data stream at a preset rate based on the parameter set to obtain the first playback content.
- in one possible example, before the data stream is decoded to obtain the first playback content, the above program further includes instructions for performing the following steps: caching the data stream; determining the current amount of cached data; if the current amount of cached data is greater than or equal to a preset threshold, performing the step of starting a timed task; and, if the current amount of cached data is less than the preset threshold, suspending the decoding of the data stream at the preset rate, and performing the step of starting a timed task once the current amount of cached data is greater than or equal to the preset threshold.
- in one possible example, the above program further includes instructions for performing the following steps: before the file metadata is obtained, performing the step of caching the data stream; and, after the file metadata is obtained, performing the step of decoding the data stream to obtain the first playback content.
- in one possible example, the first playback content includes video data and audio data, and the above program further includes instructions for performing the following step: playing the audio data and the video data of the first playback content synchronously in the browser.
- in one possible example, with respect to synchronously playing the audio data and the video data of the first playback content in the browser, the above program further includes instructions for performing the following steps: receiving a playback operation instruction entered by the user in the interface corresponding to the browser, and determining the timestamp corresponding to the audio data; and, using the timestamp as a reference, adjusting the rendering time of the video data so that the step of rendering the video data and the step of playing the audio data are performed synchronously in the browser.
- the above describes the solutions of the embodiments of the present application mainly from the perspective of the method-side execution process. It can be understood that, in order to implement the above functions, the electronic device includes corresponding hardware structures and/or software modules for executing each function.
- those skilled in the art should readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments provided herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of this application.
- in the embodiments of the present application, the electronic device may be divided into functional units according to the foregoing method examples; for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit.
- the above-mentioned integrated unit may be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of the present application is illustrative and is only a logical function division; other division methods may be used in actual implementation.
- in the case where each functional module is divided corresponding to each function, FIG. 6 shows a schematic diagram of a browser-based application screen projection device. As shown in FIG. 6, the browser-based application screen projection device 600 is applied to the first device and may include an acquiring unit 601 and a playing unit 602, wherein:
- the acquiring unit 601 may be used to support the electronic device in performing step 401 above, and/or other processes of the techniques described herein.
- the playing unit 602 may be used to support the electronic device in performing step 402 above, and/or other processes of the techniques described herein.
- it can be seen that the browser-based application screen projection device provided by this embodiment of the present application can receive an operation instruction entered by the user in the browser and obtain the second playback content corresponding to the target application in the second device; based on the second playback content, the first playback content is played in the browser, and the first playback content is the same as the second playback content. In this way, the user can adapt the target application on another device to the device currently in use without downloading a new application, which helps improve the user experience; and the screen projection of the content displayed by the target application can be completed without the participation of third-party applications, which helps improve device security.
- in one possible example, with respect to playing the first playback content in the browser, the playing unit 602 is specifically configured to: decode a data stream to obtain the first playback content, where the data stream is the second playback content corresponding to the target application in the second device; and play the first playback content in the browser.
- in one possible example, with respect to decoding the data stream to obtain the first playback content, the playing unit 602 is specifically configured to: obtain a parameter set for decoding the data stream; start a timed task, the timed task corresponding to a time interval; and, within the time interval, decode the data stream at a preset rate based on the parameter set to obtain the first playback content.
- the electronic device provided in this embodiment is configured to perform the above browser-based application screen projection method, and therefore can achieve the same effects as the implementations described above.
- in the case where integrated units are used, the electronic device may include a processing module, a storage module, and a communication module.
- the processing module may be used to control and manage the actions of the electronic device, for example, to support the electronic device in performing the steps performed by the acquiring unit 601 and the playing unit 602 above.
- the storage module may be used to support the electronic device in storing program code, data, and the like.
- the communication module may be used to support communication between the electronic device and other devices.
- the processing module may be a processor or a controller. It may implement or execute the various exemplary logical blocks, modules and circuits described in connection with this disclosure.
- the processor may also be a combination that implements computing functions, for example, a combination of one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor, and the like.
- the storage module may be a memory.
- the communication module may specifically be a device that interacts with other electronic devices, such as a radio frequency circuit, a Bluetooth chip, and a Wi-Fi chip.
- in one embodiment, when the processing module is a processor and the storage module is a memory, the electronic device involved in this embodiment may be a device having the structure shown in FIG. 1.
- Embodiments of the present application further provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute part or all of the steps of any method described in the above method embodiments; the computer includes an electronic device.
- Embodiments of the present application further provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute part or all of the steps of any method described in the above method embodiments. The computer program product may be a software installation package, and the computer includes an electronic device.
- in the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the device embodiments described above are only illustrative: the division of the above-mentioned units is only a logical function division, and other division methods may be used in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces or through indirect coupling or communication connection between devices or units, and may be in electrical or other forms.
- the units described above as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
- the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
- if the above-mentioned integrated units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable memory.
- based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
- the aforementioned memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
Abstract
A browser-based application screen projection method and a related apparatus, applied to a first device. The method comprises: receiving an operation instruction entered by a user in a browser, and obtaining second playback content corresponding to a target application in a second device (S401); and, on the basis of the second playback content, playing first playback content in the browser, the first playback content being the same as the second playback content (S402). With this method, the user can adapt a target application on another device to the device currently in use without downloading a new application, which helps improve the user experience.
Description
本申请涉及电子技术领域,具体涉及一种基于浏览器的应用投屏方法及相关装置。
随着电子设备(手机、平板电脑等)的大量普及应用,电子设备向着多样化、个性化的方向发展,电子设备能够支持的应用越来越多,功能越来越强大成为用户生活中不可缺少的电子用品。
手机与PC交互有很多种方式,从早期通过USB手机助手连接到后来各种Wifi环境下的互联APP,用户可以在手机和PC间共享图片、视频、文档等。除了共享外交互最直观的方式就是投屏,通过把手机屏幕的内容投影到PC上完成交互。目前,手机投屏有DLNA、Airplay、Miracast等多种方式,但需要在本地安装一个新的应用程序,存在一定的安全风险,安装步骤麻烦。
发明内容
本申请实施例提供了一种基于浏览器的应用投屏方法及相关装置。
第一方面,本申请实施例提供一种基于浏览器的应用投屏方法,应用于第一设备,所述方法包括:
接收用户在浏览器中进行操作的操作指令,获取第二设备中目标应用对应的第二播放内容;
基于所述第二播放内容,在所述浏览器中播放第一播放内容,所述第一播放内容与所述第二播放内容相同。
第二方面,本申请实施例提供一种基于浏览器的应用投屏装置,应用于第一设备,所述装置包括:获取单元和播放单元,其中,
所述获取单元,用于接收用户在浏览器中进行操作的操作指令,获取第二设备中目标应用对应的第二播放内容;
所述播放单元,用于基于所述第二播放内容,在所述浏览器中播放第一播放内容,所述第一播放内容与所述第二播放内容相同。
所述编排单元,用于响应于用户对所述控件编辑界面中适配控件进行操作的第一操作指令,编排所述适配控件,所述适配控件与被选择的原生控件对应,所述被选择的原生控件是第二设备运行目标应用时所显示的第一应用界面上的控件;
所述属性更改单元,用于响应于所述用户对所述适配控件进行操作的第二操作指令,对所述适配控件对应的第一属性信息进行更改;
所述生成单元,用于基于所述适配控件生成第二应用界面,所述第二应用界面与所述第一应用界面对应于所述目标应用的同一个功能界面。
第三方面,本申请实施例提供一种电子设备,包括处理器、存储器、通信接口以及一个或多个程序,其中,上述一个或多个程序被存储在上述存储器中,并且被配置由上述处理器执行,上述程序包括用于执行本申请实施例第一方面任一方法中的步骤的指令。
第四方面,本申请实施例提供了一种计算机可读存储介质,其中,上述计算机可读存 储介质存储用于电子数据交换的计算机程序,其中,上述计算机程序使得计算机执行如本申请实施例第一方面任一方法中所描述的部分或全部步骤。
第五方面,本申请实施例提供了一种计算机程序产品,其中,上述计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,上述计算机程序可操作来使计算机执行如本申请实施例第一方面任一方法中所描述的部分或全部步骤。该计算机程序产品可以为一个软件安装包。
可以看出,本申请实施例中,第一设备接收用户在浏览器中进行操作的操作指令,获取第二设备中目标应用对应的第二播放内容;基于第二播放内容,在浏览器中播放第一播放内容,第一播放内容与第二播放内容相同。如此,用户可不重新下载新的应用程序,把其他设备中的目标应用在当前使用的设备上进行适配,有利于提高用户体验;可在第三方应用不参与的情况下,完成对目标应用中显示内容的投屏,有利于提高设备安全性。
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的一种电子设备的结构示意图;
图2是本申请实施例提供的一种电子设备的软件结构示意图;
图3A是本申请实施例提供的一种基于浏览器的应用投屏方法的网络架构示意图;
图3B是本申请实施例提供的一种基于浏览器的应用投屏方法的层次模型架构图;
图4A是本申请实施例提供的一种基于浏览器的应用投屏方法的流程示意图;
图4B是本申请实施例提供的一种解码模块中解码过程的流程示意图;
图4C是本申请实施例提供的一种基于浏览器的应用投屏方法的流程示意图;
图5是本申请实施例提供的一种第一设备的功能单元组成框图;
图6是本申请实施例提供的一种基于浏览器的应用投屏装置的功能单元组成框图。
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别不同对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其他步骤或单元。
在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本申请的至少一个实施例中。在说明书中的各个位置出现该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。
1)电子设备可以是还包含其它功能诸如个人数字助理和/或音乐播放器功能的便携式电子设备,诸如手机、平板电脑、具备无线通讯功能的可穿戴电子设备(如智能手表)等。便携式电子设备的示例性实施例包括但不限于搭载IOS系统、Android系统、Microsoft系统或者其它操作系统的便携式电子设备。上述便携式电子设备也可以是其它便携式电子设备,诸如膝上型计算机(Laptop)等。还应当理解的是,在其他一些实施例中,上述电子设备也可以不是便携式电子设备,而是台式计算机。
2)将手机APP显示内容投影到PC、TV等其他设备的屏幕上的技术,APP可以手机当前显示的应用也可以后台运行的应用。
3)WebAssembly是一个可移植、体积小、加载快并且兼容Web的全新格式,它运行在一个沙箱化的执行环境中,能充分发挥硬件能力以达到原生执行效率。
4)FFmpeg是一套可以用来记录、转换数字音频、视频,并能将其转化为流的开源计算机程序。采用LGPL或GPL许可证。它提供了录制、转换以及流化音视频的完整解决方案。
5)WebGL是一种3D绘图协议,允许把JavaScript和OpenGL ES 2.0结合在一起,为HTML5 Canvas提供硬件3D加速渲染。
6)应用定义指在当前设备显示另一设备中的应用时,由于显示和布局都发生了变化,因此需要有一个应用定义来重新定义当前设备的展现形式。
第一部分,本申请所公开的技术方案的软硬件运行环境介绍如下。
示例性的,图1示出了电子设备100的结构示意图。电子设备100可以包括处理器110、外部存储器接口120、内部存储器121、通用串行总线(universal serial bus,USB)接口130、充电管理模块140、电源管理模块141、电池142、天线1、天线2、移动通信模块150、无线通信模块160、音频模块170、扬声器170A、受话器170B、麦克风170C、耳机接口170D、传感器模块180、指南针190、马达191、指示器192、摄像头193、显示屏194以及用户标识模块(subscriber identification module,SIM)卡接口195等。
可以理解的是,本申请实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的部件,也可以集成在一个或多个处理器中。在一些实施例中,电子设备100也可以包括一个或多个处理器110。其中,控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。在其他一些实施例中,处理器110中还可以设置存储器,用于存储指令和数据。示例性地,处理器110中的存储器可以为高速缓冲存储器。该存储器可以保存处理器110刚用 过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。这样就避免了重复存取,减少了处理器110的等待时间,因而提高了电子设备100处理数据或执行指令的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路间(inter-integrated circuit,I2C)接口、集成电路间音频(inter-integrated circuit sound,I2S)接口、脉冲编码调制(pulse code modulation,PCM)接口、通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口、移动产业处理器接口(mobile industry processor interface,MIPI)、用输入输出(general-purpose input/output,GPIO)接口、SIM卡接口和/或USB接口等。其中,USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口、Micro USB接口、USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以用于电子设备100与外围设备之间传输数据。该USB接口130也可以用于连接耳机,通过耳机播放音频。
可以理解的是,本申请实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110、内部存储器121、外部存储器、显示屏194、摄像头193和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量、电池循环次数、电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100的无线通信功能可以通过天线1、天线2、移动通信模块150、无线通信模块160、调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施 例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络)、蓝牙(blue tooth,BT),全球导航卫星系统(global navigation satellite system,GNSS)、调频(frequency modulation,FM)、近距离无线通信技术(near field communication,NFC)、红外技术(infrared,IR)、UWB等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为基于浏览器的应用投屏的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像、视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD)、有机发光二极管(organic light-emitting diode,OLED)、有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED)、柔性发光二极管(flex light-emitting diode,FLED)、迷你发光二极管(mini light-emitting diode,miniled)、MicroLed、Micro-oLed、量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或多个显示屏194。
电子设备100可以通过ISP、摄像头193、视频编解码器、GPU、显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点、亮度、肤色进行算法优化。ISP还可以对拍摄场景的曝光、色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或多个摄像头193。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1、MPEG2、MPEG3、MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别、人脸识别、语音识别、文本理解等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储一个或多个计算机程序,该一个或多个计算机程序包括指令。处理器110可以通过运行存储在内部存储器121的上述指令,从而使得电子设备100执行本申请一些实施例中所提供的显示页面元素的方法,以及各种应用以及数据处理等。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统;该存储程序区还可以存储一个或多个应用(比如图库、联系人等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如照片,联系人等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如一个或多个磁盘存储部件,闪存部件,通用闪存存储器(universal flash storage,UFS)等。在一些实施例中,处理器110可以通过运行存储在内部存储器121的指令,和/或存储在设置于处理器110中的存储器的指令,来使得电子设备100执行本申请实施例中所提供的显示页面元素的方法,以及其他应用及数据处理。电子设备100可以通过音频模块170、扬声器170A、受话器170B、麦克风170C、耳机接口170D、以及应用处理器等实现音频功能。例如音乐播放、录音等。
传感器模块180可以包括压力传感器180A、陀螺仪传感器180B、气压传感器180C、磁传感器180D、加速度传感器180E、距离传感器180F、接近光传感器180G、指纹传感器180H、温度传感器180J、触摸传感器180K、环境光传感器180L、骨传导传感器180M等。
其中,压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器180A,电极之间的电容改变。电子设备100根据电容的变化确定压力的强度。当有触摸操作作用于显示屏194,电子设备100根据压力传感器180A检测所述触摸操作强度。电子设备100也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
陀螺仪传感器180B可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备100围绕三个轴(即X、Y和Z轴)的角速度。陀螺 仪传感器180B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器180B检测电子设备100抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备100的抖动,实现防抖。陀螺仪传感器180B还可以用于导航,体感游戏场景。
加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备100是否在口袋里,以防误触。
指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,电子设备100利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,电子设备100执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备100对电池142加热,以避免低温导致电子设备100异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备100对电池142的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
示例性的,图2示出了电子设备100的软件结构框图。分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。应用程序层可以包括一系列应用程序包。
如图2所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图2所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于 构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供电子设备100的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(media libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
第二部分,本申请实施例所公开的示例应用场景介绍如下。
图3A示出了本申请所适用的基于浏览器的应用投屏方法的网络架构示意图,如图3A所示,该架构示意图中包括多个服务器,其中可包括:云端服务器200a,后台服务器200b,以及多个电子设备,该电子设备可以为智能手机、平板电脑、台式电脑、具备无线通讯功能的可穿戴电子设备等等,具体的在此不作限定。
其中,每一电子设备可与上述云端服务器进行信息交互,该后台服务器200b可与云端服务器200a建立连接;每一电子设备之间可进行通信。
举例来说,电子设备100b(第二设备)可将当前运行的目标应用中的第一播放内容投屏到电子设备100a(第一设备)的浏览器中,具体实现中,第一设备接收用户在浏览器中进行操作的操作指令,获取第二设备中目标应用对应的数据流,数据流为第二设备播放目标应用对应的第二播放内容;基于数据流,在浏览器中播放第一播放内容,第一播放内容 与第二播放内容对应目标应用中同一功能。如此,用户可不重新下载新的应用程序,把其他设备中的目标应用在当前使用的设备上进行适配,有利于提高用户体验;可在第三方应用不参与的情况下,完成对目标应用中显示内容的投屏,有利于提高设备安全性。
示例性的,图3B示出了一种本申请所适用的基于浏览器的应用投屏方法的层次模型架构图,如图所示,该架构图中可包括:主模块、解码模块和IO模块。
其中,主模块可分为控制交互层和渲染层,其中,控制交互层可用于与前端显示器进行UI交互以及一些事件的处理,该控制交互层可基于JS框架搭建。
其中,主模块中的渲染层用于渲染视频数据到前端显示器中;该渲染层可基于WebGL Player和PCM Player两个播放器来搭建。
其中,解码模块可包括自定义解码器,其中,解码算法适用于H264标准、H265标准以及AAC-LC标准等等。上述自定义解码器可为FFmpeg解码器,可在解码的过程中,对数据流的缓存进行控制,以使得FFmpeg解码器中始终有数据进行解码,以保证视频数据和音频数据的流畅运行。
其中,IO模块可包括传输层,该传输层可基于websocket内核封装一些直播协议,例如,应用投屏自定义直播协议、FLV/RTMP直播协议和ARTC/ARTP直播协议等等。
可见,在本申请实施例中,第一设备可结合WebWorker(Html5协议提供的一种Javascript单线程执行基础上开启的一个子线程)、WebAudio Api(一个专门用于音频处理的接口)、WebGL(Web Graphics Library,3D绘图协议)等浏览器技术实现对于数据流的下载和显示,并可利用WebAssembly和FFmpeg完成对于数据流的解码,其中WebAssembly是第一设备中浏览器的运行环境,最后,可通过运行基于FFmpeg的自定义解码算法完成对于数据流中音频数据和视频数据的解码,如此,用户无需安装新的应用程序以用于投屏第二设备中的第二播放内容,可直接在第一设备(当前使用的设备)的浏览器中显示上述第二播放内容,以实现应用级投屏。
第三部分,本申请实施例所公开的权要保护范围介绍如下。
请参阅图4A,图4A是本申请实施例提供了一种基于浏览器的应用投屏方法的流程示意图,应用于第一设备,如图所示,本基于浏览器的应用投屏方法包括以下操作。
S401、接收用户在浏览器中进行操作的操作指令,获取第二设备中目标应用对应的第二播放内容。
其中,上述第一设备和/或第二设备可为如图1和图2所示的电子设备,该第一设备中可包括浏览器,第一设备可指用户正在操作使用的当前设备,上述第二设备可指远端设备,可指用户通过当前设备需要访问远端设备中的应用所在的设备。
其中,上述第一设备可与第二设备之间建立通信连接,可通过第一设备中的浏览器对第二设备当前运行的应用,即目标应用的第二播放内容在第一设备的浏览器中进行播放或者显示。
其中,上述目标应用可包括以下至少一种:社交类应用、新闻类应用、购物类应用、娱乐类应用、金融类应用、生活类应用、工具类应用等等,在此不做限定;该目标应用可为第二设备中当前使用的应用,可以为当前显示的应用或者后台运行的应用。
可选地,第一设备可在浏览器中生成二维码,用户可通过扫描二维码的方式建立第一设备与第二设备之间的通信连接,如此,浏览器可获取目标应用对应的地址信息,并根据该地址信息建立投屏通道,有利于后续在浏览器中实现对目标应用中第二播放内容的投屏。
S402、基于所述第二播放内容,在所述浏览器中播放第一播放内容,所述第一播放内 容与所述第二播放内容相同。
其中,上述第一播放内容与上述第二播放内容均对应于目标应用,可基于该数据流,实现第一设备对第二设备中的目标应用的当前显示内容,即第一播放内容的跨端投屏。
在一种可能的示例中,上述步骤S402,在所述浏览器中播放第一播放内容,可包括如下步骤:对数据流进行解码操作以得到所述第一播放内容,所述数据流为所述第二设备中目标应用对应的第二播放内容;在所述浏览器中播放所述第一播放内容。
其中,上述数据流可包括目标应用中正在播放或者显示的音频数据、视频数据、游戏数据等等,在此不作限定。
其中,上述数据流可指将目标应用对应的第二播放内容进行加密、压缩等处理以后得到的数据。
具体实现中,用户可通过在浏览器界面中点击或者拖拽等操作触发第一设备跨端使用第二设备中目标应用的指令,并建立第一设备与第二设备之间的通信连接,进而,第一设备可下载获取第二设备中目标应用对应的数据流,该数据流对应于第二设备中当前播放或者显示的第二播放显示内容。
其中，第一设备可通过独立的IO模块中的WebWorker通过WebSocket从服务端接收数据流，为防止投屏过程中或者下载操作中占用过多的CPU和带宽，在获取到文件码率之后，以码率一定倍数的速率下载上述数据流对应的文件信息。
同时,在建立通信连接以后,第一设备可从云端服务器下载适配于当前设备环境的程序,并装载适配于第一设备,有利于后续顺利实现将目标应用的第二播放内容投屏到浏览器中;进而,可获取第二设备中该目标应用对应的数据流,再进一步地,可对该数据流进行处理,得到播放内容,如此,不需要用户重新下载或者安装一个新的应用程序,可通过浏览器直接对该数据流进行处理,以实现应用级的投屏。
可选地,在接收用户在浏览器中进行操作的操作指令以后,可初始化该浏览器的运行环境,即初始化WebAssembly以及FFmpeg解码器;并构建解码内存环,由于浏览器的内存是有限的,通过解码内存环可重复使用该浏览器的内存,避免数据流占用浏览器内存过大,有利于提高对数据流的解码效率。
其中,可通过FFmpeg解码器对上述数据流进行解码操作,其中,上述FFmpeg是一套可以用来记录、转换数字音频、视频,并能将其转化为流的开源计算机程序。
在一种可能的示例中,所述对所述数据流进行解码操作以得到所述第一播放内容,可包括如下步骤:获取针对所述数据流进行解码的参数集;启动一个定时任务,所述定时任务对应一个时间区间;在所述时间区间内,基于所述参数集,以预设速率对所述数据流进行解码操作得到所述第一播放内容。
其中,上述参数集可包括音频参数和视频参数;其中,音频参数可包括以下至少一种:通道数、采样率、采样大小、数据格式等等,在此不作限定;视频参数可包括以下至少一种:分辨率、视频时长、颜色空间等等,在此不作限定。
其中,可在上述对运行环境进行初始化的过程中获取上述参数集。
其中,可通过上述FFmpeg解码器中的预设算法,基于上述参数集对数据流进行解码操作,该预设算法可由用户自行设置或者系统默认,在此不作限定。该预设算法可依据于数据流的编码方式而设定。
其中,上述定时任务对应的时间区间可由用户自行设置或者系统默认,在此不作限定。
其中,上述预设速率可由用户自行设置或者系统默认,在此不作限定;例如,可依据于用户播放上述数据流中的视频数据的速率来设定该预设速率。
可选地,在所述对所述数据流进行解码操作以得到所述第一播放内容之前,上述方法 还可包括如下步骤:对所述数据流进行缓存处理;确定当前缓存数据量;若所述当前缓存数据量大于或等于预设阈值,则执行所述启动一个定时任务的步骤;若所述当前缓存数据量小于所述预设阈值,则暂停所述以预设速率对所述数据流进行解码操作得到所述第一播放内容,在所述当前缓存数据量大于或等于所述预设阈值时,执行所述启动一个定时任务的步骤。
其中,在实际应用中,上述对数据流的获取是实时进行的,那么在数据流的不断获取中,而浏览器的内存是有限的,因此,可以对该数据流边缓存边解码的过程,以保证后续对数据流的渲染过程的顺利进行,并在浏览器中播放第一播放内容。
可选地,可在对数据流进行缓存处理以后,通过上述解码内存环,在对上述数据流进行解码的过程中实现对数据流的数据量的控制。
举例来说,如图4B所示,为解码模块中解码过程的流程示意图,可在对上述数据流解码之前,初始化FFmpeg解码器,并通过解码内存环将数据流缓存到解码器中的缓存数据进行处理,以保证浏览器中可利用内存的大小在合理范围内,保证后续FFmpeg解码器在对缓存数据进行解码时,始终能够读到数据;如此,可通过控制缓存数据的大小,以使得解码内存环缓冲上述缓存数据,以避免浏览器中的内存占用过大。
其中,上述当前缓存数据量可指用于渲染数据流中视频数据的数据量,也就是用来解码的数据量。
其中,上述预设阈值可由用户自行设置或者系统默认,在此不作限定。
其中,由于在通过FFmpeg解码器对数据流进行解码操作时,是对视频数据进行一帧一帧的解码的,若上述预设算法是对64K的数据帧进行解码,那么,若当前只下载了32K,不足以解码得到渲染数据,那么,会等待缓存的数据量达到解码数据量以后,才启动上述FFmpeg对数据流进行解码;当数据量不足时,可暂停解码操作。
具体实现中,当上述当前缓存数据量大于或等于预设阈值时,表明当前缓存的数据量足以用来解码,可启动或者调用上述FFmpeg解码器,并启动一个定时任务对该数据流进行解码操作,即在该定时任务对应的时间区域内,基于上述参数集,以预设数量对该数据流进行解码操作。
进一步地,若上述当前缓存数据量小于预设阈值,则表明该数据量不足以用来解码,则可进入缓冲区状态,此时,如图4B中的IO模块仍然在缓存后续的数据流,则可在当前缓存数据量大于或等于上述预设阈值时,基于解码内存环,实现对该数据流的解码。
可见,在本申请实施例中,可在上述解码内存环读满缓存以后,并等待对当前缓存数据量足以实现页面渲染完成以后,再去对解码内存环中的当前缓存数据量对应的数据进行解码操作,如此,可控制缓存在浏览器内存中的大小在合理范围内,以保证FFmpeg解码器在做解码时,始终能够读到数据,以保证第二播放内容的准确播放。
可选地,上述方法还可包括如下步骤:在未获取到文件元信息之前,则执行所述对所述数据流进行缓存处理的步骤;在获取到所述文件元信息之后,则执行所述对所述数据流进行解码操作以得到所述第一播放内容的步骤。
其中,上述文件元信息可包括以下至少一种:数据流的帧格式、编解码的约定信息、音频采样的频率等等,在此不作限定;由于获取上述数据流是在同一线程中实现的,因此,不同数据的获取时间可能会有差异性;上述文件元信息可用于帮助FFmpeg解码器实现对数据流中的音频数据和视频数据的解码。
其中,在未获取到文件元信息之前,不足以解码上述数据流中的音视频数据,可保持对数据流进行缓存的步骤,
可见,在本申请实施例中,通过控制在获取到文件元信息之前和解码之前两个数据的 缓存量,可保证在任何时间,在上述FFmpeg解码器需要读取数据以用于解码时都能够返回数据到该解码器中,并在数据量不足以解码时,可停止解码,以保证整个解码过程的流畅进行,如此,用户在通过浏览器观看数据流中的音视频数据时,不会出现卡顿的情况。
可选地,所述第一播放内容包括:视频数据和音频数据;上述方法还可包括如下步骤:在所述浏览器中同步播放所述第一播放内容的音频数据和视频数据。
其中,在对数据流进行解码操作以后,可将音频数据和视频数据同步以后,在上述浏览器中正确显示,以完成对第一播放内容的显示。
在一种可能的示例中,在所述浏览器中同步播放所述第一播放内容的音频数据和视频数据,可包括如下步骤:接收所述用户在所述浏览器对应的界面中进行操作的播放操作指令,确定所述音频数据对应的时间戳;以所述时间戳为基准,对所述视频数据的渲染时间进行调整,以使所述视频数据的渲染步骤和所述音频数据的播放步骤在所述浏览器中同步进行。
具体实现中,上述音频数据可通过Web Audio的Api(接口)获得当前播放的音频的时间戳,以该时间戳为时间基准来同步视频帧,如果当前视频帧的对应的渲染时间已经落后则立刻渲染,如果比较早,则需要延迟,如此,可保证音频数据和视频数据的同步进行,有利于提高用户体验,以完成对第二设备中第二播放内容的显示。
如图4C所示,为一种基于浏览器的应用投屏方法的流程示意图,该图中可包括:主模块、解码模块和IO模块。
其中,可通过主模块初始化WebAssembly和FFmpeg解码器,在本申请实施例中,为不影响界面交互,数据流的获取或者下载和数据流的解码都需要在单独的线程中完成,可通过浏览器的WebWorker来实现。
其中,可通过主模块使用Canvas来绘图,以实现显示第一播放内容,具体可将FFmpeg解码器对数据流解码得到的视频数据进行颜色空间转换,以得到第一播放内容。
其中,可在解码模块和IO模块中使用上述方法步骤中对数据流的缓存的控制,以保证FFmpeg解码器在任何时候都可以读取到数据,并都能够返回数据;在当前缓存数据小于预设阈值时,可停止解码,进入缓存区状态,等待当前缓存数据大于或等于预设阈值时,继续解码。
其中,若当前缓存数据量足够启动或者调用FFmpeg解码器,则启动一个定时任务来执行解码任务,以一定的速率解码,并在渲染缓存满后暂停解码。
可见,在本申请实施例中,第一设备接收用户在浏览器中进行操作的操作指令,获取第二设备中目标应用对应的第二播放内容;基于第二播放内容,在浏览器中播放第一播放内容,第一播放内容与第二播放内容相同。如此,用户可不重新下载新的应用程序,把其他设备中的目标应用在当前使用的设备上进行适配,有利于提高用户体验;可在第三方应用不参与的情况下,完成对目标应用中显示内容的投屏,有利于提高设备安全性。
与上述图4A所示的实施例一致地,请参阅图5,图5是本申请实施例提供的一种第一设备的结构示意图,如图所示,该第一设备包括处理器、存储器、通信接口以及一个或多个程序,其中,上述一个或多个程序被存储在上述存储器中,并且被配置由上述处理器执行,上述程序包括用于执行以下步骤的指令:
接收用户在浏览器中进行操作的操作指令,获取第二设备中目标应用对应的第二播放内容;
基于所述第二播放内容,在所述浏览器中播放第一播放内容,所述第一播放内容与所述第二播放内容相同。
可以看出,本申请实施例中所描述的第一设备,可接收用户在浏览器中进行操作的操作指令,获取第二设备中目标应用对应的第二播放内容;基于第二播放内容,在浏览器中播放第一播放内容,第一播放内容与第二播放内容相同。如此,用户可不重新下载新的应用程序,把其他设备中的目标应用在当前使用的设备上进行适配,有利于提高用户体验;可在第三方应用不参与的情况下,完成对目标应用中显示内容的投屏,有利于提高设备安全性。
在一个可能的示例中,在所述浏览器中播放第一播放内容方面,上述程序还包括用于执行以下步骤的指令:
对数据流进行解码操作以得到所述第一播放内容,所述数据流为所述第二设备中目标应用对应的第二播放内容;
在所述浏览器中播放所述第一播放内容。
在一个可能的示例中,在所述对所述数据流进行解码操作以得到所述第一播放内容方面,上述程序还包括用于执行以下步骤的指令:
获取针对所述数据流进行解码的参数集;
启动一个定时任务,所述定时任务对应一个时间区间;
在所述时间区间内,基于所述参数集,以预设速率对所述数据流进行解码操作得到所述第一播放内容。
在一个可能的示例中,在所述对所述数据流进行解码操作以得到所述第一播放内容之前,上述程序还包括用于执行以下步骤的指令:
对所述数据流进行缓存处理;
确定当前缓存数据量;
若所述当前缓存数据量大于或等于预设阈值,则执行所述启动一个定时任务的步骤;
若所述当前缓存数据量小于所述预设阈值,则暂停所述以预设速率对所述数据流进行解码操作得到所述第一播放内容,在所述当前缓存数据量大于或等于所述预设阈值时,执行所述启动一个定时任务的步骤。
在一个可能的示例中,上述程序还包括用于执行以下步骤的指令:
在未获取到文件元信息之前,则执行所述对所述数据流进行缓存处理的步骤;
在获取到所述文件元信息之后,则执行所述对所述数据流进行解码操作以得到所述第一播放内容的步骤。
在一个可能的示例中,所述第一播放内容包括:视频数据和音频数据;上述程序还包括用于执行以下步骤的指令:
在所述浏览器中同步播放所述第一播放内容的音频数据和视频数据。
在一个可能的示例中,在所述浏览器中同步播放所述第一播放内容的音频数据和视频数据方面,上述程序还包括用于执行以下步骤的指令:
接收所述用户在所述浏览器对应的界面中进行操作的播放操作指令,确定所述音频数据对应的时间戳;
以所述时间戳为基准,对所述视频数据的渲染时间进行调整,以使所述视频数据的渲染步骤和所述音频数据的播放步骤在所述浏览器中同步进行。
上述主要从方法侧执行过程的角度对本申请实施例的方案进行了介绍。可以理解的是,电子设备为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所提供的实施例描述的各示例的单元及算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技 术人员可以对每个特定的应用使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例可以根据上述方法示例对电子设备进行功能单元的划分,例如,可以对应各个功能划分各个功能单元,也可以将两个或两个以上的功能集成在一个处理单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。需要说明的是,本申请实施例中对单元的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用对应各个功能划分各个功能模块的情况下,图6示出了基于浏览器的应用投屏装置的示意图,如图6所示,该基于浏览器的应用投屏装置600应用于第一设备,该基于浏览器的应用投屏装置600可以包括:获取单元601和播放单元602,其中,
其中,获取单元601可以用于支持电子设备执行上述步骤401,和/或用于本文所描述的技术的其他过程。
播放单元602可以用于支持电子设备执行上述步骤402,和/或用于本文所描述的技术的其他过程。
可见,在本申请实施例提供的基于浏览器的应用投屏装置,可接收用户在浏览器中进行操作的操作指令,获取第二设备中目标应用对应的第二播放内容;基于第二播放内容,在浏览器中播放第一播放内容,第一播放内容与第二播放内容相同。如此,用户可不重新下载新的应用程序,把其他设备中的目标应用在当前使用的设备上进行适配,有利于提高用户体验;可在第三方应用不参与的情况下,完成对目标应用中显示内容的投屏,有利于提高设备安全性。
在一种可能的示例中,所述在所述浏览器中播放第一播放内容方面,上述播放单元602具体用于:
对数据流进行解码操作以得到所述第一播放内容,所述数据流为所述第二设备中目标应用对应的第二播放内容;
在所述浏览器中播放所述第一播放内容。
在一种可能的示例中,在所述对所述数据流进行解码操作以得到所述第一播放内容方面,上述播放单元602具体用于:
获取针对所述数据流进行解码的参数集;
启动一个定时任务,所述定时任务对应一个时间区间;
在所述时间区间内,基于所述参数集,以预设速率对所述数据流进行解码操作得到所述第一播放内容。
需要说明的是,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
本实施例提供的电子设备,用于执行上述基于浏览器的应用投屏方法,因此可以达到与上述实现方法相同的效果。
在采用集成的单元的情况下,电子设备可以包括处理模块、存储模块和通信模块。其中,处理模块可以用于对电子设备的动作进行控制管理,例如,可以用于支持电子设备执行上述获取单元601和播放单元602执行的步骤。存储模块可以用于支持电子设备执行存储程序代码和数据等。通信模块,可以用于支持电子设备与其他设备的通信。
其中,处理模块可以是处理器或控制器。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。处理器也可以是实现计算功能的组合,例如包 含一个或多个微处理器组合,数字信号处理(digital signal processing,DSP)和微处理器的组合等等。存储模块可以是存储器。通信模块具体可以为射频电路、蓝牙芯片、Wi-Fi芯片等与其他电子设备交互的设备。
在一个实施例中,当处理模块为处理器,存储模块为存储器时,本实施例所涉及的电子设备可以为具有图1所示结构的设备。
本申请实施例还提供一种计算机存储介质,其中,该计算机存储介质存储用于电子数据交换的计算机程序,该计算机程序使得计算机执行如上述方法实施例中记载的任一方法的部分或全部步骤,上述计算机包括电子设备。
本申请实施例还提供一种计算机程序产品,上述计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,上述计算机程序可操作来使计算机执行如上述方法实施例中记载的任一方法的部分或全部步骤。该计算机程序产品可以为一个软件安装包,上述计算机包括电子设备。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置,可通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如上述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性或其它的形式。
上述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
上述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储器中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储器中,包括若干指令用以使得一台计算机设备(可为个人计算机、服务器或者网络设备等)执行本申请各个实施例上述方法的全部或部分步骤。而前述的存储器包括:U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序可以存储于一计算机可读存储器中,存储器可以包括:闪存盘、只读存储器(英文:Read-Only Memory,简称:ROM)、随机存取器(英文:Random Access Memory,简称:RAM)、磁盘或光盘等。
以上对本申请实施例进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的一般技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。
Claims (20)
- 一种基于浏览器的应用投屏方法,应用于第一设备,所述方法包括:接收用户在浏览器中进行操作的操作指令,获取第二设备中目标应用对应的第二播放内容;基于所述第二播放内容,在所述浏览器中播放第一播放内容,所述第一播放内容与所述第二播放内容相同。
- 根据权利要求1所述的方法,其中,所述在所述浏览器中播放第一播放内容,包括:对数据流进行解码操作以得到所述第一播放内容,所述数据流为所述第二设备中目标应用对应的第二播放内容;在所述浏览器中播放所述第一播放内容。
- 根据权利要求2所述的方法,其中,所述对所述数据流进行解码操作以得到所述第一播放内容,包括:获取针对所述数据流进行解码的参数集;启动一个定时任务,所述定时任务对应一个时间区间;在所述时间区间内,基于所述参数集,以预设速率对所述数据流进行解码操作得到所述第一播放内容。
- 根据权利要求3所述的方法,其中,在所述对所述数据流进行解码操作以得到所述第一播放内容之前,所述方法还包括:对所述数据流进行缓存处理;确定当前缓存数据量;若所述当前缓存数据量大于或等于预设阈值,则执行所述启动一个定时任务的步骤;若所述当前缓存数据量小于所述预设阈值,则暂停所述以预设速率对所述数据流进行解码操作得到所述第一播放内容,在所述当前缓存数据量大于或等于所述预设阈值时,执行所述启动一个定时任务的步骤。
- 根据权利要求2所述的方法,其中,所述方法还包括:在未获取到文件元信息之前,则执行所述对所述数据流进行缓存处理的步骤;在获取到所述文件元信息之后,则执行所述对所述数据流进行解码操作以得到所述第一播放内容的步骤。
- 根据权利要求2-5任一项所述的方法,其中,所述方法还包括:在所述浏览器中同步播放所述第一播放内容的音频数据和视频数据。
- 根据权利要求6所述的方法,其中,所述在所述浏览器中同步播放所述第一播放内容的音频数据和视频数据,包括:接收所述用户在所述浏览器对应的界面中进行操作的播放操作指令,确定所述音频数据对应的时间戳;以所述时间戳为基准,对所述视频数据的渲染时间进行调整,以使所述视频数据的渲染步骤和所述音频数据的播放步骤在所述浏览器中同步进行。
- 一种基于浏览器的应用投屏装置,应用于第一设备,所述装置包括:获取单元和播放单元,其中,所述获取单元,用于接收用户在浏览器中进行操作的操作指令,获取第二设备中目标应用对应的第二播放内容;所述播放单元,用于基于所述第二播放内容,在所述浏览器中播放第一播放内容,所述第一播放内容与所述第二播放内容相同。
- 根据权利要求8所述的装置,其中,所述在所述浏览器中播放第一播放内容方面, 所述播放单元具体用于:对数据流进行解码操作以得到所述第一播放内容,所述数据流为所述第二设备中目标应用对应的第二播放内容;在所述浏览器中播放所述第一播放内容。
- 根据权利要求9所述的装置,其中,在所述对所述数据流进行解码操作以得到所述第一播放内容方面,所述播放单元具体用于:获取针对所述数据流进行解码的参数集;启动一个定时任务,所述定时任务对应一个时间区间;在所述时间区间内,基于所述参数集,以预设速率对所述数据流进行解码操作得到所述第一播放内容。
- 根据权利要求9所述的装置,其中,所述装置还用于:在未获取到文件元信息之前,则执行所述对所述数据流进行缓存处理的步骤;在获取到所述文件元信息之后,则执行所述对所述数据流进行解码操作以得到所述第一播放内容的步骤。
- 根据权利要求9-11任一项所述的装置,其中,所述装置还用于:在所述浏览器中同步播放所述第一播放内容的音频数据和视频数据。
- 一种电子设备,包括处理器、存储器、通信接口,以及一个或多个程序,所述一个或多个程序被存储在所述存储器中,并且被配置由所述处理器执行,所述程序包括用于执行以下步骤的指令:接收用户在浏览器中进行操作的操作指令,获取第二设备中目标应用对应的第二播放内容;基于所述第二播放内容,在所述浏览器中播放第一播放内容,所述第一播放内容与所述第二播放内容相同。
- 根据权利要求13所述的电子设备,其中,在所述浏览器中播放第一播放内容方面,所述程序还包括用于执行以下步骤的指令:对数据流进行解码操作以得到所述第一播放内容,所述数据流为所述第二设备中目标应用对应的第二播放内容;在所述浏览器中播放所述第一播放内容。
- 根据权利要求14所述的电子设备,其中,在所述对所述数据流进行解码操作以得到所述第一播放内容方面,上述程序还包括用于执行以下步骤的指令:获取针对所述数据流进行解码的参数集;启动一个定时任务,所述定时任务对应一个时间区间;在所述时间区间内,基于所述参数集,以预设速率对所述数据流进行解码操作得到所述第一播放内容。
- 根据权利要求15所述的电子设备,其中,在所述对所述数据流进行解码操作以得到所述第一播放内容方面,上述程序还包括用于执行以下步骤的指令:在所述对所述数据流进行解码操作以得到所述第一播放内容之前,上述程序还包括用于执行以下步骤的指令:对所述数据流进行缓存处理;确定当前缓存数据量;若所述当前缓存数据量大于或等于预设阈值,则执行所述启动一个定时任务的步骤;若所述当前缓存数据量小于所述预设阈值,则暂停所述以预设速率对所述数据流进行解码操作得到所述第一播放内容,在所述当前缓存数据量大于或等于所述预设阈值时,执 行所述启动一个定时任务的步骤。
- 根据权利要求14所述的电子设备,其中,上述程序还包括用于执行以下步骤的指令:在未获取到文件元信息之前,则执行所述对所述数据流进行缓存处理的步骤;在获取到所述文件元信息之后,则执行所述对所述数据流进行解码操作以得到所述第一播放内容的步骤。
- 根据权利要求14-17任一项所述的电子设备,其中,所述第一播放内容包括:视频数据和音频数据;上述程序还包括用于执行以下步骤的指令:在所述浏览器中同步播放所述第一播放内容的音频数据和视频数据。
- 根据权利要求18所述的电子设备,其中,在所述浏览器中同步播放所述第一播放内容的音频数据和视频数据方面,上述程序还包括用于执行以下步骤的指令:接收所述用户在所述浏览器对应的界面中进行操作的播放操作指令,确定所述音频数据对应的时间戳;以所述时间戳为基准,对所述视频数据的渲染时间进行调整,以使所述视频数据的渲染步骤和所述音频数据的播放步骤在所述浏览器中同步进行。
- 一种计算机可读存储介质,存储用于电子数据交换的计算机程序,其中,所述计算机程序使得计算机执行如权利要求1-7任一项所述的方法。
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011306827.7 | 2020-11-19 | ||
CN202011306827.7A CN112328941A (zh) | 2020-11-19 | 2020-11-19 | 基于浏览器的应用投屏方法及相关装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022105445A1 true WO2022105445A1 (zh) | 2022-05-27 |
Family
ID=74321714
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/121331 WO2022105445A1 (zh) | 2020-11-19 | 2021-09-28 | 基于浏览器的应用投屏方法及相关装置 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112328941A (zh) |
WO (1) | WO2022105445A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116170622A (zh) * | 2023-02-21 | 2023-05-26 | 阿波罗智联(北京)科技有限公司 | 音视频播放方法、装置、设备及介质 |
CN116668773A (zh) * | 2022-11-22 | 2023-08-29 | 荣耀终端有限公司 | 增强视频画质的方法与电子设备 |
CN116679900A (zh) * | 2022-12-23 | 2023-09-01 | 荣耀终端有限公司 | 一种音频业务处理方法、固件去加载方法及相关装置 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112328941A (zh) * | 2020-11-19 | 2021-02-05 | Oppo广东移动通信有限公司 | 基于浏览器的应用投屏方法及相关装置 |
CN112905289A (zh) * | 2021-03-10 | 2021-06-04 | Oppo广东移动通信有限公司 | 应用画面的显示方法、装置、终端、投屏系统及介质 |
CN117193697A (zh) * | 2022-05-30 | 2023-12-08 | 华为技术有限公司 | 音频播放方法、装置及电子设备 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102723086B (zh) * | 2011-05-05 | 2017-04-12 | 新奥特(北京)视频技术有限公司 | 一种智能图文动画更新播放的方法 |
CN104753989B (zh) * | 2013-12-27 | 2018-09-14 | 阿里巴巴集团控股有限公司 | 基于Web-based OS运行环境的屏幕影像传输播放方法及装置 |
CN106603667B (zh) * | 2016-12-16 | 2020-09-29 | 北京小米移动软件有限公司 | 屏幕信息共享方法及装置 |
CN108628681A (zh) * | 2018-04-13 | 2018-10-09 | 电信科学技术第五研究所有限公司 | 多用户环境下流式数据处理方法 |
CN111405220B (zh) * | 2019-09-30 | 2022-07-05 | 杭州海康威视系统技术有限公司 | 视频预录方法及云存储系统 |
CN111124529A (zh) * | 2019-11-21 | 2020-05-08 | 杭州米络星科技(集团)有限公司 | 一种基于浏览器ppapi插件技术的视频投屏方法 |
CN111083167A (zh) * | 2019-12-31 | 2020-04-28 | 深圳市思博慧数据科技有限公司 | 一种跨浏览器的h.265视频播放方法 |
- 2020-11-19: Chinese application CN202011306827.7A filed (published as CN112328941A); status: pending
- 2021-09-28: PCT application PCT/CN2021/121331 filed (published as WO2022105445A1); status: application filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106792055A (zh) * | 2016-12-28 | 2017-05-31 | 福建星网视易信息系统有限公司 | 实时投屏方法、设备及系统 |
CN109660842A (zh) * | 2018-11-14 | 2019-04-19 | 华为技术有限公司 | 一种播放多媒体数据的方法及电子设备 |
CN111432070A (zh) * | 2020-03-17 | 2020-07-17 | 北京百度网讯科技有限公司 | 应用投屏控制方法、装置、设备和介质 |
CN112328941A (zh) * | 2020-11-19 | 2021-02-05 | Oppo广东移动通信有限公司 | 基于浏览器的应用投屏方法及相关装置 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116668773A (zh) * | 2022-11-22 | 2023-08-29 | 荣耀终端有限公司 | 增强视频画质的方法与电子设备 |
CN116668773B (zh) * | 2022-11-22 | 2023-12-22 | 荣耀终端有限公司 | 增强视频画质的方法与电子设备 |
CN116679900A (zh) * | 2022-12-23 | 2023-09-01 | 荣耀终端有限公司 | 一种音频业务处理方法、固件去加载方法及相关装置 |
CN116679900B (zh) * | 2022-12-23 | 2024-04-09 | 荣耀终端有限公司 | 一种音频业务处理方法、固件去加载方法及相关装置 |
CN116170622A (zh) * | 2023-02-21 | 2023-05-26 | 阿波罗智联(北京)科技有限公司 | 音视频播放方法、装置、设备及介质 |
Also Published As
Publication number | Publication date |
---|---|
CN112328941A (zh) | 2021-02-05 |
Legal Events
Code | Description
---|---
121 | EP: The EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21893596; Country of ref document: EP; Kind code of ref document: A1)
NENP | Non-entry into the national phase (Ref country code: DE)
122 | EP: PCT application non-entry into the European phase (Ref document number: 21893596; Country of ref document: EP; Kind code of ref document: A1)