CN116033209A - Screen projection method and electronic equipment


Info

Publication number
CN116033209A
CN116033209A
Authority
CN
China
Prior art keywords
screen
output power
sensor
current
throwing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211047595.7A
Other languages
Chinese (zh)
Other versions
CN116033209B (en)
Inventor
肖应诚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202211047595.7A
Publication of CN116033209A
Application granted
Publication of CN116033209B
Status: Active

Landscapes

  • Transforming Electric Information Into Light Information (AREA)

Abstract

An embodiment of the application provides a screen projection method and an electronic device. The method is executed by a first electronic device and comprises the following steps: after a second electronic device casts its screen to the first electronic device, the first electronic device determines, according to acquired first sensor data, a first parameter corresponding to the current screen-casting scene, the first parameter being used to control the display effect of the screen-casting interface corresponding to the screen-casting data from the second electronic device; and after the screen-casting data is processed with the first parameter, the screen-casting interface corresponding to the screen-casting data is displayed. The method adapts the screen-casting effect to the current screen-casting scene and improves the screen-casting experience.

Description

Screen projection method and electronic equipment
Technical Field
The application relates to the technical field of screen projection, in particular to a screen projection method and electronic equipment.
Background
Currently, many electronic devices support wireless screen casting, that is, displaying the display interface of an electronic device A on the screen of another electronic device B so that a user can watch the content on device B; for example, the display interface of a mobile phone is displayed on a smart TV.
Because electronic devices now come in many types, their capabilities for transmitting and processing data during screen casting differ; as a result, the screen-casting interface may exhibit screen-splash (garbled frames), stuttering, picture lag, and similar phenomena.
Disclosure of Invention
The application provides a screen projection method and an electronic device that can adaptively select corresponding decision parameters according to the current screen-casting scene, so that the screen-casting effect is adapted to the current scene and the screen-casting experience is improved.
In a first aspect, the present application provides a screen projection method executed by a first electronic device, comprising: after a second electronic device casts its screen to the first electronic device, determining, by the first electronic device according to acquired first sensor data, a first parameter corresponding to the current screen-casting scene, the first parameter being used to control the display effect of the screen-casting interface corresponding to the screen-casting data from the second electronic device; and after processing the screen-casting data with the first parameter, displaying the screen-casting interface corresponding to the screen-casting data.
The first electronic device may be a smart TV, the second electronic device may be a mobile phone, and the first sensor data may be current data collected by a current sensor. After the phone casts its screen to the smart TV, the smart TV can determine the control parameter (i.e., the first parameter) for the current screen-casting scene from the current-sensor data and then issue it to the actuators, so that the actuators perform the corresponding operations according to the control parameter before the screen-casting interface is displayed. The first parameter may be used to control the display effect of the screen-casting interface, for example the delay in transmitting and/or displaying the screen-casting data and the audio/video quality of the interface: if the current scene places a higher requirement on audio-video synchronization, a larger delay is used; if it places a higher requirement on latency, a smaller delay is used.
Optionally, the current sensors include, but are not limited to, a screen-side current sensor and a power-amplifier (PA) side current sensor, and the corresponding current data includes, but is not limited to, current data of the display-screen side and current data of the PA side.
Optionally, the first electronic device may store correspondences between multiple sensor-data values and multiple control parameters, or between multiple sensor-data intervals and multiple control parameters, and then, after acquiring the current first sensor data, look up the corresponding control parameters from those correspondences.
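Purely as an illustration of the optional interval-to-parameter lookup described above, the sketch below shows one way it might be organized. The interval boundaries and most parameter values are hypothetical; only the 50-120ms delay range and the 0-9 buffer-packet range come from the text later in this document.

```python
# Minimal sketch of an interval-to-parameter lookup (hypothetical values).
PARAMETER_TABLE = [
    # (lower_bound, upper_bound, control_parameters) over some sensor reading
    (0.0, 1.0, {"wifi_link_delay_ms": 50, "ts_buffer_packets": 0}),
    (1.0, 5.0, {"wifi_link_delay_ms": 80, "ts_buffer_packets": 4}),
    (5.0, float("inf"), {"wifi_link_delay_ms": 120, "ts_buffer_packets": 9}),
]

def lookup_control_parameters(sensor_value: float) -> dict:
    """Return the control parameters whose interval contains sensor_value."""
    for low, high, params in PARAMETER_TABLE:
        if low <= sensor_value < high:
            return params
    raise ValueError(f"no interval covers sensor value {sensor_value}")

print(lookup_control_parameters(3.2))  # -> {'wifi_link_delay_ms': 80, 'ts_buffer_packets': 4}
```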
Optionally, a screen projection channel between the first electronic device and the second electronic device is a 2.4GHz data channel.
In this implementation, the first electronic device can identify the current screen-casting scene from the current-sensor data, analyze whether the scene places a higher requirement on audio-video synchronization or on latency, and automatically schedule the actuators with the control parameters corresponding to that requirement, so that the screen-casting effect is adapted to the current scene and the screen-casting experience is improved.
With reference to the first aspect, in some implementations of the first aspect, determining, according to the first sensor data, the first parameter corresponding to the current screen-casting scene includes: determining, from the first sensor data, the output power of the sensor corresponding to that data; and determining, from the output power of the sensor, the first parameter corresponding to the current screen-casting scene.
The first electronic device may determine the output power of the sensor from the first sensor data and then determine the first parameter from that output power. Optionally, the first electronic device may integrate the first sensor data to obtain the output power of the corresponding sensor.
Optionally, the first electronic device may store correspondences between multiple sensor output powers and multiple first parameters, or between multiple output-power intervals and multiple first parameters, and then look up the corresponding first parameter from those correspondences after computing the current output power of the sensor.
In this implementation, the first electronic device may determine the output power of the corresponding sensor from the sensor data. Because the output power of the sensor largely characterizes the type of the current screen-casting scene (for example, an audio/video scene versus an ordinary browsing scene), determining the first parameter from the output power improves the accuracy of the determined first parameter.
With reference to the first aspect, in some implementations of the first aspect, determining, according to the output power of the sensor, the first parameter corresponding to the current screen-casting scene includes: determining the first parameter corresponding to the current screen-casting scene according to the output power of the sensor and a preset threshold.
Optionally, the first electronic device may compare the output power of the sensor with a preset threshold and select the corresponding first parameter according to the magnitude relationship between the two. For example, when the output power is greater than the preset threshold, a first set of parameters applies; when it is less than the preset threshold, a second set of parameters applies; and so on.
In one implementation, determining, according to the output power of the sensor and a preset threshold, the first parameter corresponding to the current screen-casting scene includes: determining the category of the current screen-casting scene according to the output power of the sensor and the preset threshold; and determining the first parameter according to the category of the current scene and an association relationship, where the association relationship comprises correspondences between multiple screen-casting scene categories and multiple parameters, the multiple categories include the category of the current scene, and the multiple parameters include the first parameter.
That is, when determining the corresponding first parameter from the output power of the sensor, the first electronic device may first determine the category of the current screen-casting scene from the output power, and then determine the first parameter from that category and the stored correspondence between the multiple scene categories and the multiple parameters.
In one implementation, the sensors include a screen-side current sensor and a PA-side current sensor of the first electronic device, and determining the category of the current screen-casting scene according to the output power of the sensors and the preset thresholds includes: when the output power of the PA-side current sensor is less than or equal to a first threshold, determining the category of the current scene as a first category; when it is greater than or equal to a second threshold, determining the category as a second category; and when it is greater than the first threshold and less than the second threshold, determining the category according to the output power of the screen-side current sensor together with the output power of the PA-side current sensor.
When the output powers of the current sensors differ, the picture shown on the casting interface of the smart TV differs; equivalently, the interface cast from the mobile phone belongs to a different application, or to a different function within the same application, i.e., the screen-casting scene differs. For example, when the output power of the PA-side current sensor is large, it can be assumed that the casting interface of the smart TV is playing a video or a game and the speaker output is loud. Conversely, when that output power is small, it can be assumed that the interface shows a web page or another silent page. The first electronic device (i.e., the smart TV) can therefore partition the output power of the PA-side current sensor to determine the current screen-casting scene.
For example, when the output power of the PA-side current sensor is less than or equal to the first threshold (threshold 1), the smart TV determines that the current scene is the first casting scene (the first category); when it is greater than or equal to the second threshold (threshold 2), the smart TV determines that the current scene is the tenth casting scene (the second category); and when it is greater than the first threshold and less than the second threshold, the smart TV further determines the category of the current scene from the output powers of both the screen-side and the PA-side current sensors.
In one implementation, when the output power of the PA-side current sensor is greater than the first threshold and less than the second threshold, determining the category of the current screen-casting scene according to the output powers of the screen-side and PA-side current sensors includes: determining a first power from the output power of the screen-side current sensor and the output power of the PA-side current sensor; and, when the first power lies in a first interval, determining the category of the current scene as a third category.
This implementation considers the output powers of the screen-side and PA-side current sensors jointly, improving the accuracy of the determined category of the current screen-casting scene. It should be noted that the first interval may be a set of multiple sub-intervals and the third category may comprise multiple sub-categories, with each sub-interval corresponding to one sub-category; the application does not limit the number of sub-intervals and sub-categories.
Optionally, the first electronic device may determine the first power according to a×W2 + b×W1, where a and b are weight factors, W1 is the output power of the screen-side current sensor, and W2 is the output power of the PA-side current sensor. Illustratively, a may take the value 10 and b the value 1.
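For instance, with the illustrative weights above (a = 10, b = 1), hypothetical output powers of W1 = 100W on the screen side and W2 = 4W on the PA side would give a first power of 10×4 + 1×100 = 140W.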
With reference to the first aspect, in some implementations of the first aspect, the first parameter includes at least one of the following parameters: the transmission delay of the screen-casting data, the number of buffer packets in the data buffer, a first proportional threshold, a second proportional threshold, and an effect-processor switch parameter.
When data is transmitted between the first and second electronic devices over a Wi-Fi link, the transmission delay of the screen-casting data may be an operating parameter of the Wi-Fi driver chip in the first electronic device, the number of buffer packets in the data buffer may be an operating parameter of the main control chip, the first and second proportional thresholds may be operating parameters of the chip player, and the effect-processor switch parameter may be an operating parameter of the effect processor.
The transmission delay of the screen-casting data is the maximum duration by which the data link corresponding to the Wi-Fi driver chip may exceed the expected transmission time when transmitting a packet. The number of buffer packets in the data buffer is a packet-count threshold: when the number of packets actually buffered reaches the set number of TS buffer packets, the main control chip reorders the packets in the buffer. The first proportion is the ratio of the number of screen-splash pixels to the total number of pixels in one frame; the second proportion is the ratio of the number of pixels to be repaired (i.e., the screen-splash pixels that need repair) to the total number of pixels in one frame. The effect-processor switch parameter is the on/off state of the effect processor; for example, a value of 0 indicates that the effect processor is turned off, and a value of 1 indicates that it is turned on.
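A minimal sketch of how such a parameter set might be represented follows. The patent names the parameters but not any data structure, so the field names and the example values for the effect-processor switch are hypothetical; the delay, buffer-count, and ratio values echo figures quoted later in the text.

```python
from dataclasses import dataclass

@dataclass
class CastControlParams:
    """Hypothetical container for the first parameter described above."""
    wifi_link_delay_ms: int       # max allowed overshoot of expected transmit time
    ts_buffer_packets: int        # buffer reorder threshold (0-9 per the text)
    splash_ratio: float           # first proportion: splash pixels / total pixels
    keyframe_repair_ratio: float  # second proportion: pixels to repair / total pixels
    effect_processor_on: bool     # effect-processor switch (0 = off, 1 = on)

# Illustrative instances; only the 50ms/0-packet and 10%/10% figures are from the text.
first_scene = CastControlParams(50, 0, 0.10, 0.0, False)
tenth_scene = CastControlParams(120, 9, 0.10, 0.10, True)
```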
In this implementation, the first electronic device can identify the current screen-casting scene and automatically schedule the actuators with the control parameters corresponding to the scene's requirements, so that the screen-casting effect is adapted to the current scene and the screen-casting experience is improved.
In a second aspect, the present application provides an apparatus, which is included in an electronic device, and which has a function of implementing the electronic device behavior in the first aspect and possible implementations of the first aspect. The functions may be realized by hardware, or may be realized by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the functions described above. Such as a receiving module or unit, a processing module or unit, etc.
In a third aspect, the present application provides an electronic device, the electronic device comprising: a processor, a memory, and an interface; the processor, the memory and the interface cooperate with each other such that the electronic device performs any one of the methods of the technical solutions of the first aspect.
In a fourth aspect, the present application provides a chip comprising a processor. The processor is configured to read and execute a computer program stored in the memory to perform the method of the first aspect and any possible implementation thereof.
Optionally, the chip further comprises a memory, and the memory is connected with the processor through a circuit or a wire.
Further optionally, the chip further comprises a communication interface.
In a fifth aspect, the present application provides a computer readable storage medium, in which a computer program is stored, which when executed by a processor causes the processor to perform any one of the methods of the first aspect.
In a sixth aspect, the present application provides a computer program product comprising: computer program code which, when run on an electronic device, causes the electronic device to perform any one of the methods of the solutions of the first aspect.
Drawings
Fig. 1 (a) is an application scenario diagram of an example of a screen projection method provided in an embodiment of the present application;
Fig. 1 (b) is an application scenario diagram of another example of a screen projection method provided in an embodiment of the present application;
Fig. 2 is a schematic structural diagram of an example of an electronic device according to an embodiment of the present application;
Fig. 3 is a software architecture block diagram of an example of an electronic device according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a desktop interface of a mobile phone according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a start interface of a mobile phone assistant APP according to an embodiment of the present application;
Fig. 6 (a) is a schematic diagram of a connection interface of a mobile phone assistant APP according to an embodiment of the present application;
Fig. 6 (b) is a schematic diagram of a confirmation interface of a smart TV according to an embodiment of the present application;
Fig. 7 is a schematic diagram of an example of a mobile phone casting to a smart TV according to an embodiment of the present application;
Fig. 8 is a schematic diagram of an example of a wireless screen-casting interface of a mobile phone according to an embodiment of the present application;
Fig. 9 is a flowchart of an example of a screen projection method according to an embodiment of the present application;
Fig. 10 is a flowchart of another example of a screen projection method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in those embodiments. In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "plurality" means two or more.
The terms "first", "second", "third", and the like are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature qualified by "first", "second", or "third" may explicitly or implicitly include one or more such features.
In the embodiments of the present application, "screen casting" refers to transmitting the interface data of one electronic device to another electronic device for display. For ease of understanding, the "other electronic device" is called the screen-casting device, and the interface it displays is called the screen-casting interface. The embodiments of the present application can implement casting from a mobile phone to a smart TV, from a mobile phone to a personal computer (PC), from a tablet computer to a smart TV, from a tablet computer to a PC, and the like; the description below takes the electronic device as a mobile phone and the screen-casting device as a smart TV for illustration.
At present, mobile phones and smart TVs come in ever more varieties, and the performance of their hardware (such as chips) is uneven; where the hardware supports it, both can connect to a 2.4GHz or a 5GHz wireless network. The 2.4GHz band penetrates obstacles well and propagates far, supporting communication in the 2400-2483MHz range, but it resists interference poorly in indoor environments; the 5GHz band delivers stable throughput, supports large data transfers, and resists interference well.
Considering hardware cost, many current smart TVs use 2.4G chips, i.e., they can only access a 2.4GHz wireless network; as described above, the 2.4G band resists interference poorly indoors, so its signal transmission is easily disturbed by the spatial environment.
Then, in the process of the mobile phone casting to the smart TV, as shown in Fig. 1 (a), if the phone is connected to a 2.4GHz wireless network, the data channel for casting to the smart TV is a 2.4G channel and the phone's internet channel is also a 2.4G channel. Such a channel is easily disturbed by the spatial environment when transmitting data, and the quality of the casting data received by the smart TV may suffer, so the cast picture on the smart TV may exhibit screen-splash and similar phenomena.
In another scenario, shown in Fig. 1 (b), if the phone accesses a 5GHz wireless network, the data channel for casting to the smart TV is a 2.4G channel while the phone's internet channel is a 5G channel. This mode is called dual-band adaptive concurrency (Dual Band Adaptive Concurrent, DBAC). DBAC supports time-shared operation on different channels in the same band or in different bands, with the channels time-division multiplexed, i.e., only one band and one channel can work at any moment. This working mode may therefore let the internet channel encroach on the transmission time of the casting channel, so the cast picture on the smart TV may exhibit screen-splash and similar phenomena.
In general, when a mobile phone casts to a smart TV, the casting data it transmits is a TS stream (Transport Stream), a standard audio/video container format; the video of the cast TS stream consists of frame images together with the audio. If the phone transmitted the raw TS-stream data directly to the smart TV, the data volume would be large and the cost of network transmission or video storage high, so the phone usually compresses and encodes the TS stream. Popular coding divides frames into I frames (key frames), P frames (forward-reference frames), and B frames (bidirectional-reference frames); after receiving the encoded frames, the smart TV must decode them to display the picture content. An I frame is a complete picture and can be decoded directly; decoding a P frame depends on the decoding result of the preceding reference frame; decoding a B frame depends on the decoding results of both the preceding and following reference frames; and without the I frame, neither P nor B frames can be decoded. For example, for the encoded-frame sequence I, B, P, the I frame is decoded first. The B frame needs the decoding results of the previous frame (I) and the next frame (P); since the P frame is not yet decoded, the B frame's decoding is suspended and the P frame is decoded first, using the decoding result of the I frame. After the P frame is successfully decoded, the B frame is decoded from the results of the I and P frames.
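To make the decode-ordering argument concrete, here is a small illustrative sketch (not from the patent) of the dependency rule for such a sequence: a B frame must wait for its following reference frame, so display order I, B, P becomes decode order I, P, B.

```python
# Illustrative sketch of I/B/P decode ordering (not part of the patent).
def decode_order(display_order: list[str]) -> list[str]:
    """Reorder frames for decoding: B frames wait for their forward reference."""
    decoded, pending_b = [], []
    for frame in display_order:
        if frame.startswith("B"):
            pending_b.append(frame)      # B needs the next reference frame first
        else:                            # I or P frame: decodable now
            decoded.append(frame)
            decoded.extend(pending_b)    # its dependent B frames can now decode
            pending_b.clear()
    return decoded

print(decode_order(["I1", "B2", "P3"]))  # -> ['I1', 'P3', 'B2']
```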
In the scenarios described in Fig. 1, the related art handles the screen-splash phenomenon in the smart TV's cast picture in two ways. One is to discard the frame images that caused the splash and display the cast picture directly after the frame loss; this reduces the casting delay but can cause audio stuttering, audio-video desynchronization, and similar phenomena. In particular, when an I frame is lost, then, as explained above, the P and B frames cannot be decoded and their images cannot be displayed; the picture only recovers after the next I frame is decoded, which aggravates the audio stuttering (for example, some phones encode I frames 3 seconds apart, others 15 seconds apart). The other is to add frame-compensation logic after discarding the splash frames; this improves audio/video quality, but the extra algorithmic processing increases the casting delay and makes the displayed picture lag.
That is, the related art cannot balance audio stuttering against casting delay: either the stuttering is severe or the delay is high. In view of this, an embodiment of the present application provides a screen casting method applied to a smart TV. The smart TV can identify the current screen-casting scene, analyze whether the scene places a higher requirement on audio-video synchronization or on latency, and automatically schedule the actuators with the decisions corresponding to that requirement, so that the casting effect is adapted to the current scene and the casting experience is improved. It should be noted that the method provided in the embodiments of the present application may be applied to any electronic device capable of screen casting, such as a mobile phone, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or a smart TV; the embodiments do not limit the specific type of the electronic device.
Fig. 2 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application. Taking the electronic device 100 as a smart TV as an example, the electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, a wireless communication module 150, a display screen 160, and the like.
It is to be understood that the structure illustrated in the present embodiment does not constitute a specific limitation on the electronic apparatus 100. In other embodiments, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and command center of the electronic device 100. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces, such as an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, and/or a USB interface.
It should be understood that the connection relationship between the modules illustrated in this embodiment is only illustrative, and does not limit the structure of the electronic device 100. In other embodiments, the electronic device 100 may also employ different interfaces in the above embodiments, or a combination of interfaces.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display screen 160, the wireless communication module 150, and the like. In some embodiments, the power management module 141 and the charge management module 140 may also be provided in the same device.
The wireless communication module 150 may provide solutions for wireless communication including WLAN (e.g., wi-Fi), bluetooth, global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., applied to the electronic device 100. For example, in the embodiment of the present application, the electronic device 100 may establish a connection with other electronic devices (such as a mobile phone) through the wireless communication module 150.
The wireless communication module 150 may be one or more devices that integrate at least one communication processing module. The wireless communication module 150 receives electromagnetic waves via an antenna, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 150 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via an antenna.
The electronic device 100 implements display functions through a GPU, a display screen 160, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 160 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 160 is used to display images, videos, and the like. The display 160 includes a display panel.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 implements the various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121. For example, in an embodiment of the present application, the internal memory 121 may include a program storage area and a data storage area.
The program storage area may store the operating system and the applications required for at least one function (such as a sound-playing function or an image-playing function). The data storage area may store data created during use of the electronic device 100 (such as audio data and a phone book). In addition, the internal memory 121 may include high-speed random access memory and may also include nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. Illustratively, Fig. 3 is a software architecture block diagram of the electronic device 100 according to an embodiment of the present application. Taking a layered architecture as an example, the software is divided into several layers, each with a clear role and division of labor, and the layers communicate through software interfaces. In some embodiments, taking a layered Linux system as an example, the system is divided into a framework layer, a kernel layer, and a hardware layer, and the framework layer and the kernel layer communicate through HAL interfaces.
As shown in Fig. 3, the framework layer may include a screen-casting control subsystem, which may include at least a power calculator, a logic module, a screen-splash frame-loss controller, a transmission-delay controller, and a display-effect controller. The HAL interfaces may include at least a screen-splash frame-loss interface, an effect-switch interface, a current-reading interface, and a transmission-delay interface. The kernel layer may include at least an audio/video decoder (chip player), an effect processor (including but not limited to a picture-quality (PQ) processor and an audio-quality (AQ) processor), a Wi-Fi driver chip, a main control chip, and a current sensor. The hardware layer may include at least a display screen, a speaker (horn), and a short-range communication chip.
The power calculator can read the current-sensor data through the current-reading interface, including but not limited to screen-side current data and PA-side current data, and then compute the screen-side and PA-side power from that data. After obtaining these powers from the power calculator, the logic module can determine the processing parameters for each actuator in the kernel layer according to preset logic rules (the specific rules are described in the embodiments below) and send each parameter to the corresponding controller: the display-effect parameter goes to the display-effect controller, which forwards it to the effect processor through the effect-switch interface; the Wi-Fi link transmission delay and the TS buffer-packet count go to the transmission-delay controller, which forwards the transmission delay to the Wi-Fi driver chip through the transmission-delay interface and the buffer-packet count to the main control chip; and the screen-splash ratio and the key-frame repair ratio go to the screen-splash frame-loss controller, which forwards them to the chip player through the screen-splash frame-loss interface. Each actuator then performs the corresponding action according to the received parameters, carrying out the casting of the phone's screen to the smart TV.
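Purely as an illustration of the dispatch flow just described (the patent specifies the architecture, not code, so all class and method names below are hypothetical):

```python
# Hypothetical sketch of the dispatch flow: logic module -> controllers -> actuators.
class Controller:
    """Stub controller that just records what it would forward to its actuator."""
    def __init__(self, name: str):
        self.name = name

    def forward(self, **params):
        print(f"{self.name} forwards {params}")

class LogicModule:
    def __init__(self):
        self.effect_ctrl = Controller("display-effect controller")
        self.delay_ctrl = Controller("transmission-delay controller")
        self.splash_ctrl = Controller("screen-splash frame-loss controller")

    def dispatch(self, p: dict):
        # Display-effect parameter -> effect processor via effect-switch interface
        self.effect_ctrl.forward(effect_on=p["effect_processor_on"])
        # Wi-Fi link delay -> Wi-Fi driver chip; TS buffer count -> main control chip
        self.delay_ctrl.forward(wifi_link_delay_ms=p["wifi_link_delay_ms"],
                                ts_buffer_packets=p["ts_buffer_packets"])
        # Splash ratio and key-frame repair ratio -> chip player
        self.splash_ctrl.forward(splash_ratio=p["splash_ratio"],
                                 keyframe_repair_ratio=p["keyframe_repair_ratio"])
```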
It should be noted that the short-range chips used in the mobile phone and the smart TV in the embodiments of the present application may be single-input single-output (SISO) chips, single-input multiple-output (SIMO) chips, multiple-input single-output (MISO) chips, or multiple-input multiple-output (MIMO) chips.
For ease of understanding, the following embodiments of the present application take an electronic device having the structures shown in Fig. 2 and Fig. 3 as an example, and describe the screen projection method provided in the embodiments of the present application with reference to the drawings and application scenarios.
First, the procedure by which a user casts to a smart TV using a mobile phone is described. In one implementation, an application (APP) for the screen-casting operation may be installed on the phone; for example, it may be a mobile phone assistant APP, displayed as a desktop icon on the phone's desktop as shown in Fig. 4. The user can start the mobile phone assistant APP by tapping the desktop icon. In the interface shown in Fig. 4, the start interface of the APP includes a "connect immediately" control and a description of the casting modes the phone supports, e.g., that the phone currently supports a mirror mode. After the user taps the "connect immediately" control, the phone jumps to the interface shown in Fig. 5 and starts searching for available screen-casting devices nearby, which can communicate with the phone over short range, e.g., via Bluetooth (note that at this point both the phone and the discovered devices have their Bluetooth switches turned on). Assuming the currently available casting devices are device 1 and device 2, the phone may display their basic information (e.g., device names) in the available-device list shown in Fig. 5. The user can then tap one device in the list (say device 2, the smart TV); the phone displays the connection interface shown in Fig. 6 (a) and sends a connection request to device 2, which displays the confirmation interface shown in Fig. 6 (b). After the user taps the "agree" control on the interface in Fig. 6 (b), the casting connection between the phone and the smart TV is established, and the phone can then send the data of its current display interface to the smart TV so that the interface is displayed there. When the connection is established, the interface currently displayed on the phone is still the mobile phone assistant APP interface, so that interface is shown on the smart TV first; after a video-playing interface is opened on the phone, it is displayed on the smart TV simultaneously. A schematic of the effect of casting the phone to the smart TV is shown in Fig. 7.
In another implementation, the phone may have a built-in wireless screen-casting function. For example, as shown in Fig. 8, the user can tap the "wireless screen casting" control in the phone's pull-down system menu; in response to the tap, the phone starts searching for available casting devices nearby and displays the available-device list in a blank area of its display interface. When available devices are found, the phone may display the basic information (e.g., device names) of device 1 and device 2 in the list. The user then taps one device in the list (say device 2, the smart TV); the phone sends a connection request to device 2, and after the user confirms the connection on the smart TV, the casting connection between the phone and the smart TV is established.
During the casting of the phone's screen to the smart TV, i.e., the process in which the phone continuously encodes the data of its display interface and sends it to the smart TV, which decodes and displays the received encoded data, the smart TV can control the casting process according to the working principle of the screen-casting control subsystem in the software architecture above. This control process is described in detail below. Fig. 9 is a schematic flowchart of an example of a screen projection method provided in an embodiment of the present application; the process is executed by the smart TV and may specifically include:
S101. After displaying the screen-casting interface, the smart TV acquires the current data of the current sensors.
The current sensors may include, but are not limited to, a screen-side current sensor and a PA-side current sensor, where the screen-side current sensor acquires the current data of the display-screen side of the smart TV and the PA-side current sensor acquires the current data of the speaker (horn) side. Because the phone only sends the screen-casting data stream to the smart TV during casting, without any information about the casting scene or the casting application, the smart TV determines the current casting scene by acquiring the current-sensor data and then determines the control parameters for that scene.
S102. The smart TV calculates the output power of each current sensor from its current data.
The output power of a current sensor can be obtained by piecewise integration of its current data: for the output power of the screen-side current sensor, the display-screen-side current data is integrated piecewise; for the output power of the PA-side current sensor, the horn-side current data is integrated piecewise.
Illustratively, assume the acquired display-screen-side current is I1 at voltage U1, and the horn-side current is I2 at voltage U2; the voltages here are usually fixed values, e.g., U1 is 110V (volts) and U2 is 12V or 24V. Then the output power of the screen-side current sensor is W1 = ∫t U1·I1 dt, and the output power of the PA-side current sensor is W2 = ∫t U2·I2 dt. In practical applications, experiments show that W1 generally lies in the range of 0-180W (watts) and W2 in the range of 0-20W.
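As a rough illustration of the piecewise integration above, the power could be approximated from discrete current samples as follows; the sampling interval and sample values are hypothetical, and the window-normalized integral is one plausible reading of "output power".

```python
# Hypothetical sketch: approximate W = ∫ U·I dt from discrete current samples,
# normalized by the window length to give average power in watts.
def output_power(current_samples: list[float], voltage: float, dt_s: float) -> float:
    """Rectangle-rule integration of U·I over the sampling window."""
    energy_j = sum(voltage * i for i in current_samples) * dt_s
    return energy_j / (len(current_samples) * dt_s)

# Illustrative samples only; U1 = 110 V (screen side), U2 = 24 V (PA side).
w1 = output_power([0.9, 1.0, 1.1, 1.0], voltage=110.0, dt_s=0.01)
w2 = output_power([0.20, 0.22, 0.18, 0.20], voltage=24.0, dt_s=0.01)
print(f"W1 ≈ {w1:.1f} W, W2 ≈ {w2:.1f} W")  # within the 0-180 W / 0-20 W ranges
```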
S103. The smart TV determines the control parameters for the current screen-casting scene from the output powers of the current sensors.
In the embodiments of the present application, when the output powers of the current sensors differ, the picture shown on the casting interface of the smart TV differs; this can also be understood as the phone casting the interface of a different application, or of a different function within the same application, i.e., the casting scene differs. For example, when the output power of the PA-side current sensor is large, it can be assumed that the casting interface of the smart TV is playing a video or a game and the speaker output is loud; conversely, when that output power is small, it can be assumed that the interface shows a web page or another silent page. It is understood that in different casting scenes the smart TV has different requirements on the execution parameters of the actuators (such as the chip player, the effect processor, the Wi-Fi driver chip, and the main control chip), i.e., the control parameters of the smart TV differ across casting scenes.
The smart TV therefore needs to partition the output power of the current sensors to judge the current casting scene, and then determine the control parameters corresponding to that scene.
As an achievable manner, the smart TV may partition the output power of the current sensors as follows:
When W2 ≤ threshold 1, the smart TV determines that the current casting scene is the first casting scene, which may also be called a mute scene. For example, threshold 1 may be 1W; in particular, when W2 is 0W, the current scene is determined to be the first casting scene.
When W2 ≥ threshold 2, the smart TV determines that the current casting scene is the tenth casting scene, which may be called an audio/video scene. Threshold 2 may lie between 20% and 40% of the maximum of W2; for example, with W2 computed in the range of 0-20W, 20%-40% of 20W is 4-8W, so threshold 2 may take a value between 4W and 8W, e.g., 5W.
When threshold 1 < W2 < threshold 2, the smart TV determines that the current casting scene is an intermediate scene between the first and the tenth, which may be divided into the second through ninth casting scenes.
When dividing the second through ninth casting scenes, it is considered that the phone may be casting in portrait or in landscape mode; therefore, besides the output power of the smart TV's PA-side current sensor, the output power of the screen-side current sensor can be added as a factor to better distinguish the casting scenes. It is understood that the PA-side output power may be taken as the primary influencing factor and the screen-side output power as the secondary influencing factor, or the other way around.
With the PA-side output power determined as the primary influencing factor and the screen-side output power as the secondary one, the smart TV can weight the PA-side and screen-side output powers to obtain a total power, and then determine the corresponding casting scene from that total power.
Optionally, the smart TV may calculate the total power according to the relation Rs = a×W2 + b×W1, where a and b are weight factors, for example a = 10 and b = 1; of course, a and b may take other values, which the embodiments of the present application do not limit. The smart TV then determines the corresponding casting scene according to Rs.
The magnitude of Rs may have a correspondence with the second through ninth casting scenes. For example, the correspondence may be as shown in Table 1 below.
TABLE 1

Casting scene             Rs (unit: W)
Second casting scene      Rs ≤ 60
Third casting scene       60 < Rs ≤ 80
Fourth casting scene      80 < Rs ≤ 100
Fifth casting scene       100 < Rs ≤ 120
Sixth casting scene       120 < Rs ≤ 140
Seventh casting scene     140 < Rs ≤ 160
Eighth casting scene      160 < Rs ≤ 180
Ninth casting scene       Rs > 180
That is, when the Rs calculated as above is less than or equal to 60W, the smart TV determines that the current casting scene is the second casting scene; when it is greater than 60W and less than or equal to 80W, the third casting scene; and so on.
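Putting the thresholds and Table 1 together, the scene classification might be sketched as follows. Threshold 1 = 1W and threshold 2 = 5W are the example values from the text, and a = 10, b = 1 are the illustrative weights; the function name and the numeric scene encoding are hypothetical.

```python
# Sketch of the scene classification described in S103 (illustrative values).
def classify_scene(w1: float, w2: float,
                   threshold1: float = 1.0, threshold2: float = 5.0,
                   a: float = 10.0, b: float = 1.0) -> int:
    """Return the casting-scene number (1..10) from the sensor output powers.
    w1: screen-side output power (W); w2: PA-side output power (W)."""
    if w2 <= threshold1:
        return 1                    # mute scene
    if w2 >= threshold2:
        return 10                   # audio/video scene
    rs = a * w2 + b * w1            # weighted total power, compared against Table 1
    bounds = [60, 80, 100, 120, 140, 160, 180]   # upper bounds for scenes 2..8
    for scene, upper in enumerate(bounds, start=2):
        if rs <= upper:
            return scene
    return 9                        # Rs > 180 W

print(classify_scene(w1=100.0, w2=4.0))  # Rs = 140 -> sixth casting scene
```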
These ten casting scenes place different parameter requirements on the actuators. For example, the PA-side output power corresponding to the first casting scene is 0, i.e., there is currently no audio/video output, from which it can be inferred that the user is browsing a web page or performing some other silent operation with low requirements on audio/video quality. The smart TV therefore need not perform extra operations on the splash frames, which reduces the casting delay.
For another example, the tenth casting scene corresponds to a large PA-side output power, i.e., substantial audio output, from which it can be inferred that the user is watching a video or performing a similar operation with high requirements on audio/video quality. When the smart TV discards splash frames in this scene, it can repair and compensate the discarded frames; this improves the audio and picture quality, but the casting delay is relatively high.
For another example, the second through ninth casting scenes lie between the first and the tenth, i.e., their requirements on audio/video quality are intermediate. When the smart TV discards splash frames in these scenes, it can repair the discarded frames to a moderate extent, moderately improving the audio and picture quality while moderately limiting the casting delay.
It should be noted that the embodiments of the present application use ten casting scenes as an example; in practice more or fewer scenes may be divided. The power intervals in Table 1 are likewise only an example of the division, and the embodiments do not specifically limit the power intervals used for the judgment.
In combination with the above description, the embodiment of the present application sets control parameters corresponding to the different screen-casting scenes. For example, the correspondence between the scenes and the control parameters can be seen in Table 2 below.
In Table 2, the Wi-Fi link transmission delay refers to the maximum duration by which the data link corresponding to the Wi-Fi driver chip may exceed the expected transmission time when transmitting a data packet; it may generally be set to 50 milliseconds (ms) to 120 ms. Here a data packet may be one transmitted based on the user datagram protocol (UDP), referred to simply as a UDP packet. When a data packet reaches the Miracast protocol stack of the main control chip, the UDP packet is first converted into an RTP packet, and the RTP packets are then combined into a TS stream. Because data packets may arrive out of order, the main control chip first caches received packets in a buffer; when the number of packets in the buffer reaches the configured TS buffer packet count, the main control chip reorders them to restore the original transmission order, which improves the audio/video quality of the final screen-casting interface. The TS buffer packet count may be set to 0 to 9, for example. The transmission of data packets is thus governed by two control parameters, the Wi-Fi link transmission delay and the TS buffer packet count, and different values of these parameters have different latency effects. For example, in the first screen-casting scene the Wi-Fi link transmission delay is 50 ms and the TS buffer packet count is 0, and the resulting latency is about 0 seconds.
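For illustration only, the caching-and-reordering behavior just described can be sketched as follows; the class and method names are hypothetical and this is not the patent's implementation.

```python
# Minimal sketch (hypothetical names): packets are cached until the configured
# TS buffer packet count is reached, then released in sequence-number order so
# the original transmission order is restored before forming the TS stream.
class ReorderBuffer:
    def __init__(self, ts_buffer_packets: int):
        self.capacity = ts_buffer_packets      # 0..9 per the text above
        self.pending: list[tuple[int, bytes]] = []

    def push(self, seq_no: int, payload: bytes) -> list[tuple[int, bytes]]:
        """Cache one packet; return an ordered batch once the buffer is full."""
        if self.capacity == 0:
            return [(seq_no, payload)]         # no reordering: pass through
        self.pending.append((seq_no, payload))
        if len(self.pending) < self.capacity:
            return []                          # keep caching
        batch = sorted(self.pending)           # restore transmission order
        self.pending.clear()
        return batch

# Example with a TS buffer packet count of 3:
buf = ReorderBuffer(3)
assert buf.push(2, b"b") == []
assert buf.push(1, b"a") == []
assert buf.push(3, b"c") == [(1, b"a"), (2, b"b"), (3, b"c")]
```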
The splash-screen ratio refers to the proportion of splash-screen (corrupted) pixels to the total number of pixels in one frame image, and the keyframe repair ratio refers to the proportion of pixels to be repaired to the total number of pixels, i.e., the splash-screen pixels that need to be restored. When the chip player detects that the actual splash-screen ratio of a frame exceeds the configured splash-screen ratio, it may discard that frame. For example, if the configured ratio is 10%, a frame is discarded whenever its actual splash-screen ratio exceeds 10%; clearly, the lower the configured ratio, the more frames are discarded. The more frames are discarded, the greater the probability that keyframes (I-frames) are among them, so after dropping frames the chip player must also check whether any discarded frame contains a keyframe; if so, the keyframe must be repaired, and the repair process introduces considerable latency. Meanwhile, to preserve audio/video quality, a frame-supplementing operation is performed for the other discarded frames. It follows that the lower the configured splash-screen ratio, the more frames are discarded and the higher the corresponding latency. The splash-screen frame-dropping process is thus governed by two control parameters, the splash-screen ratio and the keyframe repair ratio, and different values of these parameters have different latency effects. For example, in the tenth screen-casting scene the splash-screen ratio is 10% and the keyframe repair ratio is 10%, and the resulting latency is about 6 seconds.
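For illustration only, the chip player's frame-drop rule described above can be sketched as follows; the function and value names are hypothetical assumptions, not taken from the patent.

```python
# Minimal sketch (hypothetical names): a frame is discarded when its actual
# splash-screen pixel ratio exceeds the configured threshold; a discarded
# keyframe (I-frame) must then be repaired, the main source of added latency.
def handle_frame(splash_pixels: int, total_pixels: int,
                 is_keyframe: bool, splash_ratio: float) -> str:
    actual = splash_pixels / total_pixels
    if actual <= splash_ratio:
        return "display"                  # within tolerance: show the frame
    if is_keyframe:
        return "discard_and_repair"       # repair the dropped I-frame (slow)
    return "discard_and_supplement"       # supplement a dropped non-keyframe

# With the tenth scene's 10% threshold, a frame that is 15% corrupted is
# dropped, and repaired afterwards if it happens to be an I-frame:
assert handle_frame(150_000, 1_000_000, True, 0.10) == "discard_and_repair"
```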
The display-effect switch value is the on/off state of the effect processor: when the value is 0 the effect processor is turned off, and when the value is 1 it is turned on. Understandably, the latency when the effect processor is on is larger than when it is off.
Therefore, as can be seen from Table 2, different screen-casting scenes correspond to different values of the Wi-Fi link transmission delay, TS buffer packet count, splash-screen ratio, keyframe repair ratio, and display-effect switch, and together these values constitute the control parameters. The last column of Table 2 also lists the expected latency and audio/video quality for each scene.
TABLE 2
[Table 2 appears as an image in the original publication. For each of the ten screen-casting scenes it lists the Wi-Fi link transmission delay, TS buffer packet count, splash-screen ratio, keyframe repair ratio, display-effect switch value, and the resulting expected latency and audio/video quality.]
Therefore, in step S103, the smart TV may determine the current screen-casting scene from the output power of the current sensors and Table 1, and then determine the control parameters for that scene from Table 2.
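For illustration only, a parameter lookup in the spirit of Table 2 might look like the sketch below. Only the first- and tenth-scene rows are stated explicitly in the text (Table 2 itself is an image), so the other rows are deliberately left to be filled in from the original table; the scene-1 splash/keyframe/effect values are marked as assumptions, not values from the patent.

```python
# Minimal sketch (hypothetical names) of a scene -> control-parameter lookup.
CONTROL_PARAMS = {
    1:  {"wifi_delay_ms": 50,  "ts_buffer_packets": 0,       # stated in text
         "splash_ratio": None, "keyframe_repair_ratio": None,  # not stated
         "effect_on": 0},                                       # assumed
    10: {"wifi_delay_ms": 120, "ts_buffer_packets": 9,       # stated in text
         "splash_ratio": 0.10, "keyframe_repair_ratio": 0.10,
         "effect_on": 1},
}

def lookup_params(scene: int) -> dict:
    try:
        return CONTROL_PARAMS[scene]
    except KeyError:
        # Rows 2..9 are defined only in the Table 2 image of the patent.
        raise KeyError(f"scene {scene}: fill in from Table 2 of the patent")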
As another implementation, when the mobile phone and the smart TV are logged in to the same user account during screen casting, a trust ring can be formed between them automatically, so that the mobile phone can send the scene information identified by the currently running application to the smart TV, and the smart TV then looks up the corresponding control parameters in Table 2 according to that screen-casting scene.
S104, the smart TV issues the control parameters to each executor, so that each executor performs the corresponding operation according to the control parameters.
S105, the smart TV refreshes the screen-casting interface.
Specifically, the smart TV issues the determined control parameters: the Wi-Fi link transmission delay to the Wi-Fi driver chip, the TS buffer packet count to the main control chip, the splash-screen ratio and keyframe repair ratio to the chip player, and the display-effect switch value to the effect processor. After each executor performs the corresponding action, the smart TV can display the processed TS stream, i.e., the refreshed screen-casting interface.
The above control process can be illustrated with an example. After the smart TV starts displaying the screen-casting interface (i.e., the mobile phone starts casting), the mobile phone continuously sends screen-casting data to the smart TV, and the smart TV receives that data and controls its display process. Suppose the smart TV calculates, from the collected current data, that the output power W2 of the PA-end current sensor is 6 W, and that the second threshold is set to 5 W; then W2 is greater than or equal to the second threshold, and the smart TV determines that the current scene is the tenth screen-casting scene. According to the correspondence in Table 2, in this scene the Wi-Fi link transmission delay can be set to 120 ms, i.e., when the data link corresponding to the Wi-Fi driver chip transmits a data packet, the transmission may exceed the expected transmission time by at most 120 ms; the TS buffer packet count is set to 9, i.e., when the number of data packets in the buffer reaches 9, the main control chip reorders them and then combines them into a TS stream; the splash-screen ratio and the keyframe repair ratio are both set to 10%, i.e., for each frame image, the chip player discards the frame when it detects that the proportion of splash-screen pixels to the total pixel count exceeds 10%, and correspondingly, when a discarded keyframe is repaired, the proportion of repaired pixels to total pixels also exceeds 10%; and the display-effect switch value is set to 1, i.e., the effect processor is turned on. After these executor parameters are set, the screen-casting data flowing through each executor is processed accordingly and then sent to the television screen of the smart TV to display the new screen-casting interface.
According to the screen-casting method described above, the smart TV can identify the current screen-casting scene from the current-sensor data, analyze whether the scene places higher demands on audio synchronization or on latency, and automatically use the control parameters corresponding to those demands to schedule the executors, so that the screen-casting effect adapts to the current scene and the screen-casting experience improves.
The foregoing embodiment can also be implemented in combination with the software structure shown in Fig. 3. The screen-casting method provided by the embodiment of the present application is described below in combination with that software structure. Fig. 10 is a timing flow diagram of an example of the screen-casting method, which may specifically include the following steps:
S1, after the smart TV displays the screen-casting interface, the current sensors collect current data.
The current sensors may collect current data at a preset time interval, or may collect current data whenever the smart TV receives a new TS stream, so that changes in the screen-casting scene are perceived in time. The collected current data include, but are not limited to, the panel-side (screen-end) current and the PA-end current.
S2, the power calculator obtains the collected current data from the current sensors.
The power calculator can obtain the current data collected by the current sensors through the current-reading interface in the HAL interface.
S3, the power calculator calculates the output power of each current sensor from its current data.
In one implementation, the power calculator may perform piecewise integration on the current data to obtain the output power; the specific integration method is described in the foregoing embodiment and is not repeated here.
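For illustration only, one plausible reading of this piecewise integration is sketched below. The patent's exact formula appears earlier in the document and is not reproduced here; this sketch assumes uniformly sampled current and a known, constant supply voltage, both of which are assumptions.

```python
# Minimal sketch (assumptions noted above): integrate sampled current with the
# trapezoidal rule, then convert to average power via the supply voltage.
def output_power_w(current_samples_a: list, dt_s: float, voltage_v: float) -> float:
    if len(current_samples_a) < 2:
        raise ValueError("need at least two current samples")
    # Trapezoidal integral of current over the window (ampere-seconds).
    charge = sum((current_samples_a[k] + current_samples_a[k + 1]) * dt_s / 2
                 for k in range(len(current_samples_a) - 1))
    duration = dt_s * (len(current_samples_a) - 1)
    return voltage_v * charge / duration    # average power in watts

# Example: a constant 1 A at 12 V sampled every 10 ms averages to 12 W.
assert abs(output_power_w([1.0] * 5, 0.010, 12.0) - 12.0) < 1e-9
```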
S4, the power calculator sends the calculated output power of the current sensors to the logic module.
S5, the logic module determines the control parameters for the current screen-casting scene according to the output power of the current sensors.
In one implementation, this step may be divided into two sub-steps: (a) the logic module determines the current screen-casting scene according to the output power of the current sensors; (b) the logic module determines the corresponding control parameters according to that scene. Optionally, the scene determined by the logic module may be a scene category, for example one of the first through tenth screen-casting scenes in the above embodiment; the control parameters may include, but are not limited to, the Wi-Fi link transmission delay, TS buffer packet count, splash-screen ratio, keyframe repair ratio, and display-effect switch value.
S6, the logic module issues the control parameters to the executors, so that each executor performs the corresponding operation according to the control parameters.
Specifically, this step may include:
S61, the logic module sends the Wi-Fi link transmission delay and the TS buffer packet count to the transmission delay controller.

S62, the transmission delay controller sends the Wi-Fi link transmission delay to the Wi-Fi driver chip and the TS buffer packet count to the main control chip.

The transmission delay controller can send these parameters through the transmission delay interface in the HAL interface.

S63, the logic module sends the splash-screen ratio and the keyframe repair ratio to the splash-screen frame-loss controller.

S64, the splash-screen frame-loss controller sends the splash-screen ratio and the keyframe repair ratio to the chip player.

The splash-screen frame-loss controller can send these parameters through the splash-screen frame-loss interface in the HAL interface.

S65, the logic module sends the display-effect parameter to the display-effect controller.

S66, the display-effect controller sends the display-effect parameter to the effect processor.

The display-effect controller can send the display-effect parameter through the effect switch interface in the HAL interface.
It should be noted that the embodiment of the present application does not limit the execution order of S61-S62, S63-S64, and S65-S66; they may be executed simultaneously or sequentially. To improve control efficiency, simultaneous execution is preferred.
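For illustration only, simultaneous issuing of the three parameter groups could be sketched as follows; the controller interfaces are hypothetical and not defined by the patent.

```python
# Minimal sketch (hypothetical controller interfaces): one worker per
# controller, so the three parameter groups are issued concurrently.
from concurrent.futures import ThreadPoolExecutor

def dispatch(params: dict, tx_ctrl, frame_ctrl, effect_ctrl) -> None:
    with ThreadPoolExecutor(max_workers=3) as pool:
        pool.submit(tx_ctrl.set_delay_and_buffer,
                    params["wifi_delay_ms"], params["ts_buffer_packets"])
        pool.submit(frame_ctrl.set_ratios,
                    params["splash_ratio"], params["keyframe_repair_ratio"])
        pool.submit(effect_ctrl.set_switch, params["effect_on"])
    # Leaving the "with" block waits for all three submissions to finish.
```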
Through the above steps, the modules of the smart TV can adaptively control the screen-casting process. Understandably, after the current sensors collect new current data and the determination process of S2 to S5 runs again, if the current screen-casting scene is determined to be the same as the previous one, each executor can continue to use the control parameters already in effect; if the current scene differs from the previous one, the logic module issues new control parameters to the executors so that they update their parameters, and the screen-casting effect again adapts to the current scene.
Examples of the screen-casting method provided by the embodiments of the present application are described in detail above. It will be appreciated that, to achieve the above functions, the electronic device includes corresponding hardware and/or software modules for performing each function. Those skilled in the art will readily appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Those skilled in the art may implement the described functions differently for each particular application in conjunction with the embodiments, but such implementations should not be considered beyond the scope of the present application.
The embodiments of the present application may divide the electronic device into functional modules according to the above method examples. For example, each function may be assigned to its own functional module, such as a detection unit, a processing unit, or a display unit, or two or more functions may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module. It should be noted that the division of modules in the embodiments of the present application is schematic and is merely a logical function division; other division manners are possible in actual implementations.
It should be noted that, for all relevant details of the steps in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules; they are not repeated here.

The electronic device provided in this embodiment is configured to execute the above screen-casting method, and can therefore achieve the same effects as the above implementations.
In the case of an integrated unit, the electronic device may further include a processing module, a storage module, and a communication module. The processing module may be used to control and manage the actions of the electronic device; the storage module may be used to support the electronic device in executing stored program code, storing data, and the like; and the communication module may be used to support communication between the electronic device and other devices.
The processing module may be a processor or a controller, and may implement or execute the various exemplary logical blocks, modules, and circuits described in connection with the present disclosure. A processor may also be a combination implementing computing functions, for example a combination of one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor. The storage module may be a memory. The communication module may be a radio-frequency circuit, a Bluetooth chip, a Wi-Fi chip, or another device that interacts with other electronic devices.
In one embodiment, when the processing module is a processor and the storage module is a memory, the electronic device according to this embodiment may be a device having the structure shown in fig. 2.
The embodiment of the application also provides a computer readable storage medium, in which a computer program is stored, which when executed by a processor, causes the processor to execute the screen projection method of any of the above embodiments.
The present application also provides a computer program product, which when run on a computer, causes the computer to perform the above-mentioned related steps to implement the screen projection method in the above-mentioned embodiments.
In addition, the embodiments of the present application also provide an apparatus, which may specifically be a chip, a component, or a module, and may include a processor and a memory connected to each other; the memory is used to store computer-executable instructions, and when the apparatus runs, the processor may execute the computer-executable instructions stored in the memory, so that the chip performs the screen-casting method of each of the above method embodiments.
The electronic device, computer-readable storage medium, computer program product, and chip provided in this embodiment are all used to execute the corresponding methods provided above; for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above, which are not repeated here.
Those skilled in the art will clearly understand that, for convenience and brevity of description, only the above division of functional modules is illustrated as an example; in practical applications, the above functions may be allocated to different functional modules as needed, i.e., the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts shown as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a readable storage medium. Based on such an understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto; any person skilled in the art could readily conceive of changes or substitutions within the technical scope disclosed in the present application, and such changes and substitutions are intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. A screen-casting method, performed by a first electronic device, comprising:

after a second electronic device casts a screen to the first electronic device, determining, by the first electronic device according to acquired first sensor data, a first parameter corresponding to a current screen-casting scene, wherein the first parameter is used for controlling a display effect of a screen-casting interface corresponding to screen-casting data from the second electronic device;

and after processing the screen-casting data with the first parameter, displaying the screen-casting interface corresponding to the screen-casting data.
2. The method according to claim 1, wherein determining the first parameter corresponding to the current screen-casting scene according to the acquired first sensor data comprises:

determining, according to the first sensor data, an output power of the sensor corresponding to the first sensor data;

and determining the first parameter corresponding to the current screen-casting scene according to the output power of the sensor.
3. The method according to claim 2, wherein determining the first parameter corresponding to the current screen-casting scene according to the output power of the sensor comprises:

determining the first parameter corresponding to the current screen-casting scene according to the output power of the sensor and a preset threshold value.
4. The method according to claim 3, wherein determining the first parameter corresponding to the current screen-casting scene according to the output power of the sensor and the preset threshold value comprises:

determining the category of the current screen-casting scene according to the output power of the sensor and the preset threshold value;

determining the first parameter corresponding to the current screen-casting scene according to the category of the current screen-casting scene and an association relation, wherein the association relation comprises correspondences between categories of a plurality of screen-casting scenes and a plurality of parameters, the categories of the plurality of screen-casting scenes comprise the category of the current screen-casting scene, and the plurality of parameters comprise the first parameter.
5. The method according to claim 4, wherein the sensor comprises a screen-end current sensor and a power amplifier (PA)-end current sensor of the first electronic device, and determining the category of the current screen-casting scene according to the output power of the sensor and the preset threshold value comprises:

determining the category of the current screen-casting scene as a first category when the output power of the PA-end current sensor is less than or equal to a first threshold value;

determining the category of the current screen-casting scene as a second category when the output power of the PA-end current sensor is greater than or equal to a second threshold value;

and determining the category of the current screen-casting scene according to the output power of the screen-end current sensor and the output power of the PA-end current sensor when the output power of the PA-end current sensor is greater than the first threshold value and less than the second threshold value.
6. The method according to claim 5, wherein determining the category of the current screen-casting scene according to the output power of the screen-end current sensor and the output power of the PA-end current sensor comprises:

determining a first power according to the output power of the screen-end current sensor and the output power of the PA-end current sensor;

and determining the category of the current screen-casting scene as a third category when the first power lies in a first interval.
7. The method according to claim 6, wherein determining the first power according to the output power of the screen-end current sensor and the output power of the PA-end current sensor comprises:

determining the first power according to a relation comprising a×W2 + b×W1, wherein a and b are weight factors, W1 is the output power of the screen-end current sensor, and W2 is the output power of the PA-end current sensor.
8. The method according to any one of claims 2 to 7, wherein determining, according to the first sensor data, the output power of the sensor corresponding to the first sensor data comprises:

performing integral calculation on the first sensor data to determine the output power of the sensor corresponding to the first sensor data.
9. The method according to any one of claims 1 to 8, wherein the first parameter comprises at least one of: a delay in transmitting the screen-casting data, a buffer packet count of a data buffer, a first proportional threshold, a second proportional threshold, and an effect-processor switch parameter;

wherein the first proportional threshold is a threshold on the proportion of splash-screen pixels in a frame image, and the second proportional threshold is a threshold on the proportion of pixels to be repaired to the total number of pixels when a keyframe is repaired.
10. The method according to claim 9, wherein the screen-casting channel between the first electronic device and the second electronic device is a 2.4 GHz Wi-Fi link channel.
11. The method according to claim 10, wherein the delay in transmitting the screen-casting data is an operating parameter of a Wi-Fi driver chip in the first electronic device, the buffer packet count of the data buffer is an operating parameter of a main control chip in the first electronic device, the first proportional threshold and the second proportional threshold are operating parameters of a chip player in the first electronic device, and the effect-processor switch parameter is an operating parameter of an effect processor in the first electronic device.
12. An electronic device, comprising:
one or more processors;
one or more memories;
the memory stores one or more programs that, when executed by the processor, cause the electronic device to perform the method of any of claims 1-11.
13. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when executed by a processor, causes the processor to perform the method of any of claims 1 to 11.
CN202211047595.7A 2022-08-29 2022-08-29 Screen projection method and electronic equipment Active CN116033209B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211047595.7A CN116033209B (en) 2022-08-29 2022-08-29 Screen projection method and electronic equipment

Publications (2)

Publication Number Publication Date
CN116033209A (en) 2023-04-28
CN116033209B CN116033209B (en) 2023-10-20

Family

ID=86072895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211047595.7A Active CN116033209B (en) 2022-08-29 2022-08-29 Screen projection method and electronic equipment

Country Status (1)

Country Link
CN (1) CN116033209B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200260376A1 (en) * 2019-02-11 2020-08-13 Qualcomm Incorporated Network control and signaling for power circuitry configuration
CN111682614A (en) * 2020-06-16 2020-09-18 广州视源电子科技股份有限公司 Charging control method and device, data transmission equipment and storage medium
CN112000024A (en) * 2020-09-01 2020-11-27 青岛海尔科技有限公司 Method, device and equipment for controlling household appliance
CN112367542A (en) * 2020-10-21 2021-02-12 四川长虹电器股份有限公司 Terminal playing system and method for mirror image screen projection
CN112601207A (en) * 2020-12-11 2021-04-02 Oppo广东移动通信有限公司 Communication method, communication device, terminal device and storage medium
CN113691846A (en) * 2020-09-10 2021-11-23 华为终端有限公司 Multi-window screen projection method and electronic equipment
WO2021233218A1 (en) * 2020-05-19 2021-11-25 华为技术有限公司 Screen casting method, screen casting source end, screen casting destination end, screen casting system and storage medium
CN113778663A (en) * 2021-07-28 2021-12-10 荣耀终端有限公司 Scheduling method of multi-core processor and electronic equipment
CN114915834A (en) * 2021-02-08 2022-08-16 华为技术有限公司 Screen projection method and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant