WO2020108060A1 - Video processing method and apparatus, electronic device, and storage medium - Google Patents
Video processing method and apparatus, electronic device, and storage medium
- Publication number
- WO2020108060A1 (PCT/CN2019/107932)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- display
- video content
- video
- area
- electronic device
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4788—Supplemental services communicating with other users, e.g. chatting
Definitions
- the present application relates to the technical field of electronic equipment, and more specifically, to a video processing method, device, electronic equipment, and storage medium.
- the present application proposes a video processing method, device, electronic device, and storage medium to solve the above problems.
- in a first aspect, an embodiment of the present application provides a video processing method applied to an electronic device, where the electronic device includes a display screen, and the method includes: performing sub-region processing on the displayable area of the display screen to form at least two areas; determining a target area from the at least two areas; and performing display enhancement processing on the video content in the target area, where the display enhancement processing processes images in the video content by optimizing parameters to improve the image quality of the video content.
- in a second aspect, an embodiment of the present application provides a video processing apparatus applied to an electronic device, where the electronic device includes a display screen, and the apparatus includes: a processing module configured to perform sub-region processing on the displayable area of the display screen to form at least two areas; a determination module configured to determine a target area from the at least two areas; and a display enhancement module configured to perform display enhancement processing on the video content in the target area, where the display enhancement processing processes images in the video content by optimizing parameters to improve the image quality of the video content.
- in a third aspect, an embodiment of the present application provides an electronic device including a memory and a processor.
- the memory is coupled to the processor and stores instructions which, when executed by the processor, cause the processor to perform the above method.
- in a fourth aspect, an embodiment of the present application provides a computer-readable storage medium.
- the computer-readable storage medium stores program code, and the program code can be called by a processor to perform the above method.
- FIG. 1 shows a schematic flowchart of video playback provided by an embodiment of the present application
- FIG. 2 shows a schematic flowchart of a video processing method provided by an embodiment of the present application
- FIG. 3 shows a schematic flowchart of a video processing method provided by another embodiment of the present application.
- FIG. 4 shows a schematic diagram of an interface of an electronic device provided by an embodiment of the present application.
- FIG. 5 shows another schematic diagram of an interface of an electronic device provided by an embodiment of the present application
- FIG. 6 shows a schematic flowchart of step S250 of the video processing method shown in FIG. 3 of the present application.
- FIG. 7 shows a schematic flowchart of a video processing method provided by another embodiment of the present application.
- FIG. 8 shows another schematic diagram of the interface of the electronic device provided by the embodiment of the present application.
- FIG. 9 shows a schematic flowchart of step S360 of the video processing method shown in FIG. 7 of the present application.
- FIG. 10 is a schematic flowchart of a video processing method provided by another embodiment of the present application.
- FIG. 11 shows a schematic flowchart of a video processing method provided by yet another embodiment of the present application.
- FIG. 12 shows a block diagram of a video processing device provided by an embodiment of the present application.
- FIG. 13 shows a block diagram of an electronic device for performing a video processing method according to an embodiment of the present application
- FIG. 14 shows a storage unit for storing or carrying program code for implementing a video processing method according to an embodiment of the present application.
- FIG. 1 shows a video playback process. After the operating system obtains the data to be played, the next job is to parse the audio and video data.
- a typical video file is composed of a video stream and an audio stream; different video formats use different audio and video container (packaging) formats.
- the process of combining audio and video streams into a file is called muxing (muxer), while the process of separating the audio and video streams from a media file is called demuxing (demuxer).
- playing a video file therefore requires separating the audio stream and the video stream from the file stream and decoding each of them; the decoded video frames can be rendered directly, and the audio frames can be sent to the buffer of the audio output device for playback.
- the timestamps of video rendering and audio playback need to be kept synchronized.
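As an illustration of the demuxing step described above, the following minimal sketch uses Android's MediaExtractor to locate the audio and video tracks of a media file; the class name, the file path, and the downstream decoder wiring are placeholders rather than part of the original text.

```java
import android.media.MediaExtractor;
import android.media.MediaFormat;

import java.io.IOException;

// Minimal sketch of the demuxer step: separate the audio and video tracks
// of a media file so each can be fed to its own decoder.
public final class DemuxSketch {
    public static void inspectTracks(String path) throws IOException {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(path);
        for (int i = 0; i < extractor.getTrackCount(); i++) {
            MediaFormat format = extractor.getTrackFormat(i);
            String mime = format.getString(MediaFormat.KEY_MIME);
            if (mime != null && mime.startsWith("video/")) {
                extractor.selectTrack(i); // frames go to the video decoder
            } else if (mime != null && mime.startsWith("audio/")) {
                extractor.selectTrack(i); // frames go to the audio decoder
            }
        }
        extractor.release();
    }
}
```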
- specifically, video decoding may include hard decoding and soft decoding.
- in hardware decoding, a part of the video data that would originally be handed to the Central Processing Unit (CPU) is handed to the Graphics Processing Unit (GPU) instead; since the parallel computing capability of the GPU is much higher than that of the CPU, this greatly reduces the load on the CPU, and once the CPU occupancy rate is lower, other programs can be run at the same time.
- a reasonably capable processor, such as an Intel i5-2320 or any AMD quad-core processor, can perform both hard decoding and soft decoding.
- as shown in FIG. 1, the multimedia framework (Media Framework) obtains the video file to be played from the client through an API interface and hands it to the video codec (Video Decode). Media Framework is the multimedia framework of the Android system; MediaPlayer, MediaPlayerService, and Stagefrightplayer together form the basic Android multimedia framework.
- the multimedia framework adopts a client/server (C/S) structure: MediaPlayer serves as the client of the C/S structure, while MediaPlayerService and Stagefrightplayer serve as the server, which is responsible for playing multimedia files; through Stagefrightplayer, the server completes and responds to the client's requests.
- Video Decode is a super decoder that integrates the most commonly used audio and video decoding and playback functions, and is used to decode the video data.
- soft decoding means letting the CPU decode the video through software.
- hard decoding means completing the video decoding task independently through a dedicated daughter-card device, without resorting to the CPU.
- whether hard decoding or soft decoding is used, the decoded video data is sent to the layer transfer module (SurfaceFlinger), and SurfaceFlinger renders and composites the decoded video data for display on the display screen.
- SurfaceFlinger is an independent service that receives the Surfaces of all Windows as input, calculates the position of each Surface in the final composite image according to parameters such as Z-order, transparency, size, and position, and then hands the result to HWComposer or OpenGL to generate the final display buffer, which is then displayed on the specific display device.
- as shown in FIG. 1, in soft decoding the CPU decodes the video data and hands it to SurfaceFlinger for rendering and composition, while in hard decoding the video data is decoded by the GPU and then handed to SurfaceFlinger for rendering and composition.
- SurfaceFlinger calls the GPU to render and composite the image and display it on the display screen.
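To make the hard-decoding path concrete, here is a hedged sketch of configuring an Android MediaCodec decoder with an output Surface, so that decoded frames flow into the composition pipeline described above; the MediaFormat is assumed to come from a demuxer such as MediaExtractor, and the class name is hypothetical.

```java
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.Surface;

import java.io.IOException;

// Sketch of starting a hardware decoder whose output is rendered into a
// Surface; SurfaceFlinger then composites that Surface onto the display.
public final class HardDecodeSketch {
    public static MediaCodec startDecoder(MediaFormat format, Surface output)
            throws IOException {
        String mime = format.getString(MediaFormat.KEY_MIME);
        MediaCodec decoder = MediaCodec.createDecoderByType(mime);
        // Passing a Surface selects the GPU-backed output path.
        decoder.configure(format, output, null, 0);
        decoder.start();
        return decoder;
    }
}
```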
- at present, the way an electronic device processes video content is fixed.
- as one approach, a current electronic device performs display enhancement processing on the entire video content it plays; although this improves the effect of the whole video playback, it causes excessive power consumption of the electronic device.
- as another approach, a current electronic device performs no display enhancement processing on the video content it plays; although this reduces power consumption, it results in a poor display effect of the played video content.
- in view of the above problems, the inventor, after long-term research, proposed the video processing method, apparatus, electronic device, and storage medium provided in the embodiments of the present application.
- when the display screen of the electronic device is divided into multiple areas for display, display enhancement processing is performed on the video content of the area with the largest display area among the areas, so as to improve the display effect of the video content without causing excessive power consumption of the electronic device. The specific video processing method is described in detail in the following embodiments.
- FIG. 2 shows a schematic flowchart of a video processing method provided by an embodiment of the present application.
- the video processing method is used to perform display enhancement processing on the video content of the area with the largest display area among multiple areas when the display screen of the electronic device is divided into multiple areas, so as to improve the display effect of the video content without causing excessive power consumption of the electronic device.
- the video processing method is applied to the video processing device 200 shown in FIG. 12 and the electronic device 100 (FIG. 13) equipped with the video processing device 200. The following will take the electronic device as an example to describe the specific process of this embodiment.
- the electronic device applied in this embodiment may be a smart phone, a tablet computer, a wearable electronic device, a vehicle-mounted device, a gateway, etc.
- the electronic device includes a display screen, and the flow shown in FIG. 2 will be described in detail below.
- the video processing method may specifically include the following steps:
- Step S110 Perform sub-region processing on the displayable region of the display screen to form at least two regions.
- in this embodiment, the electronic device includes a display screen, where the displayable area of the display screen can be used to display content such as text, pictures, icons, or videos.
- more and more electronic devices are also equipped with a touch screen; when a touch screen is provided and a touch operation such as a drag, click, double-click, or slide on the touch screen is detected, the electronic device can respond to the user's touch operation.
- the displayable area of the display screen of the electronic device can be divided into regions; that is to say, the electronic device can have a split-screen function, and the split-screen processing based on the split-screen function can divide the display area into at least two areas, so that the content displayed by the electronic device is displayed in the at least two areas, where the content displayed in the at least two areas may be the same or different, which is not limited herein.
- as one way, the electronic device may divide the displayable area of the display screen when receiving instruction information indicating a split screen, thereby obtaining at least two areas; for example, the sub-region processing on the displayable area may yield two areas, four areas, five areas, and so on.
- the instruction information may be triggered by the user on the electronic device, or may be a message sent by another electronic device, which is not limited herein.
- the user's trigger on the electronic device may include a touch operation on the electronic device or voice information input by the user to the electronic device.
- for example, when it is detected that the user touches a physical button or a virtual button used to start a video call or start a live broadcast, instruction information indicating that the displayable area should be divided into regions can be generated, and the displayable area is divided into regions in response; a geometric sketch of such a split is given below.
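As a purely geometric illustration of the sub-region processing (not the system's actual split-screen implementation, which is handled by the window manager), the sketch below divides a displayable rectangle into a given number of equal-height regions; the class and method names are hypothetical.

```java
import android.graphics.Rect;

import java.util.ArrayList;
import java.util.List;

// Geometric sketch: split the displayable area into `count` stacked regions.
public final class RegionSplitSketch {
    public static List<Rect> splitVertically(Rect displayable, int count) {
        List<Rect> regions = new ArrayList<>();
        int step = displayable.height() / count;
        for (int i = 0; i < count; i++) {
            int top = displayable.top + i * step;
            // Last region absorbs any rounding remainder.
            int bottom = (i == count - 1) ? displayable.bottom : top + step;
            regions.add(new Rect(displayable.left, top, displayable.right, bottom));
        }
        return regions;
    }
}
```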
- Step S120 Determine a target area from the at least two areas.
- the target area determined from the at least two areas may include one area or multiple areas, which is not limited herein.
- as one way, the target area may be determined randomly from the at least two areas; as another way, the target area may be determined from the at least two areas according to a preset rule or a preset condition, for example, based on the display content of the area, the display area of the area, the position of the area, or the size of the area, which is not limited here.
- Step S130 Perform display enhancement processing on the video content in the target area, where the display enhancement processing processes images in the video content by optimizing parameters to improve the image quality of the video content.
- in this embodiment, the areas of the displayable area may display still images, dynamic images, or video images; here, at least two areas of the displayable area display video images, and the areas may display the same video image or different video images.
- as one way, the electronic device obtains the video content of the video image displayed in the target area and then performs display enhancement processing on the video content, where the display enhancement processing processes the images in the video content by optimizing parameters to improve the image quality of the video content.
- the image quality includes clarity, sharpness, lens distortion, color, resolution, color gamut, purity, and so on, and different combinations of these parameters can produce different display enhancement effects.
- the display enhancement processing of the video content can also be understood as a series of operations performed before the formal processing of the video content, including image enhancement, image restoration, and the like.
- image enhancement adds information to or transforms the data of the original image by certain means, selectively highlighting features of interest in the image or suppressing unwanted features, so that the image matches the target optimization parameters, thereby improving the image quality and enhancing the visual effect.
- specifically, performing display enhancement processing on the video content may include at least one of exposure enhancement, denoising, edge sharpening, contrast increase, or saturation increase.
- the video content displayed by the electronic device is decoded image content, which is RGBA-format data.
- to optimize the image, the RGBA-format data needs to be converted to the HSV format: specifically, the histogram of the image content is obtained, the histogram is analyzed to obtain the parameters for converting the RGBA-format data to the HSV format, and the RGBA-format data is then converted to the HSV format according to those parameters.
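The sketch below illustrates only the color-space conversion of a single pixel using Android's built-in converter; the histogram-derived conversion parameters mentioned above are not reproduced, and the class name is hypothetical.

```java
import android.graphics.Color;

// Sketch of converting one RGBA pixel to HSV so that brightness-related
// statistics can be computed on the V (value) channel.
public final class HsvSketch {
    // Returns {hue in [0,360), saturation in [0,1], value in [0,1]}.
    public static float[] rgbaToHsv(int argb) {
        float[] hsv = new float[3];
        Color.RGBToHSV((argb >> 16) & 0xFF, (argb >> 8) & 0xFF, argb & 0xFF, hsv);
        return hsv;
    }
}
```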
- the exposure enhancement is used to increase the brightness of the image.
- as one way, the histogram of the image can be used to increase the brightness values of the regions where the brightness is too low; as another way, the brightness of the image can be increased by nonlinear superposition.
- in the nonlinear superposition, T and I are images with values in [0,1]; if the effect of a single pass is not good, the algorithm can be iterated multiple times.
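The exact nonlinear-superposition formula did not survive in the text above. Purely as an assumed illustration, the sketch below uses the "screen" blend E = 1 - (1 - T)(1 - I), which is one nonlinear superposition of two [0,1] images that brightens the result and can be iterated; it should not be read as the patent's own formula.

```java
// Assumed example of nonlinear superposition for brightness enhancement.
public final class BrightnessSketch {
    // t and i are same-sized images with values in [0, 1].
    public static double[][] superpose(double[][] t, double[][] i, int iterations) {
        double[][] e = i;
        for (int n = 0; n < iterations; n++) {
            double[][] out = new double[e.length][e[0].length];
            for (int y = 0; y < e.length; y++) {
                for (int x = 0; x < e[0].length; x++) {
                    // Screen blend: always >= max(t, e), so the image brightens.
                    out[y][x] = 1.0 - (1.0 - t[y][x]) * (1.0 - e[y][x]);
                }
            }
            e = out;
        }
        return e;
    }
}
```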
- denoising the image content is used to remove the noise of the image.
- specifically, an image is often degraded by the interference and influence of various kinds of noise during its generation and transmission, which adversely affects subsequent image processing and the visual effect of the image.
- there are many kinds of noise, such as electrical noise, mechanical noise, and channel noise; therefore, in order to suppress noise, improve image quality, and facilitate higher-level processing, it is necessary to perform denoising preprocessing on the image. From the perspective of the probability distribution of the noise, it can be divided into Gaussian noise, Rayleigh noise, gamma noise, exponential noise, and uniform noise.
- specifically, the image can be denoised by a Gaussian filter, where the Gaussian filter is a linear filter that can effectively suppress noise and smooth the image. Its operating principle is similar to that of a mean filter: the weighted average of the pixels in the filter window is taken as the output.
- the difference lies in the coefficients of the window template: the template coefficients of the mean filter are all 1, while the template coefficients of the Gaussian filter decrease as the distance from the template center increases, so the Gaussian filter blurs the image less than the mean filter.
- for example, a 5×5 Gaussian filter window is generated with the center of the template as the coordinate origin; the coordinates of each position of the template are substituted into the Gaussian function, and the values obtained are the coefficients of the template. Convolving this Gaussian filter window with the image then denoises the image, as in the sketch below.
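The following sketch generates the 5×5 Gaussian window exactly as described: the template center is the origin, each coordinate is fed into the 2-D Gaussian function, and the coefficients are normalized so they sum to 1. The sigma value and class name are free choices for illustration.

```java
// Generate a normalized 5x5 Gaussian filter window for denoising.
public final class GaussianKernelSketch {
    public static double[][] kernel5x5(double sigma) {
        double[][] k = new double[5][5];
        double sum = 0.0;
        for (int y = -2; y <= 2; y++) {
            for (int x = -2; x <= 2; x++) {
                // 2-D Gaussian evaluated at the template coordinate (x, y).
                double v = Math.exp(-(x * x + y * y) / (2.0 * sigma * sigma));
                k[y + 2][x + 2] = v;
                sum += v;
            }
        }
        for (int y = 0; y < 5; y++) {
            for (int x = 0; x < 5; x++) {
                k[y][x] /= sum; // normalize so overall brightness is preserved
            }
        }
        return k;
    }
}
```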
- the edge sharpening is used to make the blurred image clearer.
- there are generally two methods for image sharpening: one is the differential method, and the other is the high-pass filtering method.
- contrast stretching is a method of image enhancement that also belongs to the grayscale transformation operations. Through grayscale transformation, the grayscale values are stretched to the entire 0-255 range, so that the contrast is obviously greatly enhanced. The following formula can be used to map the gray value of a pixel to a larger gray space:
- I(x, y) = [(I(x, y) - Imin) / (Imax - Imin)] × (MAX - MIN) + MIN;
- where Imin and Imax are the minimum and maximum gray values of the original image, and MIN and MAX are the minimum and maximum gray values of the gray space to be stretched to.
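A direct sketch of the contrast-stretching formula above, mapping gray values from [Imin, Imax] of the input image to a target range [MIN, MAX]; the class name is hypothetical.

```java
// Contrast stretching: map [imin, imax] of the input to [min, max].
public final class ContrastStretchSketch {
    public static int[][] stretch(int[][] gray, int min, int max) {
        int imin = Integer.MAX_VALUE, imax = Integer.MIN_VALUE;
        for (int[] row : gray) {
            for (int v : row) {
                imin = Math.min(imin, v);
                imax = Math.max(imax, v);
            }
        }
        int[][] out = new int[gray.length][gray[0].length];
        for (int y = 0; y < gray.length; y++) {
            for (int x = 0; x < gray[0].length; x++) {
                // Guard against a flat image (imax == imin).
                out[y][x] = (gray[y][x] - imin) * (max - min)
                        / Math.max(1, imax - imin) + min;
            }
        }
        return out;
    }
}
```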
- the video processing method provided by this embodiment of the present application performs sub-region processing on the displayable area of the display screen of the electronic device to form at least two areas, determines a target area from the at least two areas, and performs display enhancement processing on the video content in the target area, where the display enhancement processing processes images in the video content by optimizing parameters to improve the image quality of the video content. In this way, when the display screen of the electronic device is divided into multiple areas for display, the video content in the area with the largest display area among the multiple areas is subjected to display enhancement processing, improving the display effect of the video content without causing excessive power consumption of the electronic device.
- FIG. 3 is a schematic flowchart of a video processing method according to another embodiment of the present application. The following describes the process shown in FIG. 3 in detail.
- the video processing method may specifically include the following steps:
- Step S210 When the electronic device enters the video call mode, the displayable area of the display screen is divided into areas to form the at least two areas.
- in this embodiment, the current mode of the electronic device is monitored, where the current mode may include a video call mode, a voice call mode, a phone call mode, and so on, which is not limited herein; when it is detected that the electronic device enters the video call mode, the displayable area of the display screen of the electronic device is divided into regions to obtain at least two regions.
- as one way, the electronic device may initiate a video call request to another electronic device and enter the video call mode when the other electronic device accepts the request; or the other electronic device may initiate the video call request, and the electronic device enters the video call mode when accepting the request, which is not limited here.
- the video call mode may include a two-party video call mode or a multi-party video call mode (that is, a conference mode), which is not limited herein. It can be understood that when the video call mode is the two-party video call mode, the displayable area is divided into two areas, which are used to display the counterpart user and the own user of the two parties to the call; when the video call mode is the multi-party video call mode, the displayable area is divided into a plurality of areas, which are used to display the own user and a plurality of counterpart users, respectively.
- Step S220 Obtain the display area of each of the at least two areas separately.
- in this embodiment, the display area of each of the at least two regions is calculated separately. As one way, a coordinate system can be established in the displayable region to obtain the abscissa and ordinate of each of the at least two areas in that coordinate system; the absolute values of the abscissa and ordinate are then obtained, and the display area corresponding to each area is calculated from those absolute values.
- Step S230 Determine the area with the largest display area among the at least two areas as the target area.
- the display area of each area is compared, and the area with the largest display area is determined as the target area.
- for example, assume the displayable area of the electronic device is divided into three areas, namely a first area, a second area, and a third area, where the display area of the first area is larger than that of the second area and larger than that of the third area.
- the first area then has the largest display area, so the first area can be determined as the target area, as in the sketch below.
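The following sketch condenses steps S220 and S230: compute the display area of each region from its coordinates and pick the region with the largest area as the target; the class name is hypothetical.

```java
import android.graphics.Rect;

import java.util.List;

// Steps S220/S230 in miniature: select the region with the largest area.
public final class TargetAreaSketch {
    public static Rect largest(List<Rect> regions) {
        Rect target = null;
        int best = -1;
        for (Rect r : regions) {
            int area = r.width() * r.height();
            if (area > best) {
                best = area;
                target = r;
            }
        }
        return target;
    }
}
```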
- Step S240 When the video call mode is a two-party call mode, identify the video content in the target area.
- in the two-party call mode, the displayable area includes two areas: one of the two areas is used to display the own user, and the other is used to display the counterpart user.
- as one way, the area with the larger display area can be determined as the target area from the two areas, and the video content in the target area can then be identified.
- the video content in the target area includes either the own user or the counterpart user; whether the user displayed in the target area is the own user or the counterpart user can be judged through image recognition, that is, image recognition determines whether the user displayed in the larger display area is the own user or the counterpart user.
- Step S250 When the video content includes a counterpart user in both parties to the call, display enhancement processing is performed on the video content in the target area.
- when the display object of the larger display area in the displayable area is the counterpart user, it can be considered that the own user expects to see the counterpart user clearly in the larger area; the electronic device can therefore perform display enhancement processing on the video content in the target area, that is, on the counterpart user and the background where the counterpart user is located, so that the counterpart user is displayed more clearly on the electronic device, improving the user experience.
- when the display object in the larger display area of the displayable area is the own user, it can be considered that the own user does not expect to see himself or herself especially clearly; in this case, the display content is not subjected to display enhancement processing, so as to reduce the power consumption of the electronic device and increase its usage time.
- FIG. 4 shows a schematic diagram of an interface of an electronic device provided by an embodiment of the present application, and FIG. 5 shows another schematic diagram of an interface of an electronic device provided by an embodiment of the present application.
- specifically, in the interface shown in FIG. 4, the entire displayable area of the electronic device has not undergone display enhancement processing, while in the interface shown in FIG. 5, the target area of the electronic device that includes the counterpart user has undergone display enhancement processing, so its display effect is better than that of the area where the own user is located.
- FIG. 6 shows a schematic flowchart of step S250 of the video processing method shown in FIG. 3 of the present application.
- the process shown in FIG. 6 will be described in detail below.
- the method may specifically include the following steps:
- Step S251 When the video content includes the other party's users on both sides of the call, the current network status is detected.
- when the display object of the larger display area in the displayable area is the counterpart user, it can be considered that the own user expects to view the counterpart user clearly in the larger area, and display enhancement processing can be performed on the video content so that the counterpart user is displayed more clearly in the target area.
- however, when the network state is poor, loading the video resource will occupy the GPU for a long time, and performing display enhancement processing at the same time may cause severe screen flicker and stuttering. Therefore, in this embodiment, the current network state can be detected (for example, the current signal strength or the current wireless environment parameters) to determine whether performing display enhancement processing on the video content under the current network state would cause the video call to flicker or freeze.
- Step S252 Determine whether the current network state meets the specified condition.
- the electronic device is set with a specified condition, which is used as a basis for judging the current network state.
- the specified condition may be stored locally by the electronic device in advance, or may be set during judgment, which is not limited herein.
- the specified condition may be automatically configured by the electronic device, may be manually set by the user, or may be transmitted to the electronic device after configuration by the server, which is not limited herein. Further, after acquiring the current network state, the current network state is compared with the specified condition to determine whether the current network state meets the specified condition.
- taking a specified signal strength as the specified condition as an example, the current signal strength is extracted from the current network state and compared with the specified signal strength.
- when the current signal strength is less than the specified signal strength, the current network state does not meet the specified condition; when the current signal strength is not less than the specified signal strength, the current network state meets the specified condition, as sketched below.
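A minimal sketch of the comparison in steps S251-S252; how the current signal strength is obtained (e.g., from telephony APIs) and the threshold value are assumptions, as is the class name.

```java
// Steps S251/S252 in miniature: enhance only when the signal strength
// condition is met, so the call does not flicker or freeze.
public final class NetworkCheckSketch {
    public static boolean shouldEnhance(int currentDbm, int specifiedDbm) {
        // "Not less than the specified signal strength" = condition met.
        return currentDbm >= specifiedDbm;
    }
}
```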
- Step S253 When the current network state meets the specified condition, display enhancement processing is performed on the video content in the target area.
- when the current network state meets the specified condition, it indicates that display enhancement will not cause the video call to flicker or freeze, and the electronic device performs display enhancement processing on the video content in the target area in response.
- in the video processing method provided by this embodiment of the present application, when the electronic device enters the video call mode, the displayable area of the display screen is divided into at least two areas, and the display area of each of the at least two areas is acquired separately.
- the area with the largest display area among the at least two areas is determined as the target area.
- when the video call mode is the two-party call mode, the video content in the target area is identified, and when the video content includes the counterpart user, display enhancement processing is performed on the video content in the target area.
- compared with the method shown in FIG. 2, this embodiment performs display enhancement processing on the video content in the video call mode to improve the video call effect, and performs the processing only when the video content in the target area includes the counterpart user, so as to reduce the power consumption of the electronic device while improving the effect of the video call.
- FIG. 7 is a schematic flowchart of a video processing method according to another embodiment of the present application. The process shown in FIG. 7 will be described in detail below.
- the video processing method may specifically include the following steps:
- Step S310 When the electronic device enters the video call mode, perform a sub-region processing on the displayable area of the display screen to form the at least two areas.
- Step S320 Obtain the display area of each of the at least two areas separately.
- Step S330 Determine the area with the largest display area among the at least two areas as the target area.
- Step S340 When the video call mode is a two-party call mode, identify the video content in the target area.
- for the specific description of steps S310-S340, please refer to steps S210-S240, which will not be repeated here.
- Step S350 When the video content includes a counterpart user in both parties to the call, perform first display enhancement processing on the video content in the target area.
- as one way, when the video content includes the counterpart user, the display enhancement processing performed is the first display enhancement processing, where the first display enhancement processing may optimize some of the parameters of the video content among exposure enhancement, denoising, edge sharpening, contrast increase, or saturation increase.
- Step S360 When the video content includes the own user of the two parties to the call, perform second display enhancement processing on the video content in the target area, where the video content optimization quality corresponding to the first display enhancement processing is higher than the video content optimization quality corresponding to the second display enhancement processing.
- as one way, when the video content includes the own user, the electronic device can perform display enhancement processing on the video content of the target area, that is, on the user corresponding to the electronic device, using the second display enhancement processing; since the optimization quality corresponding to the first display enhancement processing is higher than that corresponding to the second display enhancement processing, the display effect of the video content after the first display enhancement processing is better than that after the second display enhancement processing.
- specifically, the second display enhancement processing may optimize some of the parameters of the video content among exposure enhancement, denoising, edge sharpening, contrast increase, or saturation increase, and the number of optimized parameters is smaller than in the first display enhancement processing, so as to reduce the power consumption of the electronic device relative to the first display enhancement processing and extend its usage time.
- FIG. 8 shows another schematic diagram of the interface of the electronic device provided by the embodiment of the present application.
- specifically, in the interface shown in FIG. 8, the target area of the electronic device that includes the counterpart user is subjected to the first display enhancement processing, while the area that includes the own user is subjected to the second display enhancement processing.
- since the video content optimization quality corresponding to the first display enhancement processing is higher than that corresponding to the second display enhancement processing, the display effect of the area where the counterpart user is located is better than that of the area where the own user is located.
- FIG. 9 shows a schematic flowchart of step S360 of the video processing method shown in FIG. 7 of the present application.
- the process shown in FIG. 9 will be described in detail below.
- the method may specifically include the following steps:
- Step S361 When the video content includes own users of both parties to the call, the current load rate of the image processor is detected.
- as one way, when the video content includes the own user, the electronic device can perform display enhancement processing on the video content of the target area, that is, on the user corresponding to the electronic device, so that the own user is displayed more clearly in the target area.
- it can be understood that display enhancement processing occupies considerable Graphics Processing Unit (GPU) resources; therefore, if the video content is subjected to display enhancement processing while the current load rate of the graphics processor is high, screen flicker or stuttering may result. In this embodiment, the current load rate of the graphics processor can therefore be detected to determine whether performing display enhancement processing on the video content at the current load rate would cause the video call to flicker or freeze.
- Step S362 Determine whether the current load rate is lower than the specified load rate.
- the electronic device is provided with a specified load rate, which is used as a basis for determining the current load rate.
- the specified load rate may be stored locally by the electronic device in advance, or may be set at the time of judgment, which is not limited herein.
- the specified load rate may be automatically configured by the electronic device, manually set by the user, or transmitted to the electronic device after being configured by the server, which is not limited herein. Further, after acquiring the current load rate of the graphics processor, the current load rate is compared with the specified load rate to determine whether the current load rate is lower than the specified load rate, as sketched below.
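A minimal sketch of the decision in steps S361-S363; how the GPU load rate is sampled is platform-specific and assumed here, as are the names.

```java
// Steps S362/S363 in miniature: apply the second display enhancement only
// when the current GPU load rate is below the specified load rate.
public final class LoadCheckSketch {
    public static boolean shouldApplySecondEnhancement(float currentLoad,
                                                       float specifiedLoad) {
        return currentLoad < specifiedLoad;
    }
}
```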
- Step S363 When the current load rate is lower than the specified load rate, perform second display enhancement processing on the video content in the target area.
- when the current load rate is lower than the specified load rate, performing display enhancement processing on the video content at the current load rate will not cause the video call to flicker or freeze, so the second display enhancement processing is performed on the video content in the target area.
- in the video processing method provided by this embodiment of the present application, when the electronic device enters the video call mode, the displayable area of the display screen is divided into at least two areas, and the display area of each of the at least two areas is acquired separately.
- the area with the largest display area among the at least two areas is determined as the target area.
- when the video call mode is the two-party call mode, the video content in the target area is identified; when the video content includes the counterpart user, first display enhancement processing is performed on the video content in the target area, and when the video content includes the own user, second display enhancement processing is performed, where the video content optimization quality corresponding to the first display enhancement processing is higher than that corresponding to the second display enhancement processing.
- compared with the method shown in FIG. 2, this embodiment performs display enhancement processing both when the video content includes the counterpart user and when it includes the own user, with the display effect after the first display enhancement processing being better than that after the second display enhancement processing, so as to suit different call situations.
- FIG. 10 is a schematic flowchart of a video processing method according to another embodiment of the present application. The process shown in FIG. 10 will be described in detail below.
- the video processing method may specifically include the following steps:
- Step S410 Perform sub-region processing on the displayable region of the display screen to form at least two regions.
- Step S420 Determine a target area from the at least two areas.
- steps S410-S420 please refer to steps S110-S120, which will not be repeated here.
- Step S430 Identify the video content in the target area.
- Step S440 Determine whether the video content includes a person image.
- in this embodiment, the electronic device identifies the video content in the target area and determines, according to the recognition result, whether the video content includes a person image. Understandably, the recognition result may include no person image, one person image, or multiple person images, which is not limited herein; when the recognition result indicates that at least one person image is included, it can be determined that the video content includes a person image.
- Step S450 When the video content includes the person image, perform display enhancement processing on the video content in the target area, where the display enhancement processing processes the images in the video content by optimizing parameters to improve the image quality of the video content.
- the electronic device may perform display enhancement processing on the video content, so that the person in the video content can be more clearly displayed in the target area, improving the display effect.
- the video processing method provided by another embodiment of the present application divides the displayable area of the display screen into at least two areas, obtains the display area of each of the at least two areas, determines the area with the largest display area from the at least two areas as the target area, identifies the video content in the target area to determine whether it includes a person image, and, when the video content includes a person image, performs display enhancement processing on the video content in the target area.
- compared with the method shown in FIG. 2, this embodiment performs display enhancement processing on the video content only when the video content includes person images, so as to improve the display effect of the video content while reducing the power consumption of the electronic device.
- FIG. 11 is a schematic flowchart of a video processing method according to yet another embodiment of the present application. The following describes the process shown in FIG. 11 in detail.
- the video processing method may specifically include the following steps:
- Step S510 Perform sub-region processing on the displayable area of the display screen to form at least two areas.
- for the specific description of step S510, please refer to step S110, which will not be repeated here.
- Step S520 Obtain the display content of each of the at least two areas separately.
- in this embodiment, the display content of each of the at least two regions is detected separately; specifically, the source of the display content can be detected, for example, whether the display content comes from the local storage of the electronic device or from the network.
- as one way, the interface through which the display content is obtained can be detected to determine the source of the display content: when the display content is read from a specified file path, the source can be determined to be local; when the display content is obtained from a specified network address, the source can be determined to be the network. The specific method is not limited here; one simple heuristic is sketched below.
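As an assumed illustration of steps S520-S530, the sketch below infers whether display content is a local resource from how it is addressed; treating a "file" scheme (or no scheme) as local and anything else as network is a heuristic for illustration only, and the class name is hypothetical.

```java
import java.net.URI;

// Heuristic sketch: classify display content as local or network-sourced.
public final class SourceCheckSketch {
    public static boolean isLocalResource(String location) {
        URI uri = URI.create(location);
        String scheme = uri.getScheme();
        // A plain file path or file:// URI is treated as a local resource;
        // http://, https://, rtsp:// and the like are treated as network.
        return scheme == null || "file".equalsIgnoreCase(scheme);
    }
}
```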
- Step S530 Determine whether the displayed content is a local resource.
- the source of the displayed content can be judged based on the above detection result.
- Step S540 When the display content is a non-local resource, determine the area where the display content is located as the target area.
- it can be understood that when the display content is a local resource, the display content may be considered to be collected and displayed in real time by the camera of the electronic device, while when the display content is a non-local resource, the display content may be considered to be transmitted and displayed by another electronic device connected to the electronic device.
- when the display content is transmitted and displayed by another electronic device connected to the electronic device, the display content may be regarded as the counterpart user of the two parties to the call; therefore, the area where that display content is located can be determined as the target area, and display enhancement processing is performed on the display content in that area to improve the display effect.
- when the display content is collected by the camera of the electronic device, the display content can be regarded as the own user of the two parties to the call; therefore, the area where that display content is located can be determined as a non-target area, and display enhancement processing is not performed on the display content in that area, so as to reduce power consumption.
- Step S550 Perform display enhancement processing on the video content in the target area, where the display enhancement processing processes images in the video content by optimizing parameters to improve the image quality of the video content.
- for the specific description of step S550, please refer to step S130, which will not be repeated here.
- the video processing method provided by yet another embodiment of the present application divides the displayable area of the display screen into at least two areas, obtains the display content of each of the at least two areas, and identifies whether the display content is a local resource; when the display content is a non-local resource, the area where the display content is located is taken as the target area, and display enhancement processing is performed on the video content in the target area.
- compared with the method shown in FIG. 2, this embodiment can determine the target area according to the source of the display content, improving the display effect.
- FIG. 12 illustrates a block diagram of a video processing apparatus 200 provided by an embodiment of the present application.
- the video processing device 200 is applied to the above-mentioned electronic device, the electronic device includes a display screen, and the block diagram shown in FIG. 12 will be described below.
- the video processing device 200 includes a processing module 210, a determination module 220, and a display enhancement module 230, wherein:
- the processing module 210 is configured to perform area-by-area processing on the displayable area of the display screen to form at least two areas. Further, the processing module 210 includes: a processing submodule, wherein:
- the processing sub-module is used for performing sub-region processing on the displayable area of the display screen when the electronic device enters the video call mode to form the at least two areas.
- the determining module 220 is configured to determine a target area from the at least two areas. Further, the determination module 220 includes: a display area acquisition submodule, a first determination submodule, a display content determination submodule, and a second determination submodule, wherein:
- the display area acquisition sub-module is configured to separately obtain the display area of each of the at least two areas.
- the first determining submodule is configured to determine the area with the largest display area among the at least two areas as the target area.
- the display content obtaining sub-module is used to obtain the display content of each of the at least two areas separately.
- the display content judgment sub-module is used to judge whether the display content is a local resource.
- the second determining submodule is configured to determine the area where the display content is located as the target area when the display content is a non-local resource.
- the display enhancement module 230 is configured to perform display enhancement processing on the video content in the target area. Further, the display enhancement module 230 includes an identification submodule, a first display enhancement submodule, a second display enhancement submodule, a third display enhancement submodule, a video content identification submodule, a video content judgment submodule, and a fourth display enhancement submodule, wherein:
- the identification sub-module is used to identify the video content in the target area.
- the first display enhancement sub-module is used for performing display enhancement processing on the video content in the target area when the video content includes the counterpart user in both parties of the call. Further, the first display enhancement sub-module includes: a network status detection unit, a network status judgment unit, and a first display enhancement unit, wherein:
- the network state detection unit is configured to detect the current network state when the video content includes the other party's users on both sides of the call.
- the network state judging unit is used to judge whether the current network state meets a specified condition.
- the first display enhancement unit is configured to perform display enhancement processing on the video content in the target area when the current network state meets the specified condition.
- the second display enhancement submodule is configured to perform first display enhancement processing on the video content in the target area when the video content includes the counterpart user in both parties of the call.
- a third display enhancement submodule configured to perform second display enhancement processing on the video content in the target area when the video content includes the own users of both parties in the call, wherein the first display enhancement processing corresponds to The video content optimization quality is higher than the video content optimization quality corresponding to the second display enhancement processing.
- the third display enhancement sub-module includes a load rate detection unit, a load rate determination unit, and a second display enhancement unit, wherein:
- the load factor detection unit is configured to detect the current load factor of the image processor when the video content includes own users in both parties to the call.
- the load rate judging unit is used to judge whether the current load rate is lower than the specified load rate.
- the second display enhancement unit is configured to perform second display enhancement processing on the video content in the target area when the current load rate is lower than the specified load rate.
- the video content identification sub-module is used to identify the video content in the target area.
- the video content judgment sub-module is used to judge whether the video content includes a person image.
- the fourth display enhancement submodule is configured to perform display enhancement processing on the video content in the target area when the video content includes the person image.
- the coupling between the modules may be electrical, mechanical, or other forms of coupling.
- each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
- the above integrated modules may be implemented in the form of hardware or software function modules.
- FIG. 13 shows a structural block diagram of an electronic device 100 provided by an embodiment of the present application.
- the electronic device 100 may be an electronic device capable of running application programs, such as a smartphone, a tablet computer, or an e-book reader.
- the electronic device 100 in this application may include one or more of the following components: a processor 110, a memory 120, a display screen 130, a codec 140, and one or more application programs, where the one or more application programs may be stored in the memory 120 and configured to be executed by the one or more processors 110, and the one or more programs are configured to perform the method described in the foregoing method embodiments.
- the processor 110 may include one or more processing cores.
- the processor 110 connects the various parts of the entire electronic device 100 using various interfaces and lines, and performs the various functions of the electronic device 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 120 and calling the data stored in the memory 120.
- optionally, the processor 110 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA).
- the processor 110 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like.
- the CPU mainly handles the operating system, user interface, application programs, and so on; the GPU is responsible for rendering and drawing the display content; and the modem handles wireless communication. It can be understood that the modem may not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
- the memory 120 may include random access memory (RAM) or read-only memory (ROM).
- the memory 120 may be used to store instructions, programs, codes, code sets, or instruction sets.
- the memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, or an image playback function), and instructions for implementing the foregoing method embodiments.
- the data storage area may also store data created by the terminal 100 in use (such as a phone book, audio and video data, and chat history data).
- the codec 140 can be used to encode or decode video data and then transmit the decoded video data to the display screen 130 for display, where the codec 140 can be a GPU, a dedicated DSP, an FPGA, an ASIC chip, and so on.
- FIG. 14 shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application.
- the computer readable medium 300 stores program codes, and the program codes can be called by a processor to execute the method described in the above method embodiments.
- the computer-readable storage medium 300 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
- the computer-readable storage medium 300 includes a non-transitory computer-readable storage medium.
- the computer-readable storage medium 300 has a storage space for the program code 310 that performs any of the method steps described above. These program codes can be read from or written into one or more computer program products.
- for example, the program code 310 may be compressed in an appropriate form.
- in summary, the video processing method, apparatus, electronic device, and storage medium provided in the embodiments of the present application perform sub-region processing on the displayable area of the display screen of the electronic device to form at least two areas, obtain the display area of each of the at least two areas, determine the area with the largest display area from the at least two areas as the target area, and perform display enhancement processing on the video content in the target area, so that when the display screen of the electronic device is divided into multiple areas, the video content of the area with the largest display area is enhanced, improving the display effect of the video content without causing excessive power consumption of the electronic device.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- General Engineering & Computer Science (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The embodiments of the present application disclose a video processing method, apparatus, electronic device, and storage medium, relating to the technical field of electronic devices. The method is applied to an electronic device that includes a display screen, and includes: performing region-division processing on the displayable area of the display screen to form at least two regions; determining a target region from the at least two regions; and performing display enhancement processing on the video content in the target region, where the display enhancement processing improves the image quality of the video content by processing the images in the video content with optimization parameters. The video processing method, apparatus, electronic device, and storage medium provided in the embodiments of the present application perform display enhancement processing on the video content of the region with the largest display area among multiple regions when the display screen of the electronic device is divided into multiple display regions, so as to improve the display effect of the video content without causing excessive power consumption of the electronic device.
Description
Cross-Reference to Related Applications
This application claims priority to Chinese Application No. CN201811428039.8, filed on November 27, 2018, the entire contents of which are hereby incorporated by reference for all purposes.
The present application relates to the technical field of electronic devices, and more specifically, to a video processing method, apparatus, electronic device, and storage medium.
With the development of science and technology, electronic devices have become one of the most commonly used electronic products in people's daily lives. Moreover, users often use electronic devices for live streaming, video calls, and the like.
Summary of the Invention
In view of the above problems, the present application proposes a video processing method, apparatus, electronic device, and storage medium to solve the above problems.
In a first aspect, an embodiment of the present application provides a video processing method applied to an electronic device including a display screen. The method includes: performing region-division processing on the displayable area of the display screen to form at least two regions; determining a target region from the at least two regions; and performing display enhancement processing on the video content in the target region, where the display enhancement processing improves the image quality of the video content by processing the images in the video content with optimization parameters.
In a second aspect, an embodiment of the present application provides a video processing apparatus applied to an electronic device including a display screen. The apparatus includes: a processing module configured to perform region-division processing on the displayable area of the display screen to form at least two regions; a determining module configured to determine a target region from the at least two regions; and a display enhancement module configured to perform display enhancement processing on the video content in the target region, where the display enhancement processing improves the image quality of the video content by processing the images in the video content with optimization parameters.
In a third aspect, an embodiment of the present application provides an electronic device including a memory and a processor, the memory being coupled to the processor and storing instructions that, when executed by the processor, cause the processor to perform the above method.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing program code that can be invoked by a processor to perform the above method.
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 shows a schematic flowchart of video playback provided by an embodiment of the present application;
FIG. 2 shows a schematic flowchart of a video processing method provided by one embodiment of the present application;
FIG. 3 shows a schematic flowchart of a video processing method provided by another embodiment of the present application;
FIG. 4 shows a schematic diagram of one interface of an electronic device provided by an embodiment of the present application;
FIG. 5 shows a schematic diagram of another interface of an electronic device provided by an embodiment of the present application;
FIG. 6 shows a schematic flowchart of step S250 of the video processing method shown in FIG. 3;
FIG. 7 shows a schematic flowchart of a video processing method provided by yet another embodiment of the present application;
FIG. 8 shows a schematic diagram of still another interface of an electronic device provided by an embodiment of the present application;
FIG. 9 shows a schematic flowchart of step S360 of the video processing method shown in FIG. 7;
FIG. 10 shows a schematic flowchart of a video processing method provided by a further embodiment of the present application;
FIG. 11 shows a schematic flowchart of a video processing method provided by a still further embodiment of the present application;
FIG. 12 shows a block diagram of the modules of a video processing apparatus provided by an embodiment of the present application;
FIG. 13 shows a block diagram of an electronic device for performing the video processing method according to an embodiment of the present application;
FIG. 14 shows a storage unit for saving or carrying program code implementing the video processing method according to an embodiment of the present application.
In order to enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application.
Referring to FIG. 1, FIG. 1 shows the flow of video playback. Specifically, when the operating system obtains the data to be played, the next task is to parse the audio and video data. A typical video file consists of two parts, a video stream and an audio stream, and different video formats use different audio/video container formats. The process of combining an audio stream and a video stream into a file is called muxing; conversely, the process of separating the audio stream and the video stream from a media file is called demuxing. Playing a video file therefore requires separating the audio stream and the video stream from the file stream and decoding each of them; the decoded video frames can be rendered directly, and the audio frames can be sent to the buffer of the audio output device for playback. Of course, the timestamps of video rendering and audio playback need to be kept synchronized.
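As an informal illustration of the demux-and-decode flow described above (this sketch is not part of the original disclosure), the following Python snippet uses the PyAV library to separate a file into its audio and video streams and decode each; the file name input.mp4 is a placeholder assumption:

```python
import av  # PyAV: Python bindings for FFmpeg

container = av.open("input.mp4")    # open the media file
video = container.streams.video[0]  # the muxed video stream
audio = container.streams.audio[0]  # the muxed audio stream

# demux(): split the file stream back into per-stream packets
for packet in container.demux(video, audio):
    for frame in packet.decode():   # decode packets into frames
        # Presentation timestamp in seconds; a real player would use this
        # to keep video rendering and audio playback synchronized.
        if frame.pts is not None:
            pts_seconds = float(frame.pts * frame.time_base)
```

A real player would hand the decoded video frames to a renderer and push the audio frames into the audio device's buffer, pacing both by these presentation timestamps.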
Specifically, video decoding may include hard decoding and soft decoding. Hardware decoding hands over to the Graphics Processing Unit (GPU) part of the video data that was originally processed entirely by the Central Processing Unit (CPU), and the parallel computing capability of the GPU is far higher than that of the CPU, which can greatly reduce the load on the CPU; once the CPU occupancy is lower, some other programs can run at the same time. Of course, a sufficiently capable processor, such as an Intel i5-2320 or any quad-core AMD processor, can perform both hard decoding and soft decoding.
Specifically, as shown in FIG. 1, the Media Framework obtains the video file to be played by the client through an API interface with the client and hands it over to the video codec (Video Decode). The Media Framework is the multimedia framework in the Android system; MediaPlayer, MediaPlayerService, and Stagefrightplayer together constitute the basic framework of Android multimedia. The multimedia framework adopts a client/server (C/S) structure, with MediaPlayer as the client side and MediaPlayerService and Stagefrightplayer as the server side, which bears the responsibility of playing multimedia files; through Stagefrightplayer, the server side completes and responds to the client side's requests. Video Decode is a super decoder integrating the most commonly used audio and video decoding and playback functions, and is used to decode the video data.
Soft decoding means having the CPU decode the video through software, while hard decoding means completing the video decoding task independently through a dedicated daughter-card device without relying on the CPU.
Whether hard decoding or soft decoding is used, after the video data is decoded, the decoded video data is sent to the layer-passing module (SurfaceFlinger), which renders and composites the decoded video data and then displays it on the display screen. SurfaceFlinger is an independent Service that receives the Surfaces of all Windows as input, calculates the position of each Surface in the final composited image according to parameters such as ZOrder, transparency, size, and position, and then hands them over to HWComposer or OpenGL to generate the final display Buffer, which is then displayed on a specific display device. As shown in FIG. 1, in soft decoding the CPU decodes the video data and hands it to SurfaceFlinger for rendering and compositing, while in hard decoding the GPU decodes it and then hands it to SurfaceFlinger for rendering and compositing. SurfaceFlinger in turn calls the GPU to render and composite the image and display it on the display screen.
At present, electronic devices process video content in a fixed manner. For example, in one approach, current electronic devices perform display enhancement processing on all of the video content they play; although this approach improves the playback effect of the entire video, it causes excessive power consumption of the electronic device. In another approach, current electronic devices perform no display enhancement processing on any of the video content they play; although this approach reduces the power consumption of the electronic device, it results in a poor display effect of the played video content. In view of the above problems, after long-term research the inventors discovered and proposed the video processing method, apparatus, electronic device, and storage medium provided in the embodiments of the present application, which perform display enhancement processing on the video content of the region with the largest display area among multiple regions when the display screen of the electronic device is divided into multiple display regions, so as to improve the display effect of the video content without causing excessive power consumption of the electronic device. The specific video processing method is described in detail in the following embodiments.
Embodiments
Referring to FIG. 2, FIG. 2 shows a schematic flowchart of a video processing method provided by one embodiment of the present application. The video processing method is used to perform display enhancement processing on the video content of the region with the largest display area among multiple regions when the display screen of the electronic device is divided into multiple display regions, so as to improve the display effect of the video content without causing excessive power consumption of the electronic device. In a specific embodiment, the video processing method is applied to the video processing apparatus 200 shown in FIG. 12 and to the electronic device 100 (FIG. 13) equipped with the video processing apparatus 200. The following takes an electronic device as an example to explain the specific flow of this embodiment. Of course, it can be understood that the electronic device to which this embodiment is applied may be a smartphone, a tablet computer, a wearable electronic device, an in-vehicle device, a gateway, or the like, which is not specifically limited here. In this embodiment, the electronic device includes a display screen. The flow shown in FIG. 2 will be elaborated below; the video processing method may specifically include the following steps:
Step S110: Perform region-division processing on the displayable area of the display screen to form at least two regions.
In this embodiment, the electronic device includes a display screen, and the displayable area of the display screen can be used to display content such as text, pictures, icons, or video. With the development of touch technology, more and more electronic devices are equipped with display screens that are also touchscreens. When a touchscreen is provided and a touch operation such as dragging, clicking, double-clicking, or sliding by the user is detected on it, the electronic device can respond to the user's touch operation.
In one approach, the displayable area of the display screen of the electronic device can be divided into regions; that is, the electronic device may have a split-screen function. Split-screen processing based on the split-screen function can divide the displayable area of the electronic device into at least two regions so that the content displayed by the electronic device is shown in the at least two regions, where the content displayed in the at least two regions may be the same or different, which is not limited here.
In one approach, the electronic device may perform region-division processing on the displayable area of the display screen upon receiving instruction information indicating split-screen, thereby obtaining at least two regions; for example, the displayable area may be divided into two regions, four regions, five regions, and so on. The instruction information may be triggered by the user on the electronic device or by a message sent by another electronic device, which is not limited here. Triggering by the user on the electronic device may include a touch operation performed by the user on the electronic device and voice information input by the user into the electronic device. Taking a touch operation as an example, when the electronic device detects that the user touches a designated physical button or a designated virtual button, it can generate instruction information indicating that the displayable area should be divided into regions; for example, when a touch on a physical or virtual button used to start a video call or start a live stream is detected, the displayable area can be divided into regions in response.
Step S120: Determine a target region from the at least two regions.
Further, in this embodiment, the target region determined from the at least two regions may include one region or multiple regions, which is not limited here. The target region may be determined from the at least two regions randomly, according to a preset rule, or according to a preset condition, for example, based on the display content of the target region, the area of the target region, the position of the target region, the size of the target region, and so on, which is not limited here.
Step S130: Perform display enhancement processing on the video content in the target region, where the display enhancement processing improves the image quality of the video content by processing the images in the video content with optimization parameters.
It can be understood that the multiple regions of the displayable area may display static images, dynamic images, or video images. Optionally, in this embodiment, the at least two regions of the displayable area all display video images, and each of the at least two regions may display the same video image or a different video image. Further, the electronic device obtains the video content of the video image displayed in the target region and then performs display enhancement processing on the video content, where the display enhancement processing processes the images in the video content with optimization parameters to improve the image quality of the video content. The image quality includes definition, sharpness, lens distortion, color, resolution, color gamut range, purity, and so on, and different combinations of these can produce different display enhancement effects. It should be noted that display enhancement processing of video content can also be understood as a series of operations performed before the video content is formally processed, including image enhancement and image restoration. Image enhancement attaches some information to the original image or transforms its data by certain means, selectively highlighting features of interest in the image or suppressing certain unwanted features, so that the image matches the target optimization parameters, thereby improving image quality and enhancing the visual effect.
Performing display enhancement processing on the video content may include at least one of exposure enhancement, denoising, edge sharpening, contrast increase, or saturation increase of the video content.
Specifically, the video content displayed by the electronic device is decoded image content. Since the decoded image content is data in RGBA format, in order to optimize the image content, the RGBA data needs to be converted into HSV format. Specifically, a histogram of the image content is obtained, statistics are computed on the histogram to obtain the parameters for converting the RGBA data into HSV format, and the RGBA data is then converted into HSV format according to those parameters.
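The conversion described above is parameterized by histogram statistics; as a minimal sketch of the color-space step only (the histogram parameterization is omitted, OpenCV's stock converter is substituted, and a uint8 frame is assumed), one could write:

```python
import cv2
import numpy as np

def rgba_to_hsv(rgba: np.ndarray) -> np.ndarray:
    """Convert an H x W x 4 uint8 RGBA frame to HSV, discarding alpha."""
    # alpha carries no color information for the optimization, so drop it
    rgb = np.ascontiguousarray(rgba[..., :3])
    return cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)
```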
Exposure enhancement is used to increase the brightness of the image. Regions with lower brightness values can have their brightness increased via the image histogram; alternatively, the image brightness can be increased by nonlinear superposition. Specifically, if I denotes the darker image to be processed and T denotes the brighter processed image, the exposure enhancement takes the form T(x) = I(x) + (1 − I(x)) * I(x), where both T and I are images with values in [0, 1]. If a single pass is not effective enough, the algorithm can be iterated multiple times.
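As a minimal sketch of this nonlinear superposition (assuming the frame is already normalized to [0, 1], as the formula requires), the iterated update could look like:

```python
import numpy as np

def enhance_exposure(image: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Brighten a [0, 1] image with T(x) = I(x) + (1 - I(x)) * I(x), iterated."""
    result = image.astype(np.float64)
    for _ in range(iterations):
        result = result + (1.0 - result) * result
    return result

# One pass lifts a mid-gray value of 0.4 to 0.4 + 0.6 * 0.4 = 0.64.
frame = np.full((2, 2), 0.4)
print(enhance_exposure(frame))
```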
Denoising the image content is used to remove noise from the image. Specifically, images are often degraded by interference from various kinds of noise during generation and transmission, which adversely affects subsequent image processing and the visual effect of the image. There are many kinds of noise, such as electrical noise, mechanical noise, channel noise, and other noise. Therefore, in order to suppress noise, improve image quality, and facilitate higher-level processing, the image must be denoised as a preprocessing step. In terms of the probability distribution of the noise, it can be classified into Gaussian noise, Rayleigh noise, gamma noise, exponential noise, and uniform noise.
Specifically, the image can be denoised with a Gaussian filter, which is a linear filter that can effectively suppress noise and smooth the image. Its working principle is similar to that of a mean filter: both take the mean of the pixels within the filter window as the output. The coefficients of its window template, however, differ from those of the mean filter, whose template coefficients are all identically 1; the template coefficients of a Gaussian filter decrease as the distance from the template center increases. Therefore, the Gaussian filter blurs the image less than the mean filter.
For example, a 5×5 Gaussian filter window is generated, sampling with the center of the template as the coordinate origin. The coordinates of each position of the template are substituted into the Gaussian function, and the resulting values are the coefficients of the template. Convolving this Gaussian filter window with the image then denoises the image.
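A sketch of that construction (assumptions: a 5×5 window, σ = 1.0, and SciPy's convolve2d for the convolution; none of these specifics are fixed by the text) might read:

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Sample a 2-D Gaussian with the template center as the coordinate origin."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    kernel = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()  # normalize so overall brightness is preserved

def gaussian_denoise(gray: np.ndarray) -> np.ndarray:
    """Denoise a single-channel image by convolving it with the Gaussian window."""
    return convolve2d(gray, gaussian_kernel(), mode="same", boundary="symm")
```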
Edge sharpening is used to make a blurred image clearer. There are generally two methods of image sharpening: one is the differentiation method, and the other is the high-pass filtering method.
Contrast increase is used to enhance the image quality of the image so that the colors within the image are more vivid. Specifically, contrast stretching is one method of image enhancement and also belongs to the gray-level transformation operations. Through gray-level transformation, the gray values are stretched to the entire 0–255 interval, so the contrast is clearly and substantially enhanced. The following formula can be used to map the gray value of a pixel to a larger gray space:
I(x, y) = [(I(x, y) − Imin) / (Imax − Imin)] × (MAX − MIN) + MIN;
where Imin and Imax are the minimum and maximum gray values of the original image, and MIN and MAX are the minimum and maximum gray values of the gray space to be stretched to.
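A direct sketch of this stretch (assuming an 8-bit single-channel image and a target range of [MIN, MAX] = [0, 255]):

```python
import numpy as np

def stretch_contrast(image: np.ndarray, new_min: int = 0, new_max: int = 255) -> np.ndarray:
    """Map [Imin, Imax] of the input linearly onto [new_min, new_max]."""
    i_min, i_max = int(image.min()), int(image.max())
    if i_max == i_min:  # flat image: nothing to stretch
        return image.copy()
    scaled = (image.astype(np.float64) - i_min) / (i_max - i_min)
    return (scaled * (new_max - new_min) + new_min).astype(np.uint8)
```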
In the video processing method provided by one embodiment of the present application, the displayable area of the display screen of the electronic device is divided into regions to form at least two regions, a target region is determined from the at least two regions, and display enhancement processing is performed on the video content in the target region, where the display enhancement processing improves the image quality of the video content by processing the images in the video content with optimization parameters. Thus, when the display screen of the electronic device is divided into multiple display regions, display enhancement processing is performed on the video content of the region with the largest display area among the multiple regions, so as to improve the display effect of the video content without causing excessive power consumption of the electronic device.
Referring to FIG. 3, FIG. 3 shows a schematic flowchart of a video processing method provided by another embodiment of the present application. The flow shown in FIG. 3 will be elaborated below; the video processing method may specifically include the following steps:
Step S210: When the electronic device enters a video call mode, perform region-division processing on the displayable area of the display screen to form the at least two regions.
In this embodiment, the current mode of the electronic device is monitored. The current mode of the electronic device may include a video call mode, a voice call mode, a telephone call mode, and so on, which is not limited here. When it is detected that the electronic device enters the video call mode, region-division processing is performed on the displayable area of the display screen of the electronic device to obtain at least two regions. It can be understood that the electronic device may initiate a video call request to another electronic device and enter the video call mode when the other electronic device accepts the request; or another electronic device may initiate a video call request and the electronic device enters the video call mode when it accepts the request, which is not limited here.
In one approach, the video call mode may include a two-party video call mode or a multi-party video call mode, i.e., a conference mode, which is not limited here. It can be understood that when the video call mode is a two-party video call mode, the displayable area is divided into two regions, used respectively to display the remote user and the local user of the call; when the video call mode is a multi-party video call mode, the displayable area is divided into multiple regions, used respectively to display the local user and the multiple remote users of the call.
Step S220: Obtain the display area of each of the at least two regions respectively.
In one approach, after the displayable area is divided into at least two regions, the display areas of the at least two regions are calculated respectively. As one way of doing this, a coordinate system can be established in the displayable area, the abscissa and ordinate of each of the at least two regions in the coordinate system are obtained respectively, the absolute value of the abscissa and the absolute value of the ordinate are then obtained, and finally the display area corresponding to each region is calculated based on the absolute values of the abscissa and the ordinate.
Step S230: Determine the region with the largest display area among the at least two regions as the target region.
In this embodiment, after the display area of each of the at least two regions is calculated, the display areas of the regions are compared, and the region with the largest display area is determined as the target region. For example, the displayable area of the electronic device is divided into three regions, namely a first region, a second region, and a third region, where the display area of the first region is larger than the display areas of the second region and the third region. It can then be seen that, in the displayable area of the electronic device, the first region has the largest display area, and therefore the first region can be determined as the target region.
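As an informal illustration of steps S220–S230 (the corner coordinates below are hypothetical and not part of the original disclosure), each region's display area can be computed from its coordinates and the largest one selected:

```python
from typing import Dict, Tuple

# A region described by its top-left and bottom-right corners in screen coordinates.
Region = Tuple[Tuple[float, float], Tuple[float, float]]

def display_area(region: Region) -> float:
    (x1, y1), (x2, y2) = region
    return abs(x2 - x1) * abs(y2 - y1)  # area from absolute coordinate differences

def pick_target_region(regions: Dict[str, Region]) -> str:
    """Step S230: the region with the largest display area is the target region."""
    return max(regions, key=lambda name: display_area(regions[name]))

regions = {
    "first":  ((0, 0), (720, 900)),
    "second": ((720, 0), (1080, 450)),
    "third":  ((720, 450), (1080, 900)),
}
print(pick_target_region(regions))  # -> "first"
```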
Step S240: When the video call mode is a two-party call mode, identify the video content in the target region.
In one approach, if the video call mode is a two-party call mode, the displayable area includes two regions, one of which is used to display the local user and the other the remote user. In this embodiment, the region with the larger display area can be determined from the two regions as the target region, and the video content in that target region is then identified. It can be understood that the video content in the target region includes either the local user or the remote user, and image recognition can be used to determine whether the user displayed in the target region is the local user or the remote user, that is, to determine whether the user displayed in the larger display region is the local user or the remote user.
Step S250: When the video content includes the remote user of the two call parties, perform display enhancement processing on the video content in the target region.
In this embodiment, if the recognition result indicates that the video content includes the remote user of the two call parties, the display object of the larger display region in the displayable area is the remote user, and it can be assumed that the local user wishes to see the remote user clearly by displaying the remote user in the larger region. In response, the electronic device can perform display enhancement processing on the video content in the target region, that is, on the remote user and the background the remote user is in, so that the remote user is displayed more clearly on the electronic device, improving the user experience.
If, on the other hand, the recognition result indicates that the video content includes the local user of the two call parties, the display object of the larger display region in the displayable area is the local user, and it can be assumed that the local user does not wish to see the other party clearly. In that case, the electronic device may refrain from performing display enhancement processing on the display content of the entire display region, so as to reduce the power consumption of the electronic device and extend its usage time.
For example, let A denote the remote user and B the local user; see FIG. 4 and FIG. 5, where FIG. 4 shows a schematic diagram of one interface of an electronic device provided by an embodiment of the present application, and FIG. 5 shows a schematic diagram of another interface. Specifically, in the interface shown in FIG. 4, the entire displayable area of the electronic device has not undergone display enhancement processing, whereas in the interface shown in FIG. 5 the target region of the electronic device that includes the remote user has undergone display enhancement processing and therefore has a better display effect than the region where the local user is displayed.
Referring to FIG. 6, FIG. 6 shows a schematic flowchart of step S250 of the video processing method shown in FIG. 3. The flow shown in FIG. 6 will be elaborated below; the method may specifically include the following steps:
Step S251: When the video content includes the remote user of the two call parties, detect the current network state.
In one approach, if the recognition result indicates that the video content includes the remote user of the two call parties, the display object of the larger display region in the displayable area is the remote user, and it can be assumed that the local user wishes to see the remote user clearly by displaying the remote user in the larger region, so display enhancement processing can be performed on the video content to display the remote user more clearly in the target region. However, under a poor network state, loading a video resource occupies the GPU for a long time, and performing display enhancement processing on top of that can cause severe screen flickering and freezing. Therefore, in this embodiment, the current network state can be detected, for example by detecting the current signal strength or the current wireless environment parameters, to determine whether performing display enhancement processing on the video content under the current network state would cause flickering or stuttering of the video call.
Step S252: Determine whether the current network state satisfies a specified condition.
In this embodiment, the electronic device is provided with a specified condition, which serves as the basis for judging the current network state. It can be understood that the specified condition may be pre-stored locally by the electronic device or set at the time of judgment, which is not limited here. In addition, the specified condition may be configured automatically by the electronic device, set manually by the user, or configured by a server and transmitted to the electronic device, which is not limited here. Further, after the current network state is obtained, it is compared with the specified condition to determine whether the current network state satisfies the specified condition.
For example, taking the specified condition to be a specified signal strength: after the current network state is detected, the current signal strength is extracted from the current network state and compared with the specified signal strength. When the current signal strength is less than the specified signal strength, the current signal strength does not satisfy the specified condition; when the current signal strength is not less than the specified signal strength, the current signal strength satisfies the specified condition.
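A minimal sketch of this gate (the -85 dBm threshold and the display_enhance routine are assumptions introduced for illustration; the text fixes neither):

```python
SPECIFIED_SIGNAL_STRENGTH_DBM = -85.0  # hypothetical specified signal strength

def network_state_satisfies_condition(current_signal_strength_dbm: float) -> bool:
    """Step S252: satisfied when the signal strength is not less than the threshold."""
    return current_signal_strength_dbm >= SPECIFIED_SIGNAL_STRENGTH_DBM

def maybe_enhance(frame, current_signal_strength_dbm: float):
    """Step S253: enhance only when the current network state passes the check."""
    if network_state_satisfies_condition(current_signal_strength_dbm):
        return display_enhance(frame)  # hypothetical enhancement routine
    return frame  # skip enhancement to avoid flicker and stutter
```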
Step S253: When the current network state satisfies the specified condition, perform display enhancement processing on the video content in the target region.
When it is determined that the current network state satisfies the specified condition, this indicates that performing display enhancement processing on the video content under the current network state will not cause flickering or freezing of the video call, so the electronic device can, in response, perform display enhancement processing on the video content in the target region.
In the video processing method provided by another embodiment of the present application, when the electronic device enters the video call mode, the displayable area of the display screen is divided into regions to form at least two regions, the display area of each of the at least two regions is obtained respectively, the region with the largest display area among the at least two regions is determined as the target region, the video content in the target region is identified when the video call mode is a two-party call mode, and display enhancement processing is performed on the video content in the target region when the video content includes the remote user of the two call parties. Compared with the video processing method shown in FIG. 2, this embodiment performs display enhancement processing on the video content in the video call mode, improving the video call effect, and performs display enhancement processing on the video content when it includes the remote user, so as to improve the video call effect while reducing the power consumption of the electronic device.
Referring to FIG. 7, FIG. 7 shows a schematic flowchart of a video processing method provided by yet another embodiment of the present application. The flow shown in FIG. 7 will be elaborated below; the video processing method may specifically include the following steps:
Step S310: When the electronic device enters a video call mode, perform region-division processing on the displayable area of the display screen to form the at least two regions.
Step S320: Obtain the display area of each of the at least two regions respectively.
Step S330: Determine the region with the largest display area among the at least two regions as the target region.
Step S340: When the video call mode is a two-party call mode, identify the video content in the target region.
For the specific description of steps S310–S340, please refer to steps S210–S240, which will not be repeated here.
Step S350: When the video content includes the remote user of the two call parties, perform first display enhancement processing on the video content in the target region.
In this embodiment, if the recognition result indicates that the video content includes the remote user of the two call parties, the display object of the larger display region in the displayable area is the remote user, and it can be assumed that the local user wishes to see the remote user clearly by displaying the remote user in the larger region. In response, the electronic device can perform display enhancement processing on the video content in the target region, that is, on the remote user and the background the remote user is in, so that the remote user is displayed more clearly on the electronic device, improving the user experience. Specifically, in this embodiment, the display enhancement processing takes the form of first display enhancement processing, which may optimize some of the parameters among exposure enhancement, denoising, edge sharpening, contrast increase, or saturation increase of the video content.
Step S360: When the video content includes the local user of the two call parties, perform second display enhancement processing on the video content in the target region, where the video content optimization quality corresponding to the first display enhancement processing is higher than that corresponding to the second display enhancement processing.
In addition, if the recognition result indicates that the video content includes the local user of the two call parties, the display object of the larger display region in the displayable area is the local user, and it can be assumed that the local user wishes to see not the other party but themselves clearly. In that case, as one approach, the electronic device can perform display enhancement processing on the video content in the target region, that is, on the user corresponding to the electronic device, where the display enhancement takes the form of second display enhancement processing. The video content optimization quality corresponding to the first display enhancement processing is higher than that corresponding to the second display enhancement processing; that is, the display effect of video content after the first display enhancement processing is better than that after the second display enhancement processing. As one approach, the second display enhancement processing may optimize some of the parameters among exposure enhancement, denoising, edge sharpening, contrast increase, or saturation increase of the video content, with fewer optimization parameters than the first display enhancement processing, so as to reduce the power consumption of the electronic device relative to the first display enhancement processing and extend its usage time.
For example, referring to FIG. 8, FIG. 8 shows a schematic diagram of still another interface of an electronic device provided by an embodiment of the present application. The target region of the electronic device that includes the remote user has undergone first display enhancement processing, the region that includes the local user has undergone second display enhancement processing, and the video content optimization quality corresponding to the first display enhancement processing is higher than that corresponding to the second display enhancement processing; that is, the display effect of the region where the remote user is displayed is better than that of the region where the local user is displayed.
Referring to FIG. 9, FIG. 9 shows a schematic flowchart of step S360 of the video processing method shown in FIG. 7. The flow shown in FIG. 9 will be elaborated below; the method may specifically include the following steps:
Step S361: When the video content includes the local user of the two call parties, detect the current load rate of the graphics processor.
In one approach, if the recognition result indicates that the video content includes the local user of the two call parties, the display object of the larger display region in the displayable area is the local user, and it can be assumed that the local user wishes to see not the other party but themselves clearly. In that case, as one approach, the electronic device can perform display enhancement processing on the video content in the target region, that is, on the user corresponding to the electronic device, so that the local user is displayed more clearly in the target region. However, display enhancement processing occupies considerable Graphics Processing Unit (GPU) resources, so performing it while the current load rate of the graphics processor is high may cause flickering and freezing. Therefore, in this embodiment, the current load rate of the graphics processor can be detected to determine whether performing display enhancement processing on the video content at the graphics processor's current load rate would cause flickering or freezing of the video call.
Step S362: Determine whether the current load rate is lower than a specified load rate.
In this embodiment, the electronic device is provided with a specified load rate, which serves as the basis for judging the current load rate. It can be understood that the specified load rate may be pre-stored locally by the electronic device or set at the time of judgment, which is not limited here. In addition, it may be configured automatically by the electronic device, set manually by the user, or configured by a server and transmitted to the electronic device, which is not limited here. Further, after the current load rate of the graphics processor is obtained, it is compared with the specified load rate to determine whether the current load rate is lower than the specified load rate.
Step S363: When the current load rate is lower than the specified load rate, perform second display enhancement processing on the video content in the target region.
When it is determined that the current load rate is lower than the specified load rate, this indicates that performing display enhancement processing on the video content at the current load rate will not cause flickering or freezing of the video call, so the electronic device can, in response, perform second display enhancement processing on the video content in the target region.
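A sketch of steps S362–S363 in the same spirit (the 70% threshold and the second_display_enhance routine are assumptions introduced for illustration):

```python
SPECIFIED_GPU_LOAD_RATE = 0.70  # hypothetical specified load rate

def maybe_second_enhance(frame, current_gpu_load_rate: float):
    """Apply the lighter, second display enhancement only when the GPU has headroom."""
    if current_gpu_load_rate < SPECIFIED_GPU_LOAD_RATE:
        return second_display_enhance(frame)  # hypothetical lighter routine
    return frame  # high GPU load: skip enhancement to avoid flicker and freezes
```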
In the video processing method provided by yet another embodiment of the present application, when the electronic device enters the video call mode, the displayable area of the display screen is divided into regions to form at least two regions, the display area of each of the at least two regions is obtained respectively, the region with the largest display area among the at least two regions is determined as the target region, the video content in the target region is identified when the video call mode is a two-party call mode, first display enhancement processing is performed on the video content in the target region when the video content includes the remote user of the two call parties, and second display enhancement processing is performed on the video content in the target region when the video content includes the local user, where the video content optimization quality corresponding to the first display enhancement processing is higher than that corresponding to the second display enhancement processing. Compared with the video processing method shown in FIG. 3, this embodiment performs display enhancement processing both when the video content includes the remote user and when it includes the local user, where the display effect of the video content enhanced when it includes the remote user is better than that enhanced when it includes the local user, so as to suit different call forms.
Referring to FIG. 10, FIG. 10 shows a schematic flowchart of a video processing method provided by a further embodiment of the present application. The flow shown in FIG. 10 will be elaborated below; the video processing method may specifically include the following steps:
Step S410: Perform region-division processing on the displayable area of the display screen to form at least two regions.
Step S420: Determine a target region from the at least two regions.
For the specific description of steps S410–S420, please refer to steps S110–S120, which will not be repeated here.
Step S430: Identify the video content in the target region.
Step S440: Determine whether the video content includes a person image.
In this embodiment, the electronic device recognizes the video content in the target region and determines from the recognition result whether the video content includes a person image. It can be understood that the recognition result may include no person image, one person image, or multiple person images, which is not limited here; when the recognition result indicates that at least one person image is included, it can be determined that the video content includes a person image.
Step S450: When the video content includes the person image, perform display enhancement processing on the video content in the target region, where the display enhancement processing improves the image quality of the video content by processing the images in the video content with optimization parameters.
Further, when the electronic device determines that the video content includes a person image, it can perform display enhancement processing on the video content so that the person in the video content is displayed more clearly in the target region, improving the display effect.
In the video processing method provided by a further embodiment of the present application, the displayable area of the display screen is divided into regions to form at least two regions, the display areas of the at least two regions are obtained respectively, the target region with the largest display area is determined from the at least two regions, the video content in the target region is identified, whether the video content includes a person image is determined, and display enhancement processing is performed on the video content in the target region when the video content includes a person image. Compared with the video processing method shown in FIG. 2, this embodiment performs display enhancement processing on the video content only when it includes a person image, so as to improve the display effect of the video content while reducing the power consumption of the electronic device.
Referring to FIG. 11, FIG. 11 shows a schematic flowchart of a video processing method provided by a still further embodiment of the present application. The flow shown in FIG. 11 will be elaborated below; the video processing method may specifically include the following steps:
Step S510: Perform region-division processing on the displayable area of the display screen to form at least two regions.
For the specific description of step S510, please refer to step S110, which will not be repeated here.
Step S520: Obtain the display content of each of the at least two regions respectively.
As another approach, after the displayable area is divided into at least two regions, the display content of each of the at least two regions is detected respectively. In this embodiment, the source of the display content can be detected; for example, whether the display content originates from the electronic device locally or from the network can be detected. Specifically, the source of the display content can be determined by detecting the interface through which the display content is obtained: when it is detected that the display content is read from a specified file path, it can be determined that the source of the display content is local; when the display content is obtained from a specified network address, it can be determined that the source of the display content is the network. The specific manner is not limited here.
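As a rough sketch of this source check (the URIs below are hypothetical, and classifying by URL scheme is one assumed way to model "the interface through which the content is obtained"):

```python
from urllib.parse import urlparse

def is_local_resource(content_uri: str) -> bool:
    """Step S530 (sketch): bare paths and file:// URIs count as local resources."""
    return urlparse(content_uri).scheme in ("", "file")

print(is_local_resource("/sdcard/DCIM/camera_preview.yuv"))  # True  -> local camera feed
print(is_local_resource("rtmp://example.com/peer_stream"))   # False -> remote user's stream
```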
Step S530: Determine whether the display content is a local resource.
Further, the source of the display content can be judged based on the above detection result.
Step S540: When the display content is a non-local resource, determine the region where the display content is located as the target region.
As one approach, when the display content is a local resource, it can be assumed that the display content is captured and displayed in real time by the camera of the electronic device; when the display content is a non-local resource, it can be assumed that the display content is transmitted and displayed by another electronic device connected to the electronic device. In this embodiment, when the display content is transmitted and displayed by another electronic device connected to the electronic device, it can be assumed that the display content is the remote user of the two call parties, so the region where the display content is located can be determined as the target region, and display enhancement processing can be performed on the display content in that region to improve the display effect. When the display content is captured by the camera of the electronic device, it can be assumed that the display content is the local user of the two call parties, so the region where the display content is located can be determined as a non-target region, and no display enhancement processing is performed on the display content in that region, so as to reduce power consumption.
Step S550: Perform display enhancement processing on the video content in the target region, where the display enhancement processing improves the image quality of the video content by processing the images in the video content with optimization parameters.
For the specific description of step S550, please refer to step S130, which will not be repeated here.
In the video processing method provided by a still further embodiment of the present application, the displayable area of the display screen is divided into regions to form at least two regions, the display content of each of the at least two regions is obtained respectively, whether the display content is a local resource is identified, the region where the display content is located is taken as the target region when the display content is a non-local resource, and display enhancement processing is performed on the video content in the target region. Compared with the video processing method shown in FIG. 2, this embodiment can determine the target region according to the source of the display content, improving the display effect.
Referring to FIG. 12, FIG. 12 shows a block diagram of the modules of a video processing apparatus 200 provided by an embodiment of the present application. The video processing apparatus 200 is applied to the above electronic device, which includes a display screen. The block diagram shown in FIG. 12 will be elaborated below; the video processing apparatus 200 includes a processing module 210, a determining module 220, and a display enhancement module 230, where:
The processing module 210 is configured to perform region-division processing on the displayable area of the display screen to form at least two regions. Further, the processing module 210 includes a processing submodule, where:
The processing submodule is configured to perform region-division processing on the displayable area of the display screen to form the at least two regions when the electronic device enters the video call mode.
The determining module 220 is configured to determine a target region from the at least two regions. Further, the determining module 220 includes a display area acquisition submodule, a first determining submodule, a display content acquisition submodule, a display content judgment submodule, and a second determining submodule, where:
The display area acquisition submodule is configured to obtain the display area of each of the at least two regions respectively.
The first determining submodule is configured to determine the region with the largest display area among the at least two regions as the target region.
The display content acquisition submodule is configured to obtain the display content of each of the at least two regions respectively.
The display content judgment submodule is configured to determine whether the display content is a local resource.
The second determining submodule is configured to determine the region where the display content is located as the target region when the display content is a non-local resource.
The display enhancement module 230 is configured to perform display enhancement processing on the video content in the target region. Further, the display enhancement module 230 includes an identification submodule, a first display enhancement submodule, a second display enhancement submodule, a third display enhancement submodule, a video content identification submodule, a video content judgment submodule, and a fourth display enhancement submodule, where:
The identification submodule is configured to identify the video content in the target region.
The first display enhancement submodule is configured to perform display enhancement processing on the video content in the target region when the video content includes the remote user of the two call parties. Further, the first display enhancement submodule includes a network state detection unit, a network state judgment unit, and a first display enhancement unit, where:
The network state detection unit is configured to detect the current network state when the video content includes the remote user of the two call parties.
The network state judgment unit is configured to determine whether the current network state satisfies a specified condition.
The first display enhancement unit is configured to perform display enhancement processing on the video content in the target region when the current network state satisfies the specified condition.
The second display enhancement submodule is configured to perform first display enhancement processing on the video content in the target region when the video content includes the remote user of the two call parties.
The third display enhancement submodule is configured to perform second display enhancement processing on the video content in the target region when the video content includes the local user of the two call parties, where the video content optimization quality corresponding to the first display enhancement processing is higher than that corresponding to the second display enhancement processing. Further, the third display enhancement submodule includes a load rate detection unit, a load rate judgment unit, and a second display enhancement unit, where:
The load rate detection unit is configured to detect the current load rate of the graphics processor when the video content includes the local user of the two call parties.
The load rate judgment unit is configured to determine whether the current load rate is lower than a specified load rate.
The second display enhancement unit is configured to perform second display enhancement processing on the video content in the target region when the current load rate is lower than the specified load rate.
The video content identification submodule is configured to identify the video content in the target region.
The video content judgment submodule is configured to determine whether the video content includes a person image.
The fourth display enhancement submodule is configured to perform display enhancement processing on the video content in the target region when the video content includes the person image.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the apparatus and modules described above may refer to the corresponding processes in the foregoing method embodiments, which will not be repeated here.
In the several embodiments provided in the present application, the coupling between modules may be electrical, mechanical, or in other forms.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, each module may exist physically alone, or two or more modules may be integrated into one module. The above integrated modules may be implemented in the form of hardware or in the form of software functional modules.
Referring to FIG. 13, which shows a structural block diagram of an electronic device 100 provided by an embodiment of the present application. The electronic device 100 may be an electronic device capable of running application programs, such as a smartphone, a tablet computer, or an e-book reader. The electronic device 100 in the present application may include one or more of the following components: a processor 110, a memory 120, a display screen 130, a codec 140, and one or more application programs, where the one or more application programs may be stored in the memory 120 and configured to be executed by the one or more processors 110, and the one or more programs are configured to perform the methods described in the foregoing method embodiments.
The processor 110 may include one or more processing cores. The processor 110 connects various parts of the entire electronic device 100 using various interfaces and lines, and executes the various functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and calling data stored in the memory 120. Optionally, the processor 110 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA). The processor 110 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and so on; the GPU is responsible for rendering and drawing the display content; and the modem is used to handle wireless communication. It can be understood that the above modem may also not be integrated into the processor 110 and instead be implemented separately by a communication chip.
The memory 120 may include Random Access Memory (RAM) or Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, an image playback function, etc.), instructions for implementing the following method embodiments, and the like. The data storage area may also store data created by the terminal 100 during use (such as a phone book, audio and video data, and chat history data) and the like.
The codec 140 may be used to encode or decode video data and then transmit the decoded video data to the display screen 130 for display, where the codec 140 may be a GPU, a dedicated DSP, an FPGA, an ASIC chip, or the like.
Referring to FIG. 14, which shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application. The computer-readable medium 300 stores program code, and the program code can be invoked by a processor to perform the methods described in the above method embodiments.
The computer-readable storage medium 300 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read-Only Memory), EPROM, a hard disk, or ROM. Optionally, the computer-readable storage medium 300 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 300 has storage space for program code 310 for performing any of the method steps in the above methods. The program code can be read from or written into one or more computer program products. The program code 310 may, for example, be compressed in an appropriate form.
In summary, the video processing method, apparatus, electronic device, and storage medium provided in the embodiments of the present application divide the displayable area of the display screen of the electronic device into at least two regions, obtain the display areas of the at least two regions respectively, determine the target region with the largest display area from the at least two regions, and perform display enhancement processing on the video content in the target region. Thus, when the display screen of the electronic device is divided into multiple display regions, display enhancement processing is performed on the video content of the region with the largest display area among the multiple regions, so as to improve the display effect of the video content without causing excessive power consumption of the electronic device.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or equivalently replace some of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Claims (20)
- A video processing method, applied to an electronic device, the electronic device comprising a display screen, the method comprising: performing region-division processing on the displayable area of the display screen to form at least two regions; determining a target region from the at least two regions; and performing display enhancement processing on the video content in the target region, wherein the display enhancement processing improves the image quality of the video content by processing the images in the video content with optimization parameters.
- The method according to claim 1, wherein the performing region-division processing on the displayable area of the display screen to form at least two regions comprises: when the electronic device enters a video call mode, performing region-division processing on the displayable area of the display screen to form the at least two regions.
- The method according to claim 2, wherein, when the video call mode is a two-party call mode, the performing display enhancement processing on the video content in the target region comprises: identifying the video content in the target region; and when the video content includes the remote user of the two call parties, performing display enhancement processing on the video content in the target region.
- The method according to claim 3, wherein the method further comprises: when the video content includes the remote user of the two call parties, performing first display enhancement processing on the video content in the target region; and when the video content includes the local user of the two call parties, performing second display enhancement processing on the video content in the target region, wherein the video content optimization quality corresponding to the first display enhancement processing is higher than the video content optimization quality corresponding to the second display enhancement processing.
- The method according to claim 4, wherein the performing second display enhancement processing on the video content in the target region when the video content includes the local user of the two call parties comprises: when the video content includes the local user of the two call parties, detecting a current load rate of a graphics processor; determining whether the current load rate is lower than a specified load rate; and when the current load rate is lower than the specified load rate, performing second display enhancement processing on the video content in the target region.
- The method according to claim 4, wherein the performing display enhancement processing on the video content in the target region when the video content includes the remote user of the two call parties comprises: when the video content includes the remote user of the two call parties, detecting a current network state; determining whether the current network state satisfies a specified condition; and when the current network state satisfies the specified condition, performing display enhancement processing on the video content in the target region.
- The method according to claim 6, wherein the determining whether the current network state satisfies a specified condition comprises: extracting a current signal strength from the current network state; determining whether the current signal strength is not less than a specified signal strength; and when the current signal strength is not less than the specified signal strength, determining that the current network state satisfies the specified condition.
- The method according to any one of claims 3 to 7, wherein the identifying the video content in the target region comprises: identifying the video content in the target region through image recognition technology.
- The method according to any one of claims 2 to 8, wherein, before the performing region-division processing on the displayable area of the display screen to form the at least two regions when the electronic device enters the video call mode, the method further comprises: when the electronic device initiates a video call request to another electronic device and the other electronic device accepts the video call request, the electronic device enters the video call mode; or when the other electronic device initiates a video call request to the electronic device and the electronic device accepts the video call request, the electronic device enters the video call mode.
- The method according to any one of claims 1 to 9, wherein the performing display enhancement processing on the video content in the target region comprises: identifying the video content in the target region; determining whether the video content includes a person image; and when the video content includes the person image, performing display enhancement processing on the video content in the target region.
- The method according to any one of claims 1 to 10, wherein the determining a target region from the at least two regions comprises: obtaining the display area of each of the at least two regions respectively; and determining the region with the largest display area among the at least two regions as the target region.
- The method according to claim 11, wherein the obtaining the display area of each of the at least two regions respectively comprises: obtaining coordinate information of each of the at least two regions respectively; and calculating the display area of each region based on the coordinate information of each region.
- The method according to any one of claims 1 to 12, wherein the determining a target region from the at least two regions comprises: obtaining the display content of each of the at least two regions respectively; determining whether the display content is a local resource; and when the display content is a non-local resource, determining the region where the display content is located as the target region.
- The method according to claim 13, wherein the determining whether the display content is a local resource comprises: detecting the interface through which the display content is obtained, and determining the source of the display content based on the interface; and determining whether the display content is a local resource based on the source.
- The method according to any one of claims 1 to 14, wherein the performing region-division processing on the displayable area of the display screen to form at least two regions comprises: receiving instruction information indicating split-screen; and performing region-division processing on the displayable area of the display screen in response to the instruction information to form the at least two regions.
- The method according to claim 15, wherein the performing region-division processing on the displayable area of the display screen in response to the instruction information to form the at least two regions comprises: when a touch operation acting on a physical button or a virtual button indicating the start of a video call or the start of a live stream is detected, performing region-division processing on the displayable area of the display screen in response to the touch operation to form the at least two regions.
- The method according to any one of claims 1 to 16, wherein the performing display enhancement processing on the video content in the target region comprises: performing at least one of exposure enhancement, denoising, edge sharpening, contrast increase, or saturation increase on the video content in the target region.
- A video processing apparatus, applied to an electronic device, the electronic device comprising a display screen, the apparatus comprising: a processing module configured to perform region-division processing on the displayable area of the display screen to form at least two regions; a determining module configured to determine a target region from the at least two regions; and a display enhancement module configured to perform display enhancement processing on the video content in the target region, wherein the display enhancement processing improves the image quality of the video content by processing the images in the video content with optimization parameters.
- An electronic device, comprising a memory and a processor, the memory being coupled to the processor, the memory storing instructions that, when executed by the processor, cause the processor to perform the method according to any one of claims 1 to 17.
- A computer-readable storage medium, wherein program code is stored in the computer-readable storage medium, and the program code can be invoked by a processor to perform the method according to any one of claims 1 to 17.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811428039.8 | 2018-11-27 | ||
CN201811428039.8A CN109640151A (zh) | 2018-11-27 | 2018-11-27 | 视频处理方法、装置、电子设备以及存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020108060A1 true WO2020108060A1 (zh) | 2020-06-04 |
Family
ID=66069370
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/107932 WO2020108060A1 (zh) | 2018-11-27 | 2019-09-25 | 视频处理方法、装置、电子设备以及存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109640151A (zh) |
WO (1) | WO2020108060A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109640151A (zh) * | 2018-11-27 | 2019-04-16 | Oppo广东移动通信有限公司 | 视频处理方法、装置、电子设备以及存储介质 |
CN112055131A (zh) * | 2019-06-05 | 2020-12-08 | 杭州吉沁文化创意有限公司 | 一种视频处理系统及方法 |
CN113132800B (zh) * | 2021-04-14 | 2022-09-02 | Oppo广东移动通信有限公司 | 视频处理方法、装置、视频播放器、电子设备及可读介质 |
CN116456124B (zh) * | 2023-06-20 | 2023-08-22 | 上海宝玖数字科技有限公司 | 高延时网络状态下的直播信息展示方法、系统及电子设备 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1849823A (zh) * | 2003-09-09 | 2006-10-18 | 英国电讯有限公司 | 视频通信方法及系统 |
US20090244256A1 (en) * | 2008-03-27 | 2009-10-01 | Motorola, Inc. | Method and Apparatus for Enhancing and Adding Context to a Video Call Image |
CN102025965A (zh) * | 2010-12-07 | 2011-04-20 | 华为终端有限公司 | 视频通话方法及可视电话 |
CN102726055A (zh) * | 2010-01-25 | 2012-10-10 | Lg电子株式会社 | 视频通信方法和使用该视频通信方法的数字电视 |
CN103310411A (zh) * | 2012-09-25 | 2013-09-18 | 中兴通讯股份有限公司 | 一种图像局部增强方法和装置 |
CN109640151A (zh) * | 2018-11-27 | 2019-04-16 | Oppo广东移动通信有限公司 | 视频处理方法、装置、电子设备以及存储介质 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140372921A1 (en) * | 2013-06-17 | 2014-12-18 | Vonage Network Llc | Systems and methods for display of a video call in picture in picture mode |
WO2016205990A1 (zh) * | 2015-06-23 | 2016-12-29 | 深圳市柔宇科技有限公司 | 分屏显示的方法及电子装置 |
CN105872832A (zh) * | 2015-11-30 | 2016-08-17 | 乐视致新电子科技(天津)有限公司 | 视频通话方法和系统以及显示装置 |
CN105847728A (zh) * | 2016-04-13 | 2016-08-10 | 腾讯科技(深圳)有限公司 | 一种信息处理方法及终端 |
CN108810574B (zh) * | 2017-04-27 | 2021-03-12 | 腾讯科技(深圳)有限公司 | 一种视频信息处理方法及终端 |
CN107071332A (zh) * | 2017-05-19 | 2017-08-18 | 深圳天珑无线科技有限公司 | 视频图像传输处理方法和视频图像传输处理装置 |
CN107071333A (zh) * | 2017-05-19 | 2017-08-18 | 深圳天珑无线科技有限公司 | 视频图像处理方法和视频图像处理装置 |
- 2018
  - 2018-11-27: CN CN201811428039.8A patent/CN109640151A/zh active Pending
- 2019
  - 2019-09-25: WO PCT/CN2019/107932 patent/WO2020108060A1/zh active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1849823A (zh) * | 2003-09-09 | 2006-10-18 | 英国电讯有限公司 | 视频通信方法及系统 |
US20090244256A1 (en) * | 2008-03-27 | 2009-10-01 | Motorola, Inc. | Method and Apparatus for Enhancing and Adding Context to a Video Call Image |
CN102726055A (zh) * | 2010-01-25 | 2012-10-10 | Lg电子株式会社 | 视频通信方法和使用该视频通信方法的数字电视 |
CN102025965A (zh) * | 2010-12-07 | 2011-04-20 | 华为终端有限公司 | 视频通话方法及可视电话 |
CN103310411A (zh) * | 2012-09-25 | 2013-09-18 | 中兴通讯股份有限公司 | 一种图像局部增强方法和装置 |
CN109640151A (zh) * | 2018-11-27 | 2019-04-16 | Oppo广东移动通信有限公司 | 视频处理方法、装置、电子设备以及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN109640151A (zh) | 2019-04-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020108018A1 (zh) | 游戏场景处理方法、装置、电子设备以及存储介质 | |
WO2020107989A1 (zh) | 视频处理方法、装置、电子设备以及存储介质 | |
US11706484B2 (en) | Video processing method, electronic device and computer-readable medium | |
US20210281771A1 (en) | Video processing method, electronic device and non-transitory computer readable medium | |
US20210281718A1 (en) | Video Processing Method, Electronic Device and Storage Medium | |
WO2020108060A1 (zh) | 视频处理方法、装置、电子设备以及存储介质 | |
WO2020038128A1 (zh) | 视频处理方法、装置、电子设备及计算机可读介质 | |
CN109242802B (zh) | 图像处理方法、装置、电子设备及计算机可读介质 | |
US9661239B2 (en) | System and method for online processing of video images in real time | |
CN109379628B (zh) | 视频处理方法、装置、电子设备及计算机可读介质 | |
WO2020038130A1 (zh) | 视频处理方法、装置、电子设备及计算机可读介质 | |
JP2022528294A (ja) | 深度を利用した映像背景減算法 | |
US9639956B2 (en) | Image adjustment using texture mask | |
WO2020108061A1 (zh) | 视频处理方法、装置、电子设备以及存储介质 | |
WO2020108010A1 (zh) | 视频处理方法、装置、电子设备以及存储介质 | |
CN109120988B (zh) | 解码方法、装置、电子设备以及存储介质 | |
CN109587558B (zh) | 视频处理方法、装置、电子设备以及存储介质 | |
US11490157B2 (en) | Method for controlling video enhancement, device, electronic device and storage medium | |
US11562772B2 (en) | Video processing method, electronic device, and storage medium | |
WO2022111269A1 (zh) | 视频的细节增强方法、装置、移动终端和存储介质 | |
CN110570441B (zh) | 一种超高清低延时视频控制方法及系统 | |
CN109167946B (zh) | 视频处理方法、装置、电子设备以及存储介质 | |
WO2020038071A1 (zh) | 视频增强控制方法、装置、电子设备及存储介质 | |
CN109819318B (zh) | 一种图像处理、直播方法、装置、计算机设备及存储介质 | |
CN109712100B (zh) | 视频增强控制方法、装置以及电子设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19889056; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 19889056; Country of ref document: EP; Kind code of ref document: A1 |