CN109242802B - Image processing method, image processing device, electronic equipment and computer readable medium - Google Patents


Info

Publication number
CN109242802B
CN109242802B (application CN201811141914.4A)
Authority
CN
China
Prior art keywords: image, target object, optimized, target, video file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811141914.4A
Other languages
Chinese (zh)
Other versions
CN109242802A (en)
Inventor
米岚
林进全
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811141914.4A
Publication of CN109242802A
Application granted
Publication of CN109242802B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 - General purpose image data processing
    • G06T1/20 - Processor architectures; Processor configuration, e.g. pipelining
    • G06T5/70
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008 - Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream

Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device, and a computer-readable medium, relating to the technical field of video processing. The method comprises the following steps: acquiring a target frame image corresponding to a video file; determining a plurality of target object areas corresponding to the target frame image; optimizing image data in target object areas of the target frame image that meet a preset condition; and displaying the optimized target frame image on a screen of the electronic device. Thus, during video playback, the video can be optimized according to the target object areas of each frame image, which improves the image quality of the user's video file during playback, achieves a super-clear visual effect, and improves the user experience.

Description

Image processing method, image processing device, electronic equipment and computer readable medium
Technical Field
The present application relates to the field of video processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable medium.
Background
With the development of electronic and information technology, more and more devices can play videos. During playback, a device must decode, render, and composite the video before displaying it on the screen. In existing video playback technology, however, the image quality of the played video cannot meet users' requirements, resulting in a poor user experience.
Disclosure of Invention
The application provides an image processing method, an image processing apparatus, an electronic device, and a computer-readable medium to address the above drawback.
In a first aspect, an embodiment of the present application provides an image processing method applied to an electronic device. The method comprises the following steps: acquiring a target frame image corresponding to a video file; determining a plurality of target object areas corresponding to the target frame image; optimizing image data in the target object areas of the target frame image that meet a preset condition; and displaying the optimized target frame image on a screen of the electronic device.
In a second aspect, an embodiment of the present application further provides an image processing apparatus applied to an electronic device. The image processing apparatus includes an acquisition unit, a determination unit, an optimization unit, and a display unit. The acquisition unit acquires the target frame image corresponding to the video file; the determination unit determines the plurality of target object areas corresponding to the target frame image; the optimization unit optimizes the image data in the target object areas of the target frame image that meet the preset condition; and the display unit displays the optimized target frame image on the screen of the electronic device.
In a third aspect, an embodiment of the present application further provides an electronic device, including: one or more processors; a memory; and one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the above-described method.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium, where a program code is stored in the computer-readable storage medium, and the program code can be called by a processor to execute the method.
Compared with the prior art, in the scheme provided by the application, during video playback a plurality of target object areas can be determined in each image to be played, the target object areas meeting a preset condition are selected, the image data corresponding to those areas in the frame image is optimized, and the optimized frame image is then displayed on the screen of the electronic device. Thus, during video playback the video can be optimized according to the target object areas of each frame image, improving the image quality of the user's video file during playback and improving the user experience.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a block diagram illustrating a video playing architecture provided by an embodiment of the present application;
FIG. 2 illustrates a block diagram of an image rendering architecture provided by an embodiment of the present application;
FIG. 3 is a flowchart illustrating a method of an image processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a video list interface of a client according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating a type selection interface to be optimized according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram illustrating a hiding effect of a type selection interface to be optimized according to an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating a video playing interface provided by an embodiment of the present application;
FIG. 8 is a block diagram illustrating a video playback architecture provided by another embodiment of the present application;
FIG. 9 is a flow chart of a method of image processing according to another embodiment of the present application;
FIG. 10 is a schematic diagram illustrating a touch gesture provided by an embodiment of the present application;
FIG. 11 is a flow chart of a method of image processing provided by yet another embodiment of the present application;
FIG. 12 is a schematic diagram illustrating a touch gesture provided by another embodiment of the present application;
fig. 13 shows a block diagram of an image processing apparatus provided in an embodiment of the present application;
fig. 14 shows a block diagram of an electronic device provided in an embodiment of the present application;
fig. 15 illustrates a storage unit for storing or carrying program codes for implementing an image processing method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Referring to fig. 1, a block diagram of a video playback architecture is shown. Specifically, once the operating system acquires the data to be played, the next task is to parse the audio and video data. A typical video file consists of two parts, a video stream and an audio stream, and different container formats package the audio and video differently. The process of combining an audio stream and a video stream into a file is called muxing, while the reverse process of separating the audio stream and the video stream from a media file is called demuxing.
Specifically, video decoding may be hard decoding or soft decoding. In hardware decoding, part of the video data that would otherwise be handled entirely by the Central Processing Unit (CPU) is handed to the Graphics Processing Unit (GPU). Since the GPU's parallel computing capability is much higher than the CPU's, this greatly reduces the load on the CPU, and other programs can run simultaneously once the CPU occupancy is low. Of course, on a sufficiently powerful processor, such as an Intel i5 2320 or any quad-core AMD processor, hard decoding and soft decoding can be chosen according to requirements.
Specifically, as shown in fig. 1, the multimedia framework obtains the video file to be played by the client through an API interface with the client and delivers it to the video decoder. The multimedia framework (Media Framework) is the multimedia framework of the Android system; the three parts MediaPlayer, MediaPlayerService, and StagefrightPlayer constitute the basic framework of Android multimedia. The multimedia framework adopts a C/S structure, with MediaPlayer as the client of the C/S structure and MediaPlayerService and StagefrightPlayer as the server, which bears the responsibility of playing the multimedia file; the server completes and responds to the client's requests through StagefrightPlayer. Video Decode is an integrated decoder that supports the most common audio and video decoding and playback, and is used to decode the video data.
Soft decoding means that the CPU decodes the video through software; after decoding, the GPU is called to render and composite the video before it is displayed on the screen. Hard decoding means that the video decoding task is completed independently by dedicated hardware without the aid of the CPU.
Whether decoding is hard or soft, after the video data is decoded, the decoded video data is sent to the layer compositing module (SurfaceFlinger), which renders and composites it for display on the screen. SurfaceFlinger is an independent service; it takes the Surfaces of all windows as input, computes each Surface's position in the final composite image according to parameters such as Z-order, transparency, size, and position, hands the result to HWComposer or OpenGL to generate the final display buffer, and then displays it on the specific display device.
As shown in fig. 1, in soft decoding the CPU decodes the video data and hands it to SurfaceFlinger for rendering and compositing, while in hard decoding the GPU decodes the video data and hands it to SurfaceFlinger. SurfaceFlinger then calls the GPU to render and composite the image for display on the screen.
Specifically, as shown in fig. 2, the image rendering process is as follows: the CPU obtains the video file to be played sent by the client, decodes it to obtain the decoded video data, and sends the video data to the GPU; after the GPU finishes rendering, it puts the result into the frame buffer (FrameBuffer in fig. 2); the video controller then reads the data in the frame buffer line by line according to the HSync signal and, after digital-to-analog conversion, passes it to the display for presentation.
However, in conventional video playback the image quality of the played video is poor, and the inventors found that this results from the lack of enhancement optimization of the video data. To solve this technical problem, an embodiment of the present application provides an image processing method applied to an image processor of an electronic device for improving the image quality when a video is played. Referring to the image processing method shown in fig. 3, the method includes S301 to S304.
S301: and acquiring a target frame image corresponding to the video file.
Specifically, when a client of the electronic device plays a video, the electronic device acquires the video file to be played and then decodes it. The soft decoding or hard decoding described above can be used; decoding yields the multi-frame image data to be rendered that corresponds to the video file, which can then be rendered and displayed on the display screen.
Specifically, the electronic device includes a central processing unit and an image processor. In one implementation of acquiring the multi-frame image data to be rendered corresponding to the video file, the central processing unit acquires a video playing request sent by the client, where the video playing request includes the video file to be played. Specifically, the video playing request may include identity information of the video file to be played, such as the file's name, and based on this identity information the video file can be found in the storage space where it is stored.
Specifically, the video playing request may be obtained based on the touch state of the play buttons corresponding to different video files on the client interface. As shown in fig. 4, display content corresponding to multiple videos is shown in the client's video list interface, including a thumbnail for each video. The thumbnail can serve as a touch key: when a user clicks a thumbnail, the client detects which thumbnail was clicked and thereby determines the video file to be played.
The client responds to the video selected by the user in the video list and enters the video playing interface. When the play button on that interface is clicked, the client detects, by monitoring the user's touch operations, which video file the user has clicked, and then sends the video file to the CPU, which selects hard decoding or soft decoding to decode it. After decoding, the video file to be played is parsed into multiple frames of image data.
In the embodiment of the application, a central processing unit acquires a video file to be played, and processes the video file according to a soft decoding algorithm to acquire multi-frame image data corresponding to the video file.
The specific implementation of the image processor acquiring the multiple frames of image data corresponding to the video file and storing the multiple frames of image data in the off-screen rendering buffer area may be as follows: intercepting the multi-frame image data which is sent to the frame buffer area by the central processing unit and corresponds to the video file, and storing the intercepted multi-frame image data to an off-screen rendering buffer area.
Specifically, a program plug-in may be provided in the image processor, and the program plug-in detects a video file to be rendered, which is sent to the image processor by the central processor. And when the central processing unit decodes the video file to obtain the image data to be rendered, sending the image data to be rendered to the GPU, intercepting the image data by the program plug-in, and storing the image data in an off-screen rendering buffer area. The method is performed on the image in the off-screen rendering buffer to optimize the image before playback.
Specifically, take a certain frame image in the video file, denoted the target frame image, as an example; the target frame image is one of the multiple frame images corresponding to the video file. After the central processing unit of the electronic device acquires the video file the client requests to play, it decodes the video file to obtain multiple frame images and then selects the image currently to be processed as the target frame image.
S302: and determining a plurality of target object areas corresponding to the target frame images.
Specifically, the target objects in the image acquired by the image acquisition device are identified and classified; a target detection algorithm or a target extraction algorithm may be used. Specifically, all contour-line information in the acquired image is extracted through a target extraction or clustering algorithm, and the category of the object corresponding to each contour line is then looked up in a pre-learned model. The learning model uses a matching database that stores a plurality of contour-line entries and the category corresponding to each, where the categories include human bodies, animals, mountains, rivers, lake surfaces, buildings, roads, and the like.
For example, when the target object is an animal, its contour and characteristic information such as the ears, horns, and limbs can be collected. When the target object is a human body, facial feature extraction can be performed, using a knowledge-based characterization algorithm or a characterization method based on algebraic features or statistical learning. In addition, when the target object is a wide landscape such as a lake, a continuous mountain range, or grassland, it is possible to check whether the image contains a long horizontal line, that is, a horizon; if so, the image is judged to contain a wide landscape. Whether the object is a landscape may of course also be determined by color: for example, when green or khaki is detected in a relatively concentrated area, the object is judged to be a landscape or a desert. Similarly, other objects such as rivers, buildings, and roads can be detected with the above detection algorithms, which are not described again here.
In addition, the plurality of target object regions in the target frame image can be determined through a neural network. Specifically, a mobile visual neural network is trained with images that each include at least one target object, yielding a mobile visual neural network corresponding to that target object.
The mobile visual neural network is trained in advance for each target object region; for example, it is trained with a large number of images containing the corresponding target object region, after which it can identify that target object region in a preview picture. For example, a network trained with a large number of images containing blue sky can identify the region corresponding to blue sky in the preview picture. Training of the visual neural network can be completed in advance on the mobile terminal, or the mobile terminal can obtain the trained network's data from a server and use it to locate the corresponding target object region in the preview picture.
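As an illustration of the contour-extraction-and-classification pipeline just described, the following is a minimal sketch using OpenCV. The classify_contour helper is a hypothetical stand-in for the matching database or trained network mentioned above, and the edge thresholds and minimum-area cutoff are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

def classify_contour(patch: np.ndarray) -> str:
    # Hypothetical classifier: the text matches contours against a database
    # of categories (human body, animal, mountain, river, building, ...).
    # A real system would query the pre-learned model here.
    return "unknown"

def detect_target_regions(frame_rgb: np.ndarray):
    """Return a list of (bounding_box, label) pairs for one video frame."""
    gray = cv2.cvtColor(frame_rgb, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, 100, 200)          # extract contour-line information
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 400:                        # skip tiny fragments (assumed cutoff)
            continue
        label = classify_contour(frame_rgb[y:y + h, x:x + w])
        regions.append(((x, y, w, h), label))
    return regions
```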
S303: and optimizing the image data in the target area meeting the preset condition in the target frame image.
After the plurality of target object areas corresponding to the target frame image are acquired, a target object area meeting a preset condition is determined from among them. Specifically, it may be determined according to the type of the target object area or according to a target object the user wants optimized.
As an embodiment, the target object area satisfying the preset condition is determined as follows: obtain the type of each target object area in the target frame image; acquire the type to be optimized corresponding to the video file; and take the target object areas whose type matches the type to be optimized as the target object areas meeting the preset condition.
Specifically, the user may set a type to be optimized for a video file to be played on the electronic device, where the type to be optimized may be a type of target object, for example, male, female, sky, mountain, river, or signboard. Specifically, the user may input the type to be optimized in the video playing interface. As shown in fig. 5, a video enhancement master switch 501 and sub-switches 502 for various object types are displayed on the video interface. The master switch 501 turns the video enhancement function, which optimizes the image data of the video file, on or off. When the master switch 501 is on, the user can choose to turn on the sub-switch 502 of one or more object types. In fig. 5, type 1 corresponds to one object type (for example, male) and type 2 to another (for example, female); the labels "type 1" and "type 2" are placeholders that would be replaced in actual use by the specific target object type, for example, "male role".
When the video enhancement master switch 501 is on and the user turns on the sub-switches 502 of the target object types that need optimization, the electronic device obtains the types to be optimized corresponding to the video file.
When the video enhancement master switch 501 is off, the sub-switches 502 in the to-be-optimized type selection window are grayed out, that is, they cannot be toggled on or off, and the application does not respond to operations on them.
In addition, the to-be-optimized type selection interface shown in fig. 5 can be hidden. Specifically, as shown in fig. 6, a sliding button 503 is provided at the side of the selection window, and the window can be hidden or slid out with this button.
In addition, different video files call for different optimization types. For example, a video file of an indoor anchor does not need the sky, rivers, or mountains optimized, because such videos rarely contain those scenes. A correspondence between video types and target object types can therefore be preset in the electronic device. When a user plays a video file, the target object types corresponding to the video file can be looked up in this correspondence according to the video file's type label and displayed in the interface shown in fig. 5 for the user to select. Specifically, the type correspondence may be as shown in table 1:
TABLE 1

Video type          Target object type
Anchor              Male, female
Competitive game    Male, female, signboard
Others              Male, female, sky, mountain, river, signboard, etc.
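To make the "preset condition" concrete for this type-based embodiment, the following sketch filters detected regions against the set of enabled types, seeded here from an assumed encoding of Table 1; all names and the example regions are illustrative.

```python
def regions_to_optimize(regions, enabled_types):
    """regions: list of ((x, y, w, h), type_label); enabled_types: set of labels."""
    return [r for r in regions if r[1] in enabled_types]

# Assumed encoding of Table 1: video type -> target object types.
table1 = {
    "anchor": {"male", "female"},
    "competitive game": {"male", "female", "signboard"},
    "others": {"male", "female", "sky", "mountain", "river", "signboard"},
}

# Example: a "competitive game" video keeps the signboard, drops the sky.
enabled = table1["competitive game"]
selected = regions_to_optimize([((0, 0, 64, 64), "signboard"),
                                ((10, 10, 32, 32), "sky")], enabled)
```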
As another embodiment, the target object area satisfying the preset condition is determined by: acquiring a target object to be optimized corresponding to the video file; and taking the target object area matched with the target object to be optimized as the target object area meeting preset conditions.
The difference between video enhancement according to the target object to be optimized and video enhancement according to the type to be optimized is that, in video enhancement by the target object to be optimized, the target object to be optimized is a target object in a video file, for example, the target object in the video file includes a passerby a, a passerby b and a passerby c, and the target object to be optimized may be at least one of the passerby a, the passerby b and the passerby c.
In some embodiments, the video file includes a character introduction. For example, some movies or episodes show an actor list in the video introduction; when the user selects an actor from that list, the target object to be optimized for the video file is that actor. The electronic device can then obtain a picture of the actor's face, either from the actor list or from the internet by searching the actor's name, and based on that picture it can locate the region corresponding to the actor in each frame image of the video file.
Specifically, as shown in fig. 7, a content introduction is provided on the playing interface of the video file, describing the approximate content of the video and the relationships between characters. Below it, an actor list shows the names and portraits of several actors together with an on switch beside each name. The user selects the name of the actor to be optimized in the list, thereby inputting the target object to be optimized for the video file. The electronic device then scans each frame image of the video file for the target object area matching that target object, which serves as the target object area to be optimized.
In other embodiments, the interface shown in FIG. 5 may also be used, except that the name of each object is displayed within the interface rather than the type of object.
After the target object region meeting the preset condition is found in the target frame image, optimizing image data corresponding to the target object region in the target frame image, specifically, optimizing image parameters of the image data in the target object region meeting the preset condition in the target frame image, wherein the image parameter optimization includes at least one of exposure enhancement, denoising, edge sharpening, contrast increase or saturation increase.
Specifically, since the decoded image data is in RGBA format, it must be converted to HSV format before optimization. Specifically, a histogram of the image data is computed, the parameters for converting the RGBA data to HSV are obtained from statistics over the histogram, and the RGBA data is converted to HSV accordingly.
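Below is a minimal per-pixel sketch of the RGBA-to-HSV step using Python's standard colorsys module. The histogram-derived conversion parameters mentioned above are omitted, and the function name is illustrative; a production pipeline would do this vectorized or on the GPU.

```python
import colorsys

def rgba_pixel_to_hsv(r: int, g: int, b: int, a: int = 255):
    # Alpha is dropped; H, S, and V are returned in [0, 1].
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

h, s, v = rgba_pixel_to_hsv(200, 120, 40)
```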
Exposure enhancement is used to increase the brightness of the image. The luminance of regions where the luminance value is low can be raised using the image's histogram, or the image brightness can be increased by nonlinear superposition. Specifically, if I denotes the dark image to be processed and T the brighter processed image, the exposure can be enhanced by T(x) = I(x) * (2 - I(x)), where T and I are both images with values in [0, 1]. If one pass is not effective, the algorithm can be iterated multiple times.
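A minimal sketch of the nonlinear exposure-enhancement formula above, assuming it is applied to a brightness channel scaled to [0, 1] (for example, the V channel in HSV):

```python
import numpy as np

def enhance_exposure(img: np.ndarray, iterations: int = 1) -> np.ndarray:
    """img: float array with values in [0, 1]."""
    out = img.astype(np.float64)
    for _ in range(iterations):      # iterate if one pass is not effective
        out = out * (2.0 - out)      # brightens dark pixels; fixes 0 and 1 in place
    return out
```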
Denoising the image data removes the image's noise. Images are degraded by the interference and influence of various noises during generation and transmission, which harms both subsequent processing and the visual effect. Noise comes in many types, such as electrical noise, mechanical noise, and channel noise. In terms of probability distribution, there are Gaussian noise, Rayleigh noise, gamma noise, exponential noise, and uniform noise. To suppress noise, improve image quality, and facilitate higher-level processing, the image must therefore be denoised as preprocessing.
Specifically, the image can be denoised with a Gaussian filter, a linear filter that effectively suppresses noise and smooths the image. Its principle is similar to that of an averaging filter: the output is the weighted average of the pixels inside the filter window. The window template coefficients differ, however. An averaging filter's template coefficients are all identical (all 1), while a Gaussian filter's template coefficients decrease as the distance from the template center increases. A Gaussian filter therefore blurs the image less than a mean filter.
For example, a 5 × 5 gaussian filter window is generated, and sampling is performed with the center position of the template as the origin of coordinates. And substituting the coordinates of each position of the template into a Gaussian function, wherein the obtained value is the coefficient of the template. And then the Gaussian filter window is convolved with the image to denoise the image.
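A sketch of the 5 x 5 Gaussian template construction and convolution just described, assuming scipy is available for the convolution; the sigma value is illustrative, as the text does not specify one.

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> np.ndarray:
    ax = np.arange(size) - size // 2            # coordinates relative to the center
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))  # sample the Gaussian
    return k / k.sum()                          # coefficients shrink with distance

def denoise(img: np.ndarray) -> np.ndarray:
    """Convolve the grayscale image with the Gaussian window."""
    return convolve2d(img, gaussian_kernel(), mode="same", boundary="symm")
```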
Wherein edge sharpening is used to sharpen the blurred image. There are generally two methods for image sharpening: one is a differential method, and the other is a high-pass filtering method.
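A sketch of the second (high-pass filtering) method, implemented here as unsharp masking under the assumption of a float image in [0, 1]; the amount parameter is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(img: np.ndarray, amount: float = 1.0) -> np.ndarray:
    """img: float array in [0, 1]."""
    low_pass = gaussian_filter(img, sigma=1.0)
    high_pass = img - low_pass                  # isolate edges and fine detail
    return np.clip(img + amount * high_pass, 0.0, 1.0)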
In particular, contrast stretching is a method of image enhancement that belongs to the gray-scale transformation operations. Stretching the gray values over the whole 0-255 interval via a gray-scale transformation clearly enhances the contrast greatly. The following formula maps the gray value of a pixel to a larger gray space:

I'(x, y) = [(I(x, y) - I_min) / (I_max - I_min)] * (MAX - MIN) + MIN

where I_min and I_max are the minimum and maximum gray values of the original image, and MIN and MAX are the minimum and maximum gray values of the gray space to be stretched to.
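A sketch of the contrast-stretching formula above; the target range defaults to 0-255 as in the text, and the flat-image guard is an added assumption to avoid division by zero.

```python
import numpy as np

def stretch_contrast(img: np.ndarray, out_min: float = 0.0,
                     out_max: float = 255.0) -> np.ndarray:
    i_min, i_max = img.min(), img.max()
    if i_max == i_min:                          # flat image: nothing to stretch
        return np.full_like(img, out_min, dtype=np.float64)
    scaled = (img.astype(np.float64) - i_min) / (i_max - i_min)
    return scaled * (out_max - out_min) + out_min
```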
The above video enhancement algorithms can improve the image quality inside the target object areas meeting the preset condition. Furthermore, different video enhancement algorithms can be selected for different types of target object area; the types may be, for example, person, animal, food, or scenery.
Then, according to the correspondence between target object types and video enhancement algorithms, the algorithm for the type of the target object is determined. The video enhancement algorithm may include at least one of exposure enhancement, denoising, edge sharpening, contrast increase, or saturation increase, and the combination used differs for different types of target object, for example as shown in table 2:
TABLE 2
[Table 2 is provided as an image in the original publication; it lists, for each target object type, the corresponding combination of exposure enhancement, denoising, edge sharpening, contrast increase, and saturation increase.]
According to the corresponding relation shown in table 2, the video enhancement algorithm corresponding to the type of the target object can be determined, and then the image parameters of the image data in the target area meeting the preset condition are optimized, so that the image in the target area can show the super-definition effect.
S304: and displaying the optimized target frame image on a screen of the electronic equipment.
Specifically, the optimized target frame image is sent to the frame buffer corresponding to the screen, and at the next screen refresh the target frame image in the frame buffer is displayed on the screen.
The frame buffer corresponds to the screen and stores the data to be displayed on it, for example the Framebuffer shown in fig. 2, a driver interface in the operating system kernel. Taking the Android system as an example: Linux runs in protected mode, so a user-mode process cannot use the interrupt calls provided by the graphics card BIOS to write display data directly to the screen the way a DOS system can. Linux therefore abstracts the Framebuffer device so that user processes can write display data to the screen directly. The Framebuffer mechanism imitates the function of the graphics card, allowing the video memory to be manipulated directly through Framebuffer reads and writes. Specifically, the Framebuffer can be regarded as an image of the display memory; after it is mapped into the process address space, it can be read and written directly, and written data appears on the screen.
The frame buffer can be regarded as a space for storing data. The CPU or GPU puts the data to be displayed into the frame buffer; the Framebuffer itself has no ability to process data. The video controller reads the data in the Framebuffer according to the screen refresh frequency and displays it on the screen.
Specifically, after the optimized image data is stored in the frame buffer, and the image processor detects that the data is written in the frame buffer, the optimized image data is read from the frame buffer and displayed on the screen.
In one embodiment, the image processor reads the optimized image data from the frame buffer frame by frame according to the refresh frequency of the screen, and displays the optimized image data on the screen after rendering and synthesizing.
Specifically, as shown in fig. 8, an HQV algorithm module is added in the GPU; this module executes the image processing method. Compared with fig. 2, when the image data to be rendered is sent to SurfaceFlinger after soft decoding, it is intercepted and optimized by the HQV algorithm module and then sent on to SurfaceFlinger for rendering and the subsequent display operations on the screen.
It should be noted that, although this embodiment illustrates the optimization of one image, the method also applies to processing a video file. Specifically, when playing a video file, after displaying the optimized target frame image on the screen of the electronic device, the method further includes taking the frame image following the target frame image as the new target frame image and returning to the step of determining the plurality of target object areas corresponding to the target frame image, that is, returning to S302.
Therefore, in the method provided by the application, during video playback a plurality of target object areas can be determined in each image to be played, a target object area meeting the preset condition is selected, the image data corresponding to that area in the frame image is optimized, and the optimized frame image is displayed on the screen of the electronic device. Thus, during video playback the video can be optimized according to the target object areas of each frame image, improving the image quality of the user's video file during playback and improving the user experience.
In addition, to give the user more flexibility in selecting the target object to be optimized and to better meet the user's requirements, the user may manually select the target object in a certain region of the video for optimization. Specifically, referring to fig. 9, the method is applied to an image processor of an electronic device and improves the image quality when a video is played; the method includes S901 to S909.
S901: and acquiring a target frame image corresponding to the video file.
S902: and determining a plurality of target object areas corresponding to the target frame images.
S903: before the target frame image is acquired and the video file is played on a screen of the electronic equipment, a touch gesture acted on the screen by a user is detected.
A first time point, at which the target frame image is acquired, is recorded; it represents the time point at which the method is executed. Specifically, after the electronic device decodes the video file, it obtains the multiple frame images corresponding to the video file and then renders and displays them frame by frame. The first time point marks the start of rendering and displaying the target frame image, and it is during rendering that the method optimizes the image data in the target object areas of the image that meet the preset condition.
Specifically, when the video file is played on the screen of the electronic device, the electronic device continuously monitors the user's touch gestures on the screen. When the input of a touch gesture is detected, it records the second time point at which the gesture was input and the target position of the screen corresponding to the gesture, and stores both in the touch gesture recording table.
In addition, the detection of a touch gesture may be a false positive: the user may have touched the screen inadvertently rather than deliberately pressing an area of it, that is, without selecting an area of the screen. Therefore, after a touch gesture acting on the screen is detected, its duration can be determined. If the duration is greater than a preset time length, the touch gesture is considered valid and the operation of determining the target position of the screen corresponding to the gesture proceeds; if the duration is less than or equal to the preset time length, the gesture is discarded. The preset time length is set by the user as required and may be, for example, 1 to 3 seconds.
According to the first time point, and taking the time point at which the video file started playing as the starting point, the electronic device looks up all touch gestures between the starting point and the first time point.
S904: and determining a target position of the screen corresponding to the touch gesture.
Specifically, the target position corresponding to the touch gesture is determined according to the touch gesture recording table. The screen can be addressed by its independent touch units (touch capacitors, for example): a coordinate system is laid out horizontally and vertically with the touch unit at the top-left of the screen as the origin, and each coordinate is then determined by the arrangement of the touch units. The coordinate (10, 20), for example, denotes the 10th touch unit in the horizontal direction and the 20th in the vertical direction.
When a user touches the screen, if an input touch gesture can be sensed by the touch unit in a certain area of the screen, the position of the touch unit sensing the touch gesture is the target position of the screen corresponding to the touch gesture.
S905: and acquiring an image displayed on the screen and a selected target object corresponding to the target position in the image at the moment when the touch gesture is detected.
For convenience of description, a touch gesture of a user acting on the screen before the target frame image is acquired and the video file is played on the screen of the electronic device is recorded as a target touch gesture, and then the moment when the target touch gesture is detected, namely the second time point, can be determined according to the touch gesture recording table.
In addition, when the electronic device plays a video file, it records the playback starting point and the time point at which each frame image of the video file is displayed on the screen. At the moment the touch gesture is detected, the image currently playing on the screen is identified. Specifically, the images being played are stored in a video frame buffer and displayed on the screen according to the screen refresh frequency; the electronic device can record each image displayed, so when a touch gesture input on the screen is detected, the image currently displayed can be determined from the order in which images are played out of the frame buffer. The image displayed on the screen at the moment the touch gesture was detected can thereby be determined.
Since the positions of the touch units of the touch screen correspond to the positions of the pixel points on the screen's display panel, and this positional correspondence is obtained in advance, the positions of the display units (for example, transistors or liquid-crystal cells) corresponding to the target position of the touch gesture can be determined from it. The display area of the video is then determined from the video's current display state, which may be full-screen, where the entire screen serves as the display area for the video picture, or windowed, where a window smaller than the screen serves as the display area. Because each frame image of the video lies within the display area of the screen, each pixel point of the image corresponds to the position of one or more display units. Therefore, the positions of the display units corresponding to the touch units of the touch gesture are determined, the positions of the image pixel points corresponding to those display units are determined in turn, and the target region is determined from the positions of those pixel points.
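A minimal sketch of this mapping from a touch position to a pixel of the decoded frame, assuming the video display rectangle and frame resolution are known; in a real system these would come from the window manager and the decoder, and all names are illustrative.

```python
def touch_to_frame_pixel(touch_x: int, touch_y: int,
                         disp_x: int, disp_y: int, disp_w: int, disp_h: int,
                         frame_w: int, frame_h: int):
    """Map screen coordinates to a pixel in the decoded video frame.

    (disp_x, disp_y, disp_w, disp_h) is the video display area on the
    screen (the whole screen in full-screen mode, a smaller rectangle in
    window mode); (frame_w, frame_h) is the decoded frame resolution.
    """
    if not (disp_x <= touch_x < disp_x + disp_w and
            disp_y <= touch_y < disp_y + disp_h):
        return None                             # touch landed outside the video area
    px = (touch_x - disp_x) * frame_w // disp_w
    py = (touch_y - disp_y) * frame_h // disp_h
    return px, py
```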
S906: and taking the selected target object as a target object to be optimized.
S907: and taking the target object area matched with the target object to be optimized as the target object area meeting preset conditions.
S908: and optimizing the image data in the target area meeting the preset condition.
S909: and displaying the optimized target frame image on a screen of the electronic equipment.
The target object selected by the user through the touch gesture on the screen can be used as the target object to be optimized corresponding to the video file, and when the image of the video file is played at the later stage, if the target object to be optimized is included in the image, the target object is optimized.
For example, as shown in fig. 10, a picture from the video file is displayed on the screen and the user touches the rooster in the picture with a finger. When the electronic device detects the user's touch on the screen, it determines that the target object area of the image corresponding to the touched region is the area of the rooster. The electronic device may then choose to redisplay the picture, that is, display it again after applying video enhancement to the rooster's area, or, when playing the next frame image, determine whether it contains the rooster and, if so, apply video enhancement to it. Similarly, if the user wants the sky, a mountain, or grassland optimized, clicking the corresponding area lets the electronic device optimize that area.
When the method is applied to image display, after an image is displayed a touch gesture acting on the screen may be acquired, mapped to a target object region of the image, and the image displayed again after that target object region is optimized.
It should be noted that, for the details of the above steps, reference may be made to the foregoing embodiments; they are not repeated here.
In addition, a user who has chosen to enhance the display effect of a certain target while playing a video may later give up that enhancement, and the enhancement of the target may be cancelled through a gesture. Specifically, referring to fig. 11, the method includes:
S1101: and acquiring a target frame image corresponding to the video file.
S1102: and determining a plurality of target object areas corresponding to the target frame images.
S1103: before the target frame image is acquired and the video file is played on a screen of the electronic equipment, a touch gesture acted on the screen by a user is detected.
S1104: and determining a target position of the screen corresponding to the touch gesture.
S1105: and acquiring an image displayed on the screen and a selected target object corresponding to the target position in the image at the moment when the touch gesture is detected.
S1106: and judging whether the selected target object is already used as the target object to be optimized.
In one implementation, a to-be-optimized target object list is stored in the electronic device; it holds the identifiers of a plurality of target objects to be optimized. An identifier serves as the identity information of a target object and may be determined from the target object's feature values in the image; for example, a face's identifier can be determined from the facial features in a face image. The identifiers of the target objects to be optimized are stored in the list, each identifier corresponding to the feature information of its target object, and that feature information is used to determine the identifier.
When the electronic device plays a video file or displays a picture and optimizes the target frame image corresponding to the video file, it extracts the plurality of target object regions in the target frame image, compares them with the target objects in the to-be-optimized list, and takes the target objects that match entries in the list as the target object areas meeting the preset condition. The target objects in the to-be-optimized list thus serve as the criterion for judging whether a target object region in the target frame image meets the preset condition.
When a touch gesture input by the user touching the screen is detected, the target object corresponding to the gesture is determined and recorded as the selected target object, and it is judged whether the selected target object is in the to-be-optimized list, that is, whether it has already been taken as a target object to be optimized. If it is in the list, it is judged to already be a target object to be optimized, meaning the user selected it previously; if it is not in the list, it is judged not to be a target object to be optimized, meaning it has been selected for the first time.
S1107: and taking the selected target object as a target object to be optimized.
If the selected target object is not taken as the target object to be optimized, the target object is considered to be selected for the first time, and the selected target object is taken as the target object to be optimized.
S1108: and canceling the selected target object as the target object to be optimized.
If the selected target object was not selected for the first time but had already been chosen as a target object to be optimized, its status as a target object to be optimized is cancelled. Specifically, the identifier of the selected target object may be deleted from the to-be-optimized list; when the electronic device then matches the objects in the list against the target frame image, the selected object no longer matches and is not optimized. Thus, after the user enables optimization of a target object by touching it, pressing it again turns the optimization off, at which point its identifier is deleted from the list; the next time the user presses it, optimization of that object is enabled again, that is, it can once more be taken as a target object to be optimized.
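A minimal sketch of this toggle behavior on the to-be-optimized list; the string identifier stands in for the feature-derived identity information described above.

```python
def toggle_target(to_optimize: set, object_id: str) -> bool:
    """Return True if the object is now to be optimized, False if cancelled."""
    if object_id in to_optimize:
        to_optimize.discard(object_id)   # already selected: cancel optimization
        return False
    to_optimize.add(object_id)           # first selection: enable optimization
    return True

targets = set()
toggle_target(targets, "rooster")        # -> True, optimization enabled
toggle_target(targets, "rooster")        # -> False, optimization cancelled
```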
In addition, when the user cancels the optimization of the selected target object, a prompt message may be displayed. This both reminds the user that optimization of the selected object is being cancelled and asks the user to confirm the cancellation. Specifically, S1108 may be implemented as follows: display prompt information on the current interface, reminding the user whether the optimization of the selected target object should be cancelled; acquire the indication information the user inputs based on the prompt; if the indication information is a cancel instruction, cancel the selected target object as a target object to be optimized; and if it is an optimization instruction, perform the operation of taking the selected target object as a target object to be optimized.
Specifically, when it is determined that the selected target object is already a target object to be optimized, prompt information is displayed on the current interface. In one embodiment, a prompt window 1200 is shown on the interface containing the prompt, for instance "whether to cancel the display enhancement of the selected object" as shown in fig. 12, together with a confirm button 1201 and a cancel button 1202. The user clicks the confirm button 1201 to input an optimization instruction that tells the electronic device to take the selected target object as a target object to be optimized, that is, S1107 is executed. When the user clicks the cancel button 1202, a cancel instruction is input that cancels the selected target object as a target object to be optimized.
S1109: and taking the target object area matched with the target object to be optimized as the target object area meeting preset conditions.
S1110: and optimizing the image data in the target area meeting the preset condition.
S1111: and displaying the optimized target frame image on a screen of the electronic equipment.
It should be noted that, for the details of the above steps, reference may be made to the foregoing embodiments; they are not repeated here.
Referring to fig. 13, a block diagram of an image processing apparatus 1300 according to an embodiment of the present disclosure is shown, where the apparatus may include: an acquisition unit 1301, a determination unit 1302, an optimization unit 1303, and a display unit 1304.
An obtaining unit 1301 is configured to obtain a target frame image corresponding to the video file.
A determining unit 1302, configured to determine a plurality of target object areas corresponding to the target frame image.
And an optimizing unit 1303 configured to optimize image data in a target area that satisfies a preset condition in the target frame image.
A display unit 1304, configured to display the optimized target frame image on a screen of the electronic device.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling between modules may be electrical, mechanical, or of another form.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Referring to fig. 14, a block diagram of an electronic device according to an embodiment of the present application is shown. The electronic device 100 may be a smartphone, a tablet computer, an electronic book reader, or another electronic device capable of running a client. The electronic device 100 in the present application may include one or more of the following components: a processor 110, a memory 120, a screen 140, and one or more clients, where the one or more clients may be stored in the memory 120 and configured to be executed by the one or more processors 110 to perform the methods described in the foregoing method embodiments.
The processor 110 may include one or more processing cores. The processor 110 connects the various parts of the electronic device 100 using various interfaces and lines, and performs the functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and by invoking data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware in at least one of the forms of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA).
Specifically, the processor 110 may include one or a combination of a Central Processing Unit (CPU) 111, a Graphics Processing Unit (GPU) 112, a modem, and the like. The CPU mainly handles the operating system, the user interface, clients, and so on; the GPU is responsible for rendering and drawing display content; the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
The memory 120 may include Random Access Memory (RAM) or Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image display function), instructions for implementing the foregoing method embodiments, and the like. The data storage area may store data created by the electronic device 100 during use (such as a phone book, audio and video data, and chat log data), and the like.
The screen 140 is used to display information input by the user, information provided to the user, and the various graphical user interfaces of the electronic device; these interfaces may be composed of graphics, text, icons, numbers, video, and any combination thereof. In one example, a touch panel may be provided on the display panel so that the two form an integrated whole.
Referring to fig. 15, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable storage medium 800 stores program code that can be called by a processor to execute the methods described in the foregoing method embodiments.
The computer-readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 800 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 800 has storage space for program code 810 for performing any of the method steps described above. The program code may be read from or written into one or more computer program products. The program code 810 may, for example, be compressed in a suitable form.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or replacements do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (5)

1. An image processing method applied to an electronic device, the method comprising:
acquiring, as a target object to be optimized corresponding to a video file, a person to be optimized selected by a user from a person list based on an enable switch corresponding to each person, wherein the video file is played on a playing interface, a content introduction corresponding to the video file and the person list are displayed on the playing interface, each person in the person list corresponds to an enable switch, the content introduction is used for introducing the relationships among the roles of the video file, and each person in the person list is an actor appearing in the video file;
acquiring a target frame image corresponding to the video file;
determining a plurality of target object areas corresponding to the target frame image;
searching for an image of the target object to be optimized as an image to be optimized;
taking the target object area matched with the image to be optimized as a target object area satisfying a preset condition;
optimizing the image data in the target area satisfying the preset condition;
displaying the optimized target frame image on a screen of the electronic device; and
taking a next frame image of the target frame image as a new target frame image, and returning to the step of determining a plurality of target object areas corresponding to the target frame image and the subsequent steps.
2. The method according to claim 1, wherein optimizing the image data in the target area satisfying the preset condition comprises:
optimizing image parameters of the image data in the target area satisfying the preset condition within the target frame image, wherein the image parameter optimization comprises at least one of exposure enhancement, denoising, edge sharpening, contrast increase, or saturation increase.
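By way of illustration only (not part of the claims), the parameter optimizations named in claim 2 could be realized with common OpenCV operations as in the sketch below; the specific algorithms and constants are assumptions, since the claim only names the categories of optimization:

    import cv2
    import numpy as np

    def optimize_image_parameters(img: np.ndarray) -> np.ndarray:
        # Exposure enhancement and contrast increase: linear gain and offset.
        img = cv2.convertScaleAbs(img, alpha=1.2, beta=20)
        # Denoising: non-local means on the colour image.
        img = cv2.fastNlMeansDenoisingColored(img, None, 5, 5, 7, 21)
        # Edge sharpening: a 3x3 unsharp-style kernel.
        kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
        img = cv2.filter2D(img, -1, kernel)
        # Saturation increase: scale the S channel in HSV space.
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
        hsv[..., 1] = np.clip(hsv[..., 1] * 1.25, 0, 255)
        return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)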
3. An image processing apparatus applied to an electronic device, the image processing apparatus comprising:
an acquisition unit, configured to acquire, as a target object to be optimized corresponding to a video file, a person to be optimized selected by a user from a person list based on an enable switch corresponding to each person, and to acquire a target frame image corresponding to the video file, wherein the video file is played on a playing interface, a content introduction corresponding to the video file and the person list are displayed on the playing interface, each person in the person list corresponds to an enable switch, the content introduction is used for introducing the relationships among the roles of the video file, and each person in the person list is an actor appearing in the video file;
a determination unit, configured to determine a plurality of target object areas corresponding to the target frame image;
an optimization unit, configured to search for an image of the target object to be optimized as an image to be optimized, take the target object area matched with the image to be optimized as a target object area satisfying a preset condition, and optimize the image data in the target area satisfying the preset condition; and
a display unit, configured to display the optimized target frame image on a screen of the electronic device, take a next frame image of the target frame image as a new target frame image, and return to the step of determining a plurality of target object areas corresponding to the target frame image and the subsequent steps.
4. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors to perform the method of claim 1 or 2.
5. A computer-readable storage medium having program code stored therein, the program code being invoked by a processor to perform the method of claim 1 or 2.
CN201811141914.4A 2018-09-28 2018-09-28 Image processing method, image processing device, electronic equipment and computer readable medium Active CN109242802B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811141914.4A CN109242802B (en) 2018-09-28 2018-09-28 Image processing method, image processing device, electronic equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN109242802A (en) 2019-01-18
CN109242802B (en) 2021-06-15

Family

ID=65054020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811141914.4A Active CN109242802B (en) 2018-09-28 2018-09-28 Image processing method, image processing device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN109242802B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110223761B (en) * 2019-06-13 2023-08-22 上海联影医疗科技股份有限公司 Outlining data import method and device, electronic equipment and storage medium
CN112241936B (en) * 2019-07-18 2023-08-25 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment and storage medium
CN110599581B (en) * 2019-08-29 2023-03-31 Oppo广东移动通信有限公司 Image model data processing method and device and electronic equipment
CN110795054B (en) * 2019-10-21 2023-07-28 Oppo广东移动通信有限公司 Image quality adjusting method and related product
CN111091489B (en) * 2019-11-01 2024-05-07 平安科技(深圳)有限公司 Picture optimization method and device, electronic equipment and storage medium
CN111383197A (en) * 2020-03-16 2020-07-07 浙江大华技术股份有限公司 Method and device for displaying different images by security check machine
CN112312203B (en) * 2020-08-25 2023-04-07 北京沃东天骏信息技术有限公司 Video playing method, device and storage medium
CN111968605A (en) * 2020-08-28 2020-11-20 维沃移动通信有限公司 Exposure adjusting method and device
CN112218136B (en) * 2020-10-10 2021-08-10 腾讯科技(深圳)有限公司 Video processing method, video processing device, computer equipment and storage medium
CN112351336A (en) * 2020-10-29 2021-02-09 南京创维信息技术研究院有限公司 Method, device, terminal and medium for optimizing television image quality based on video image segmentation
CN112070707B (en) * 2020-11-12 2021-02-23 国科天成科技股份有限公司 True color image intensifier based on micro-lens array
CN112887665B (en) * 2020-12-30 2023-07-18 重庆邮电大学移通学院 Video image processing method and related device
CN113132800B (en) * 2021-04-14 2022-09-02 Oppo广东移动通信有限公司 Video processing method and device, video player, electronic equipment and readable medium
CN113256660A (en) * 2021-06-04 2021-08-13 北京有竹居网络技术有限公司 Picture processing method and device and electronic equipment
CN113784084B (en) * 2021-09-27 2023-05-23 联想(北京)有限公司 Processing method and device
CN114125555B (en) * 2021-11-12 2024-02-09 深圳麦风科技有限公司 Editing data preview method, terminal and storage medium
CN114928765A (en) * 2022-05-05 2022-08-19 维沃移动通信有限公司 Control method, control device, electronic equipment and readable storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5735227B2 (en) * 2010-07-16 2015-06-17 ルネサスエレクトロニクス株式会社 Image conversion apparatus and image conversion system
CN103310411B (en) * 2012-09-25 2017-04-12 中兴通讯股份有限公司 Image local reinforcement method and device
CN104394313A (en) * 2014-10-27 2015-03-04 成都理想境界科技有限公司 Special effect video generating method and device
CN105303543A (en) * 2015-10-23 2016-02-03 努比亚技术有限公司 Image enhancement method and mobile terminal
CN105847728A (en) * 2016-04-13 2016-08-10 腾讯科技(深圳)有限公司 Information processing method and terminal
CN105872447A (en) * 2016-05-26 2016-08-17 努比亚技术有限公司 Video image processing device and method
KR102233175B1 (en) * 2017-01-05 2021-03-29 한국전자통신연구원 Method for determining signature actor and for identifying image based on probability of appearance of signature actor and apparatus for the same
CN107742274A (en) * 2017-10-31 2018-02-27 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and electronic equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104918670A (en) * 2013-01-07 2015-09-16 微软技术许可有限责任公司 Location based augmentation for story reading
CN105227966A (en) * 2015-09-29 2016-01-06 深圳Tcl新技术有限公司 To televise control method, server and control system of televising
CN107610240A (en) * 2017-08-09 2018-01-19 广东欧珀移动通信有限公司 Head portrait replacement method, device and mobile terminal
CN107610080A (en) * 2017-09-11 2018-01-19 广东欧珀移动通信有限公司 Image processing method and device, electronic installation and computer-readable recording medium

Also Published As

Publication number Publication date
CN109242802A (en) 2019-01-18

Similar Documents

Publication Publication Date Title
CN109242802B (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN109379625B (en) Video processing method, video processing device, electronic equipment and computer readable medium
CN109218802B (en) Video processing method and device, electronic equipment and computer readable medium
CN109525901B (en) Video processing method and device, electronic equipment and computer readable medium
CN109168068B (en) Video processing method and device, electronic equipment and computer readable medium
CN109379628B (en) Video processing method and device, electronic equipment and computer readable medium
CN109640168B (en) Video processing method, video processing device, electronic equipment and computer readable medium
CN109685726B (en) Game scene processing method and device, electronic equipment and storage medium
CN109379627B (en) Video processing method, video processing device, electronic equipment and storage medium
CN109361949B (en) Video processing method, video processing device, electronic equipment and storage medium
US11531458B2 (en) Video enhancement control method, electronic apparatus and storage medium
WO2020107989A1 (en) Video processing method and apparatus, and electronic device and storage medium
US11490157B2 (en) Method for controlling video enhancement, device, electronic device and storage medium
CN109120988B (en) Decoding method, decoding device, electronic device and storage medium
WO2020108060A1 (en) Video processing method and apparatus, and electronic device and storage medium
US20220351701A1 (en) Method and device for adjusting image quality, and readable storage medium
WO2020108010A1 (en) Video processing method and apparatus, electronic device and storage medium
WO2022218042A1 (en) Video processing method and apparatus, and video player, electronic device and readable medium
CN109151574B (en) Video processing method, video processing device, electronic equipment and storage medium
CN109167946B (en) Video processing method, video processing device, electronic equipment and storage medium
CN109218803B (en) Video enhancement control method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant