CN111163350A - Image processing method, terminal and computer storage medium - Google Patents


Info

Publication number
CN111163350A
CN111163350A (application CN201911242833.8A); granted as CN111163350B
Authority
CN
China
Prior art keywords
image frame
current image
mapping table
component value
current
Prior art date
Legal status
Granted
Application number
CN201911242833.8A
Other languages
Chinese (zh)
Other versions
CN111163350B (en)
Inventor
贾玉虎
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911242833.8A priority Critical patent/CN111163350B/en
Publication of CN111163350A publication Critical patent/CN111163350A/en
Application granted granted Critical
Publication of CN111163350B publication Critical patent/CN111163350B/en
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012 Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/64 Circuits for processing colour signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the application disclose an image processing method comprising the following steps: obtaining the scene type of a current image frame; obtaining the image component value mapping table corresponding to the current image frame according to its scene type; when the scene type of the current image frame differs from that of the previous image frame, re-determining an image component value mapping table for the current image frame from the mapping table of the current image frame and the mapping table of the previous image frame; and processing the current image frame with the re-determined mapping table to obtain the processed current image frame. The embodiments of the application also provide a terminal and a computer storage medium.

Description

Image processing method, terminal and computer storage medium
Technical Field
The present application relates to image processing technologies, and in particular, to an image processing method, a terminal, and a computer storage medium.
Background
With existing image processing methods, when the frame rate of a video is high and the scene content differs greatly from the image component value mapping table across a scene switch, the color of the image jumps and the transition looks unnatural, so the transitional visual effect of the video image is poor.
Disclosure of Invention
The embodiment of the application provides an image processing method, a terminal and a computer storage medium, which can improve the transitional visual effect of a video image.
The technical scheme of the application is realized as follows:
the embodiment of the application provides an image processing method, which comprises the following steps:
acquiring a scene type of a current image frame;
acquiring an image component value mapping table of the corresponding current image frame according to the scene type of the current image frame;
when the scene type of the current image frame is different from the scene type of the previous image frame, re-determining an image component value mapping table for the current image frame according to the image component value mapping table of the current image frame and the image component value mapping table of the previous image frame;
and mapping the current image frame by using the re-determined image component value mapping table to obtain the processed current image frame.
The embodiment of the application provides a terminal, the terminal includes:
the first acquisition module is used for acquiring the scene type of the current image frame;
the second obtaining module is used for obtaining a corresponding image component value mapping table of the current image frame according to the scene type of the current image frame;
a determining module, configured to re-determine an image component value mapping table for the current image frame according to the image component value mapping table of the current image frame and the image component value mapping table of the previous image frame when the scene type of the current image frame is different from the scene type of the previous image frame;
and the processing module is used for mapping the current image frame by using the redetermined image component value mapping table to obtain the processed current image frame.
An embodiment of the present application further provides a terminal, where the terminal includes: a processor and a storage medium storing instructions executable by the processor, the storage medium relying on the processor to perform operations through a communication bus; when the instructions are executed by the processor, the image processing method of one or more of the above embodiments is performed.
The embodiment of the application provides a computer storage medium, which stores executable instructions, and when the executable instructions are executed by one or more processors, the processors execute the image processing method of one or more embodiments.
The embodiments of the application provide an image processing method, a terminal, and a computer storage medium. The method includes: acquiring the scene type of a current image frame; acquiring the image component value mapping table corresponding to the current image frame according to that scene type; when the scene type of the current image frame differs from that of the previous image frame, re-determining an image component value mapping table for the current image frame from the mapping tables of the current and previous image frames; and processing the current image frame with the re-determined mapping table to obtain the processed current image frame. That is, in the embodiments of the present application, for a current image frame in a video image, the scene type is obtained first and the mapping table corresponding to that scene type is determined. When the scene type of the current image frame differs from that of the previous image frame, the scene of the current image frame jumps relative to the previous one; in that case, to transition more smoothly from the previous image frame to the current one, a mapping table is re-determined for the current image frame from the mapping tables of the two frames, and the image component values of the current image frame are processed with the re-determined table. Compared with processing the current frame directly with the table of its own scene, the processed current image frame thus obtained eliminates the visible color jump relative to the previous frame and improves the transitional visual effect of the video image.
Drawings
Fig. 1 is a schematic flowchart of an alternative image processing method according to an embodiment of the present disclosure;
fig. 2 is a first schematic structural diagram of a terminal according to an embodiment of the present disclosure;
fig. 3 is a second schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Example one
An embodiment of the present application provides an image processing method, and fig. 1 is a schematic flowchart of an optional image processing method provided in the embodiment of the present application, and referring to fig. 1, the image processing method may include:
s101: acquiring a scene type of a current image frame;
at present, for a video image, a scene abrupt change exists between adjacent image frames, for example, the scene content of a previous image frame is a green plant, and the scene content of a current image frame is a blue sky, so that for two adjacent image frames of the green plant and the blue sky, when processing image component values, if a mapping table of image component values corresponding to the scene content is used, a visual effect of color jump occurs, and thus, the visual effect is affected due to unnatural visual transition.
To enhance the transitional visual effect of the video image, a current image frame is obtained first. The method may be applied to a terminal, which may be a mobile terminal such as a mobile phone, tablet computer, notebook computer, palmtop computer, Personal Digital Assistant (PDA), Portable Media Player (PMP), navigation device, wearable device, smart band, or pedometer, or a fixed terminal such as a digital TV or desktop computer.
The terminal may include a camera (a front-facing camera and/or a rear-facing camera) as well as a processor, a display, a battery, and other components. The processor of the terminal may download video images directly through a network, receive video images from other terminals through the network, or capture video images with the terminal's front-facing or rear-facing camera; the processor then performs mapping processing on the video images and displays the processed video images on the display.
After the current image frame is acquired, its scene type may be obtained directly, or scene recognition may be performed on the current image frame with an Artificial Intelligence (AI) algorithm: after the scene of the current image frame is recognized, it is matched against the preset scenes, and the preset scene type that is successfully matched is determined as the scene type of the current image frame.
For example, the terminal performs scene recognition on the current image frame by using an AI algorithm to obtain that the current image frame includes a grassland, matches the scene of the current image frame with a preset scene, and if the scene is successfully matched with the scene of the preset green plant, the scene category of the current image frame may be determined as the scene category of the green plant in the preset scene.
A preset scene is a scene configured in advance in the terminal. There may be 2 or more scene categories, such as green plants, blue sky, flowers, indoor homes, offices, shopping malls, and restaurants; categories may also distinguish different lighting for the same subject, such as green plants under the strong light of a blazing sun, under the weak light of a cloudy day, or under the strong light of a snow scene. The embodiments of the present application do not specifically limit this; in practical applications, 10 to 20 scene categories are typically included.
In this way, the scene type of the current image frame may be determined.
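The matching step of S101 can be sketched as below. The preset categories, the synonym map, and the first-match rule are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch of S101: match AI-recognized labels against the
# preset scene categories stored on the terminal.
PRESET_SCENES = ["green_plant", "blue_sky", "flower", "indoor_home", "office"]
SYNONYMS = {"grassland": "green_plant", "lawn": "green_plant", "sky": "blue_sky"}

def classify_scene(recognized_labels):
    """Return the first preset scene category that a recognized label matches,
    or None when nothing matches."""
    for label in recognized_labels:
        label = SYNONYMS.get(label, label)  # e.g. "grassland" -> "green_plant"
        if label in PRESET_SCENES:
            return label
    return None
```

This mirrors the grassland example above: a recognizer that detects "grassland" is matched to the preset green-plant category.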
S102: acquiring an image component value mapping table of a corresponding current image frame according to the scene type of the current image frame;
the scene type of the current image frame can be obtained through S101, since a corresponding relationship between the scene type and the image component value mapping table is preset in the terminal, that is, one scene type corresponds to one image component value mapping table, after the scene type of the current image frame is determined, the image component value mapping table corresponding to the scene type of the current image frame can be found from the corresponding relationship, and then the corresponding image component value mapping table is determined as the image component value mapping table of the current image frame.
It should be noted that the image components may be image component values in different color spaces, for example, when the color space is an RGB space, the image components may include an image component R, an image component G, and an image component B; when the color space is CMY, the image components may include an image component C, an image component M, and an image component Y; when the color space is HSV, the image components may include an image component H, an image component S, and an image component V; when the color space is HSI, the image components may include an image component H, an image component S, and an image component I.
In this way, a map of image component values for the current image frame may be determined.
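The one-scene-one-table correspondence of S102 can be sketched as follows, assuming each scene category maps to one 256-entry, single-channel lookup table; the gain/offset curves are made up purely for illustration:

```python
def make_lut(gain, offset):
    """Build a clamped 256-entry mapping table: v -> gain * v + offset."""
    return [min(255, max(0, round(v * gain + offset))) for v in range(256)]

# Hypothetical preset correspondence: scene type -> image component value mapping table.
SCENE_LUTS = {
    "green_plant": make_lut(1.10, 0),   # illustrative: boost mid-tones
    "blue_sky":    make_lut(0.95, 10),  # illustrative: lift shadows slightly
}

def lut_for_scene(scene_type):
    """One scene type corresponds to one image component value mapping table."""
    return SCENE_LUTS[scene_type]
```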
S103: when the scene type of the current image frame is different from the scene type of the previous image frame, re-determining an image component value mapping table for the current image frame according to the image component value mapping table of the current image frame and the image component value mapping table of the previous image frame;
for a current image frame, there may be a case where the scene type of the current image frame is the same as that of a previous image frame, and there may also be a case where the scene type of the current image frame is different from that of the previous image frame.
In step S103, when the scene type of the current image frame differs from that of the previous image frame, the mapping tables of different scene types can map the same image component value to very different results. To prevent an abrupt color change in the video image, the image component values of the current image frame are therefore not processed with the mapping table corresponding to the current frame's own scene type; instead, an image component value mapping table is re-determined for the current image frame from the mapping table of the current image frame and the mapping table of the previous image frame.
In an optional embodiment, the image processing method may further include:
and when the scene type of the current image frame is the same as that of the previous image frame, mapping the current image frame with the image component value mapping table corresponding to that scene type to obtain the processed current image frame.
Specifically, when the scene type of the current image frame is the same as that of the previous image frame, there is no risk of a visible color jump in the video image, so the mapping table corresponding to the scene type of the current image frame can be used directly to map the current image frame and obtain the processed current image frame.
In addition, for a video image, since a previous image frame does not exist for a first image frame, the first image frame is directly mapped by using an image component value mapping table corresponding to a scene type of the first image frame, so that the processed first image frame is obtained.
For example, to re-determine the mapping value for a certain image component value, a mapping value a1 corresponding to that component value may be found in the mapping table of the current image frame and a mapping value a2 in the mapping table of the previous image frame, and the average of a1 and a2 determined as the new mapping value; applying this to every component value re-determines the whole mapping table for the current image frame. Alternatively, weight values may be assigned to the mapping table of the current image frame and the mapping table of the previous image frame, and the table re-determined by weighted summation; the embodiments of the present application do not specifically limit this.
To re-determine an appropriate image component value mapping table for the current image frame, the re-determination is typically performed by weighted summation. In an optional embodiment, re-determining the image component value mapping table for the current image frame from the mapping tables of the current and previous image frames includes:
determining a first weight value for an image component value mapping table of a current image frame;
determining a second weight value for an image component value mapping table of a previous image frame;
calculating to obtain a mapping value according to the first weight value, the second weight value, the mapping value of the image component value mapping table of the current image frame and the mapping value of the image component value mapping table of the previous image frame;
and re-determining an image component value mapping table for the current image frame by using the calculated mapping value.
Specifically, the weight value determined for the mapping table of the current image frame is the first weight value, and the weight value determined for the mapping table of the previous image frame is the second weight value. The mapping values of the two tables are then weighted and summed with these two weights, and the weighted-sum mapping values form the re-determined image component value mapping table.
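The weighted-summation step can be sketched as below; setting w = 0.5 reproduces the simple averaging mentioned earlier. This is a minimal illustration, not the patent's implementation:

```python
def blend_luts(curt_lut, prev_lut, w):
    """Re-determine a mapping table entry by entry as
    w * (current frame's table) + (1 - w) * (previous frame's table)."""
    assert len(curt_lut) == len(prev_lut)
    return [round(c * w + p * (1 - w)) for c, p in zip(curt_lut, prev_lut)]
```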
To determine more appropriate weight values for the mapping tables of the current and previous image frames, the first and second weight values may be preset, or they may be determined from the sequence number of the current image frame among the total image frames; the embodiments of the present application do not specifically limit this.
To determine an appropriate first weight value, the sequence number of the current image frame among the total image frames is typically used. In an alternative embodiment, determining the first weight value for the image component value mapping table of the current image frame includes:
determining a first weight value for an image component value mapping table of a current image frame according to a sequence number of the current image frame in a total image frame.
Here, for a video image, the total number of image frames may be denoted N; the image frames are sorted chronologically, the N image frames carry sequence numbers 1, 2, 3, …, N, and the sequence number of the current image frame is denoted n. Since n indicates the position of the current image frame among the total image frames, a weight value appropriate to that position can be determined as the first weight value from the sequence number n.
Further, in order to determine the first weight value, in an alternative embodiment, determining the first weight value for the image component value mapping table of the current image frame according to the sequence number of the current image frame in the total image frame includes:
and determining the ratio of the sequence number of the current image frame among the total image frames to the number of total image frames as the first weight value.
That is, n/N is determined here as the first weight value. For current image frames with different sequence numbers, the first weight value n/N increases linearly as n increases; over the total image frames of the video image, the first weight value ranges from a minimum of 0 to a maximum of 1.
In this way, the first weight value is associated with the position of the current image frame among the total image frames, so the re-determined mapping table takes into account both that position and the scene category difference between the current and previous image frames, eliminating the color jump of the video image.
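The linear schedule above is just the ratio itself; a trivial sketch:

```python
def linear_weight(n, total_frames):
    """First weight value n/N: grows linearly with the current frame's
    sequence number n among the N total frames, from near 0 up to 1."""
    return n / total_frames
```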
Further, in order to determine the first weight value, in an alternative embodiment, determining the first weight value for the image component value mapping table of the current image frame according to the sequence number of the current image frame in the total image frame includes:
and obtaining the first weight value by applying a monotonically increasing function to the ratio of the sequence number of the current image frame among the total image frames to the number of total image frames.
Here, n/N is substituted as the argument into a monotonically increasing function to obtain the first weight value. In practical applications, a commonly used monotonically increasing function is the Sigmoid function; using the Sigmoid function's monotonic increase (its inverse is likewise monotonically increasing), the range of the first weight value can be mapped into the interval between 0 and 1.
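A Sigmoid-shaped schedule can be sketched as below. Centering the argument at the middle of the fade and the steepness k are illustrative choices the text does not fix:

```python
import math

def sigmoid_weight(n, total_frames, k=8.0):
    """Monotonically increasing S-curve weight in (0, 1): changes slowly at
    the ends of the fade and quickly in the middle. k sets the steepness
    (an assumption, not specified by the patent)."""
    x = n / total_frames
    return 1.0 / (1.0 + math.exp(-k * (x - 0.5)))
```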
Further, in order to determine the first weight value, in an alternative embodiment, determining the first weight value for the image component value mapping table of the current image frame according to the sequence number of the current image frame in the total image frames includes:
and obtaining the first weight value by applying the hyperbolic tangent function to the ratio of the sequence number of the current image frame among the total image frames to the number of total image frames.
Specifically, n/N is substituted as the argument into the hyperbolic tangent function, which equals the ratio of the hyperbolic sine to the hyperbolic cosine, i.e., tanh(x) = sinh(x)/cosh(x); this yields a first weight value whose range falls between 0 and 1.
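The tanh schedule follows directly from the definition; for x = n/N in [0, 1] the value stays in [0, tanh(1) ≈ 0.762), i.e., between 0 and 1 as stated:

```python
import math

def tanh_weight(n, total_frames):
    """First weight value tanh(n/N) = sinh(n/N) / cosh(n/N)."""
    x = n / total_frames
    return math.sinh(x) / math.cosh(x)
```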
Further, to determine the second weight value, in an alternative embodiment, determining the second weight value for the image component value mapping table of the previous image frame includes:
and determining a value obtained by subtracting the first weight value from 1 as a second weight value.
As can be seen, after the first weight value w is determined, the second weight value is 1 − w. That is, in the re-determined mapping table, the re-determined mapping value for a certain image component value equals (the mapping value in the current frame's table) × w + (the mapping value in the previous frame's table) × (1 − w). A mapping value can thus be re-determined for that component value and, by analogy, for every image component value, thereby re-determining the image component value mapping table for the current image frame.
S104: and mapping the current image frame by using the re-determined image component value mapping table to obtain the processed current image frame.
Finally, after the re-determined image component value mapping table is obtained, it is used to map the current image frame to obtain the processed current image frame. The color jump between the current image frame and the previous image frame is eliminated in the processed frame, so the previous image frame transitions gradually into the current one, greatly improving the color transition effect of the video image.
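Applying the re-determined table in S104 is a per-pixel table lookup. A minimal single-channel sketch, with the frame represented as nested lists purely for brevity:

```python
def apply_lut(frame, lut):
    """Map every component value of a single-channel frame through the
    re-determined image component value mapping table."""
    return [[lut[v] for v in row] for row in frame]
```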
The following describes an image processing method according to one or more embodiments described above by way of example.
S201: the method comprises the steps that a camera of a terminal is started, an initial image frame is obtained through the camera of the terminal, the current image frame is the initial image frame, AI scene identification is carried out on the initial image frame, a scene type CURT _ SCENES of the initial image frame is obtained, a color mapping table CURT _ LUT (equivalent to the image component value mapping table) corresponding to the scene type is used for applying the CURT _ LUT to the initial image frame to carry out table lookup and carry out color change.
S202: the terminal continues to detect the NEXT image frame, and identifies the AI scene of the NEXT image frame to obtain that the scene type of the NEXT image frame is NEXT _ SCENES, when the NEXT _ SCENES is different from the CURT _ SCENES, namely the scene changes, a color mapping table NEXT _ LUT corresponding to the NEXT _ SCENES is obtained, a step of generating a gradient color mapping table in real time is started, and S203 is executed; otherwise, S204 is executed.
S203: from the beginning of gradual change to the ending of gradual change, the total frame number is N, N determines the length of the gradual change, the frame number of the NEXT image frame is N, the color mapping table TRAN _ LUT of the gradual change frame is obtained by fusing a CURT _ LUT and a NEXT _ LUT, namely, TRAN _ LUT ═ CURT _ LUT (1-N/N) + NEXT _ LUT N/N, the fusion weighting coefficient N/N of the NEXT _ LUT is a linear increasing linear function, the minimum value is 0, the maximum value is 1, then the TRAN _ LUT is used for performing color mapping transformation on the corresponding NEXT image frame in the gradual change process, and when the gradual change process is ended, S204 is executed.
S204: the current image frame is replaced with the NEXT image frame, NEXT _ SCENES is replaced with CURT _ SCENES, NEXT _ LUT is replaced with CURT _ LUT, and color mapping is performed on the current image frame with the CURT _ LUT until the detected AI scene changes again, and the process returns to perform S202.
The embodiment of the application provides an image processing method including: acquiring the scene type of a current image frame; acquiring the image component value mapping table corresponding to that scene type; when the scene type of the current image frame differs from that of the previous image frame, re-determining a mapping table for the current image frame from the mapping tables of the current and previous image frames; and processing the current image frame with the re-determined table to obtain the processed current image frame. Because the current frame is processed with a table blended from the tables of both frames rather than the table of its own scene alone, the processed current image frame eliminates the visible color jump relative to the previous frame and improves the transitional visual effect of the video image.
Example two
Fig. 2 is a first schematic structural diagram of a terminal provided in an embodiment of the present application. As shown in fig. 2, an embodiment of the present application provides a terminal including: a first obtaining module 21, a second obtaining module 22, a determining module 23, and a processing module 24, wherein:
a first obtaining module 21, configured to obtain a scene type of a current image frame;
the second obtaining module 22 is configured to obtain an image component value mapping table of a corresponding current image frame according to a scene type of the current image frame;
a determining module 23, configured to re-determine an image component value mapping table for the current image frame according to the image component value mapping table of the current image frame and the image component value mapping table of the previous image frame when the scene type of the current image frame is different from the scene type of the previous image frame;
and the processing module 24 is configured to perform mapping processing on the current image frame by using the re-determined image component value mapping table to obtain a processed current image frame.
Optionally, the re-determining, by the determining module 23, of the image component value mapping table for the current image frame according to the image component value mapping table of the current image frame and the image component value mapping table of the previous image frame includes:
determining a first weight value for the image component value mapping table of the current image frame;
determining a second weight value for the image component value mapping table of the previous image frame;
calculating a mapping value according to the first weight value, the second weight value, the mapping value of the image component value mapping table of the current image frame and the mapping value of the image component value mapping table of the previous image frame;
and re-determining an image component value mapping table for the current image frame by using the calculated mapping value.
Optionally, the determining, by the determining module 23, of the first weight value for the image component value mapping table of the current image frame includes:
determining the first weight value for the image component value mapping table of the current image frame according to the sequence number of the current image frame in the total image frames.
Optionally, the determining, by the determining module 23, of the first weight value according to the sequence number of the current image frame in the total image frames includes:
and determining the ratio of the sequence number of the current image frame in the total image frames to the number of the total image frames as the first weight value.
Optionally, the determining, by the determining module 23, of the first weight value according to the sequence number of the current image frame in the total image frames includes:
and obtaining the first weight value by using a monotone increasing function according to the ratio of the sequence number of the current image frame in the total image frames to the number of the total image frames.
Optionally, the determining, by the determining module 23, of the first weight value according to the sequence number of the current image frame in the total image frames includes:
and obtaining the first weight value by using a hyperbolic tangent function according to the ratio of the sequence number of the current image frame in the total image frames to the number of the total image frames.
Optionally, the determining, by the determining module 23, of the second weight value for the image component value mapping table of the previous image frame includes:
and determining a value obtained by subtracting the first weight value from 1 as the second weight value.
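The weight choices described above can be written out directly. This is a hedged sketch: the text does not fully specify the monotone increasing or hyperbolic tangent form, so the tanh normalization below is an assumption, and all function names are hypothetical.

```python
import math

def first_weight_ratio(frame_number, total_frames):
    # Linear choice: the ratio of the frame's sequence number in the
    # transition to the total number of frames.
    return frame_number / total_frames

def first_weight_tanh(frame_number, total_frames):
    # One plausible hyperbolic-tangent choice: tanh of the ratio, rescaled
    # by tanh(1) so the weight reaches exactly 1 at the last frame. The
    # rescaling is an assumption; the text only says tanh of the ratio is used.
    return math.tanh(frame_number / total_frames) / math.tanh(1.0)

def second_weight(first_weight):
    # The second weight is the complement of the first, so the two sum to 1.
    return 1.0 - first_weight
```

With either choice the blended table interpolates between the previous and current tables; the tanh curve rises faster than the linear ratio early in the transition, so the new scene's mapping table begins to dominate sooner.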
In practical applications, the first obtaining module 21, the second obtaining module 22, the determining module 23 and the processing module 24 may be implemented by a processor located on the terminal, specifically by a CPU, a microprocessor unit (MPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or the like.
Fig. 3 is a schematic structural diagram of a terminal according to an embodiment of the present application, and as shown in fig. 3, an embodiment of the present application provides a terminal 300, including:
a processor 31 and a storage medium 32 storing instructions executable by the processor 31, wherein the storage medium 32 relies on the processor 31 to perform operations through a communication bus 33; when the instructions are executed by the processor 31, the image processing method of the first embodiment is performed.
It should be noted that, in practical applications, the various components in the terminal are coupled together by a communication bus 33. It will be appreciated that the communication bus 33 is used to enable communications among these components. In addition to a data bus, the communication bus 33 includes a power bus, a control bus, and a status signal bus. However, for clarity of illustration, the various buses are all labeled in Fig. 3 as the communication bus 33.
The embodiment of the application provides a computer storage medium, which stores executable instructions, and when the executable instructions are executed by one or more processors, the processors execute the image processing method of the first embodiment.
The computer-readable storage medium may be a ferroelectric random access memory (FRAM), a read only memory (ROM), a programmable read only memory (PROM), an erasable programmable read only memory (EPROM), an electrically erasable programmable read only memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM), among others.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present application, and is not intended to limit the scope of the present application.

Claims (11)

1. An image processing method, comprising:
acquiring a scene type of a current image frame;
acquiring an image component value mapping table corresponding to the current image frame according to the scene type of the current image frame;
when the scene type of the current image frame is different from the scene type of the previous image frame, re-determining an image component value mapping table for the current image frame according to the image component value mapping table of the current image frame and the image component value mapping table of the previous image frame;
and mapping the current image frame by using the re-determined image component value mapping table to obtain the processed current image frame.
2. The method of claim 1, further comprising:
and when the scene type of the current image frame is the same as the scene type of the previous image frame, mapping the current image frame by using the image component value mapping table of the current image frame to obtain the processed current image frame.
3. The method of claim 1, wherein said redefining an image component value map for said current image frame from an image component value map for said current image frame and an image component value map for said previous image frame comprises:
determining a first weight value for an image component value mapping table of the current image frame;
determining a second weight value for the image component value mapping table of the previous image frame;
calculating to obtain a mapping value according to the first weight value, the second weight value, the mapping value of the image component value mapping table of the current image frame and the mapping value of the image component value mapping table of the previous image frame;
and re-determining an image component value mapping table for the current image frame by using the calculated mapping value.
4. The method of claim 3, wherein determining a first weight value for an image component value map of the current image frame comprises:
determining the first weight value according to an order number of the current image frame in a total image frame.
5. The method as claimed in claim 4, wherein said determining said first weight value for an image component value mapping table of said current image frame according to an order number of said current image frame in a total image frame comprises:
determining a ratio of the order number of the current image frame in the total image frames to the number of the total image frames as the first weight value.
6. The method as claimed in claim 4, wherein said determining said first weight value for an image component value mapping table of said current image frame according to an order number of said current image frame in a total image frame comprises:
and obtaining the first weight value by utilizing a monotone increasing function according to the ratio of the order number of the current image frame in the total image frames to the number of the total image frames.
7. The method as claimed in claim 4, wherein said determining said first weight value for an image component value mapping table of said current image frame according to an order number of said current image frame in a total image frame comprises:
and obtaining the first weight value by utilizing a hyperbolic tangent function according to the ratio of the sequence number of the current image frame in the total image frame to the number of the total image frames.
8. The method according to any of claims 3 to 6, wherein said determining a second weight value for an image component value map of said previous image frame comprises:
and determining a value obtained by subtracting the first weight value from 1 as the second weight value.
9. A terminal, characterized in that the terminal comprises:
the first acquisition module is used for acquiring the scene type of the current image frame;
the second obtaining module is used for obtaining an image component value mapping table corresponding to the current image frame according to the scene type of the current image frame;
a determining module, configured to re-determine an image component value mapping table for the current image frame according to the image component value mapping table of the current image frame and the image component value mapping table of the previous image frame when the scene type of the current image frame is different from the scene type of the previous image frame;
and the processing module is used for mapping the current image frame by using the redetermined image component value mapping table to obtain the processed current image frame.
10. A terminal, characterized in that the terminal comprises: a processor and a storage medium storing instructions executable by the processor to perform operations dependent on the processor via a communication bus, the instructions when executed by the processor performing the image processing method of any of claims 1 to 8 above.
11. A computer storage medium having stored thereon executable instructions which, when executed by one or more processors, perform the image processing method of any one of claims 1 to 8.
CN201911242833.8A 2019-12-06 2019-12-06 Image processing method, terminal and computer storage medium Active CN111163350B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911242833.8A CN111163350B (en) 2019-12-06 2019-12-06 Image processing method, terminal and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911242833.8A CN111163350B (en) 2019-12-06 2019-12-06 Image processing method, terminal and computer storage medium

Publications (2)

Publication Number Publication Date
CN111163350A true CN111163350A (en) 2020-05-15
CN111163350B CN111163350B (en) 2022-03-01

Family

ID=70556561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911242833.8A Active CN111163350B (en) 2019-12-06 2019-12-06 Image processing method, terminal and computer storage medium

Country Status (1)

Country Link
CN (1) CN111163350B (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110149109A1 (en) * 2009-12-21 2011-06-23 Electronics And Telecommunications Research Institute Apparatus and method for converting color of taken image
CN102780889A (en) * 2011-05-13 2012-11-14 中兴通讯股份有限公司 Video image processing method, device and equipment
CN107257455A (en) * 2017-07-10 2017-10-17 广东欧珀移动通信有限公司 White balancing treatment method and device
CN108024104A (en) * 2017-12-12 2018-05-11 上海顺久电子科技有限公司 The method and display device that a kind of high dynamic range images to input are handled
CN108109180A (en) * 2017-12-12 2018-06-01 上海顺久电子科技有限公司 The method and display device that a kind of high dynamic range images to input are handled
CN108174173A (en) * 2017-12-25 2018-06-15 广东欧珀移动通信有限公司 Image pickup method and device, computer readable storage medium and computer equipment
CN108830816A (en) * 2018-06-27 2018-11-16 厦门美图之家科技有限公司 Image enchancing method and device
CN109741288A (en) * 2019-01-04 2019-05-10 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN109741279A (en) * 2019-01-04 2019-05-10 Oppo广东移动通信有限公司 Image saturation method of adjustment, device, storage medium and terminal
CN109905597A (en) * 2019-02-13 2019-06-18 深圳市欧蒂姆光学有限公司 Eyeglass color-changing control method, device, glasses and readable storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHINYA SHIMIZU等: "Depth-based weighted bi-prediction for video plus depth map coding", 《 2012 19TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING》 *
LI GENGFEI: "Technical Research on Adaptive Real-time Image Enhancement Algorithms", 《China Doctoral Dissertations Full-text Database, Information Science and Technology》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111757021A (en) * 2020-07-06 2020-10-09 浙江大学 Multi-sensor real-time fusion method for mobile robot remote takeover scene
CN111757021B (en) * 2020-07-06 2021-07-20 浙江大学 Multi-sensor real-time fusion method for mobile robot remote takeover scene
CN112383993A (en) * 2020-10-27 2021-02-19 一飞(海南)科技有限公司 Gradual change color light effect control method and system for unmanned aerial vehicle formation and unmanned aerial vehicle formation
CN115633250A (en) * 2021-07-31 2023-01-20 荣耀终端有限公司 Image processing method and electronic equipment

Also Published As

Publication number Publication date
CN111163350B (en) 2022-03-01

Similar Documents

Publication Publication Date Title
CN111163350B (en) Image processing method, terminal and computer storage medium
US10026160B2 (en) Systems and techniques for automatic image haze removal across multiple video frames
US10445866B2 (en) Image processing method and device, equipment and computer storage medium
CN107690804B (en) Image processing method and user terminal
CN109472832B (en) Color scheme generation method and device and intelligent robot
CN111127307A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113792827B (en) Target object recognition method, electronic device, and computer-readable storage medium
CN113079329A (en) Matting method, related device and matting system
CN112767312A (en) Image processing method and device, storage medium and processor
CN113793257A (en) Image processing method and device, electronic equipment and computer readable storage medium
CN112950523B (en) Definition evaluation value calculation method, device, camera and storage medium
CN110570376B (en) Image rain removing method, device, equipment and computer readable storage medium
CN112258541A (en) Video boundary detection method, system, device and storage medium
US20190349535A1 (en) Intelligent photographing method and apparatus, and intelligent terminal
CN104851114A (en) Method for partial color changing of image, and terminal
CN111738964A (en) Image data enhancement method based on modeling
CN107909551A (en) Image processing method, device, computer installation and computer-readable recording medium
CN112995633B (en) Image white balance processing method and device, electronic equipment and storage medium
CN113011328B (en) Image processing method, device, electronic equipment and storage medium
CN114286000A (en) Image color processing method and device and electronic equipment
CN107945201B (en) Video landscape processing method and device based on self-adaptive threshold segmentation
CN113888419A (en) Method for removing dark corners of image
CN112488933A (en) Video detail enhancement method and device, mobile terminal and storage medium
CN111915529A (en) Video dim light enhancement method and device, mobile terminal and storage medium
CN114143447B (en) Image processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant