CN111311526B - Video enhancement method, video enhancement device and terminal equipment

Video enhancement method, video enhancement device and terminal equipment

Info

Publication number
CN111311526B
CN111311526B (application CN202010116826.XA)
Authority
CN
China
Prior art keywords
image
gray level
video
level histogram
moving object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010116826.XA
Other languages
Chinese (zh)
Other versions
CN111311526A (en)
Inventor
李国胜
赵昕
谷湘煜
彭志远
罗洪伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Launch Digital Technology Co Ltd
Original Assignee
Shenzhen Launch Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Launch Digital Technology Co Ltd filed Critical Shenzhen Launch Digital Technology Co Ltd
Priority to CN202010116826.XA priority Critical patent/CN111311526B/en
Publication of CN111311526A publication Critical patent/CN111311526A/en
Application granted granted Critical
Publication of CN111311526B publication Critical patent/CN111311526B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/94Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/215Motion-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The application is applicable to the technical field of image processing, and provides a video enhancement method, a video enhancement device and terminal equipment, wherein the video enhancement method comprises the following steps: for each frame of video image in the video to be processed, performing motion segmentation processing on the video image to obtain a moving object image and a static background image in the video image, and dividing the static background image into a plurality of non-overlapping image blocks; separately computing the gray level histograms of the moving object image and of each image block, and performing data merging processing on the gray level histograms; enhancing the moving object image and the static background image according to the merged gray level histograms; and stitching the enhanced moving object image with each enhanced image block to obtain an enhanced video image. This method effectively avoids abrupt color changes of the moving object during local enhancement of the video image, so that the enhanced picture appears more natural.

Description

Video enhancement method, video enhancement device and terminal equipment
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a video enhancement method, a video enhancement device and terminal equipment.
Background
Local image enhancement refers to adjusting local information in an image to selectively highlight features of interest or suppress unwanted features, so that an unclear image becomes clear or certain features are emphasized. Existing local enhancement methods divide an image into a number of image blocks, enhance each block separately, and smooth the transition between pixels of adjacent blocks, so that each pixel gains stronger contrast with its neighbors, achieving the goal of local enhancement. For example, consider a gray object: when its surroundings are brighter, the object's color becomes darker after local enhancement; when its surroundings are darker, the object's color becomes lighter after local enhancement.
However, a video usually contains a moving object, that is, an object whose position differs from frame to frame, so the background around the object also changes from frame to frame. If an existing local enhancement method is applied to a video containing a moving object, the object's color will flicker between bright and dark states across the video, and the locally enhanced picture looks unnatural.
Disclosure of Invention
The embodiments of the application provide a video enhancement method, a video enhancement device and terminal equipment, which can solve the problems in existing video local enhancement methods that the color of a moving object flickers between bright and dark and the locally enhanced picture is unnatural.
In a first aspect, an embodiment of the present application provides a video enhancement method, including:
for each frame of video image in the video to be processed, carrying out motion segmentation processing on the video image to obtain a moving object image and a static background image in the video image, and dividing the static background image into a plurality of non-overlapping image blocks;
respectively acquiring a first gray level histogram of the moving object image and a second gray level histogram of each image block;
performing enhancement processing on the moving object image according to the first gray level histogram, and performing enhancement processing on each image block according to each second gray level histogram;
and splicing the enhanced moving object image with each enhanced image block to obtain an enhanced video image.
In a possible implementation manner of the first aspect, the separately acquiring the first gray level histogram of the moving object image and the second gray level histogram of each image block includes:
respectively counting a third gray level histogram of the moving object image and a fourth gray level histogram of each image block;
adding all the fourth gray level histograms to the third gray level histogram to obtain the first gray level histogram;
and adding the third gray level histogram to each fourth gray level histogram respectively to obtain each second gray level histogram.
In a possible implementation manner of the first aspect, the enhancing the moving object image according to the first gray level histogram includes:
calculating a first cumulative histogram corresponding to the first gray level histogram;
and determining an enhanced gray value corresponding to each pixel point in the moving object image according to the first cumulative histogram.
In a possible implementation manner of the first aspect, in the step of performing enhancement processing on each image block according to each of the second gray level histograms, enhancement processing is performed on one image block, including:
calculating a second cumulative histogram corresponding to a second gray level histogram of the image block;
and determining an enhanced gray value corresponding to each pixel point in the image block according to a preset number of adjacent blocks and a second cumulative histogram corresponding to each adjacent block, wherein the adjacent blocks are image blocks adjacent to the image block.
In a possible implementation manner of the first aspect, the determining, according to the preset number of neighboring blocks and the second cumulative histogram corresponding to each neighboring block, the enhanced gray value corresponding to each pixel point in the image block includes:
for each pixel point in the image block, determining a corresponding gray value of the pixel point in each adjacent block according to the second cumulative histogram corresponding to each adjacent block;
and carrying out weighted summation on the gray values corresponding to the pixel points in each adjacent block to obtain the enhanced gray values corresponding to the pixel points.
In a possible implementation manner of the first aspect, the stitching of the enhanced moving object image with each enhanced image block to obtain an enhanced video image includes:
splicing each image block after the enhancement treatment into an enhanced static background image;
and for each pixel point in the enhanced moving object image, using the pixel point to replace its co-location point in the enhanced static background image, wherein the position of the co-location point in the video image is the same as the position of the pixel point in the video image.
In a second aspect, an embodiment of the present application provides a video enhancement device, including:
the motion segmentation unit is used for carrying out motion segmentation processing on each frame of video image in the video to be processed to obtain a moving object image and a static background image in the video image, and dividing the static background image into a plurality of non-overlapping image blocks;
a data acquisition unit configured to acquire a first gray level histogram of the moving object image and a second gray level histogram of each image block, respectively;
an enhancement processing unit, configured to perform enhancement processing on the moving object image according to the first gray level histogram, and perform enhancement processing on each image block according to each second gray level histogram;
and the image stitching unit is used for stitching the enhanced moving object image with each image block after the enhancement processing to obtain an enhanced video image.
In a third aspect, an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the video enhancement method according to any one of the first aspect when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium, where a computer program is stored, where the computer program is executed by a processor to implement the video enhancement method according to any one of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, which, when run on a terminal device, causes the terminal device to perform the video enhancement method according to any one of the first aspects above.
It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect, and are not described here again.
Compared with the prior art, the embodiment of the application has the beneficial effects that:
in the embodiment of the application, for each frame of video image in a video to be processed, motion segmentation processing is performed on the video image to obtain a moving object image and a static background image in the video image, and the static background image is divided into a plurality of non-overlapping image blocks; separating the moving object from the background image allows the moving part and the non-moving part to be processed in a targeted manner; a first gray level histogram of the moving object image and a second gray level histogram of each image block are acquired; enhancement processing is performed on the moving object image according to the first gray level histogram, and on each image block according to each second gray level histogram; the enhanced moving object image and each enhanced image block are stitched to obtain an enhanced video image; by processing the moving part and the non-moving part separately and in a targeted way, the contrast between the two parts is enhanced. This effectively avoids abrupt color changes of the moving object during local enhancement, so that the locally enhanced video image appears more natural.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a video enhancement system according to an embodiment of the present application;
FIG. 2 is a flow chart of a video enhancement method according to an embodiment of the present application;
FIG. 3 is a block diagram of a video enhancement device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise.
An application scenario of the video enhancement method provided in the embodiment of the present application is described first. Referring to fig. 1, a schematic diagram of a video enhancement system according to an embodiment of the present application is provided. As shown in fig. 1, the video enhancement system may include: a video capture device 101 and a terminal device 102. The video capture device 101 and the terminal device 102 may be communicatively connected by wire or wirelessly. The video capture device 101 may be a video camera, a still camera, or a terminal device with a video capture function, such as a mobile phone or a computer. The terminal device 102 may be a processor or a device with data processing functions, such as a mobile phone or a computer. The terminal device 102 may acquire a video shot by the video capture device 101, and perform video enhancement processing on the acquired video by using the video enhancement method provided in the embodiment of the present application.
Fig. 2 shows a schematic flow chart of a video enhancement method according to an embodiment of the present application, which may include, by way of example and not limitation, the following steps:
s201, for each frame of video image in the video to be processed, performing motion segmentation processing on the video image to obtain a moving object image and a static background image in the video image, and dividing the static background image into a plurality of non-overlapping image blocks.
The video to be processed comprises a plurality of video images, and the method in the embodiment of the application can be used to perform local enhancement processing on each video image. After all the video images have been locally enhanced, they are played back in frame order, and the displayed video picture is the enhanced picture.
The moving object image is an image of an object whose position changes in each frame of video image of the video. For example: in a video, a person walks, and then in each frame of video image of the video, the image of the person is the moving object image.
When segmenting the video image, existing motion analysis techniques (such as Gaussian mixture modeling, the frame difference method, the optical flow method, and the like) can be used to extract the moving object from the video.
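As an illustration of this step, the following is a minimal sketch of motion segmentation using OpenCV's Gaussian-mixture background subtractor, one of the techniques named above. The history length, the morphological opening and its kernel size are illustrative assumptions rather than values prescribed by this application, and for simplicity the moving-object and background images are kept at full frame size as masked copies.

```python
import cv2
import numpy as np

# Mixture-of-Gaussians background model; parameters are illustrative.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def segment_motion(frame_gray):
    """Split one grayscale frame into moving-object and static-background parts."""
    fg_mask = subtractor.apply(frame_gray)                 # 255 where motion is detected
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN,
                               np.ones((5, 5), np.uint8))  # suppress isolated noise pixels
    moving = cv2.bitwise_and(frame_gray, frame_gray, mask=fg_mask)
    static = cv2.bitwise_and(frame_gray, frame_gray,
                             mask=cv2.bitwise_not(fg_mask))
    return moving, static, fg_mask
```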
After the motion segmentation process, the size of the static background image is consistent with that of the original image, and the size of the moving object image is smaller than that of the original image, namely the moving object image is a local image of the original image.
When dividing the static background image into image blocks, it must be ensured that the image blocks do not overlap and that stitching all the image blocks together reproduces the complete static background image. Preferably, the static background image may be divided into a plurality of image blocks of equal size. The number of image blocks can be set according to actual needs and is not particularly limited.
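A minimal sketch of this division, assuming a fixed grid (4x4 here, an illustrative choice) and image dimensions that divide evenly by the grid:

```python
def split_into_blocks(image, rows=4, cols=4):
    """Divide an image into non-overlapping blocks of equal size.

    Assumes the image height and width are divisible by rows and cols;
    stitching the blocks back on the same grid reproduces the image.
    """
    h, w = image.shape[:2]
    bh, bw = h // rows, w // cols
    blocks = {}
    for r in range(rows):
        for c in range(cols):
            blocks[(r, c)] = image[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
    return blocks, (bh, bw)
```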
S202, respectively acquiring a first gray level histogram of the moving object image and a second gray level histogram of each image block.
The gray level histogram is a function of the gray level distribution in an image and is a statistic of that distribution: it counts all the pixels in the image by gray value. The gray level histogram records how many pixels in the image have each gray level, reflecting how frequently each gray level occurs in the image.
A common number of gray levels is 256. Computing the gray level histogram of an image means counting, for each of the 256 gray levels (0-255), the number of pixels in the image with that gray level, and then dividing each count by the total number of pixels in the image to obtain the frequency of that gray level.
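A small sketch of this statistic for 8-bit images (256 gray levels, 0-255):

```python
import numpy as np

def gray_histogram(image_u8):
    """Count how many pixels take each of the 256 gray levels."""
    return np.bincount(image_u8.ravel(), minlength=256).astype(np.float64)

def to_frequency(counts):
    """Divide each count by the total number of pixels to get frequencies."""
    return counts / counts.sum()
```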
In one embodiment, step S202 may include the steps of:
s11, respectively counting a third gray level histogram of the moving object image and a fourth gray level histogram of each image block.
And S12, adding all the fourth gray level histogram and the third gray level histogram to obtain the first gray level histogram.
All the fourth gray level histograms are added together to obtain the gray level histogram of the static background image. In other words, adding the gray level histograms corresponding to the respective image blocks yields the gray level histogram of the static background image.
Adding all the fourth gray level histograms to the third gray level histogram means adding, for each gray level, the frequencies corresponding to that gray level in each fourth gray level histogram and in the third gray level histogram.
And S13, adding the third gray level histogram with each fourth gray level histogram to obtain each second gray level histogram.
For example, assume that there are 4 tiles I, II, III, and IV. And adding the third gray level histogram and the fourth gray level histogram corresponding to the image block I to obtain a second gray level histogram corresponding to the image block I. And adding the third gray level histogram and the fourth gray level histogram corresponding to the image block II to obtain a second gray level histogram corresponding to the image block II. And adding the third gray level histogram and the fourth gray level histogram corresponding to the image block III to obtain a second gray level histogram corresponding to the image block III. And adding the third gray level histogram and the fourth gray level histogram corresponding to the image block IV to obtain a second gray level histogram corresponding to the image block IV.
Adding the third gray histogram to a fourth gray histogram means adding the frequency numbers corresponding to the same gray level in the third gray histogram and the fourth gray histogram.
The essence of adding the third gray level histogram to the fourth gray level histograms is to fuse each image block with the information of the moving object image. Thus, as the moving object passes through each image block, the new gray level histograms corresponding to each image block and to the moving object image remain relatively stable, so that after the subsequent enhancement processing the color of the output image does not flicker between bright and dark.
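The merging rules of steps S11 to S13 can be sketched as follows, assuming every histogram is a length-256 count array as in the sketch above (the function name is illustrative):

```python
def merge_histograms(third, fourths):
    """third: histogram of the moving object image;
    fourths: dict mapping block index -> histogram of that block."""
    background = sum(fourths.values())      # histogram of the whole static background
    first = third + background              # S12: first histogram, for the moving object
    seconds = {k: third + h                 # S13: second histogram for each block
               for k, h in fourths.items()}
    return first, seconds
```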
And S203, carrying out enhancement processing on the moving object image according to the first gray level histogram, and carrying out enhancement processing on each image block according to each second gray level histogram.
In one embodiment, in step S203, the enhancement processing is performed on the moving object image according to the first gray-scale histogram, including:
s21, calculating a first cumulative histogram corresponding to the first gray level histogram after the data combination processing.
The cumulative histogram is a cumulative function of the gray level frequencies. For each gray level, the frequencies of all gray levels up to and including that level are added to obtain the cumulative probability for that level. For example: s(0) = p(0)/p_total; s(1) = [p(0) + p(1)]/p_total; s(2) = [p(0) + p(1) + p(2)]/p_total; ...; s(i) = [p(0) + p(1) + ... + p(i)]/p_total, where s(i) is the cumulative probability corresponding to gray level i, p(i) is the frequency corresponding to gray level i, and p_total is the sum of the frequencies of all gray levels in the first gray level histogram.
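In NumPy this cumulative probability is a one-line sketch over the merged count array:

```python
import numpy as np

def cumulative_histogram(hist):
    """s[i] = (p(0) + ... + p(i)) / p_total for each gray level i."""
    return np.cumsum(hist) / hist.sum()
```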
S22, determining an enhanced gray value corresponding to each pixel point in the moving object image according to the first cumulative histogram.
In practical application, the gray value mapping relationship may be determined according to the first cumulative histogram; it is the correspondence between the original gray value before enhancement and the enhanced gray value. This mapping may be calculated according to a preset function, for example ss(i) = int{[max(pix) − min(pix)] · s(i) + c}, where ss(i) is the enhanced gray value for gray level i, pix denotes the original gray values of the image, and c is a preset constant. This is only one example of calculating the gray value mapping relationship; different mapping functions may be set as needed in practical applications, and no limitation is imposed here.
And for each pixel point, acquiring an original gray value of the pixel point, and searching for an enhanced gray value corresponding to the original gray value according to the gray value mapping relation. And traversing all the pixel points in sequence, and mapping all the gray values of the pixel points to obtain an enhanced image.
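A sketch of this mapping applied as a lookup table, with c left as a parameter; clipping the result to [0, 255] is an added assumption to keep it a valid 8-bit gray value:

```python
import numpy as np

def enhance_with_mapping(image_u8, s, c=0.0):
    """Build the map ss(i) = int{[max(pix) - min(pix)] * s(i) + c} and apply it."""
    pix_min, pix_max = int(image_u8.min()), int(image_u8.max())
    lut = np.clip((pix_max - pix_min) * s + c, 0, 255).astype(np.uint8)
    return lut[image_u8]   # replace every original gray value by its enhanced value
```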
In one embodiment, in step S203, in the step of performing enhancement processing on each image block according to each of the second gray level histograms, enhancement processing is performed on one image block, including:
s31, calculating a second cumulative histogram corresponding to the second gray level histogram of the image block.
Step S31 is the same as step S21, and specific reference may be made to the description in step S21, and details are not repeated here.
S32, determining an enhanced gray value corresponding to each pixel point in the image block according to a preset number of adjacent blocks and a second cumulative histogram corresponding to each adjacent block, wherein the adjacent blocks are image blocks adjacent to the image block.
The preset number can be preset according to actual needs.
Optionally, step S32 may include:
s331, for each pixel point in the image block, determining a gray value corresponding to the pixel point in each adjacent block according to the second cumulative histogram corresponding to each adjacent block.
Step S331 is the same as step S22, and the description in step S22 is specifically referred to, and will not be repeated here.
And S332, carrying out weighted summation on the gray values corresponding to the pixel points in each adjacent block to obtain the enhanced gray values corresponding to the pixel points.
Illustratively, assume that image block I is subjected to enhancement processing and that image blocks II, III and IV are adjacent to image block I. For any pixel point in image block I, determine the gray value corresponding to the pixel point according to the second cumulative histogram corresponding to image block II, and denote it a; determine the gray value corresponding to the pixel point according to the second cumulative histogram corresponding to image block III, and denote it b; and determine the gray value corresponding to the pixel point according to the second cumulative histogram corresponding to image block IV, and denote it c. Then a, b and c are weighted and summed to obtain the enhanced gray value corresponding to the pixel point.
Alternatively, when the weighted summation is performed, the weight corresponding to each adjacent block may be inversely proportional to the distance from the pixel point to the center of the adjacent block, and the sum of the weights is equal to 1.
In essence, in S331 each pixel is enhanced multiple times (once per adjacent block), and in S332 these multiple enhanced gray values are weighted and summed to obtain the final enhanced gray value. This makes the image fusion between each enhanced image block and its adjacent image blocks more natural.
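A sketch of this per-pixel fusion, assuming each adjacent block supplies its center coordinates and a precomputed 256-entry mapping table (as in the sketch after the mapping function above); the small epsilon guarding the division is an illustrative detail:

```python
import numpy as np

def fuse_neighbor_enhancements(pixel_xy, original_gray, neighbors):
    """neighbors: list of (center_xy, lut) pairs, one per adjacent block."""
    values, weights = [], []
    for center_xy, lut in neighbors:
        values.append(float(lut[original_gray]))       # this neighbor's enhanced value
        d = np.hypot(pixel_xy[0] - center_xy[0],
                     pixel_xy[1] - center_xy[1])
        weights.append(1.0 / (d + 1e-6))               # inversely proportional to distance
    weights = np.asarray(weights) / np.sum(weights)    # weights sum to 1, as stated
    return int(round(float(np.dot(weights, values))))  # weighted sum = enhanced gray value
```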
And S204, splicing the enhanced moving object image and each enhanced image block to obtain an enhanced video image.
In one embodiment, step S204 may include:
and S41, splicing each image block after the enhancement processing into an enhanced static background image.
The stitching rule is the same as the rule used to divide the static background image in step S201: the image blocks are stitched back exactly as the static background image was divided. This ensures that the stitched static background image has the same size as the original image and contains the same amount of image information, so no image information is lost.
And S42, for each pixel point in the enhanced moving object image, using the pixel point to replace its co-location point in the enhanced static background image, wherein the position of the co-location point in the video image is the same as the position of the pixel point in the video image.
The above steps correspond to performing enhancement processing on the moving object image and the static background image, respectively, and then replacing the corresponding region in the static background image with the enhanced moving object image (i.e., the partial image). In this way, the moving and non-moving areas can be processed in a targeted manner, so that the contrast between the two parts is more obvious.
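Steps S41 and S42 can be sketched together as follows, reusing the block grid from the division step and a foreground mask that records where the moving object lies (how that bookkeeping is carried is an assumption the text leaves open):

```python
import numpy as np

def stitch_and_replace(blocks, block_shape, image_shape, enhanced_moving, fg_mask):
    """blocks: dict (row, col) -> enhanced block, on the same grid as the split;
    enhanced_moving: full-size image holding the enhanced moving-object pixels;
    fg_mask: nonzero where the moving object is located."""
    bh, bw = block_shape
    out = np.zeros(image_shape, dtype=np.uint8)
    for (r, c), block in blocks.items():                 # S41: stitch on the original grid
        out[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw] = block
    out[fg_mask > 0] = enhanced_moving[fg_mask > 0]      # S42: co-location replacement
    return out
```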
According to the embodiment of the application, motion segmentation processing is carried out on each frame of video image in the video to be processed to obtain a moving object image and a static background image in the video image, and the static background image is divided into a plurality of non-overlapping image blocks; by this method the moving object and the background image are separated, so that the moving part and the non-moving part can each be processed in a targeted manner; a first gray level histogram of the moving object image and a second gray level histogram of each image block are computed, and data merging processing is performed on the first gray level histogram and each second gray level histogram; in this way the information of the moving object image and of the static background image is organically combined, so that the newly generated gray level histogram of the moving object image and the gray level histograms of the image blocks remain relatively stable, effectively preventing the colors from flickering between bright and dark; enhancement processing is performed on the moving object image according to the merged first gray level histogram, and on each image block according to each merged second gray level histogram; the enhanced moving object image and each enhanced image block are stitched to obtain an enhanced video image; by processing the moving part and the non-moving part separately and in a targeted way, the contrast between the two parts is enhanced. This method effectively avoids abrupt color changes of the moving object during local enhancement, so that the locally enhanced video image appears more natural.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Corresponding to the video enhancement method described in the above embodiments, fig. 3 shows a block diagram of the video enhancement apparatus provided in the embodiment of the present application, and for convenience of explanation, only the portions relevant to the embodiment of the present application are shown.
Referring to fig. 3, the apparatus 30 includes:
the motion segmentation unit 31 is configured to perform motion segmentation processing on each frame of video image in a video to be processed, obtain a moving object image and a static background image in the video image, and divide the static background image into a plurality of non-overlapping image blocks.
A data acquisition unit 32 for acquiring a first gray level histogram of the moving object image and a second gray level histogram of each image block, respectively.
An enhancement processing unit 33, configured to perform enhancement processing on the moving object image according to the first gray level histogram, and perform enhancement processing on each image block according to each of the second gray level histograms.
An image stitching unit 34, configured to stitch the motion object image after enhancement processing and each image block after enhancement processing, so as to obtain an enhanced video image.
Optionally, the data acquisition unit 32 includes:
a statistics module, configured to respectively count a third gray histogram of the moving object image and a fourth gray histogram of each image block;
the first merging module is used for adding all the fourth gray level histogram and the third gray level histogram to obtain the first gray level histogram;
and the second merging module is used for respectively adding the third gray level histogram and each fourth gray level histogram to obtain each second gray level histogram.
Optionally, the enhancement processing unit 33 includes:
and the first calculation module is used for calculating a first cumulative histogram corresponding to the first gray level histogram after the data combination processing.
And the first determining module is used for determining the enhanced gray value corresponding to each pixel point in the moving object image according to the first cumulative histogram.
Optionally, the enhancement processing unit 33 further includes:
and the second calculation module is used for calculating a second cumulative histogram corresponding to the second gray level histogram of the image block.
And the second determining module is used for determining the enhanced gray value corresponding to each pixel point in the image block according to the preset number of adjacent blocks and the second cumulative histogram corresponding to each adjacent block, wherein the adjacent blocks are the image blocks adjacent to the image block.
Optionally, the second determining module includes:
and the determining submodule is used for determining the gray value corresponding to each adjacent block of each pixel point in the image block according to the second cumulative histogram corresponding to each adjacent block.
And the weighting sub-module is used for carrying out weighted summation on the gray values corresponding to the pixel points in each adjacent block to obtain the enhanced gray values corresponding to the pixel points.
Optionally, the image stitching unit 34 includes:
and the first splicing module is used for splicing each image block after the enhancement processing into an enhanced static background image.
And the second stitching module is used for, for each pixel point in the enhanced moving object image, using the pixel point to replace its co-location point in the enhanced static background image, wherein the position of the co-location point in the video image is the same as the position of the pixel point in the video image.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein again.
In addition, the video enhancement device 30 shown in fig. 3 may be a software unit, a hardware unit, or a unit combining both of them, which are built in an existing terminal device, or may be integrated into the terminal device as a separate pendant, or may exist as a separate terminal device.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Fig. 4 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 4, the terminal device 4 of this embodiment includes: at least one processor 40 (only one shown in fig. 4), a memory 41, and a computer program 42 stored in the memory 41 and executable on the at least one processor 40, the processor 40 implementing the steps in any of the various video enhancement method embodiments described above when executing the computer program 42.
The terminal equipment can be computing equipment such as a desktop computer, a notebook computer, a palm computer, a cloud server and the like. The terminal device may include, but is not limited to, a processor, a memory. It will be appreciated by those skilled in the art that fig. 4 is merely an example of the terminal device 4 and is not meant to be limiting as to the terminal device 4, and may include more or fewer components than shown, or may combine certain components, or different components, such as may also include input-output devices, network access devices, etc.
The processor 40 may be a central processing unit (Central Processing Unit, CPU), the processor 40 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 41 may in some embodiments be an internal storage unit of the terminal device 4, such as a hard disk or a memory of the terminal device 4. The memory 41 may in other embodiments also be an external storage device of the terminal device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device 4. Further, the memory 41 may also include both an internal storage unit and an external storage device of the terminal device 4. The memory 41 is used for storing an operating system, application programs, boot loader (BootLoader), data, other programs, etc., such as program codes of the computer program. The memory 41 may also be used for temporarily storing data that has been output or is to be output.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps that may implement the various method embodiments described above.
Embodiments of the present application provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform steps that may be performed in the various method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application implements all or part of the flow of the method of the above embodiments by a computer program instructing related hardware, where the computer program may be stored in a computer readable storage medium, and where the computer program, when executed by a processor, may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to the video enhancement device/terminal equipment, a recording medium, computer memory, read-only memory (ROM), random access memory (RAM), electrical carrier signals, telecommunications signals, and software distribution media, such as a USB flash drive, removable hard disk, magnetic disk, or optical disk. In some jurisdictions, computer readable media may not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The descriptions of the foregoing embodiments each have their own emphasis; for parts not detailed or described in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (9)

1. A method of video enhancement, comprising:
for each frame of video image in the video to be processed, carrying out motion segmentation processing on the video image to obtain a moving object image and a static background image in the video image, and dividing the static background image into a plurality of non-overlapping image blocks;
respectively acquiring a first gray level histogram of the moving object image and a second gray level histogram of each image block;
performing enhancement processing on the moving object image according to the first gray level histogram, and performing enhancement processing on each image block according to each second gray level histogram;
splicing the enhanced moving object image and each enhanced image block to obtain an enhanced video image;
the splicing processing is carried out on the motion object image after the enhancement processing and each image block after the enhancement processing, so as to obtain an enhanced video image, which comprises the following steps:
splicing each image block after the enhancement treatment into an enhanced static background image;
and for each pixel point in the enhanced moving object image, using the pixel point to replace its co-location point in the enhanced static background image, wherein the position of the co-location point in the video image is the same as the position of the pixel point in the video image.
2. The video enhancement method according to claim 1, wherein the separately acquiring the first gray level histogram of the moving object image and the second gray level histogram of each image block comprises:
respectively counting a third gray level histogram of the moving object image and a fourth gray level histogram of each image block;
adding all the fourth gray level histograms to the third gray level histogram to obtain the first gray level histogram;
and adding the third gray level histogram to each fourth gray level histogram respectively to obtain each second gray level histogram.
3. The video enhancement method according to claim 1, wherein said enhancement processing of said moving object image according to said first gray level histogram comprises:
calculating a first cumulative histogram corresponding to the first gray level histogram;
and determining an enhanced gray value corresponding to each pixel point in the moving object image according to the first cumulative histogram.
4. The video enhancement method according to claim 1, wherein in the step of performing enhancement processing on each of said image blocks according to each of said second gray level histograms, enhancement processing is performed on one image block, comprising:
calculating a second cumulative histogram corresponding to a second gray level histogram of the image block;
and determining an enhanced gray value corresponding to each pixel point in the image block according to a preset number of adjacent blocks and a second cumulative histogram corresponding to each adjacent block, wherein the adjacent blocks are image blocks adjacent to the image block.
5. The video enhancement method according to claim 4, wherein the determining the enhanced gray value corresponding to each pixel point in the image block according to the preset number of neighboring blocks and the second cumulative histogram corresponding to each neighboring block includes:
for each pixel point in the image block, determining a corresponding gray value of the pixel point in each adjacent block according to the second cumulative histogram corresponding to each adjacent block;
and carrying out weighted summation on the gray values corresponding to the pixel points in each adjacent block to obtain the enhanced gray values corresponding to the pixel points.
6. A video enhancement device, comprising:
the motion segmentation unit is used for carrying out motion segmentation processing on each frame of video image in the video to be processed to obtain a moving object image and a static background image in the video image, and dividing the static background image into a plurality of non-overlapping image blocks;
a data acquisition unit configured to acquire a first gray level histogram of the moving object image and a second gray level histogram of each image block, respectively;
an enhancement processing unit, configured to perform enhancement processing on the moving object image according to the first gray level histogram, and perform enhancement processing on each image block according to each second gray level histogram;
the image stitching unit is used for stitching the enhanced moving object image with each image block after the enhancement processing to obtain an enhanced video image;
the image stitching unit is further configured to:
splicing each image block after the enhancement treatment into an enhanced static background image;
and for each pixel point in the enhanced moving object image, using the pixel point to replace its co-location point in the enhanced static background image, wherein the position of the co-location point in the video image is the same as the position of the pixel point in the video image.
7. The video enhancement device according to claim 6, wherein the data acquisition unit includes:
a statistics module, configured to respectively count a third gray histogram of the moving object image and a fourth gray histogram of each image block;
the first merging module is used for adding all the fourth gray level histogram and the third gray level histogram to obtain the first gray level histogram;
and the second merging module is used for respectively adding the third gray level histogram and each fourth gray level histogram to obtain each second gray level histogram.
8. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 5 when executing the computer program.
9. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method according to any one of claims 1 to 5.
CN202010116826.XA 2020-02-25 2020-02-25 Video enhancement method, video enhancement device and terminal equipment Active CN111311526B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010116826.XA CN111311526B (en) 2020-02-25 2020-02-25 Video enhancement method, video enhancement device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010116826.XA CN111311526B (en) 2020-02-25 2020-02-25 Video enhancement method, video enhancement device and terminal equipment

Publications (2)

Publication Number Publication Date
CN111311526A CN111311526A (en) 2020-06-19
CN111311526B (en) 2023-07-25

Family

ID=71146424

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010116826.XA Active CN111311526B (en) 2020-02-25 2020-02-25 Video enhancement method, video enhancement device and terminal equipment

Country Status (1)

Country Link
CN (1) CN111311526B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112995178B (en) * 2021-02-25 2023-04-25 上海吉盛网络技术有限公司 Elevator multiparty call device with remote callback function
CN113794816B (en) * 2021-08-04 2023-05-23 成都市联洲国际技术有限公司 Image enhancement method, device, equipment and storage medium in dim light environment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012019417A1 (en) * 2010-08-10 2012-02-16 中国科学院自动化研究所 Device, system and method for online video condensation
CN103810681A (en) * 2014-03-12 2014-05-21 中国科学院上海高等研究院 Low-power consumption contrast enhancing method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI373961B (en) * 2008-09-18 2012-10-01 Ind Tech Res Inst Fast video enhancement method and computer device using the method
US8259198B2 (en) * 2009-10-20 2012-09-04 Apple Inc. System and method for detecting and correcting defective pixels in an image sensor
CN104253994B (en) * 2014-09-03 2017-09-26 电子科技大学 A kind of night monitoring video real time enhancing method merged based on sparse coding
WO2017132858A1 (en) * 2016-02-03 2017-08-10 Chongqing University Of Posts And Telecommunications Methods, systems, and media for image processing
CN109858449B (en) * 2019-02-01 2022-08-16 安谋科技(中国)有限公司 Image processing method, apparatus, medium, and system
CN110349114A (en) * 2019-05-24 2019-10-18 江西理工大学 Applied to the image enchancing method of AOI equipment, device and road video monitoring equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012019417A1 (en) * 2010-08-10 2012-02-16 中国科学院自动化研究所 Device, system and method for online video condensation
CN103810681A (en) * 2014-03-12 2014-05-21 中国科学院上海高等研究院 Low-power consumption contrast enhancing method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Video Enhancement Algorithm and Its Application; Li Hong; Xiao He; Bulletin of Science and Technology (03); pp. 183-188 *

Also Published As

Publication number Publication date
CN111311526A (en) 2020-06-19

Similar Documents

Publication Publication Date Title
CN110796600B (en) Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment
CN111292269B (en) Image tone mapping method, computer device, and computer-readable storage medium
CN109005368B (en) High dynamic range image generation method, mobile terminal and storage medium
CN111368587B (en) Scene detection method, device, terminal equipment and computer readable storage medium
CN111311526B (en) Video enhancement method, video enhancement device and terminal equipment
CN109214996B (en) Image processing method and device
CN111601048B (en) Image processing method and device
CN115496668A (en) Image processing method, image processing device, electronic equipment and storage medium
CN110691226A (en) Image processing method, device, terminal and computer readable storage medium
CN110838088B (en) Multi-frame noise reduction method and device based on deep learning and terminal equipment
CN112565887A (en) Video processing method, device, terminal and storage medium
CN111563517A (en) Image processing method, image processing device, electronic equipment and storage medium
CN116263942A (en) Method for adjusting image contrast, storage medium and computer program product
CN116433496A (en) Image denoising method, device and storage medium
CN113205011B (en) Image mask determining method and device, storage medium and electronic equipment
CN111340722B (en) Image processing method, processing device, terminal equipment and readable storage medium
CN113379702A (en) Blood vessel path extraction method and device of microcirculation image
CN111539975A (en) Method, device and equipment for detecting moving target and storage medium
CN111311610A (en) Image segmentation method and terminal equipment
CN111754412A (en) Method and device for constructing data pairs and terminal equipment
CN114119377B (en) Image processing method and device
CN111970451B (en) Image processing method, image processing device and terminal equipment
CN111986144B (en) Image blurring judging method, device, terminal equipment and medium
CN114998172A (en) Image processing method and related system
CN112087556B (en) Dark light imaging method and device, readable storage medium and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address
CP03 Change of name, title or address

Address after: B501, Building F2, TCL Science Park, No. 1001, Zhongshanyuan Road, Shuguang Community, Xili Street, Nanshan District, Shenzhen City, Guangdong Province, 518000

Patentee after: LAUNCH DIGITAL TECHNOLOGY Co.,Ltd.

Country or region after: China

Address before: 518000 Third Floor, Fengyun Building, Galaxy, No. 5 Xinxi Road, North District, Nanshan High-tech Park, Shenzhen City, Guangdong Province

Patentee before: LAUNCH DIGITAL TECHNOLOGY Co.,Ltd.

Country or region before: China