CN115760903A - Image motion area positioning method and device, storage medium and terminal - Google Patents


Info

Publication number
CN115760903A
CN115760903A (application CN202211482042.4A)
Authority
CN
China
Prior art keywords
image
frame
processed
information
enhanced
Prior art date
Legal status
Pending
Application number
CN202211482042.4A
Other languages
Chinese (zh)
Inventor
李佳坤
宋博
王勇
温建新
Current Assignee
Chengdu Image Design Technology Co Ltd
Original Assignee
Chengdu Image Design Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Image Design Technology Co Ltd filed Critical Chengdu Image Design Technology Co Ltd
Priority to CN202211482042.4A
Publication of CN115760903A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a method, a device, a storage medium and a terminal for positioning an image motion area, wherein the method comprises the following steps: acquiring an image to be processed, and judging whether the image to be processed meets the basic condition for the existence of a motion area; after determining that the image to be processed meets the basic condition for the existence of a motion area, selecting one frame of image from the image to be processed as a reference frame image, and taking the remaining frames in the image to be processed as comparison frame images; extracting multi-level image information from the comparison frame image and the reference frame image, and performing image enhancement processing on the multi-level image information to respectively obtain a comparison frame enhanced image and a reference frame enhanced image; calculating an image difference factor of the comparison frame enhanced image and the reference frame enhanced image; and determining a motion area in the comparison frame enhanced image according to the image difference factor. The invention can quickly identify the motion area among multi-frame images and improve the image processing effect.

Description

Image motion area positioning method and device, storage medium and terminal
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for positioning an image motion area, a storage medium, and a terminal.
Background
In computational photography, multiple frames of images are generally captured and processed together in order to improve image quality. For example, high dynamic range (HDR) imaging performs weighted fusion of low dynamic range (LDR) images with different exposures to obtain a high dynamic range image; multi-frame noise reduction, also called temporal noise reduction, aligns and stacks multiple images and then uses the mean or median of the stacked pixels at each position as the output, thereby reducing noise.
These processing methods need to superimpose information from multiple frames simultaneously, but due to movement of the photographing device or the photographed object, consecutive frames acquired by the device may exhibit motion deviation, so that motion smear and similar phenomena occur after the multiple frames are processed, which degrades image quality. Therefore, in image processing, it is often necessary to detect and judge motion areas, and then apply different processing depending on whether an area is a motion area, so as to avoid abnormal phenomena caused by motion.
Without the support of a motion judgment algorithm, motion smear often occurs when multiple frames are processed into a single output frame. Some simpler common methods, such as calculating the difference between points at the same position in the reference frame and the other frames, are easily affected by noise, image defects, and the like, and often only partially eliminate the smear or introduce other problems. How to quickly identify a motion region in an image has therefore become an important problem.
Therefore, it is necessary to provide a novel method, an apparatus, a storage medium and a terminal for locating a motion area of an image to solve the above problems in the prior art.
Disclosure of Invention
The invention aims to provide a method, a device, a storage medium and a terminal for positioning an image motion area, which can quickly identify the motion area among multi-frame images and improve the image processing effect.
In order to achieve the above object, the method for locating a motion region of an image according to the present invention comprises:
acquiring an image to be processed, and judging whether the image to be processed meets the basic condition of the existence of a motion area;
after determining that the to-be-processed image meets the basic condition that a motion area exists, selecting one frame of image from the to-be-processed image as a reference frame image, and taking the rest frames of the to-be-processed image as comparison frame images;
extracting multi-level image information in the comparison frame image and the reference frame image, and performing image enhancement processing on the multi-level image information to respectively obtain a comparison frame enhanced image and a reference frame enhanced image;
calculating an image difference factor of the comparison frame enhanced image and the reference frame enhanced image;
and determining a motion area in the comparative frame enhanced image according to the image difference factor.
The method for positioning the image motion area has the following advantages: after it is determined that a motion area may exist in the image to be processed, one frame is selected from the image to be processed as the reference frame image and the remaining frames as comparison frame images, and image enhancement processing is performed on the reference frame image and the comparison frame images, which improves the visual effect of the images, highlights detail features, and improves the accuracy of subsequent motion area positioning. Selecting an image that meets the conditions as the reference frame image further improves judgment accuracy, reduces errors, and saves computing resources.
Optionally, the determining whether the basic condition of the motion area exists in the image to be processed includes:
acquiring exposure time and histogram information of each frame of image in the image to be processed;
determining whether a scene change exists according to the exposure time and the histogram information;
after the scene change is determined to exist, determining a current application scene of the image to be processed, and determining an exposure judgment threshold according to the current application scene;
comparing the maximum exposure time with the exposure judgment threshold value;
when the maximum exposure time is smaller than the exposure judgment threshold, determining that the image to be processed does not meet the basic condition for the existence of a motion area, that is, no motion area exists;
and when the maximum exposure time is greater than or equal to the exposure judgment threshold, determining that the image to be processed meets the basic condition for the existence of a motion area, that is, a motion area may exist.
Optionally, the selecting a frame of image from the to-be-processed image as a reference frame image includes:
calculating the evaluation parameter of each frame of image in the image to be processed;
and selecting one frame of image with the evaluation parameter meeting a preset condition as the reference frame image according to the size of the evaluation parameter, wherein the evaluation parameter comprises at least one of image noise information, signal-to-noise ratio, histogram form information, brightness information, detail information and image definition information. The beneficial effects are that: and selecting a frame image with the optimal quality from the images to be processed as a reference frame image according to the evaluation parameter so as to improve the accuracy of the subsequent motion region positioning.
Optionally, the selecting, according to the size of the evaluation parameter, a frame image of which the evaluation parameter meets a preset condition as the reference frame image includes:
calculating the brightness information of each frame of image in the image to be processed, and calculating the brightness validity factor of each frame of image according to the brightness information;
extracting the detail information of each frame of image in the image to be processed to calculate the detail factor of each frame of image;
calculating the detail richness of each frame of image according to the brightness effectiveness factor and the detail factor of each frame of image;
and selecting the frame image with the maximum detail richness as the reference frame image.
Optionally, the calculating a brightness effectiveness factor of each frame of image according to the brightness information includes:
determining a first brightness threshold and a second brightness threshold according to the shooting scene information and the shooting equipment information of the image to be processed;
and calculating a brightness effectiveness factor according to the first brightness threshold, the second brightness threshold and the brightness information of each frame of image.
Optionally, the extracting multi-level image information in the comparison frame image and the reference frame image includes:
selecting the comparison frame image and the reference frame image as sampling images respectively;
dividing the sampling image into M multiplied by Q division blocks, and obtaining a down-sampling image after point selection is carried out on points in each division block, wherein M and Q are both positive integers;
and extracting information of the down-sampling image to obtain the multi-level image information. The beneficial effects are that: by extracting multi-level image information from the comparison frame image and the reference frame image, details from low frequency to high frequency can be conveniently extracted from images of different levels, and the judgment of image difference based on the details of different frequencies is beneficial to improving the accuracy and simultaneously can reduce unnecessary calculation.
Optionally, the calculating an image difference factor between the comparison frame enhanced image and the reference frame enhanced image includes:
calculating a brightness difference factor of each frame of the comparison frame enhanced image according to the brightness information of the reference frame enhanced image and the brightness information of the comparison frame enhanced image;
calculating detail difference factors of the enhanced images of the comparison frames of each frame according to the detail information of the enhanced images of the reference frames and the detail information of the enhanced images of the comparison frames;
calculating the image difference factor of each layer of image in the comparison frame enhanced image according to the brightness difference factor and the detail difference factor.
Optionally, the calculating the image difference factor according to the brightness difference factor and the detail difference factor includes:
comparing the magnitude of the brightness difference factor to a third threshold and the magnitude of the detail difference factor to a fourth threshold for the comparison frame enhanced image;
determining the size of the image difference factor to be A when the brightness difference factor of the comparison frame enhanced image is determined to be larger than the third threshold and the detail difference factor is determined to be larger than the fourth threshold, otherwise, the size of the image difference factor is B, wherein A is larger than B.
Optionally, the determining a motion region in the comparison frame enhanced image according to the image difference factor includes:
acquiring the corresponding position of each pixel point of the current layer image in the comparison frame enhanced image in the previous layer image;
judging whether a pixel point of the current layer image is a motion area or not according to the image difference factor of the corresponding position;
judging the next layer of image after judging each pixel point of the current layer of image;
acquiring corresponding pixel points of pixel points determined as a motion area in a current layer image, which correspond to corresponding positions in a next layer image;
judging whether the corresponding pixel point of the next layer image is a motion area or not according to the image difference factor of the corresponding pixel point;
and circularly executing the above processes until the motion region judgment process has been performed on the last layer image of the comparison frame enhanced image, thereby obtaining an image motion region at the original resolution of the image to be processed.
The invention also provides a device for positioning the image motion area, which comprises:
the motion judgment module is used for acquiring an image to be processed and judging whether the image to be processed meets the basic condition of the existence of a motion area;
the image selecting module is used for selecting one frame of image meeting the condition from the images to be processed as a reference frame image and using the rest frames in the images to be processed as comparison frame images after determining that the motion areas exist in the images to be processed;
the enhancement processing module is used for extracting multi-level image information in the comparison frame image and the reference frame image, and performing image enhancement processing on the multi-level image information to respectively obtain a comparison frame enhanced image and a reference frame enhanced image;
a difference factor calculation module for calculating an image difference factor of the comparison frame enhanced image and the reference frame enhanced image;
and the positioning module is used for determining a motion area in the enhanced image of the comparison frame according to the image difference factor.
The positioning device of the image motion area has the advantages that: after the motion judging module determines that a motion area exists in the image to be processed, the image selecting module selects one frame of image from the image to be processed as a reference frame image and the other frames of image are used as comparison frame images, and the enhancement processing module performs image enhancement processing on the reference frame image and the comparison frame images so as to improve the visual effect of the image, highlight detailed characteristics and improve the accuracy of subsequent motion area positioning.
The invention also discloses a computer readable storage medium, which stores a computer program, and the computer program is executed by a processor to realize the positioning method of the image motion area.
The invention discloses a terminal, which is characterized by comprising: a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory, so that the terminal performs the above-mentioned method for locating the image motion area.
Drawings
Fig. 1 is a schematic flowchart of a method for locating an image motion area according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an image enhancement curve used in a method for locating an image motion region according to an embodiment of the present invention;
fig. 3 is a block diagram of a device for locating an image motion area according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention. Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs. As used herein, the word "comprising" and similar words are intended to mean that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items.
In view of the problems in the prior art, an embodiment of the present invention provides a method for locating a motion region of an image, and with reference to fig. 1, the method includes the following steps:
s101, acquiring an image to be processed, and judging whether the image to be processed meets basic conditions of existence of a motion area.
In some embodiments, the determining whether a motion region exists in the image to be processed includes:
acquiring exposure time and histogram information of each frame of image in the image to be processed;
determining whether a scene change exists according to the exposure time and the histogram information;
after scene change is determined to exist, determining a current application scene of the image to be processed, and determining an exposure judgment threshold according to the current application scene;
comparing the maximum exposure time with the exposure judgment threshold value;
when the maximum exposure time is smaller than the exposure judgment threshold, determining that the image to be processed does not meet the basic condition for the existence of a motion area, that is, no motion area exists;
and when the maximum exposure time is greater than or equal to the exposure judgment threshold, determining that the image to be processed meets the basic condition for the existence of a motion area, that is, a motion area may exist.
In this embodiment, after the image to be processed is acquired, the exposure information and exposure relations of the frames before and after are collected, and the histogram information and current application scene information of each frame are counted, so as to facilitate a preliminary judgment of the motion process.
Illustratively, suppose the image to be processed contains N frames, where N is a positive integer. The exposure time and histogram information of each frame are first obtained; the exposure times of different frames are compared in magnitude and difference, and the form and peak value of each image histogram are compared according to the histogram information, so as to determine whether there is a severe scene change in the current images; different subsequent adjustments are used for different scenes. After it is determined that a scene change exists, the application scene of the current image is determined, and the exposure judgment threshold is determined according to the current application scene. For example, in portrait versus traffic-stream shooting, handheld versus supported shooting, or vehicle-mounted versus surveillance shooting, the motion speeds of the shooting subject and the photographed object differ obviously, and the exposure judgment threshold is determined experimentally for the specific application scene.
Then, the motion area is preliminarily judged by comparing the maximum exposure time in the N frames of images against the exposure judgment threshold. Specifically, when the maximum exposure time t_max in the N frames of images is smaller than the exposure judgment threshold Th1, it is determined that no motion area exists in the image to be processed; when the maximum exposure time t_max is greater than or equal to the exposure judgment threshold Th1, it is determined that a motion area may exist in the image to be processed, and the subsequent further judgment process is required.
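As a minimal sketch of this preliminary check, the comparison of t_max against a scene-dependent threshold might look as follows; the function name and the per-scene threshold values are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch of the preliminary motion check (step S101).
# Threshold values per application scene are assumed for illustration only.
def may_contain_motion(exposure_times_ms, scene="handheld"):
    """Return True when the basic condition for a motion area is met,
    i.e. t_max >= the exposure judgment threshold Th1 for the scene."""
    th1_ms = {"handheld": 10.0, "tripod": 50.0, "vehicle": 5.0}[scene]
    t_max = max(exposure_times_ms)
    return t_max >= th1_ms
```

With these assumed thresholds, a handheld burst whose longest exposure is 12 ms would proceed to the subsequent judgment steps, while one capped at 3 ms would be ruled out immediately.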
S102, after the image to be processed is determined to meet the basic condition that a motion area exists, selecting one frame of image from the image to be processed as a reference frame image, and taking the rest frames of the image to be processed as comparison frame images.
In some embodiments, the selecting, as the reference frame image, one frame image satisfying a condition in the to-be-processed image includes:
calculating the evaluation parameter of each frame of image in the image to be processed;
and selecting a frame of image with the evaluation parameter meeting a preset condition as the reference frame image according to the size of the evaluation parameter, wherein the evaluation parameter comprises at least one of image noise information, signal-to-noise ratio, histogram form information, brightness information, detail information and image definition information.
In this embodiment, the evaluation parameter of each frame image in the image to be processed is calculated so as to select a reference frame image satisfying the condition according to the evaluation parameter, where the evaluation parameter includes at least one of image noise information, signal-to-noise ratio, histogram form information, brightness information, detail information and image definition information.
To further illustrate the selection process, the reference frame image is selected here using the brightness information and the detail information as an example. Firstly, the brightness information of each frame image in the image to be processed is calculated, and the brightness effectiveness factor of each frame image is calculated according to the brightness information;
extracting the detail information of each frame of image in the image to be processed to calculate the detail factor of each frame of image;
calculating the detail richness of each frame of image according to the brightness effectiveness factor and the detail factor of each frame of image;
and selecting the frame image with the maximum detail richness as the reference frame image.
Illustratively, the brightness information of different images is calculated in different ways: brightness calculation methods differ fundamentally between data sources, and even the same type of data source admits multiple calculation methods. For example, window filtering is usually used for the brightness of a RAW image, the simplest being Gaussian filtering, which is not described here again; values such as rGain and bGain, or gradients in different directions, can also be used.
Also, for an RGB image, the luminance information Y_i is calculated as follows:

Y_i = R*0.299 + G*0.587 + B*0.114

where R, G, B respectively represent the values of the three color channels in the image.
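The luminance formula above can be applied per pixel; a small sketch using NumPy (an implementation choice not specified in the patent):

```python
import numpy as np

def rgb_luminance(img_rgb):
    """Per-pixel luminance Y = R*0.299 + G*0.587 + B*0.114 (BT.601 weights)."""
    r, g, b = img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b
```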
In some embodiments, calculating a luminance validity factor for each frame of image from the luminance information comprises:
determining a first brightness threshold and a second brightness threshold according to the shooting scene information and the shooting equipment information of the image to be processed;
and calculating a brightness effectiveness factor according to the first brightness threshold, the second brightness threshold and the brightness information of each frame of image.
Specifically, the luminance validity factor may be calculated differently for different devices and scenes. For example, let the luminance validity at coordinate (x, y) be ValidY_i(x, y), and let the effectiveness factor eV_i be the sum of all valid luminance values:

ValidY_i(x, y) = Y_i(x, y), if Th_low <= Y_i(x, y) <= Th_high; otherwise 0

eV_i = Σ_{(x,y)∈σ} ValidY_i(x, y)

where Th_low and Th_high are respectively the first brightness threshold and the second brightness threshold, determined empirically and experimentally according to the photographing device, scene, and so on; σ denotes the set of coordinates of all points in each frame image.
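The validity-band construction above can be sketched as follows; the names `th_low` and `th_high` are placeholders for the first and second brightness thresholds:

```python
import numpy as np

def luminance_validity_factor(lum, th_low, th_high):
    """eV_i: sum of luminance values lying inside the valid band
    [th_low, th_high]; pixels outside the band contribute 0, matching
    ValidY_i in the text. th_low/th_high stand for the first and second
    brightness thresholds (device- and scene-dependent)."""
    valid = np.where((lum >= th_low) & (lum <= th_high), lum, 0.0)
    return float(valid.sum())
```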
For the detail factor, a method for extracting image edge information can be used, such as calculating the detail factor eD_i through the Sobel operator:

eD_i = Σ_{(x,y)∈σ} sqrt(Gh_i(x, y)^2 + Gv_i(x, y)^2)

where Gh_i(x, y)^2 and Gv_i(x, y)^2 are respectively the squares of the Sobel operator responses of the image in the horizontal and vertical directions, and Lumi_i(x, y) is the luminance information of each frame image at coordinate (x, y); since the Sobel operator is prior art, it is not described in detail here.

Alternatively, the detail factor can be calculated by a customized difference of pixel values in different directions, for example:

eD_i(x, y) = |Lumi(x-1, y) - Lumi(x+1, y)| * |Lumi(x, y-1) - Lumi(x, y+1)|

where Lumi(x-1, y), Lumi(x+1, y), Lumi(x, y-1), Lumi(x, y+1) respectively represent the luminance information at coordinate points (x-1, y), (x+1, y), (x, y-1), (x, y+1) on each frame image.
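The directional-difference variant can be sketched directly with array slicing; summation over interior pixels is an assumption made here for brevity (the text does not specify border handling):

```python
import numpy as np

def detail_factor(lum):
    """eD_i via the directional-difference product from the text:
    |Y(x-1,y)-Y(x+1,y)| * |Y(x,y-1)-Y(x,y+1)|, summed over interior
    pixels (border pixels are skipped for simplicity)."""
    d1 = np.abs(lum[:-2, 1:-1] - lum[2:, 1:-1])   # difference along axis 0
    d2 = np.abs(lum[1:-1, :-2] - lum[1:-1, 2:])   # difference along axis 1
    return float((d1 * d2).sum())
```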
After the brightness effectiveness factor and the detail factor of each frame image are respectively calculated, the detail richness F_i of each frame image is calculated, the detail richness of each frame is compared (the set of all F_i is denoted {F_i}), and the frame image with the largest detail richness is selected as the reference frame image:

F_i = eV_i * eD_i

BaseIndex = {k | F_k = max({F_i})}

where BaseIndex is the serial number of the reference frame and i = 1, 2, …, N; that is, the frame image corresponding to the maximum detail richness F_k among the N frame images is used as the reference frame image.
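The argmax selection above is a one-liner; note this sketch uses 0-based indexing, unlike the 1-based numbering in the text:

```python
def select_reference_frame(ev, ed):
    """BaseIndex = argmax_i F_i with F_i = eV_i * eD_i (0-based index)."""
    f = [v * d for v, d in zip(ev, ed)]
    return max(range(len(f)), key=f.__getitem__)
```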
S103, extracting multi-level image information in the comparison frame image and the reference frame image, and performing image enhancement processing on the multi-level image information to respectively obtain a comparison frame enhanced image and a reference frame enhanced image.
In some embodiments, the extracting multi-level image information in the comparison frame image and the reference frame image comprises:
selecting the comparison frame image and the reference frame image as sampling images respectively;
dividing the sampling image into M multiplied by Q division blocks, and obtaining a down-sampling image after point fetching is carried out on points in each division block, wherein M and Q are positive integers;
and extracting information of the down-sampling image to obtain the multi-level image information.
There are various methods for selecting a point from each division block, such as taking the mean, the median, or a filtered value, to obtain an M×Q down-sampled image. Extracting multi-level information from the down-sampled image allows details from low frequency to high frequency to be extracted from images of different levels; judging image differences based on details of different frequencies helps to improve accuracy while reducing unnecessary computation.
It should be noted that, in the present scheme, there are various methods for extracting multilevel information from a downsampled image, such as a common gaussian pyramid and a laplacian pyramid, and the present scheme is not particularly limited to this.
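A minimal sketch of the M×Q block down-sampling, using the block mean as the selected point (the mean is only one of the options the text lists); exact tiling is assumed for brevity:

```python
import numpy as np

def block_downsample(img, m, q):
    """Divide img into M x Q blocks and take each block's mean as one
    sample, yielding an M x Q down-sampled image."""
    h, w = img.shape
    assert h % m == 0 and w % q == 0, "illustrative version: exact tiling only"
    return img.reshape(m, h // m, q, w // q).mean(axis=(1, 3))
```

Applying this repeatedly with shrinking M and Q would give a simple image pyramid, in the spirit of the Gaussian pyramid the text mentions.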
After the multi-level image information is extracted, feature extraction and image enhancement processing are performed on it to obtain the comparison frame enhanced image and the reference frame enhanced image respectively. This improves the sharpness and contrast of the image, improves the histogram distribution, and enhances the brightness performance, so that the visual effect is better and the detail features are more prominent.
Common methods such as histogram equalization and Weber contrast are not described here again. Alternatively, information such as the exposure, gain, scene, and various device parameters of the image can be extracted to design an image enhancement curve suitable for the current application, as shown in Fig. 2. The multi-level images of the reference frame enhanced image obtained after the processing are recorded, from the top layer to the bottom layer, as Base^1, Base^2, …, Base^L, and the multi-level images of the comparison frame enhanced image are recorded, from the top layer to the bottom layer, as Comp_i^1, Comp_i^2, …, Comp_i^L.
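As a stand-in for the application-specific enhancement curve of Fig. 2 (whose exact shape is not reproduced in this text), a simple gamma-style curve shows where such a mapping would be applied; the gamma value and the 8-bit range are assumptions:

```python
import numpy as np

def enhance(lum, gamma=0.6):
    """Apply a gamma-style enhancement curve that stretches dark detail;
    a placeholder for the scene-specific curve of Fig. 2 (assumed 8-bit)."""
    norm = np.clip(lum / 255.0, 0.0, 1.0)
    return 255.0 * norm ** gamma
```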
And S104, calculating an image difference factor of the comparison frame enhanced image and the reference frame enhanced image.
In some embodiments, said calculating an image difference factor for said comparison frame enhanced image and said reference frame enhanced image comprises:
calculating a brightness difference factor of each frame of the enhanced image of the comparison frame according to the brightness information of the enhanced image of the reference frame and the brightness information of the enhanced image of the comparison frame;
calculating detail difference factors of the enhanced images of the comparison frames of each frame according to the detail information of the enhanced images of the reference frames and the detail information of the enhanced images of the comparison frames;
calculating the image difference factor of each layer of image in the comparison frame enhanced image according to the brightness difference factor and the detail difference factor.
For each point of each layer image of the reference frame and the ith frame image, the current point, local information, and global information can be used, in combination with the luminance information of the images, to calculate the brightness difference factor diffY_i; similarly, detail factors are obtained in combination with the image detail information, and the detail difference factor diffD_i is calculated from them.

Illustratively, a window of size X×Y centered on a point of the ith frame image of the comparison frame enhanced image is taken, and the differences of all points within the window relative to the reference frame enhanced image are calculated and marked. For example, for the point with coordinates (x, y), the brightness difference factor diffY_i(x, y) and the detail difference factor diffD_i(x, y) are obtained as:

diffY_i(x, y) = Σ_{(m,n)∈τ} |Y_i(m, n) - Y_Base(m, n)|

diffD_i(x, y) = Σ_{(m,n)∈τ} |eD_i(m, n) - eD_Base(m, n)|

where τ is the set of coordinates of all points in the X×Y window centered on (x, y), and Y_Base and eD_Base are respectively the luminance information and detail information of the reference frame enhanced image.
The brightness difference factor and the detail difference factor can also be calculated in combination with exposure information, white balance parameters, noise models and other multidimensional ways, and are not described herein again.
In some embodiments, said calculating said image difference factor from said brightness difference factor and said detail difference factor comprises:
comparing the magnitude of the brightness difference factor to a third threshold and the magnitude of the detail difference factor to a fourth threshold for the comparison frame enhanced image;
determining the size of the image difference factor to be A when the brightness difference factor of the comparison frame enhanced image is determined to be larger than the third threshold and the detail difference factor is determined to be larger than the fourth threshold, otherwise, the size of the image difference factor is B, wherein A is larger than B.
After the brightness difference factor diff_bright and the detail difference factor diff_detail are obtained by calculation, the image difference factor can be obtained by comparing them with the third threshold T3 and the fourth threshold T4 respectively. Taking A = 1 and B = 0 as an example, the calculation process of the image difference factor satisfies the following formula:
F_i^1(x, y) = 1, if diff_bright_i^1(x', y') > T3 and diff_detail_i^1(x', y') > T4 for (x', y') ∈ τ;
F_i^1(x, y) = 0, otherwise;
wherein diff_bright_i^1 and diff_detail_i^1 are respectively the brightness difference factor and the detail difference factor of the layer-1 pyramid of the i-th frame relative to the reference frame, τ is the set of coordinates of all points in an X×Y window centered on (x, y), and i = 1, 2, …, N−1.
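The A/B decision can be sketched as a vectorized mask computation (a minimal sketch with A = 1, B = 0; the window-based aggregation over τ is omitted for brevity, and the threshold values in the usage note are illustrative):

```python
import numpy as np

def image_difference_factor(diff_bright, diff_detail, t3, t4):
    """Binary image difference factor: 1 (A) where the brightness
    difference factor exceeds the third threshold AND the detail
    difference factor exceeds the fourth threshold, else 0 (B)."""
    return ((diff_bright > t3) & (diff_detail > t4)).astype(np.uint8)
```

For example, with t3 = t4 = 0.3, a point is flagged only when both of its difference factors exceed 0.3.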
The above is a method for calculating an image difference factor by extracting a brightness difference factor and a detail difference factor, but the method for calculating an image difference factor is not limited to this, and other methods may also be used for calculation, such as calculating a difference of a histogram, calculating a difference of a signal-to-noise ratio, calculating an edge difference in a frequency domain after fourier transform, calculating a brightness difference, a color difference, and a brightness detail difference of an image.
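One of the alternative measures mentioned above, a histogram difference between two frames, could be sketched as follows (the bin count, value range, and L1 distance are assumptions for illustration):

```python
import numpy as np

def histogram_difference(ref, cmp_frame, bins=64):
    """L1 distance between normalized luminance histograms of two
    grayscale images with values in [0, 1]."""
    h_ref, _ = np.histogram(ref, bins=bins, range=(0.0, 1.0))
    h_cmp, _ = np.histogram(cmp_frame, bins=bins, range=(0.0, 1.0))
    h_ref = h_ref / max(h_ref.sum(), 1)
    h_cmp = h_cmp / max(h_cmp.sum(), 1)
    return float(np.abs(h_ref - h_cmp).sum())
```

Identical images give a distance of 0; a global brightness shift gives a strictly positive distance, which makes this measure a coarse, position-independent complement to the per-pixel factors.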
And S105, determining a motion area in the enhanced image of the comparison frame according to the image difference factor.
In some embodiments, the process of step S105 includes:
acquiring the corresponding position of each pixel point of the current layer image in the comparison frame enhanced image in the previous layer image;
judging whether the pixel point of the current layer image is a motion area or not according to the image difference factor of the corresponding position;
judging the next layer of image after judging each pixel point of the current layer of image;
acquiring corresponding pixel points of pixel points determined as a motion area in a current layer image corresponding to corresponding positions in a next layer image;
judging whether the corresponding pixel point of the next layer image is a motion area or not according to the image difference factor of the corresponding pixel point;
and circularly executing the processes until the motion area judgment process is completed on the last layer of image of the comparison frame enhanced image, so as to obtain an image motion area based on the original resolution of the image to be processed.
Specifically, the reconstruction recovery method may be resolution sampling, such as bilinear interpolation upsampling, trilinear interpolation upsampling; the two-dimensional gaussian function may also be used to calculate the proportion of the current point in the reconstruction recovery, with the current point as a zero point and the coordinates of the current point in the horizontal and vertical directions as input values.
Taking the method of using resolution sampling as an example, for the layer-2 image of the comparison frame enhanced image, the position (x0, y0) of the point with coordinates (x, y) in the layer-1 image is determined, the image difference factor of that position is extracted, and whether it belongs to a motion area is judged. When the image difference factor is 0, it is determined that the point has no motion; after resolution up-sampling, a larger area is obtained, and the points in the corresponding area of the next layer image have no motion. If the image difference factor is 1, the point is determined to belong to a motion area, and the point at the corresponding position of the next layer image may move, so the extraction and judgment of the image difference factor must be performed again. After traversing each point in the layer-2 image, the next layer image is entered, and the cycle continues until the motion judgment of the bottommost (M-th layer) image is completed; a motion area with the resolution of the original image is thus obtained, and the recovery and reconstruction of the motion area is completed.
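The coarse-to-fine traversal described above can be sketched as follows (a simplified sketch assuming each layer is exactly twice the previous one in each dimension, with nearest-neighbour replication standing in for the bilinear up-sampling; the function name and list layout are assumptions):

```python
import numpy as np

def coarse_to_fine_motion(diff_factors):
    """Propagate a binary motion mask from the coarsest pyramid layer
    to the finest. `diff_factors` is a list of 2-D 0/1 arrays ordered
    from coarsest to finest (original resolution)."""
    mask = diff_factors[0]
    for finer in diff_factors[1:]:
        # Upsample the current mask so each coarse point covers a 2x2 area.
        up = np.kron(mask, np.ones((2, 2), dtype=np.uint8))
        # Re-examine only positions flagged as motion at the coarser layer;
        # positions already judged static stay 0 without re-extraction.
        mask = np.where(up == 1, finer, 0).astype(np.uint8)
    return mask
```

Pruning at each layer is what makes the search fast: a point judged static at a coarse layer rules out the whole corresponding area at every finer layer.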
The present invention also provides a device for locating an image motion region, referring to fig. 3, including:
a motion determining module 301, configured to obtain an image to be processed, and determine whether the image to be processed meets a basic condition that a motion area exists;
an image selecting module 302, configured to select one frame of image from the to-be-processed images as a reference frame image after determining that the to-be-processed images meet a basic condition that a motion region exists, and use the remaining frames of the to-be-processed images as comparison frame images;
an enhancement processing module 303, configured to extract multi-level image information in the comparison frame image and the reference frame image, and perform image enhancement processing on the multi-level image information to obtain a comparison frame enhanced image and a reference frame enhanced image, respectively;
a difference factor calculating module 304, configured to calculate an image difference factor between the comparison frame enhanced image and the reference frame enhanced image;
a positioning module 305 for determining a motion region in the compared frame enhanced image according to the image difference factor of the multi-level image information.
It should be noted that the structure and principle of the positioning device for the image motion area correspond to the steps in the positioning method for the image motion area one by one, and therefore, the description thereof is omitted here.
It should be noted that the division of each module of the above apparatus is only a logical division, and in actual implementation all or part of the modules may be integrated into one physical entity or physically separated. These modules may all be implemented in the form of software invoked by a processing element, or all in the form of hardware, or some of the modules may be implemented in the form of software invoked by a processing element and the others in the form of hardware. For example, the image selecting module may be a separately arranged processing element, or may be integrated in a chip of the system, or may be stored in a memory of the system in the form of program code, with a processing element of the system invoking and executing its function. The other modules are implemented similarly. In addition, all or part of these modules may be integrated together or implemented independently. The processing element described herein may be an integrated circuit having signal processing capability. In implementation, each step of the above method, or each of the above modules, may be implemented by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as: one or more Application Specific Integrated Circuits (ASICs), or one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), etc. For another example, when some of the above modules are implemented in the form of a Processing element scheduler code, the Processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or other processor that can invoke the program code. For another example, the modules may be integrated together and implemented in the form of a System-On-a-Chip (SOC).
The invention also discloses a computer readable storage medium, which stores a computer program, and the computer program is executed by a processor to execute the positioning method of the image motion area.
The storage medium of the invention has stored thereon a computer program which, when being executed by a processor, carries out the above-mentioned method. The storage medium includes: a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, a usb disk, a Memory card, or an optical disk, which can store program codes.
In another embodiment of the disclosure, the disclosure further provides a chip system, which is coupled to the memory and configured to read and execute the program instructions stored in the memory to perform the steps of the method for positioning the image motion area.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence or part of the technical solutions contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media that can store program code, such as flash memory, removable hard drive, read-only memory, random-access memory, magnetic or optical disk, etc.
Although the embodiments of the present invention have been described in detail hereinabove, it is apparent to those skilled in the art that various modifications and variations can be made to these embodiments. However, it is to be understood that such modifications and variations fall within the scope and spirit of the present invention as set forth in the following claims. Moreover, the invention as described herein is capable of other embodiments and of being practiced or of being carried out in various ways.

Claims (12)

1. A method for locating a motion region of an image, comprising:
acquiring an image to be processed, and judging whether the image to be processed meets the basic condition of the existence of a motion area;
after determining that the to-be-processed image meets the basic condition that a motion area exists, selecting one frame of image from the to-be-processed image as a reference frame image, and taking the rest frames of the to-be-processed image as comparison frame images;
extracting multi-level image information in the comparison frame image and the reference frame image, and performing image enhancement processing on the multi-level image information to respectively obtain a comparison frame enhanced image and a reference frame enhanced image;
calculating an image difference factor of the comparison frame enhanced image and the reference frame enhanced image;
determining a motion region in the comparison frame enhanced image according to the image difference factor.
2. The method according to claim 1, wherein the determining whether the image to be processed satisfies a basic condition that a motion region exists comprises:
acquiring exposure time and histogram information of each frame of image in the image to be processed;
determining whether a scene change exists according to the exposure time and the histogram information;
after scene change is determined to exist, determining a current application scene of the image to be processed, and determining an exposure judgment threshold according to the current application scene;
comparing the maximum exposure time with the exposure judgment threshold value;
when the maximum exposure time is smaller than the exposure judgment threshold value, determining that the image to be processed does not meet the basic condition of existence of a motion area, wherein the motion area does not exist;
when the maximum exposure time is larger than or equal to the exposure judgment threshold, determining that the image to be processed meets the basic condition that a motion area exists, wherein the motion area possibly exists.
3. The method according to claim 1, wherein the selecting a frame of image as a reference frame of image in the image to be processed comprises:
calculating the evaluation parameter of each frame of image in the image to be processed;
and selecting one frame of image with the evaluation parameter meeting a preset condition as the reference frame image according to the size of the evaluation parameter, wherein the evaluation parameter comprises at least one of image noise information, signal-to-noise ratio, histogram form information, brightness information, detail information and image definition information.
4. The method according to claim 3, wherein selecting, as the reference frame image, a frame image whose evaluation parameter satisfies a preset condition according to the size of the evaluation parameter comprises:
calculating the brightness information of each frame of image in the image to be processed, and calculating the brightness validity factor of each frame of image according to the brightness information;
extracting the detail information of each frame of image in the image to be processed to calculate the detail factor of each frame of image;
calculating the detail richness of each frame of image according to the brightness effectiveness factor and the detail factor of each frame of image;
and selecting the frame image with the maximum detail richness as the reference frame image.
5. The method according to claim 4, wherein the calculating a brightness validity factor for each frame of image according to the brightness information comprises:
determining a first brightness threshold and a second brightness threshold according to the shooting scene information and the shooting equipment information of the image to be processed;
and calculating a brightness effectiveness factor according to the first brightness threshold, the second brightness threshold and the brightness information of each frame of image.
6. The method according to claim 1, wherein the extracting multi-level image information in the comparison frame image and the reference frame image comprises:
selecting the comparison frame image and the reference frame image as sampling images respectively;
dividing the sampling image into M multiplied by Q division blocks, and obtaining a down-sampling image after point fetching is carried out on points in each division block, wherein M and Q are positive integers;
and extracting information of the down-sampling image to obtain the multi-level image information.
7. The method according to claim 4, wherein the calculating the image difference factor between the comparison frame enhanced image and the reference frame enhanced image comprises:
calculating a brightness difference factor of each frame of the enhanced image of the comparison frame according to the brightness information of the enhanced image of the reference frame and the brightness information of the enhanced image of the comparison frame;
calculating detail difference factors of the enhanced images of the comparison frames of each frame according to the detail information of the enhanced images of the reference frames and the detail information of the enhanced images of the comparison frames;
calculating the image difference factor of each layer of image in the comparison frame enhanced image according to the brightness difference factor and the detail difference factor.
8. The method according to claim 7, wherein the calculating the image difference factor according to the brightness difference factor and the detail difference factor comprises:
comparing the magnitude of the brightness difference factor to a third threshold and the magnitude of the detail difference factor to a fourth threshold for the comparison frame enhanced image;
when the brightness difference factor of the comparison frame enhanced image is determined to be larger than the third threshold value and the detail difference factor is determined to be larger than the fourth threshold value, the size of the image difference factor is determined to be A, otherwise, the size of the image difference factor is determined to be B, wherein A is larger than B.
9. The method according to claim 8, wherein the determining the motion region in the compared frame enhanced image according to the image difference factor comprises:
acquiring the corresponding position of each pixel point of the current layer image in the comparison frame enhanced image in the previous layer image;
judging whether the pixel point of the current layer image is a motion area or not according to the image difference factor of the corresponding position;
judging the next layer of image after judging each pixel point of the current layer of image;
acquiring corresponding pixel points of pixel points determined as a motion area in a current layer image corresponding to corresponding positions in a next layer image;
judging whether the corresponding pixel point of the next layer image is a motion area or not according to the image difference factor of the corresponding pixel point;
and circularly executing the processes until the last layer of image of the enhanced image of the comparison frame is subjected to a motion region judgment process, so as to obtain an image motion region based on the original resolution of the image to be processed.
10. An apparatus for locating a moving region of an image, comprising:
the motion judgment module is used for acquiring an image to be processed and judging whether the image to be processed meets the basic condition of the existence of a motion area;
the image selection module is used for selecting one frame of image from the images to be processed as a reference frame image and using the rest frames of the images to be processed as comparison frame images after determining that the images to be processed meet the basic condition of the existence of motion areas;
the enhancement processing module is used for extracting multi-level image information in the comparison frame image and the reference frame image, and performing image enhancement processing on the multi-level image information to respectively obtain a comparison frame enhanced image and a reference frame enhanced image;
a difference factor calculation module for calculating an image difference factor of the comparison frame enhanced image and the reference frame enhanced image;
and the positioning module is used for determining a motion area in the enhanced image of the comparison frame according to the image difference factor.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a method for locating an area of image motion as claimed in any one of claims 1 to 9.
12. A terminal, comprising: a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory to enable the terminal to perform the method for locating the image motion area according to any one of claims 1 to 9.
CN202211482042.4A 2022-11-24 2022-11-24 Image motion area positioning method and device, storage medium and terminal Pending CN115760903A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211482042.4A CN115760903A (en) 2022-11-24 2022-11-24 Image motion area positioning method and device, storage medium and terminal


Publications (1)

Publication Number Publication Date
CN115760903A true CN115760903A (en) 2023-03-07

Family

ID=85337624

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211482042.4A Pending CN115760903A (en) 2022-11-24 2022-11-24 Image motion area positioning method and device, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN115760903A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination