CN108924385A - Video de-jittering method based on broad learning - Google Patents

Video de-jittering method based on broad learning

Info

Publication number
CN108924385A
Authority
CN
China
Prior art keywords
video
frame
output
test set
stabilization method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810682319.5A
Other languages
Chinese (zh)
Other versions
CN108924385B (en)
Inventor
陈志华
李超
陈若溪
陈莉莉
盛斌
李平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
East China University of Science and Technology
Original Assignee
East China University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by East China University of Science and Technology
Priority to CN201810682319.5A
Publication of CN108924385A
Application granted
Publication of CN108924385B
Legal status: Active
Anticipated expiration


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/14: Picture signal circuitry for video frequency region
    • H04N5/21: Circuitry for suppressing or minimising disturbance, e.g. moiré or halo

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a video de-jittering method based on broad learning. From the current frame to be processed of the original video, the corresponding frame of the processed video, and the previous frame of the corresponding frame of the output video of a non-learning processing method, the input data of the training set and of the test set are obtained. A mapping function then extracts primary features that realize video temporal continuity, and an activation function performs feature enhancement on the primary features to obtain enhancement features. Concatenating the primary and enhancement features yields all features extracted in the n-th network. On the training set, an energy function with video temporal continuity and video content fidelity as constraint conditions is constructed; the weights satisfying this energy function are solved by least angle regression and used as the target weights connecting the feature layer and the output layer. Finally, the de-jittered output frames of the test set are obtained from the target weights and all features extracted on the test set.

Description

Video de-jittering method based on broad learning
Technical field
The present invention relates to the fields of computer vision and image processing, and more particularly to a video de-jittering method based on broad learning.
Background art
A video de-jittering method removes the jitter present in a video, which generally comprises tone jitter and brightness jitter. A video de-jittering algorithm removes the jitter between video frames by enforcing temporal continuity between frames, outputting a temporally continuous, jitter-free video.
In the prior art, a common implementation of video de-jittering is based on jitter compensation, which aims to remove the jitter effects in a video by aligning the tone or brightness between frames. Although this approach can reduce the jitter present in a video to some extent, it must first select several frames as key frames, and these key frames are chosen from the jittery processed video itself, so their own temporal consistency is hard to guarantee; aligning the remaining frames to key frames that themselves contain jitter therefore cannot guarantee that the jitter of the processed video is removed. Another implementation maintains temporal consistency between video frames by minimizing an energy function containing a temporal-consistency term, but such methods are designed for specific applications, which limits the generalization ability of the video processing method. Common video processing algorithms of this kind include intrinsic image decomposition, color grading, color consistency, and white balance. Likewise, de-jittering algorithms built for a specific application do not apply to most other situations, which limits the generalization ability of this class of algorithms.
In view of the shortcomings of these existing methods, how to design a novel video de-jittering method that improves on or eliminates these defects and removes, to the greatest extent possible, the jitter present in a processed video is an urgent problem in the development of computer vision.
Summary of the invention
To address the deficiencies of existing video de-jittering methods, the present invention provides a video de-jittering method based on broad learning, which builds a de-jittering model on broad learning from the features of the input video and the processed video and thereby removes video jitter.
According to one aspect of the present invention, a video de-jittering method based on broad learning is provided, comprising the following steps:
A) From the current frame to be processed I_n of the original video, the corresponding frame P_n of the video processed frame by frame with an image processing method, and the previous frame O_{n−1} of the corresponding frame of the output video of a non-learning processing method, obtain the input data X_n of the training set and the input data F_n of the test set, where X_n = [I_n | P_n | O_{n−1}] and F_n = [I_n | P_n];
B) Extract from the input data X_n, using a mapping function, the primary features Z_i^n that realize video temporal continuity, where the i-th group of primary features is expressed as:

Z_i^n = φ(X_n·W_ei + β_ei), i = 1, 2, …, m

where W_ei and β_ei denote randomly generated weights and biases, and φ is the mapping function;
C) Perform feature enhancement on the extracted primary features using an activation function, obtaining the enhancement features H_j^n, where the j-th group of enhancement features is expressed as:

H_j^n = ξ_j(Z^n·W_hj + β_hj), j = 1, 2, …, p

where W_hj and β_hj denote randomly generated weights and biases, ξ_j is the activation function, and Z^n ≡ [Z_1^n, Z_2^n, …, Z_m^n] denotes all m groups of primary features;
D) Concatenate the extracted primary features Z^n and the enhancement features H^n ≡ [H_1^n, H_2^n, …, H_p^n], where H^n denotes all p groups of enhancement features, to obtain all features extracted in the n-th network:

A_n = [Z^n | H^n];
E) On the training set, construct the energy function E with video temporal continuity C_t and video content fidelity C_f as constraint conditions, defined as:

E = ||A_n·ω_n − O_n||^2 + λ_1·||ω_n||_1 + λ_2·||ω_n||_2 + λ_t·C_t + λ_f·C_f

Solve by least angle regression for the weights ω_n that satisfy the above energy function E, and use ω_n as the target weights of the broad learning network connecting the feature layer and the output layer (the fully substituted form of this objective is written out after step F below);
F) On the test set, from the target weights ω_n and all features A_n extracted in the n-th network, obtain the output Y_n of the test set of the broad learning network:

Y_n = A_n·ω_n

where the test-set output Y_n is the de-jittered output frame produced by broad learning.
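Substituting the definitions of C_t and C_f given in the embodiments below into E, the training objective of step E) can be written out in full. This consolidated form is reconstructed from the terms defined above; the exponent on the λ_2 regularizer is assumed here to follow the usual ridge convention:

```latex
\min_{\omega_n}\;
\|A_n\omega_n - O_n\|_2^2
+ \lambda_t \|A_n\omega_n - O_{n-1}\|_2^2
+ \lambda_f \|A_n\omega_n - P_n\|_2^2
+ \lambda_1 \|\omega_n\|_1
+ \lambda_2 \|\omega_n\|_2^2
```

All three quadratic terms pull the same prediction A_n·ω_n toward a different target: the frame O_n produced by the traditional de-jittering method, the previous output frame O_{n−1}, and the processed frame P_n.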
In one embodiment, the mapping function φ is a sigmoid function or a tangent function.
In one embodiment, the activation function ξ_j is a sigmoid function or a tangent function.
In one embodiment, the weights ω_n minimize the difference between the output frame of the test set and its previous frame, thereby computing the energy loss factor of temporal continuity between consecutive frames of the output video:

C_t = ||A_n·ω_n − O_{n−1}||^2
In one embodiment, the weights ω_n minimize the difference between the n-th video frame of the output video of the test set and the n-th video frame of the processed video, thereby computing the energy loss factor of video content fidelity:

C_f = ||A_n·ω_n − P_n||^2
In one embodiment, when the weights ω_n are used as the target weights of the broad learning network connecting the feature layer and the output layer, they satisfy the constraint conditions of video temporal continuity and video content fidelity simultaneously.
In one embodiment, the frame-by-frame image processing methods applied to the processed video include color grading, spatial white balance, color harmonization, and high-dynamic-range mapping.
With the broad-learning-based video de-jittering method of the present invention, the input data of the training set and of the test set are first obtained from the current frame to be processed of the original video, the corresponding frame of the video processed frame by frame with an image processing method, and the previous frame of the corresponding frame of the output video of a non-learning processing method. A mapping function then extracts from the training-set input the primary features that realize video temporal continuity, and an activation function performs feature enhancement on the primary features to obtain the enhancement features. The extracted primary and enhancement features are concatenated to obtain all features extracted in the n-th network; on the training set, an energy function with video temporal continuity and video content fidelity as constraint conditions is constructed, the weights satisfying this energy function are solved by least angle regression, and these weights are used as the target weights of the broad learning network connecting the feature layer and the output layer. Finally, on the test set, the de-jittered output frames of the test set of the broad learning network are obtained from the target weights and all extracted features. Compared with the prior art, the present application takes the original input video, the processed video and the output video of a traditional de-jittering method as input, builds a broad learning network that progressively extracts features, and obtains a jitter-free output video under the constraint conditions of video temporal continuity and video content fidelity.
Brief description of the drawings
After reading the specific embodiments of the present invention with reference to the accompanying drawings, the reader will understand the various aspects of the invention more clearly. In the drawings:
Fig. 1 shows the flowchart of the broad-learning-based video de-jittering method of the present invention;
Fig. 2 shows the architecture of the broad learning network that implements the video de-jittering method of Fig. 1;
Fig. 3A shows a video frame of the original video Interview;
Fig. 3B shows a video frame of the original video Cable;
Fig. 3C shows a video frame of the original video Chicken;
Fig. 3D shows a video frame of the original video CheckingEmail;
Fig. 3E shows a video frame of the original video Travel; and
Fig. 4 compares the de-jittering results of the video de-jittering method of Fig. 1 with those of two prior-art video de-jittering methods on the original videos of Figs. 3A–3E.
Detailed description of the embodiments
To make the technical content disclosed in this application more detailed and complete, reference may be made to the accompanying drawings of the embodiments of the present invention; the technical solutions and implementation details of the invention are further described below.
Fig. 1 shows the flowchart of the broad-learning-based video de-jittering method of the present invention; Fig. 2 shows the architecture of the broad learning network that implements the video de-jittering method of Fig. 1; Figs. 3A–3E show a video frame of each of the original videos Interview, Cable, Chicken, CheckingEmail and Travel, respectively; and Fig. 4 compares the de-jittering results of the method of Fig. 1 and two prior-art video de-jittering methods on the original videos of Figs. 3A–3E.
The hardware used in the present invention is a computer with a 2.40 GHz CPU and 8 GB of memory; the software tool is Matlab 2014b.
Referring to Fig. 1, in this embodiment, the broad-learning-based video de-jittering method of the present application is realized mainly through the following steps.
First, in step S1, from the current frame to be processed I_n of the original video, the corresponding frame P_n of the video processed frame by frame with an image processing method, and the previous frame O_{n−1} of the corresponding frame of the output video of a non-learning processing method (that is, a traditional processing method), the input data X_n of the training set and the input data F_n of the test set are obtained, where X_n = [I_n | P_n | O_{n−1}] and F_n = [I_n | P_n].
In the data used to train the broad learning network, both the video content fidelity between the corresponding output frame O_n and P_n and the temporal continuity between the output frame O_n and its previous frame O_{n−1} must be taken into account. We therefore first take the corresponding frames of the original video, the processed video and the previous output video as the input of the primary feature mapping, X_n = [I_n | P_n | O_{n−1}]. The i-th group of primary features obtained through the mapping function is Z_i^n = φ(X_n·W_ei + β_ei), where φ can be an arbitrary mapping function, for example a sigmoid or tangent function, and W_ei and β_ei are randomly generated weights and biases of suitable dimensions. In the n-th network used to reconstruct O_n, if there are m groups of primary mapped features, we let Z^n ≡ [Z_1^n, Z_2^n, …, Z_m^n] denote the m groups of primary mapped features in the broad learning network of the n-th de-jittering, as shown in Fig. 2.
Next, in step S2, feature enhancement is performed on the m groups of primary features generated in step S1, yielding the enhancement features H_j^n = ξ_j(Z^n·W_hj + β_hj), where ξ_j(·) can be an arbitrary sigmoid or tangent function, and W_hj and β_hj are randomly generated weights and biases of suitable dimensions. In the n-th network used to reconstruct O_n, if there are p groups of enhancement features, we let H^n ≡ [H_1^n, H_2^n, …, H_p^n] denote the p groups of enhancement features in the broad learning network of the n-th de-jittering, as shown in Fig. 2.
After the m groups of primary features Z^n and the p groups of enhancement features H^n of the broad learning network of the n-th de-jittering are obtained, we let A_n = [Z^n | H^n] denote all features extracted in that network. A_n is then connected to the output layer O_n through the target weights ω_n to be solved. Once the target weights ω_n have been solved, the output of the test set in the broad learning network is Y_n = A_n·ω_n. Note that on the training set the output frame O_n is known, obtained by a traditional non-learning de-jittering method; in the stage of training the broad learning network, the only unknown is the target weight ω_n connecting the feature layer and the output layer. On the test set the output frame Y_n is unknown and can be solved with the trained broad learning network, that is, Y_n = A_n·ω_n. A sketch of this feature construction is given below.
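As an illustration only, the feature construction of steps S1 and S2 can be sketched in a few lines of NumPy. This is not the patent's reference implementation (which was written in Matlab); the frame data are assumed to be flattened into the rows of X, and the function and variable names are chosen here for readability:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def broad_features(X, We, be, Wh, bh):
    """Build the feature matrix A_n = [Z^n | H^n] of the broad learning network.

    X  : (N, d) input rows, each a flattened [I_n | P_n | O_{n-1}] sample
    We : list of m random mapping weight matrices, be: matching biases
    Wh : list of p random enhancement weight matrices, bh: matching biases
    """
    # m groups of primary features: Z_i^n = phi(X_n * W_ei + beta_ei)
    Zn = np.hstack([sigmoid(X @ W + b) for W, b in zip(We, be)])
    # p groups of enhancement features: H_j^n = xi(Z^n * W_hj + beta_hj)
    Hn = np.hstack([sigmoid(Zn @ W + b) for W, b in zip(Wh, bh)])
    # concatenation of step D): A_n = [Z^n | H^n]
    return np.hstack([Zn, Hn])
```

Once the target weights ω_n have been solved on the training set, the test-set output of step F) is simply `Y = broad_features(F, We, be, Wh, bh) @ omega`, reusing the same random weights and biases.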
In steps S31 and S32, in solving the unknown weights ω_n of the broad learning network that realizes video de-jittering, video temporal continuity and video content fidelity must be considered simultaneously.
Specifically, when considering the temporal continuity between consecutive video frames, we let the energy loss cost of temporal continuity between consecutive frames of the output video be C_t, where the target weights ω_n can be used to minimize the difference between the output frame of the test set and its previous frame, so that this energy loss factor can be computed:

C_t = ||A_n·ω_n − O_{n−1}||^2
where ||·||_2 denotes the L2 norm (the square root of the sum of squares of the elements of a vector), and O_{n−1} denotes, on the training set, the (n−1)-th frame obtained by the traditional video de-jittering method and, on the test set, the (n−1)-th frame output by the broad learning network whose target weights ω_n have already been solved.
Similarly, in order to preserve in the output video, as much as possible, the content of the dynamic scenes of the processed video, when considering video content fidelity we need to minimize the difference between the processed video and the output video, and we let the energy loss cost between the output video and the processed video be C_f. The target weights ω_n can be used to minimize the difference between the n-th video frame of the output video of the test set and the n-th video frame of the processed video, so that the energy loss factor of video content fidelity can be computed:

C_f = ||A_n·ω_n − P_n||^2

where P_n denotes the n-th frame of the processed video.
In step S4, combining the video temporal continuity constraint and the video content fidelity constraint, the energy function E with video temporal continuity C_t and video content fidelity C_f as constraint conditions is constructed; the weights ω_n satisfying the energy function E are solved by least angle regression, and ω_n is used as the target weights of the broad learning network connecting the feature layer and the output layer. The energy function E can be expressed as:

E = ||A_n·ω_n − O_n||^2 + λ_1·||ω_n||_1 + λ_2·||ω_n||_2 + λ_t·C_t + λ_f·C_f

where the first term minimizes the difference between the output frame A_n·ω_n obtained on the training set and the output frame O_n obtained by the traditional video de-jittering method, improving the accuracy of the broad learning model; the second term λ_1·||ω_n||_1 and the third term λ_2·||ω_n||_2 are regularization terms that prevent overfitting, λ_1 and λ_2 being the regularization coefficients of the L1 norm and the L2 norm respectively; and λ_t and λ_f are the coefficients of video temporal continuity and video content fidelity respectively.
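A minimal sketch of this weight-solving step, assuming scikit-learn's LARS-based lasso solver (`LassoLars`) as a stand-in for the least angle regression named above: the three quadratic terms of E are folded into a single augmented least-squares problem (extra rows weighted by the square roots of λ_t, λ_f and, for the L2 regularizer, λ_2), leaving only the L1 term for the LARS solver. The frames O_n, O_{n−1} and P_n are assumed flattened into vectors aligned with the rows of A, and the λ_2 term is assumed squared:

```python
import numpy as np
from sklearn.linear_model import LassoLars

def solve_target_weights(A, O_n, O_prev, P_n, lam_t, lam_f, lam_1, lam_2):
    """Solve min_w ||A*w - O_n||^2 + lam_t*||A*w - O_prev||^2
              + lam_f*||A*w - P_n||^2 + lam_1*||w||_1 + lam_2*||w||_2^2
    via row augmentation plus a LARS-based lasso."""
    n_feat = A.shape[1]
    A_aug = np.vstack([
        A,
        np.sqrt(lam_t) * A,                 # temporal-continuity term C_t
        np.sqrt(lam_f) * A,                 # content-fidelity term C_f
        np.sqrt(lam_2) * np.eye(n_feat),    # L2 regularizer as extra rows
    ])
    y_aug = np.concatenate([
        O_n,
        np.sqrt(lam_t) * O_prev,
        np.sqrt(lam_f) * P_n,
        np.zeros(n_feat),
    ])
    # scikit-learn scales the L1 penalty by the row count, so rescale lam_1
    model = LassoLars(alpha=lam_1 / (2 * len(y_aug)), fit_intercept=False)
    model.fit(A_aug, y_aug)
    return model.coef_  # the target weights omega_n
```

The augmentation is exact for the quadratic terms: expanding ||A_aug·w − y_aug||^2 recovers the first four terms of E, so the single L1-penalized least-squares problem handed to LassoLars is equivalent, under the stated assumptions, to minimizing E.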
The unknown weights ω_n in the above formula can thus be solved by the method of least angle regression, thereby determining the broad-learning-based video de-jittering model. As shown in Figs. 3A–3E and Fig. 4, when the video de-jittering method of Fig. 1 is compared with existing video de-jittering methods on the Interview, Cable, Chicken, CheckingEmail and Travel videos, the peak signal-to-noise ratio (PSNR) values of the output videos obtained with the prior-art method of Lang et al. (curve 2), the prior-art method of Bonneel et al. (curve 3) and the de-jittering method of the present application (curve 1) are shown by the vertical dashed lines in Fig. 4. The jitter in the Interview, Cable, Chicken, CheckingEmail and Travel videos of Figs. 3A–3E comes from processing the respective original videos frame by frame with image-based color grading, spatial white balance, intrinsic image decomposition, high-dynamic-range mapping and dehazing, without considering the temporal consistency between consecutive frames. Since the PSNR value reflects the quality of the output video and the de-jittering effect, a higher PSNR means better output quality and better de-jittering. It can be seen from Fig. 4 that the de-jittering performance of the method of the present application (curve 1) is consistently better than that of the traditional de-jittering methods (curves 2 and 3) under the various PSNR measurements.
The specific embodiments of the present invention have been described above with reference to the accompanying drawings. Those skilled in the art will understand, however, that equivalent substitutions can be made to the specific embodiments of the present invention without departing from its spirit, scope and essential core; such modifications and substitutions shall all fall within the scope defined by the claims of the present invention.

Claims (7)

1. A video de-jittering method based on broad learning, characterized in that the video de-jittering method comprises the following steps:
A) from the current frame to be processed I_n of the original video, the corresponding frame P_n of the video processed frame by frame with an image processing method, and the previous frame O_{n−1} of the corresponding frame of the output video of a non-learning processing method, obtaining the input data X_n of the training set and the input data F_n of the test set, where X_n = [I_n | P_n | O_{n−1}] and F_n = [I_n | P_n];
B) extracting from the input data X_n, using a mapping function, the primary features Z_i^n that realize video temporal continuity, where the i-th group of primary features is expressed as:

Z_i^n = φ(X_n·W_ei + β_ei), i = 1, 2, …, m

where W_ei and β_ei denote randomly generated weights and biases, and φ is the mapping function;
C) performing feature enhancement on the extracted primary features using an activation function, obtaining the enhancement features H_j^n, where the j-th group of enhancement features is expressed as:

H_j^n = ξ_j(Z^n·W_hj + β_hj), j = 1, 2, …, p

where W_hj and β_hj denote randomly generated weights and biases, ξ_j is the activation function, and Z^n ≡ [Z_1^n, Z_2^n, …, Z_m^n] denotes all m groups of primary features;
D) concatenating the extracted primary features Z^n and the enhancement features H^n ≡ [H_1^n, H_2^n, …, H_p^n], where H^n denotes all p groups of enhancement features, to obtain all features extracted in the n-th network:

A_n = [Z^n | H^n];
E) on the training set, constructing the energy function E with video temporal continuity C_t and video content fidelity C_f as constraint conditions, defined as:

E = ||A_n·ω_n − O_n||^2 + λ_1·||ω_n||_1 + λ_2·||ω_n||_2 + λ_t·C_t + λ_f·C_f

solving by least angle regression for the weights ω_n that satisfy the above energy function E, and using ω_n as the target weights of the broad learning network connecting the feature layer and the output layer;
F) on the test set, from the target weights ω_n and all features A_n extracted in the n-th network, obtaining the output Y_n of the test set of the broad learning network:

Y_n = A_n·ω_n

where the test-set output Y_n is the de-jittered output frame produced by broad learning.
2. The video de-jittering method according to claim 1, characterized in that the mapping function φ is a sigmoid function or a tangent function.
3. The video de-jittering method according to claim 1, characterized in that the activation function ξ_j is a sigmoid function or a tangent function.
4. The video de-jittering method according to claim 1, characterized in that the weights ω_n are used to minimize the difference between the output frame of the test set and its previous frame so as to compute the energy loss factor of temporal continuity between consecutive frames of the output video:

C_t = ||A_n·ω_n − O_{n−1}||^2
5. The video de-jittering method according to claim 1, characterized in that the weights ω_n are used to minimize the difference between the n-th video frame of the output video of the test set and the n-th video frame of the processed video so as to compute the energy loss factor of video content fidelity:

C_f = ||A_n·ω_n − P_n||^2
6. The video de-jittering method according to claim 1, characterized in that when the weights ω_n are used as the target weights of the broad learning network connecting the feature layer and the output layer, the constraint conditions of video temporal continuity and video content fidelity are satisfied simultaneously.
7. The video de-jittering method according to claim 1, characterized in that the frame-by-frame image processing methods applied to the processed video include color grading, spatial white balance, color harmonization and high-dynamic-range mapping.
CN201810682319.5A 2018-06-27 2018-06-27 Video de-jittering method based on broad learning Active CN108924385B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810682319.5A CN108924385B (en) 2018-06-27 2018-06-27 Video de-jittering method based on broad learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810682319.5A CN108924385B (en) 2018-06-27 2018-06-27 Video de-jittering method based on broad learning

Publications (2)

Publication Number Publication Date
CN108924385A 2018-11-30
CN108924385B 2020-11-03

Family

ID=64421608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810682319.5A Active CN108924385B (en) Video de-jittering method based on broad learning

Country Status (1)

Country Link
CN (1) CN108924385B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101616310A * 2009-07-17 2009-12-30 Tsinghua University Target image stabilization method for a binocular vision system with variable viewing angle and resolution
CN103929568A * 2013-01-11 2014-07-16 Sony Corporation Method for stabilizing a first sequence of digital image frames and image stabilization unit
US20170278223A1 * 2016-03-22 2017-09-28 Kabushiki Kaisha Toshiba Image adjustment
CN107481185A * 2017-08-24 2017-12-15 Shenzhen Weiteshi Technology Co., Ltd. Style transfer method based on video image optimization
CN107808144A * 2017-11-10 2018-03-16 Shenzhen Weiteshi Technology Co., Ltd. Self-supervised embedded pose learning method based on video spatio-temporal relationships

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Xin Li et al., "Video Processing Via Implicit and Mixture Motion Models", IEEE Transactions on Circuits and Systems for Video Technology *
Kong Yue et al., "Improved stereoscopic video stabilization method based on spatio-temporal consistency", Video Engineering *
Wang Feng et al., "Maritime video de-jittering using steady optical flow estimation", Journal of Image and Graphics *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109905565A * 2019-03-06 2019-06-18 Nanjing University of Science and Technology Video de-jittering method based on motion pattern separation
CN109905565B * 2019-03-06 2021-04-27 Nanjing University of Science and Technology Video de-jittering method based on motion pattern separation
CN110222234A * 2019-06-14 2019-09-10 Beijing QIYI Century Science & Technology Co., Ltd. Video classification method and device
CN110472741A * 2019-06-27 2019-11-19 Guangdong University of Technology Three-domain fuzzy wavelet broad learning filtering system and method
CN110472741B * 2019-06-27 2022-06-03 Guangdong University of Technology Three-domain fuzzy wavelet broad learning filtering system and method

Also Published As

Publication number Publication date
CN108924385B (en) 2020-11-03

Similar Documents

Publication Publication Date Title
US11537873B2 (en) Processing method and system for convolutional neural network, and storage medium
Celik Spatial mutual information and PageRank-based contrast enhancement and quality-aware relative contrast measure
US9619749B2 (en) Neural network and method of neural network training
KR102166105B1 (en) Neural network and method of neural network training
CN108924385A Video de-jittering method based on broad learning
CN109961396B (en) Image super-resolution reconstruction method based on convolutional neural network
CN110717953B Coloring method and system for black-and-white pictures based on a combined CNN-LSTM model
CN109510918A (en) Image processing apparatus and image processing method
TWI664853B (en) Method and device for constructing the sensing of video compression
CN112823379A (en) Method and device for training machine learning model and device for video style transfer
Hristova et al. Style-aware robust color transfer.
CN108288253A (en) HDR image generation method and device
CN112037144A (en) Low-illumination image enhancement method based on local contrast stretching
CN111047543A (en) Image enhancement method, device and storage medium
CN109829510A (en) A kind of method, apparatus and equipment of product quality classification
CN109064431B (en) Picture brightness adjusting method, equipment and storage medium thereof
Daga et al. Image compression using harmony search algorithm
He et al. A night low‐illumination image enhancement model based on small probability area filtering and lossless mapping enhancement
CN115908602A (en) Style migration method for converting landscape photos into Chinese landscape paintings
Tao et al. The discretization of continuous attributes based on improved SOM clustering
Vamsidhar et al. Image Enhancement Using Chicken Swarm Optimization
Celebi et al. Histogram Equalization for Grayscale Images and Comparison with OpenCV Library
CN108038828B (en) Image denoising method based on self-adaptive weighted total variation
Mcvey et al. Towards a generic neural network architecture for approximating tone mapping algorithms
US20220237412A1 (en) Method for modelling synthetic data in generative adversarial networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant