CN115695803B - Inter-frame image coding method based on extreme learning machine - Google Patents

Inter-frame image coding method based on extreme learning machine

Info

Publication number
CN115695803B
CN115695803B (application CN202310000697.1A)
Authority
CN
China
Prior art keywords
coding
frame
division
inter
intra
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310000697.1A
Other languages
Chinese (zh)
Other versions
CN115695803A (en)
Inventor
蒋先涛
柳云夏
郭咏梅
郭咏阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Kangda Kaineng Medical Technology Co ltd
Original Assignee
Ningbo Kangda Kaineng Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Kangda Kaineng Medical Technology Co ltd filed Critical Ningbo Kangda Kaineng Medical Technology Co ltd
Priority to CN202310000697.1A priority Critical patent/CN115695803B/en
Publication of CN115695803A publication Critical patent/CN115695803A/en
Application granted granted Critical
Publication of CN115695803B publication Critical patent/CN115695803B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Image Analysis (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses an inter-frame image coding method based on an extreme learning machine, relating to the technical field of image processing and comprising the following steps: extracting the intra-frame coding frame of the current image group in a target video sequence; extracting, as a training set, the division-related feature vector of each coding unit at each coding depth of the intra-frame coding frame, together with the corresponding division result; training an initialized ELM classifier on the training set under the dual optimization problem; acquiring the feature vector of a target coding unit, obtained after coding division at the current coding depth, of a target inter-frame image in the current image group; and deciding the division mode from the feature vector with the trained ELM classifier. The invention uses the extreme learning machine to learn the feature relations, so that the coding division decision no longer requires computing rate-distortion costs under the different coding division modes, which greatly reduces the computational complexity of coding and improves coding efficiency.

Description

Inter-frame image coding method based on extreme learning machine
Technical Field
The invention relates to the technical field of image processing, in particular to an inter-frame image coding method based on an extreme learning machine.
Background
With the development of video technology, the previous-generation video compression coding standard HEVC has become unable to meet people's growing demands. The newly proposed VVC encoder still follows the mainstream video coding framework, that is, a block-based hybrid coding framework, mainly comprising intra-frame prediction, inter-frame prediction, transform, quantization, entropy coding, loop filtering and other modules. Although the change of coding standard improves coding efficiency, it also increases coding complexity and coding time, so the overall improvement is not ideal.
With the development of video technology and the demand for real-time coding, accelerating video coding has become a research hotspot. Coding-algorithm acceleration mainly improves on existing coding algorithms: it reduces algorithmic complexity by reducing the amount of computation, at the expense of some compression performance, and is mainly embodied in improvements to the coding unit division process. Such algorithms fall into two main categories: those based on statistical analysis and those based on machine learning. However, existing methods do not balance coding quality against coding computational complexity well.
Disclosure of Invention
In order to better balance coding efficiency and coding complexity, the invention provides an inter-frame image coding method based on an extreme learning machine, which comprises the following steps:
s1: extracting an intra-frame coding frame in a current image group in a target video sequence;
s2: extracting, as a training set, the division-related feature vector of each coding unit at each coding depth of the intra-frame coding frame, together with the corresponding division result;
s3: training the initialized ELM classifier under the dual optimization problem through a training set;
s4: acquiring a characteristic vector of a target coding unit after coding division under the current coding depth of a target inter-frame image in a current image group;
s5: judging the division mode from the feature vector with the trained ELM classifier; if the maximum coding depth has not been reached, entering the next coding depth and returning to step S4.
Further, in the step S2, the feature vector of the coding unit includes a rate distortion cost, a coding depth, and a prediction residual.
Further, the ELM classifier includes n input-layer nodes, L hidden-layer nodes and m output-layer nodes, where the value of n is the number of coding units in the intra-frame coding frame, the value of L is the global optimal solution obtained by training, and the value of m is the number of coding division modes of the current video coding standard.
Further, in the step S2, the training set is expressed as $\{(x_i, t_i)\}_{i=1}^{n}$, where $x_i$ is the input set numbered $i$, consisting of the feature vector of the $i$-th coding unit in the intra-coded frame, $T = \{t_1, \ldots, t_n\}$ is the expected-value output set, and $t_i$ is the expected value corresponding to $x_i$.
Further, in the step S3, the dual optimization problem of the ELM classifier is expressed as the following formula:

$$L_{\mathrm{ELM}} = \frac{1}{2}\|\beta\|^2 + \frac{C}{2}\sum_{i=1}^{n}\|\xi_i\|^2 - \sum_{i=1}^{n}\sum_{j=1}^{m}\alpha_{i,j}\left(h(x_i)\beta_j - t_{i,j} + \xi_{i,j}\right)$$

where $L_{\mathrm{ELM}}$ is the dual-optimized objective, $i$ and $j$ are index constants, $\beta$ is the matrix, of size $L \times m$, of connection weights between the $i$-th hidden-layer neuron and the $j$-th output-layer neuron, $C$ is a constant, $x_i$ is the input set numbered $i$ consisting of the feature vector of the $i$-th coding unit in the intra-coded frame, $\xi_i$ is the error between the actual value and the expected value of the division pattern corresponding to input set $x_i$, $\alpha_{i,j}$ is the Lagrange multiplier (greater than zero) for the $i$-th input and the $j$-th output, $h(x_i)$ is the response of all hidden-layer neurons to input set $x_i$, $\beta_j$ is the vector of connection weights between each hidden-layer neuron and the $j$-th output-layer neuron, $t_{i,j}$ is the expected value of the division mode for the $i$-th input and the $j$-th output, and $\xi_{i,j}$ is the error between the actual value of the $i$-th input and the expected value of the $j$-th output.
Further, in the step S5, the trained ELM classifier is expressed as the following formula:

$$f(x) = h(x)\beta = h(x)H^{T}\left(\frac{I}{C} + HH^{T}\right)^{-1}T$$

where $f(x)$ is the output of the ELM classifier, $H$ is the output matrix of the hidden layer, composed of the responses $h(x_i)$ over the $n$ training samples, $T$ is the expected-value set, and the superscript $T$ on a matrix denotes transposition of that matrix.
Further, in the step S5, the division pattern is obtained by the following formula:

$$\mathrm{label}(x) = \arg\max_{j \in \{1,\ldots,m\}} f_j(x)$$

where $x$ is the feature vector of the target coding unit and $\mathrm{label}(x)$ is the partition mode label of the target coding unit.
Compared with the prior art, the invention provides at least the following beneficial effects:
(1) The inter-frame image coding method based on the extreme learning machine uses the extreme learning machine to learn the feature relation among the rate-distortion cost, the prediction residual, the coding depth and the coding division mode, so that in subsequent inter-frame coding the division mode can be predicted directly from the acquired feature vector. Because rate-distortion costs no longer need to be computed for the different candidate division modes, the computational complexity of coding is greatly reduced and the coding efficiency is improved;
(2) By exploiting the fact that the intra-frame coding frame contains complete coding information, the feature relations learned by the extreme learning machine can be ensured, to the greatest extent, to suit the division-mode prediction of the remaining inter-frame images of the current picture group, thereby guaranteeing coding quality.
Drawings
FIG. 1 is a step diagram of an extreme learning machine-based inter-frame image encoding method;
FIG. 2 is a schematic diagram of a partitioning mode of a VVC standard;
fig. 3 is a schematic diagram of the structure of an ELM classifier.
Detailed Description
The following are specific embodiments of the present invention and the technical solutions of the present invention will be further described with reference to the accompanying drawings, but the present invention is not limited to these embodiments.
Example 1
Similar to the HEVC coding standard, the VVC coding standard uses a block-based coding scheme, in which each frame is first divided into coding tree units (CTUs), and the CTUs are then further divided into smaller coding units (CUs). In the intra-prediction process, the obvious difference between the two is that HEVC allows only quadtree (QT) division, while VVC introduces a quadtree with nested multi-type tree (QTMT) division structure. As shown in fig. 2, a CU in VVC has 6 coding division modes: non-split (NS), quadtree split (QT), horizontal binary-tree split (BTH), vertical binary-tree split (BTV), horizontal ternary-tree split (TTH), and vertical ternary-tree split (TTV) (an illustrative enumeration of these six modes follows the step list below). Compared with the earlier HEVC coding standard, the VVC coding standard thus has four more division modes; if division modes continue to be selected with the rate-distortion cost as the decision criterion, the computational complexity inevitably increases roughly threefold, reducing the overall coding efficiency. Therefore, in order to effectively balance coding quality and coding efficiency, as shown in fig. 1, the present invention proposes an inter-frame image coding method based on an extreme learning machine, comprising the steps of:
s1: extracting an intra-frame coding frame in a current image group in a target video sequence;
s2: extracting, as a training set, the division-related feature vector of each coding unit at each coding depth of the intra-frame coding frame, together with the corresponding division result;
s3: training the initialized ELM classifier under the dual optimization problem through a training set;
s4: acquiring a characteristic vector of a target coding unit after coding division under the current coding depth of a target inter-frame image in a current image group;
s5: judging the division mode from the feature vector with the trained ELM classifier; if the maximum coding depth has not been reached, entering the next coding depth and returning to step S4.
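For illustration, the six QTMT division modes listed above map naturally onto integer class labels for the classifier's m = 6 outputs. A minimal Python sketch (the enumeration and its names follow the abbreviations above; they are illustrative, not taken from any codec implementation):

```python
from enum import IntEnum

class VVCSplitMode(IntEnum):
    """The six CU division modes of the VVC QTMT structure (illustrative labels)."""
    NS  = 0  # non-split: the CU is coded as-is
    QT  = 1  # quadtree split into four square sub-CUs
    BTH = 2  # horizontal binary-tree split into two halves
    BTV = 3  # vertical binary-tree split into two halves
    TTH = 4  # horizontal ternary-tree split (1:2:1)
    TTV = 5  # vertical ternary-tree split (1:2:1)
```

Each label corresponds to one of the m = 6 output-layer neurons of the ELM classifier described below.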
The extreme learning machine (ELM) is a machine learning system built on a feedforward neural network; it is an improved algorithm based on the single-hidden-layer feedforward neural network (SLFN). The greatest advantage of the ELM algorithm over the conventional SLFN algorithm is that no parameters need to be updated during training, such as the weights between the input layer and the hidden layer or the thresholds of the hidden-layer neurons. Once the number of hidden-layer nodes is determined, the ELM algorithm obtains a unique global optimal solution, giving it good generalization and universal approximation capability.
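As a minimal sketch of this property (assuming NumPy and a sigmoid hidden-layer activation; all names are illustrative), the hidden-layer parameters are drawn once at random and only the output weights are solved in closed form, so no iterative parameter updates are needed:

```python
import numpy as np

def train_elm(X, T, L, seed=0):
    """Basic ELM fit: a random, fixed hidden layer plus least-squares output weights.

    X: (n, d) training inputs; T: (n, m) one-hot expected outputs; L: hidden nodes.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], L))  # input-to-hidden weights, never updated
    b = rng.standard_normal(L)                # hidden-node biases, never updated
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))    # hidden-layer responses h(x_i), one row per sample
    beta = np.linalg.pinv(H) @ T              # output weights in closed form, no iteration
    return W, b, beta
```

The only free choice is the number L of hidden nodes; once it is fixed, beta is determined uniquely by the least-squares solution, which is the global-optimal-solution property described above.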
Based on these excellent characteristics of the extreme learning machine, the present invention proposes to use it for the division-mode decision of inter-frame coding. To better let the extreme learning machine capture the relation between the division-mode decision and the related information in the inter-frame images, the invention selects the intra-frame coding frame in the current picture group of the target video as the source of training parameters. Since an intra-coded reference frame lacks no coding information when compressed, it can be decompressed on its own into a single complete video picture. Consecutive inter-frame images within the same picture group have content continuity, so the complete coding information of the intra-frame coding frame can be fully exploited to train the extreme learning machine on the logical relation between the relevant feature parameters and the coding division-mode decision.
To make the training set more targeted and avoid analyzing unnecessary features during the training of the extreme learning machine, the invention selects the rate-distortion cost and the prediction residual associated with coding division as features, and adds the coding depth, which also influences the division-mode decision, into the feature vector $x_i$ (throughout this embodiment $i$ denotes an index constant; here $x_i$ is the input set numbered $i$, represented by the feature vector of the $i$-th coding unit in the intra-coded frame). Therefore, when the extreme learning machine is trained, the training set consists of the feature vector of each coding unit at each coding depth of the intra-frame coding frame together with the corresponding division result.
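A sketch of how such a training set might be assembled from one intra-coded frame (the CodingUnit fields and the iteration interface are assumptions made for illustration; a real encoder would expose these values during its rate-distortion optimization pass):

```python
import numpy as np

def build_training_set(intra_frame_cus, num_modes=6):
    """Collect (feature vector, division result) pairs from one intra-coded frame.

    Each coding unit is assumed to carry the rate-distortion cost, coding depth and
    prediction-residual energy observed while it was encoded, plus the division
    mode (an integer label 0..num_modes-1) the encoder finally chose for it.
    """
    X, T = [], []
    for cu in intra_frame_cus:                    # one training sample per coding unit
        X.append([cu.rd_cost, cu.depth, cu.residual_energy])
        t = np.zeros(num_modes)
        t[cu.chosen_split_mode] = 1.0             # one-hot expected-value vector t_i
        T.append(t)
    return np.asarray(X), np.asarray(T)           # X: (n, 3), T: (n, m)
```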
As shown in fig. 3, the neuron configuration of the ELM classifier comprises n input-layer neurons, L hidden-layer neurons and m output-layer neurons, where the value of n depends on the number of coding units in the current intra-frame coding frame, the value of L is the global optimal solution obtained by the final training, and the value of m is the number of coding division modes of the current video coding standard (taking the VVC coding standard as an example, m = 6). For each intra-coded frame, the ELM input set (i.e., the feature vector set) is $\{x_i\}_{i=1}^{n}$; $b_i$ is the node bias of the $i$-th hidden-layer node, $b_o$ is the bias of the output-layer node, $g(\cdot)$ is the hidden-layer excitation function, with a linear function as the output-layer excitation function, and $h(x)$ denotes the response of all hidden-layer neurons to the input set $x$. The expression of $h(x)$ is:

$$h(x) = \left[g(w_1 \cdot x + b_1), \ldots, g(w_L \cdot x + b_L)\right] \qquad (1)$$

In formula (1), $w_i$ is the connection weight between each input-layer neuron and the $i$-th hidden-layer neuron. The output can then be computed as:

$$f(x) = h(x)\beta \qquad (2)$$

In formula (2), $f(x)$ is the output matrix of the output layer, and $\beta$ is the matrix, of size $L \times m$, of connection weights between the $i$-th hidden-layer neuron and the $j$-th output-layer neuron.
According to the training target of the extreme learning machine, the error $\xi$ between the actual output of the neural network and the expected output should satisfy, as far as possible, the formula:

$$\sum_{i=1}^{n}\left\|f(x_i) - t_i\right\| = 0 \qquad (3)$$

In formula (3), $T$ is the expected-value output set, and $t_i$ is the expected value corresponding to $x_i$. From this formula it can be seen that there exist suitable $w_i$, $b_i$ and $\beta$ satisfying the following formula:

$$H\beta = T \qquad (4)$$

In formula (4), $H$ is the output matrix of the hidden layer and $T$ is the expected-value set. It should be noted that, in the present embodiment, the superscript $T$ always denotes transposition of the matrix to which it is attached, which has a different meaning from the expected-value set $T$.
According to statistical learning theory, the actual risk is composed of the empirical risk and the structural risk, so both the output weights and the actual error must be minimized: that is, during learning the extreme learning machine must minimize $\|\beta\|$ and $\|\xi\|$, expressed by the formulas:

$$\min \; L_{\mathrm{ELM}} = \frac{1}{2}\|\beta\|^2 + \frac{C}{2}\sum_{i=1}^{n}\|\xi_i\|^2 \qquad (5)$$

$$\text{s.t.} \;\; h(x_i)\beta = t_i^{T} - \xi_i^{T}, \quad i = 1, \ldots, n \qquad (6)$$

In formulas (5) and (6), $L_{\mathrm{ELM}}$ is the objective to be minimized, $C$ is a constant, $h(x_i)$ is the response of all hidden-layer neurons to input set $x_i$, and $\xi_i$ is the error between the actual value and the expected value of the division pattern corresponding to input set $x_i$. According to the KKT theory (computing the minimum of a function under weak constraints), training the ELM is equivalent to solving the following dual optimization problem:
$$L_{\mathrm{ELM}} = \frac{1}{2}\|\beta\|^2 + \frac{C}{2}\sum_{i=1}^{n}\|\xi_i\|^2 - \sum_{i=1}^{n}\sum_{j=1}^{m}\alpha_{i,j}\left(h(x_i)\beta_j - t_{i,j} + \xi_{i,j}\right) \qquad (7)$$

In formula (7), $L_{\mathrm{ELM}}$ is the dual-optimized objective, $\alpha_{i,j}$ is the Lagrange multiplier (greater than zero) for the $i$-th input and the $j$-th output, $\beta_j$ is the vector of connection weights between each hidden-layer neuron and the $j$-th output-layer neuron, $t_{i,j}$ is the expected value of the division mode for the $i$-th input and the $j$-th output, and $\xi_{i,j}$ is the error between the actual value of the $i$-th input and the expected value of the $j$-th output.
If the output weights and the error are minimized, then:

$$\frac{\partial L_{\mathrm{ELM}}}{\partial \beta_j} = 0 \;\Rightarrow\; \beta_j = \sum_{i=1}^{n}\alpha_{i,j}\,h(x_i)^{T} \;\Rightarrow\; \beta = H^{T}\alpha \qquad (8)$$

$$\frac{\partial L_{\mathrm{ELM}}}{\partial \xi_i} = 0 \;\Rightarrow\; \alpha_i = C\,\xi_i \qquad (9)$$

$$\frac{\partial L_{\mathrm{ELM}}}{\partial \alpha_i} = 0 \;\Rightarrow\; h(x_i)\beta - t_i^{T} + \xi_i^{T} = 0 \qquad (10)$$

In formulas (8), (9) and (10), $\alpha_i$ is the Lagrange multiplier (greater than zero) corresponding to the $i$-th input and each output layer, and $\alpha = [\alpha_1, \ldots, \alpha_n]^{T}$ is the matrix formed by the Lagrange multipliers. Expressing formula (10) in matrix form gives:

$$H\beta - T + \frac{\alpha}{C} = 0 \qquad (11)$$

From formulas (8) and (11), it is possible to obtain:

$$\beta = H^{T}\left(\frac{I}{C} + HH^{T}\right)^{-1}T \qquad (12)$$
The output of the ELM classifier can therefore be expressed as:

$$f(x) = h(x)\beta = h(x)H^{T}\left(\frac{I}{C} + HH^{T}\right)^{-1}T \qquad (13)$$

In formula (13), let $f_i(x)$ denote the output function of the $i$-th output node, i.e. $f(x) = [f_1(x), \ldots, f_m(x)]$. Then, with the trained ELM classifier, the division mode can be acquired by selecting the ELM classifier output with the highest probability:

$$\mathrm{label}(x) = \arg\max_{j \in \{1,\ldots,m\}} f_j(x) \qquad (14)$$

In formula (14), $x$ is the feature vector of the target coding unit and $\mathrm{label}(x)$ is the partition mode label of the target coding unit.
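Formulas (12) to (14) translate directly into a few lines of linear algebra. The following sketch (NumPy; it reuses the random hidden-layer weights W and biases b from the earlier training sketch) is one possible realization under those assumptions, not a reference implementation:

```python
import numpy as np

def fit_output_weights(H, T, C):
    """Formula (12): beta = H^T (I/C + H H^T)^{-1} T, with H of shape (n, L)."""
    n = H.shape[0]
    return H.T @ np.linalg.solve(np.eye(n) / C + H @ H.T, T)  # beta: (L, m)

def predict_split_mode(x, W, b, beta):
    """Formulas (13)-(14): evaluate f(x) = h(x) beta and return the argmax label."""
    h = 1.0 / (1.0 + np.exp(-(x @ W + b)))  # hidden-layer response h(x)
    f = h @ beta                            # one score per division mode
    return int(np.argmax(f))                # label(x) = argmax_j f_j(x)
```

Solving the (n x n) linear system once per picture group replaces the per-CU rate-distortion search over all candidate division modes, which is where the complexity saving comes from.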
In general, the intra-frame coding frame of each picture group is used for feature-extraction training of the ELM classifier, yielding formula (13); then the feature vectors of target coding units, obtained after coding division at the various coding depths of the subsequent inter-frame images of the same picture group, are extracted, and formula (14) is used to decide the division mode, so that inter-frame image coding is realized with higher efficiency while coding quality is guaranteed.
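Putting the pieces together, the per-picture-group flow of steps S1 to S5 might look like the following sketch (the gop, frame and cu accessors are hypothetical placeholders, and the helper functions are the ones sketched above; only the control flow mirrors the described method):

```python
import numpy as np

def encode_gop(gop, L=100, C=1.0, max_depth=4, seed=0):
    """Train on the GOP's intra-coded frame, then classify splits for its inter frames."""
    # S1-S3: extract CU features of the intra-coded frame and train the classifier.
    X, T = build_training_set(gop.intra_frame_cus)
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], L))   # random hidden layer, fixed after init
    b = rng.standard_normal(L)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    beta = fit_output_weights(H, T, C)         # formula (12)

    # S4-S5: decide each CU's division mode by classification, descending one
    # coding depth at a time until the maximum coding depth is reached.
    for frame in gop.inter_frames:
        pending = list(frame.root_cus)         # CTUs at coding depth 0
        while pending:
            cu = pending.pop()
            mode = predict_split_mode(cu.feature_vector(), W, b, beta)
            sub_cus = cu.apply_split(mode)     # NS is assumed to return an empty list
            if cu.depth + 1 < max_depth:
                pending.extend(sub_cus)        # return to S4 at the next coding depth
```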
In summary, the inter-frame image coding method based on the extreme learning machine uses the extreme learning machine to learn the feature relation among the rate-distortion cost, the prediction residual, the coding depth and the coding division mode, so that in subsequent inter-frame coding the division mode can be predicted directly from the acquired feature vector.
By exploiting the fact that the intra-frame coding frame contains complete coding information, the feature relations learned by the extreme learning machine can be ensured, to the greatest extent, to suit the division-mode prediction of the remaining inter-frame images of the current picture group, thereby guaranteeing coding quality.
It should be noted that all directional indicators (such as up, down, left, right, front, and rear … …) in the embodiments of the present invention are merely used to explain the relative positional relationship, movement, etc. between the components in a particular posture (as shown in the drawings), and if the particular posture is changed, the directional indicator is changed accordingly.
Furthermore, descriptions such as those referred to herein as "first," "second," "a," and the like are provided for descriptive purposes only and are not to be construed as indicating or implying a relative importance or an implicit indication of the number of features being indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
In the present invention, unless specifically stated and limited otherwise, the terms "connected," "affixed," and the like are to be construed broadly, and for example, "affixed" may be a fixed connection, a removable connection, or an integral body; can be mechanically or electrically connected; either directly or indirectly, through intermediaries, or both, may be in communication with each other or in interaction with each other, unless expressly defined otherwise. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In addition, the technical solutions of the embodiments of the present invention may be combined with each other, but it is necessary to be based on the fact that those skilled in the art can implement the technical solutions, and when the technical solutions are contradictory or cannot be implemented, the combination of the technical solutions should be considered as not existing, and not falling within the scope of protection claimed by the present invention.

Claims (4)

1. An inter-frame image coding method based on an extreme learning machine is characterized by comprising the following steps:
s1: extracting an intra-frame coding frame in a current image group in a target video sequence;
s2: extracting, as a training set, the division-related feature vector of each coding unit at each coding depth of the intra-frame coding frame, together with the corresponding division result;
s3: training the initialized ELM classifier under the dual optimization problem through a training set;
s4: acquiring a characteristic vector of a target coding unit after coding division under the current coding depth of a target inter-frame image in a current image group;
s5: judging the division mode from the feature vector with the trained ELM classifier; if the maximum coding depth has not been reached, entering the next coding depth and returning to step S4;
in the step S3, the dual optimization problem of the ELM classifier is expressed as the following formula:

$$L_{\mathrm{ELM}} = \frac{1}{2}\|\beta\|^2 + \frac{C}{2}\sum_{i=1}^{n}\|\xi_i\|^2 - \sum_{i=1}^{n}\sum_{j=1}^{m}\alpha_{i,j}\left(h(x_i)\beta_j - t_{i,j} + \xi_{i,j}\right)$$

where $L_{\mathrm{ELM}}$ is the dual-optimized objective, $i$ and $j$ are index constants, $\beta$ is the matrix, of size $L \times m$, of connection weights between the $i$-th hidden-layer neuron and the $j$-th output-layer neuron, $C$ is a constant, $x_i$ is the input set numbered $i$ consisting of the feature vector of the $i$-th coding unit in the intra-coded frame, $\xi_i$ is the error between the actual value and the expected value of the division pattern corresponding to input set $x_i$, $\alpha_{i,j}$ is the Lagrange multiplier (greater than zero) for the $i$-th input and the $j$-th output, $h(x_i)$ is the response of all hidden-layer neurons to input set $x_i$, $\beta_j$ is the vector of connection weights between each hidden-layer neuron and the $j$-th output-layer neuron, $t_{i,j}$ is the expected value of the division mode for the $i$-th input and the $j$-th output, and $\xi_{i,j}$ is the error between the actual value of the $i$-th input and the expected value of the $j$-th output;

in the step S5, the trained ELM classifier is expressed as the following formula:

$$f(x) = h(x)\beta = h(x)H^{T}\left(\frac{I}{C} + HH^{T}\right)^{-1}T$$

where $f(x)$ is the output of the ELM classifier, $H$ is the output matrix of the hidden layer, composed of the responses $h(x_i)$ over the $n$ training samples, $T$ is the expected-value set, and the superscript $T$ on a matrix denotes transposition of that matrix;

the partition pattern is obtained by the following formula:

$$\mathrm{label}(x) = \arg\max_{j \in \{1,\ldots,m\}} f_j(x)$$

where $x$ is the feature vector of the target coding unit and $\mathrm{label}(x)$ is the partition mode label of the target coding unit.
2. The method for coding an inter-frame image based on an extreme learning machine according to claim 1, wherein in the step S2, the feature vector of the coding unit includes a rate-distortion cost, a coding depth, and a prediction residual.
3. The method for coding an inter-frame image based on an extreme learning machine according to claim 1, wherein the ELM classifier comprises n input-layer nodes, L hidden-layer nodes and m output-layer nodes, wherein the value of n is the number of coding units in the intra-frame coding frame, the value of L is the global optimal solution obtained through training, and the value of m is the number of coding division modes of the current video coding standard.
4. The method for encoding an inter-frame image based on an extreme learning machine as claimed in claim 3, wherein in said step S2, the training set is expressed as $\{(x_i, t_i)\}_{i=1}^{n}$, where $x_i$ is the input set numbered $i$, consisting of the feature vector of the $i$-th coding unit in the intra-coded frame, $T = \{t_1, \ldots, t_n\}$ is the expected-value output set, and $t_i$ is the expected value corresponding to $x_i$.
CN202310000697.1A 2023-01-03 2023-01-03 Inter-frame image coding method based on extreme learning machine Active CN115695803B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310000697.1A CN115695803B (en) 2023-01-03 2023-01-03 Inter-frame image coding method based on extreme learning machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310000697.1A CN115695803B (en) 2023-01-03 2023-01-03 Inter-frame image coding method based on extreme learning machine

Publications (2)

Publication Number Publication Date
CN115695803A CN115695803A (en) 2023-02-03
CN115695803B (en) 2023-05-12

Family

ID=85057258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310000697.1A Active CN115695803B (en) 2023-01-03 2023-01-03 Inter-frame image coding method based on extreme learning machine

Country Status (1)

Country Link
CN (1) CN115695803B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115834885B (en) * 2023-02-17 2023-06-13 宁波康达凯能医疗科技有限公司 Inter-frame image coding method and system based on sparse representation
CN116634150B (en) * 2023-07-21 2023-12-12 宁波康达凯能医疗科技有限公司 Inter-frame image coding method, device and storage medium based on frequent pattern classification

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020077198A1 (en) * 2018-10-12 2020-04-16 Kineticor, Inc. Image-based models for real-time biometrics and marker-less motion tracking in imaging applications
CN114513660A (en) * 2022-04-19 2022-05-17 宁波康达凯能医疗科技有限公司 Interframe image mode decision method based on convolutional neural network

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5383674B2 (en) * 2007-06-27 2014-01-08 トムソン ライセンシング Method and apparatus for encoding and / or decoding video data using enhancement layer residual prediction for bit depth scalability
EP2600531A1 (en) * 2011-12-01 2013-06-05 Thomson Licensing Method for determining a modifiable element in a coded bit-stream and associated device
CN106664430A (en) * 2014-06-11 2017-05-10 Lg电子株式会社 Method and device for encodng and decoding video signal by using embedded block partitioning
CN108268941B (en) * 2017-01-04 2022-05-31 意法半导体股份有限公司 Deep convolutional network heterogeneous architecture
CN112291562B (en) * 2020-10-29 2022-06-14 郑州轻工业大学 Fast CU partition and intra mode decision method for H.266/VVC
CN114584771B (en) * 2022-05-06 2022-09-06 宁波康达凯能医疗科技有限公司 Method and system for dividing intra-frame image coding unit based on content self-adaption


Also Published As

Publication number Publication date
CN115695803A (en) 2023-02-03

Similar Documents

Publication Publication Date Title
CN115695803B (en) Inter-frame image coding method based on extreme learning machine
CN107396124B (en) Video-frequency compression method based on deep neural network
CN106713935B (en) A kind of HEVC block division fast method based on Bayesian decision
CN110087087A (en) VVC interframe encode unit prediction mode shifts to an earlier date decision and block divides and shifts to an earlier date terminating method
Li et al. A new three-step search algorithm for block motion estimation
CN112004085B (en) Video coding method under guidance of scene semantic segmentation result
CN114079779B (en) Image processing method, intelligent terminal and storage medium
CN106162167A (en) Efficient video coding method based on study
JP2004514351A (en) Video coding method using block matching processing
CN112001950B (en) Multi-target tracking algorithm based on target detection and feature extraction combined model
CN108924558A (en) A kind of predictive encoding of video method neural network based
CN110351548B (en) Stereo image quality evaluation method guided by deep learning and disparity map weighting
CN110062239A (en) A kind of reference frame selecting method and device for Video coding
CN110312130A (en) Inter-prediction, method for video coding and equipment based on triangle model
CN103327327A (en) Selection method of inter-frame predictive coding units for HEVC
CN107071421A (en) A kind of method for video coding of combination video stabilization
CN108769696A (en) A kind of DVC-HEVC video transcoding methods based on Fisher discriminates
CN107087171A (en) HEVC integer pixel motion estimation methods and device
CN110213584A (en) Coding unit classification method and coding unit sorting device based on Texture complication
CN104601992A (en) SKIP mode quickly selecting method based on Bayesian minimum hazard decision
CN115955574B (en) Method, device and storage medium for encoding intra-frame image based on weight network
CN117176960A (en) Convolutional neural network chroma prediction coding method with multi-scale position information embedded
CN102647595A (en) AVS (Audio Video Standard)-based sub-pixel motion estimation device
CN103517078A (en) Side information generating method in distribution type video code
CN109547798B (en) Rapid HEVC inter-frame mode selection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant