CN115914749A - Video adaptive scaling and shaping processing method and device - Google Patents

Video adaptive scaling and shaping processing method and device

Info

Publication number
CN115914749A
CN115914749A (application CN202211395399.9A)
Authority
CN
China
Prior art keywords
channel
decoding
difference value
video
video source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211395399.9A
Other languages
Chinese (zh)
Inventor
汪燕民
赵玉普
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
709th Research Institute of CSSC
Original Assignee
709th Research Institute of CSSC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 709th Research Institute of CSSC filed Critical 709th Research Institute of CSSC
Priority to CN202211395399.9A priority Critical patent/CN115914749A/en
Publication of CN115914749A publication Critical patent/CN115914749A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention relates to the technical field of image and video display, and provides a video adaptive scaling and shaping processing method and device. An original video source is decoded, and the decoding parameters are optimized according to the decoded target video source; the optimization comprises a primary optimization configuration and a secondary optimization configuration, and finally yields the optimal target video source, in which the delays of the channel data are consistent and the deformation relative to the original video source is minimal. The resolution and frame rate of the optimal target video source are then adjusted according to the optimal display resolution and optimal frame rate of the display device so as to match the display device. By converting the original video source into a target video source, the invention allows the video to be output on the display device; the optimal decoding parameters obtained through the primary and secondary optimization configurations make the delays of the channel data consistent and the deformation relative to the original video source minimal, thereby improving the quality of the video output to the display device.

Description

Video adaptive scaling and shaping processing method and device
Technical Field
The invention relates to the technical field of image and video display, and in particular to a video adaptive scaling and shaping processing method and device.
Background
With the rapid development of display and related technologies and the changing demand of users for high-performance graphic image display devices, advanced graphic image front-end and terminal products, such as high-performance graphic image display processing devices, multi-mode display devices, ultra-high-definition video switching and video conference systems, have been emerging and entering service continuously in recent years.
In video application sites such as large meeting halls, large curtain walls and ship command cabins, practical constraints such as cost, space and the usage habits of the audience make it difficult to transform or upgrade the front-end and back-end equipment of a video information system synchronously in one step, so high-performance graphic image display devices and high-performance terminal devices cannot be matched and updated at the same time. The compatibility problem between new and old technologies therefore cannot be avoided during the gradual update and iteration of technologies and equipment.
The compatibility problem mainly manifests as inconsistent display interfaces between new and old equipment, a mismatch between the low-resolution receiving capability of old video display equipment and the high-resolution output requirement of new graphic image processing equipment, a mismatch between the video output standard of old graphic image processing equipment and the video input standard of high-performance general-purpose display terminals, and a gap between the relatively low performance indexes of the original mature technology and the high requirements of new-generation users on video display quality.
For the case in which an in-service graphic image processing device (which can be understood as an old-model video providing end) is paired with a newly added display device/terminal or video post-processing application device (which can be understood as a new-model video receiving end), the main contradiction is as follows: the video output by the in-service graphic image processing device may not meet the VESA standard, owing to the limitations of the technical level and testing means at the time the device was developed, whereas the development and testing of the newly added display device/terminal is aimed mainly at commercial or general-purpose video standards, adopts general-purpose video interface chips or modules, and does not support the display or processing of non-standard video; yet the video output by the in-service graphic image processing device needs to be displayed normally on the newly added display device.
Existing display devices/terminals receive video at low resolution, with a fixed aspect ratio of the received video and poor adaptive capability, while newly added graphic image processing equipment outputs high-resolution video with poor downward compatibility (particularly over the VGA interface). As a result, an existing display terminal may be unable to display the video source of a newly added graphic image processing device, or unable to adaptively display the input video source full screen.
In view of this, overcoming the drawbacks of the prior art is a problem to be solved urgently in the art.
Disclosure of Invention
The technical problem to be solved by the invention is that the video of old and new devices is incompatible, so that the video of an old image processing device cannot be displayed normally on a new device, or the video of a new image processing device cannot be displayed normally on an old device.
In a first aspect, the present invention provides a video adaptive scaling and shaping processing method, including:
decoding an original video source, and optimizing decoding parameters according to the target video source obtained by decoding, so that an optimal target video source is obtained by decoding with the optimized final decoding parameters;
the optimization of the decoding parameters comprises performing a primary optimization configuration on the decoding parameters according to the target video source obtained by decoding, so that the delays of the channel data in the target video source obtained by decoding with the primary decoding parameters after the primary optimization configuration are consistent,
and selecting a corresponding configuration parameter library and performing a secondary optimization configuration on the primary decoding parameters, so that decoding with the final decoding parameters after the secondary optimization configuration yields a target video source with minimal deformation relative to the original video source; the target video source in which the delays of the channel data are consistent and the deformation relative to the original video source is minimal is the optimal target video source;
and after the optimal target video source with consistent channel-data delays is obtained by decoding, adjusting the resolution and frame rate of the optimal target video source according to the optimal display resolution and optimal frame rate of the display device so as to match the display device.
Preferably, performing the primary optimization configuration on the decoding parameters according to the target video source obtained by decoding, so that the delays of the channel data in the target video source obtained by decoding with the primary decoding parameters after the primary optimization configuration are consistent, thereby solving the transmission inconsistency of the video channels caused by hardware characteristics such as printed circuit board routing and chip differences, specifically comprises:
decoding original video test sequences of different resolutions;
during decoding, adjusting the decoding parameters so that the delays between the channel data of the decoded target video test sequence are consistent;
taking the decoding parameters used when the delays between the channel data of the decoded target video test sequence are consistent as the standard decoding parameters for the corresponding resolution;
and selecting, according to the resolution of the original video source, the standard decoding parameters for that resolution as the primary decoding parameters after the primary optimization configuration.
Preferably, during decoding, adjusting the decoding parameters so that the delays between the channel data of the decoded target video test sequence are consistent specifically comprises:
acquiring a first target video test sequence obtained by decoding a first original video test sequence at a first resolution;
acquiring, from the first target video test sequence, the channel data of the R channel, the channel data of the G channel, the channel data of the B channel, and the pixel clock;
obtaining a reference clock from the pixel clock and a clock phase parameter;
clocking out reference data from the first original video test sequence at a reference beat parameter, according to the reference clock;
numerically benchmarking the channel data of the R channel against the reference data to obtain an R-channel benchmarking difference; numerically benchmarking the channel data of the G channel against the reference data to obtain a G-channel benchmarking difference; numerically benchmarking the channel data of the B channel against the reference data to obtain a B-channel benchmarking difference;
and adjusting the decoding parameters according to the R-channel benchmarking difference, the G-channel benchmarking difference and the B-channel benchmarking difference, so that the R-channel benchmarking difference, the G-channel benchmarking difference and the B-channel benchmarking difference are all within a preset difference range.
Preferably, adjusting the decoding parameters according to the R-channel benchmarking difference, the G-channel benchmarking difference and the B-channel benchmarking difference so that they are all within the preset difference range specifically comprises:
when the R-channel benchmarking difference, the G-channel benchmarking difference and the B-channel benchmarking difference are all outside the preset difference range, adjusting the clock phase parameter or the reference beat parameter so as to adjust the reference clock, until at least one of the R-channel, G-channel and B-channel benchmarking differences is within the preset difference range;
when at least one of the R-channel, G-channel and B-channel benchmarking differences is within the preset difference range and the benchmarking difference of another channel is outside the preset difference range, adjusting the output phase parameter of that channel until the R-channel, G-channel and B-channel benchmarking differences are all within the preset difference range; the output phase parameter of a channel is one of the decoding parameters used when the first target video test sequence is obtained by decoding the first original video test sequence.
Preferably, performing the secondary optimization configuration on the primary decoding parameters, so that decoding with the final decoding parameters after the secondary optimization configuration yields a target video source with minimal deformation relative to the original video source, specifically comprises:
acquiring, from the target video source obtained by decoding, the channel data of the R channel, the channel data of the G channel, the channel data of the B channel, the pixel clock, the line synchronization signal and the field synchronization signal;
judging the polarity of the line synchronization signal and the polarity of the field synchronization signal;
sampling the input video source with a clock homologous to the pixel clock, according to the polarity of the line synchronization signal and the polarity of the field synchronization signal;
obtaining the line effective parameter values and field effective parameter values of the input video source from the sampled data;
performing video format matching in the corresponding video format library using the line effective parameter values and the field effective parameter values;
and performing the secondary optimization configuration on the primary decoding parameters using the configuration parameters of the matched video format.
Preferably, performing video format matching in the corresponding video format library using the line effective parameter values and the field effective parameter values specifically comprises:
performing video format matching in the standard video format library corresponding to the target video source, and if no corresponding video format can be matched,
reading the external video format library in the external device and performing video format matching in the external video format library; and if the current video format still cannot be matched in the existing video format libraries, appending the current video format to the external video format library of the external device.
Preferably, the line effective parameter values include one or more of the line effective pixel count, the line total pixel count and the line sync header width;
and the field effective parameter values include one or more of the field effective line count, the field total line count and the field sync header width.
Preferably, adjusting the resolution and frame rate of the optimal target video source according to the optimal display resolution and optimal frame rate of the display device so as to match the display device specifically comprises:
acquiring the optimal resolution and optimal frame rate of the device according to the extended display identification data of the display device;
and scaling the optimal target video source according to the optimal resolution, and adjusting the frame rate of the optimal target video source according to the optimal frame rate, to obtain the optimal target video source at the optimal resolution and optimal frame rate.
Preferably, the resolution of the original video source to which the method is applicable is one or more of 640 × 480, 800 × 600, 1024 × 768, 1280 × 720, 1280 × 1024, 1440 × 900, 1600 × 1200 and 1920 × 1080.
In a second aspect, the present invention further provides a video adaptive scaling and shaping processing apparatus, configured to implement the video adaptive scaling and shaping processing method of the first aspect, the apparatus comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the processor to perform the video adaptive scaling and shaping processing method of the first aspect.
In a third aspect, the present invention further provides a non-transitory computer storage medium storing computer-executable instructions that, when executed by one or more processors, perform the video adaptive scaling and shaping processing method of the first aspect.
By decoding the original video source and converting it into a target video source, the method and apparatus allow the target video source to be output on the display device; meanwhile, the optimal decoding parameters obtained through the primary optimization configuration and the secondary optimization configuration make the delays of the channel data of the decoded target video source consistent and the deformation relative to the original video source minimal, thereby improving the quality of the video output to the display device.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the invention, and a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic flowchart of a video adaptive scaling and shaping processing method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a video adaptive scaling and shaping processing method according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of a video adaptive scaling and shaping processing method according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of a video adaptive scaling and shaping processing method according to an embodiment of the present invention;
fig. 5 is a block diagram of a video adaptive scaling and shaping processing system according to an embodiment of the present invention;
fig. 6 is a schematic flowchart of a video adaptive scaling and shaping processing method according to an embodiment of the present invention;
fig. 7 is a schematic flowchart of a video adaptive scaling and shaping processing method according to an embodiment of the present invention;
fig. 8 is a schematic flowchart of a video adaptive scaling and shaping processing method according to an embodiment of the present invention;
fig. 9 is a schematic flowchart of a video adaptive scaling and shaping processing method according to an embodiment of the present invention;
fig. 10 is a schematic diagram illustrating an effect of a video adaptive scaling and shaping processing method according to an embodiment of the present invention;
fig. 11 is a schematic diagram illustrating an effect of a video adaptive scaling and shaping processing method according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of a video adaptive scaling and shaping processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
In the description of the present invention, the terms "inner", "outer", "longitudinal", "lateral", "upper", "lower", "top", "bottom", and the like indicate orientations or positional relationships based on those shown in the drawings, and are for convenience only to describe the present invention without requiring the present invention to be necessarily constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
In addition, the technical features involved in the respective embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1:
With the gradual update and iteration of display technology and display devices, the video standards of old and new devices may become inconsistent and incompatible; for example, an old device may use the analog VGA video standard while a new device uses the VESA digital video standard, so that the video of the old device cannot be displayed normally on the new device, or the video of the new device cannot be displayed normally on the old device.
To solve this problem, embodiment 1 of the present invention provides a video adaptive scaling and shaping processing method, as shown in fig. 1, including:
in step 201, the original video source is decoded, and the decoding parameters are optimized according to the target video source obtained by decoding, so that the optimal target video source is obtained by decoding according to the optimized final decoding parameters.
In this embodiment, an original video source is decoded to obtain a target video source, a decoding parameter corresponding to the target video source is optimized according to the target video source, the foregoing process is iterated until an optimized final decoding parameter is obtained, and a subsequent video source is decoded according to the final decoding parameter to obtain an optimal target video source.
And the decoding parameters are dynamically updated according to the target video source obtained by the previous decoding until the final decoding parameters are obtained.
The original video source is a video source acquired from a video providing end, and the target video source is a video obtained by decoding the original video source and conforms to the video standard of a video receiving end. For example, the original video source is a VGA analog video source provided by an active graphic image processing device, and the target video source is a VESA digital video source, so that it can be displayed in a new display device.
The decoding is typically implemented by an ADC decoder. A video source generally refers to a video source at a certain time or a video source from a certain time, which contains data at a plurality of times. The process of decoding the video source is a continuous process, in the continuous process, data in the target video source can be obtained through continuous decoding, and after the decoding parameters are adjusted at a certain moment, the data can take effect after the moment, so that the data obtained through decoding after the moment is influenced. The decoding parameter is a general term of a plurality of parameters used in one decoding.
The optimization of the decoding parameters comprises performing a primary optimization configuration on the decoding parameters according to the target video source obtained by decoding, so that the delays of the channel data in the target video source obtained by decoding with the primary decoding parameters after the primary optimization configuration are consistent. The primary optimization configuration may include several rounds of sub-optimization of the decoding parameters, until the delays of the channel data in the target video source decoded with the optimized decoding parameters are consistent, thereby solving the transmission inconsistency of the video channels caused by hardware characteristics such as printed board routing and chip differences.
A corresponding configuration parameter library is then selected, and a secondary optimization configuration is performed on the primary decoding parameters, so that decoding with the final decoding parameters after the secondary optimization configuration yields a target video source with minimal deformation relative to the original video source. The target video source in which the delays of the channel data are consistent and the deformation relative to the original video source is minimal is the optimal target video source. The secondary optimization configuration may likewise include several rounds of sub-optimization of the decoding parameters, until the delays of the channel data in the decoded target video source are consistent and the deformation of the target video source relative to the original video source is minimal.
Before the primary optimization configuration and the secondary optimization configuration, the decoding parameters used are the default initialization parameters obtained by a person skilled in the art from analysis of the original video source.
As an optional implementation, in the primary optimization configuration an original video test sequence is first decoded and the decoding parameters are iteratively optimized to obtain the primary decoding parameters; the primary optimization configuration can be understood as an adjustment made for the transmission inconsistency of the video channels caused by hardware characteristics such as printed board routing and chip differences, while the secondary optimization configuration is a parameter adjustment made for the real video source on the basis that the delays of the hardware channels are consistent, so as to ensure that the deformation of the decoded target video source relative to the original video source is minimal.
In step 202, after the optimal target video source with consistent channel-data delays is obtained by decoding (i.e., the final decoding parameters are obtained and the original video source is decoded with them), the resolution and frame rate of the optimal target video source are adjusted according to the optimal display resolution and optimal frame rate of the display device so as to match the display device. The display device is the video receiving end.
In this embodiment, the original video source is decoded and converted into a target video source so that it can be output on the display device, and the optimal decoding parameters are obtained through the primary optimization configuration and the secondary optimization configuration, so that the delays of the channel data of the decoded target video source are consistent and the deformation relative to the original video source is minimal, thereby improving the quality of the video output to the display device.
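To tie the two steps together, the following sketch models step 201 and step 202 as plain Python functions. It is only an illustrative skeleton under assumed names; every helper here (primary_optimize, secondary_optimize, decode, match_display) is a hypothetical stub standing in for the hardware units described in embodiment 2, not an implementation of them.

```python
# Illustrative skeleton of the processing flow; all helpers are hypothetical stubs.

def primary_optimize(source, params):
    # Would iterate over test sequences until the per-channel delays agree (fig. 2/3).
    return params

def secondary_optimize(source, params):
    # Would match the measured timing against a video format library (fig. 8).
    return params

def decode(source, params):
    # Stands in for the ADC decoder configured with `params`.
    return source

def match_display(target, edid):
    # Would scale and retime the target to the display's optimal mode (fig. 9).
    return target

def adaptive_scale_and_shape(original_source, display_edid, initial_params):
    """Step 201: optimize decoding parameters and decode; step 202: match the display."""
    params = primary_optimize(original_source, initial_params)
    params = secondary_optimize(original_source, params)
    optimal_target = decode(original_source, params)
    return match_display(optimal_target, display_edid)
```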
In practical use there may be multiple original video sources with different resolutions, and for original video sources with different resolutions the channel-data delays introduced by the hardware characteristics may differ, so different decoding parameters are needed. For this problem there is also a preferred embodiment in which the primary optimization configuration is performed on the decoding parameters according to the target video source obtained by decoding, so that the delays of the channel data in the target video source obtained by decoding with the primary decoding parameters after the primary optimization configuration are consistent; as shown in fig. 2, this specifically includes:
In step 301, original video test sequences of different resolutions are decoded.
In step 302, during decoding, the decoding parameters are adjusted so that the delays between the channel data of the decoded target video test sequence are consistent.
In step 303, the decoding parameters used when the delays between the channel data of the decoded target video test sequence are consistent are taken as the standard decoding parameters for the corresponding resolution.
In step 304, according to the resolution of the original video source, the standard decoding parameters for that resolution are selected as the primary decoding parameters after the primary optimization configuration.
The resolution of the original video source to which the method is applicable is one or more of 640 × 480, 800 × 600, 1024 × 768, 1280 × 720, 1280 × 1024, 1440 × 900, 1600 × 1200 and 1920 × 1080. The original video test sequences of different resolutions can also be selected from these resolutions.
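As a minimal illustration of step 304, the calibrated standard decoding parameters could be kept in a table keyed by resolution and looked up when a real source arrives. This Python sketch is only illustrative; the parameter names (clock_phase, ref_beat, out_phase) and values are hypothetical placeholders, not values taken from the patent.

```python
# Hypothetical table of standard decoding parameters per calibrated resolution.
STANDARD_DECODING_PARAMS = {
    (640, 480):   {"clock_phase": 0, "ref_beat": 1, "out_phase": {"R": 0, "G": 0, "B": 0}},
    (1024, 768):  {"clock_phase": 2, "ref_beat": 1, "out_phase": {"R": 1, "G": 0, "B": 1}},
    (1920, 1080): {"clock_phase": 1, "ref_beat": 2, "out_phase": {"R": 0, "G": 1, "B": 0}},
}

def primary_decoding_params(source_resolution):
    """Step 304: select the standard parameters calibrated for this resolution."""
    try:
        return dict(STANDARD_DECODING_PARAMS[source_resolution])
    except KeyError:
        raise ValueError(f"no calibrated parameters for resolution {source_resolution}")

# Example: a 1024x768 source selects the parameters calibrated with the
# 1024x768 test sequence.
print(primary_decoding_params((1024, 768)))
```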
This embodiment also provides an optional implementation for adjusting the channel-data delays to be consistent, i.e., during decoding the decoding parameters are adjusted so that the delays between the channel data of the decoded target video test sequence are consistent; as shown in fig. 3, this specifically includes:
In step 401, a first target video test sequence obtained by decoding a first original video test sequence at a first resolution is acquired.
In step 402, the channel data of the R channel, the channel data of the G channel, the channel data of the B channel, and the pixel clock are acquired from the first target video test sequence.
In step 403, a reference clock is obtained from the pixel clock and a clock phase parameter.
In step 404, reference data are clocked out of the first original video test sequence at a reference beat parameter, according to the reference clock.
In step 405, the channel data of the R channel are numerically benchmarked against the reference data to obtain an R-channel benchmarking difference; the channel data of the G channel are benchmarked against the reference data to obtain a G-channel benchmarking difference; and the channel data of the B channel are benchmarked against the reference data to obtain a B-channel benchmarking difference.
In step 406, the decoding parameters are adjusted according to the R-channel benchmarking difference, the G-channel benchmarking difference and the B-channel benchmarking difference, so that the R-channel, G-channel and B-channel benchmarking differences are all within the preset difference range.
The decoding parameters include the reference beat parameter and the clock phase parameter.
Delay consistency between the channel data does not mean that the channel data are completely synchronous; it means that the R-channel, G-channel and B-channel benchmarking differences are all within the preset difference range.
The preset difference range is determined by a person skilled in the art from the delay-consistency requirement of the display device.
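One simple way to read steps 405 and 406 is that, for each channel, the decoded samples are compared value by value against the reference data clocked out of the known test sequence, and the comparison is summarized as one benchmarking difference per channel. The sketch below assumes the channel data and reference data are equal-length lists of 8-bit samples and uses the mean absolute difference as that summary; the actual measure used in the hardware is not specified in this embodiment.

```python
def benchmarking_difference(channel_data, reference_data):
    """Numerical benchmarking of one channel against the reference data
    (assumed: equal-length sequences of 8-bit sample values)."""
    assert len(channel_data) == len(reference_data)
    return sum(abs(c - r) for c, r in zip(channel_data, reference_data)) / len(reference_data)

def channel_differences(r, g, b, reference):
    """Step 405: one benchmarking difference per colour channel."""
    return {
        "R": benchmarking_difference(r, reference),
        "G": benchmarking_difference(g, reference),
        "B": benchmarking_difference(b, reference),
    }

def within_range(differences, preset_range):
    """Step 406 exit condition: every channel difference inside the preset range."""
    low, high = preset_range
    return all(low <= d <= high for d in differences.values())

# Toy example with three samples per channel.
r, g, b, ref = [10, 20, 30], [11, 20, 29], [10, 22, 30], [10, 20, 30]
print(channel_differences(r, g, b, ref))
print(within_range(channel_differences(r, g, b, ref), (0, 1)))   # -> True
```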
Adjusting the decoding parameters according to the R-channel benchmarking difference, the G-channel benchmarking difference and the B-channel benchmarking difference, so that they are all within the preset difference range, as shown in fig. 4, specifically includes:
In step 501, when the R-channel benchmarking difference, the G-channel benchmarking difference and the B-channel benchmarking difference are all outside the preset difference range, the clock phase parameter or the reference beat parameter is adjusted so as to adjust the reference clock, until at least one of the R-channel, G-channel and B-channel benchmarking differences is within the preset difference range.
In step 502, when at least one of the R-channel, G-channel and B-channel benchmarking differences is within the preset difference range and the benchmarking difference of another channel is outside the preset difference range, the output phase parameter of that channel is adjusted until the R-channel, G-channel and B-channel benchmarking differences are all within the preset difference range; the output phase parameter of a channel is one of the decoding parameters used when the first target video test sequence is obtained by decoding the first original video test sequence.
Step 501 can be regarded as coarse delay adjustment and step 502 as fine delay adjustment: the coarse adjustment brings the benchmarking difference of at least one channel within the preset difference range, and the fine adjustment then brings the benchmarking differences of the remaining channels within the preset difference range, so that all channels achieve numerical alignment. During coarse adjustment, the channel with the smallest benchmarking difference can be selected preferentially and its benchmarking difference adjusted into the preset difference range; the benchmarking differences of the other two channels are then adjusted during fine adjustment.
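The coarse/fine strategy of steps 501 and 502 can be sketched as a loop: while no channel is within range, nudge the shared clock phase or reference beat (coarse); once at least one channel is in range, nudge only the output phase of the channels still out of range (fine). The step sizes, parameter names and the measure() callback below are assumptions for illustration.

```python
def adjust_delays(measure, params, preset_range, max_iters=100):
    """Coarse/fine delay alignment sketch.

    `measure(params)` is assumed to return a dict {"R": d, "G": d, "B": d} of
    benchmarking differences obtained by decoding with `params`.
    """
    low, high = preset_range
    in_range = lambda d: low <= d <= high

    for _ in range(max_iters):
        diffs = measure(params)
        if all(in_range(d) for d in diffs.values()):
            return params                               # all channels aligned
        if not any(in_range(d) for d in diffs.values()):
            # Coarse adjustment (step 501): shift the shared reference clock
            # (clock phase here; the reference beat could be nudged instead).
            params["clock_phase"] = (params["clock_phase"] + 1) % 8
        else:
            # Fine adjustment (step 502): only touch channels still out of range.
            for ch, d in diffs.items():
                if not in_range(d):
                    params["out_phase"][ch] = (params["out_phase"][ch] + 1) % 8
    raise RuntimeError("delay alignment did not converge")
```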
Performing the secondary optimization configuration on the primary decoding parameters, so that decoding with the final decoding parameters after the secondary optimization configuration yields a target video source with minimal deformation relative to the original video source, specifically includes:
From the target video source obtained by decoding, the channel data of the R channel, the channel data of the G channel, the channel data of the B channel, the pixel clock, the line synchronization signal and the field synchronization signal are acquired.
The polarity of the line synchronization signal and the polarity of the field synchronization signal are judged.
The input video source is sampled with a clock homologous to the pixel clock, according to the polarity of the line synchronization signal and the polarity of the field synchronization signal. The input video source here is the input of the ADC decoder used for decoding, which in this embodiment can be regarded as the original video source.
The line effective parameter values and field effective parameter values of the input video source are obtained from the sampled data.
Video format matching is performed in the corresponding video format library using the line effective parameter values and the field effective parameter values.
The secondary optimization configuration of the primary decoding parameters is then performed using the configuration parameters of the matched video format.
The line effective parameter values include one or more of the line effective pixel count, the line total pixel count and the line sync header width; the field effective parameter values include one or more of the field effective line count, the field total line count and the field sync header width.
Depending on the ADC decoder used for decoding, some ADC decoders can directly output the polarity of the line synchronization signal and the polarity of the field synchronization signal, while others cannot; in the latter case, the polarity of the line synchronization signal and the polarity of the field synchronization signal need to be determined, which specifically includes:
The high-level periods of the line synchronization signal and the field synchronization signal are counted with the pixel clock to obtain the number h1count of high-level samples of the line synchronization signal within the sampling period and the number v1count of high-level samples of the field synchronization signal within the sampling period, and the polarity of the line synchronization signal and the polarity of the field synchronization signal of the ADC decoder output video are judged from h1count and v1count.
It should be noted that the parameters of the primary optimization configuration and of the secondary optimization configuration are different: the primary optimization configuration mainly adjusts, for the delay of each channel, a first group of delay-related parameters, while the secondary optimization configuration mainly adjusts, according to the resolution and frame rate, a second group of parameters other than the delay-related first group. The primary and secondary optimization configurations therefore do not affect each other, and in the actual decoding process they are performed simultaneously.
In practical use a matching format may not be found in the video format library. For this problem a preferred embodiment is further provided, in which performing video format matching in the corresponding video format library using the line effective parameter values and the field effective parameter values specifically includes:
Video format matching is first performed in the standard video format library corresponding to the target video source; if no corresponding video format can be matched,
the external video format library in the external device is read and video format matching is performed in the external video format library; if the current video format still cannot be matched in the existing video format libraries, the current video format is appended to the external video format library of the external device.
The external video format library of the external device can be added to and modified manually by the user; its entries are obtained by the user from analysis of the format of the original video source and are used to add to or modify the contents of the external device.
Appending the current video format to the external video format library of the external device specifically includes: manually adjusting and optimizing the parameters, taking the parameters that finally meet the user's requirements as the final decoding parameters, associating the manually adjusted final decoding parameters with the current video format, and adding them to the external video format library, so that video in the current video format can be decoded directly in the future.
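The three-level matching described above (built-in standard library first, then the external library, and finally a manually tuned entry appended to the external library) can be sketched as follows. The tolerance on the measured timing values, the keys of a library entry and the manual_entry_factory callback are assumptions for illustration.

```python
def match_format(measured, library, tol=2):
    """Return the first library entry whose timing matches `measured`.

    `measured` and each library entry are dicts with keys such as
    'hcount', 'total_hcount', 'vcount', 'total_vcount' (assumed layout).
    """
    keys = ("hcount", "total_hcount", "vcount", "total_vcount")
    for entry in library:
        if all(abs(entry[k] - measured[k]) <= tol for k in keys):
            return entry
    return None

def select_config(measured, standard_library, external_library, manual_entry_factory):
    """Standard library first, then the external library, else append a manual entry."""
    entry = match_format(measured, standard_library)
    if entry is None:
        entry = match_format(measured, external_library)
    if entry is None:
        # Unknown format: build an entry from manually tuned parameters and
        # persist it so this format decodes automatically next time.
        entry = manual_entry_factory(measured)
        external_library.append(entry)
    return entry["config"]   # assumed: each entry carries its decoder configuration
```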
In practical use the resolution or frame rate of the display device and of the target video source may differ, so that the video cannot be displayed normally. For this problem there is a preferred embodiment in which the resolution and frame rate of the optimal target video source are adjusted according to the optimal display resolution and optimal frame rate of the display device so as to match the display device, which specifically includes:
The optimal resolution and optimal frame rate of the device are acquired from the extended display identification data of the display device.
The optimal target video source is scaled according to the optimal resolution, and its frame rate is adjusted according to the optimal frame rate, to obtain the optimal target video source at the optimal resolution and optimal frame rate.
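As an illustration of how the optimal resolution and frame rate could be read from the display's extended display identification data, the sketch below parses the first detailed timing descriptor of a 128-byte EDID base block, which normally carries the preferred mode. The byte layout follows the common EDID 1.3 structure; this is an assumption about the parsing, which the method itself does not fix.

```python
def preferred_mode(edid):
    """Return (width, height, refresh_hz) from the first detailed timing
    descriptor (bytes 54..71 of the 128-byte EDID base block, EDID 1.3 layout)."""
    d = edid[54:72]
    pixel_clock_hz = (d[0] | (d[1] << 8)) * 10_000          # stored in 10 kHz units
    h_active = d[2] | ((d[4] & 0xF0) << 4)
    h_blank  = d[3] | ((d[4] & 0x0F) << 8)
    v_active = d[5] | ((d[7] & 0xF0) << 4)
    v_blank  = d[6] | ((d[7] & 0x0F) << 8)
    # Refresh rate = pixel clock / (total pixels per line * total lines per frame).
    refresh = pixel_clock_hz / ((h_active + h_blank) * (v_active + v_blank))
    return h_active, v_active, round(refresh)
```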
The display device in this embodiment may take various forms, including but not limited to:
(1) Mobile communication devices: such devices are characterized by mobile communication functions and are mainly aimed at providing voice and data communication; with the development of technology, most of them also have video playing or video projection functions. Such terminals include smart phones (e.g., iPhone), multimedia phones, feature phones, and so on.
(2) Ultra-mobile personal computer devices: such devices belong to the category of personal computers, have computing and processing functions, and generally have video playing or video projection capability. Such terminals include PDA, MID and UMPC devices, e.g., the iPad.
(3) Portable entertainment devices: such devices can display and play video content. This type of device includes video players, handheld game consoles, smart toys and portable car navigation devices.
(4) Servers: devices that provide computing services, comprising a processor, hard disk, memory, system bus and so on. A server is similar in architecture to a general-purpose computer, but has higher requirements on processing capability, stability, reliability, security, scalability and manageability because it must provide highly reliable services.
(5) Other electronic devices with a video playing function.
Example 2:
based on the method described in embodiment 1, the invention combines with a specific application scenario and uses technical expressions in a related scenario to describe an implementation process in a characteristic scenario.
The video adaptive scaling and shaping processing method described in embodiment 1 is applied to a video adaptive scaling and shaping processing system as shown in fig. 5, and the system mainly includes a general video ADC decoder, a programmable digital video processor, a programmable read-write storage unit, a dynamic storage unit, a video output driver, an equalizer, a human-computer interaction upper computer, and the like, and is mainly used for decoding and converting a VGA video source (which can be understood as an original video source in embodiment 1) of an active graphics image processing device into a VESA video source (which can be understood as a target video source in embodiment 1), so that the video adaptive scaling and shaping processing method can be normally displayed on a newly added display device.
The hardware system should meet the following basic requirements: a configuration interface of a universal Video three-channel ADC decoder (which will be abbreviated as VADC in the subsequent embodiment, namely, a Video Analog-to-Digital Converter) is connected with a programmable Digital Video processor (which will be abbreviated as FPGA in the subsequent embodiment) and meets the electrical characteristics of a serial communication bus (such as I2C) line; signals such as parallel video, clock, synchronous output, reset input and the like of the VADC are accessed to a corresponding bank (logic block) of the FPGA, and the corresponding impedance matching requirements are met; the configuration interface of the programmable read-write storage unit is connected with the FPGA and meets the electrical characteristics of a serial communication bus (such as I2C); the dynamic storage unit is connected with the FPGA, comprises address signals (block selection, row selection, column selection, address buses and the like), data signals (parallel data buses and clock signals) and the like, and meets the transmission characteristics of corresponding bandwidth data; a configuration interface of a video output driving or balancing unit (hereinafter referred to as VDAC) is connected with the FPGA and meets the electrical characteristics of a serial communication bus; signals such as parallel videos, clocks, synchronous input, reset input and the like of the VDAC are connected with the corresponding bank of the FPGA, and the impedance matching requirement is met; the FPGA is connected with a display terminal or video later-period application equipment (hereinafter referred to as a video receiving end) and meets the electrical characteristics of a serial communication bus; the FPGA is connected with an upper computer through a serial port or other buses, and specifically:
the universal video ADC decoder is mainly used for analog-to-digital conversion of analog VGA videos and related processing of synchronous signals, wherein three channels of the video ADC decoder can be respectively configured so as to adjust time delay inconsistency of red, green and blue channels in video transmission or processing.
The programmable digital video processor mainly comprises a digital video signal receiving and identifying unit, a VESA standard video format and ADC configuration parameter library, a video format and configuration parameter read-write control unit, a video format standardization adjusting control unit, a man-machine interaction unit, a video resolution conversion and frame rate adjusting unit, a current display terminal equipment optimal resolution detecting unit, a dynamic storage management unit and the like.
The programmable read-write control unit is a video format mode library and a storage and management unit of an ADC configuration parameter library corresponding to each video format, and the stored data has the characteristic of permanent storage, namely, the stored data can still be used in the subsequent decoding process of other VGA videos.
The dynamic storage unit is mainly used for caching intermediate data generated in the video resolution conversion and frame rate adjustment processes, and the stored data does not have the characteristic of permanent storage, namely the cached data is cleared after the equipment is powered off or reset.
The video output driving or balancing unit is mainly used for performing digital-to-analog conversion, driving or balancing on the video data after the standardization and shaping, and is used for driving a display terminal or video later-stage application equipment.
In this system, the video adaptive scaling and shaping processing method is shown in fig. 6 and specifically includes:
In step 601, the analog VGA video is first A/D-converted by the general-purpose video ADC decoder (which can be understood as the decoding process described in embodiment 1).
In step 602, the digital video signal converted by the ADC decoder is monitored in real time, and the decoding parameters are adjusted adaptively according to the real-time monitoring result; this is completed by the ADC decoder configuration unit, the video format real-time monitoring unit and the video signal quality evaluation unit.
Specifically, the video format and configuration parameter read-write control unit retrieves the default initialization parameters of the ADC decoder from the ADC configuration parameter library; the ADC decoder configuration unit initializes the ADC decoder with the default parameters; the video format real-time monitoring unit performs real-time monitoring of the video format; the video signal quality evaluation unit evaluates the video quality, i.e., the inter-channel delay consistency; and the ADC decoder configuration unit adjusts the parameters of the ADC decoder (which can be understood as the decoding parameters in embodiment 1) according to the inter-channel delay consistency to perform the optimization configuration.
In step 603, format recognition of the input video is completed by the video format mode recognition unit.
In step 604, the recognized format information is matched against the existing video modes; the matching is completed cooperatively by the video format and configuration parameter read-write control unit and the video format mode library.
In step 605, it is determined whether the input video meets the VESA standard, and input video that does not meet the VESA standard is subjected to video format standardization adjustment; the processing units participating in this process include the VESA standard video ADC configuration parameter library, the VESA standard video format library (which can be understood as the standard video format library corresponding to the target video source), the video format standardization adjustment control unit, the micro-control and storage management unit, and the dynamic storage unit.
Specifically: the video format real-time monitoring unit performs real-time monitoring of the video format; the video format mode recognition unit completes format recognition of the input video; and whether the input video meets the VESA standard is judged with reference to the VESA standard video format library. The following three cases exist:
in the first case: and for the video meeting the VESA standard, the ADC decoder configuration unit calls the VESA standard video ADC configuration parameter library to adjust the parameters of the ADC decoder, so that the secondary optimization configuration of the ADC decoder is completed.
In the second case: and for the video which does not meet the VESA standard but has a matching mode in the video format mode library, calling the ADC configuration parameter library by the ADC decoder configuration unit to adjust the parameters of the ADC decoder, and finishing the secondary optimization configuration of the ADC decoder.
In the third case: for videos which do not meet VESA standards and do not have matching modes in a video format mode library, online configuration of ADC decoder parameters is carried out through a video decoder parameter setting unit and an ADC decoder configuration unit of a human-computer interaction unit, the video format mode library and the ADC configuration parameter library are updated after online configuration, the ADC configuration parameter library is used for adjusting the parameters of the ADC decoder, and secondary optimization configuration of the ADC decoder is completed. And sets the video parameter as a default initialization parameter for power-up.
In step 606, it is determined whether the current video format can match the current display device (video receiving end), the best resolution identification of the display device (video receiving end) is completed by the best resolution control unit (in which the extended display identification data information is stored) of the current display terminal device, and if the current video format is not the best resolution of the video receiving end, the video format scaling or frame rate adjustment is completed cooperatively by the video resolution conversion and frame rate adjustment unit, the micro control and storage management unit, the dynamic storage unit, and the like.
The Extended display identification data information (EDID) will be directly described as EDID in the subsequent embodiments.
In step 607, when the output video format meets the system requirement, the video output driving or equalizing unit outputs the video signal to the display terminal or the video post-application device.
As shown in fig. 7, the primary optimization configuration specifically includes:
In step 701, the original video test sequences are fed into the system as the video source and benchmarked one by one. The original video test sequences should be videos that meet the VESA standard, such as videos with resolutions of 640 × 480, 800 × 600, 1024 × 768, 1280 × 720, 1280 × 1024, 1440 × 900, 1600 × 1200 and 1920 × 1080; the video refresh rate is generally 60 or 75 Hz.
In step 702, the FPGA resets the VADC, and the VADC is initialized by the ADC decoder configuration unit within the FPGA with reference to the parameters of the original video test sequence.
In step 703, the output signals of the VADC (which can be understood here as the target video test sequence in embodiment 1), including the parallel R, G and B data (which can be understood as the channel data of the R channel, the G channel and the B channel in embodiment 1, respectively) and the pixel clock, are received by the video format real-time monitoring unit in the FPGA. The pixel clock is used as the input of the FPGA internal clock management unit, and the single-rate (1x) clock output by the clock management unit is used as the input clock of the video signal quality evaluation unit (which can be understood as the reference clock in embodiment 1). The original video test sequence built into the video signal quality evaluation unit is clocked out at the reference beat (which can be understood as the reference beat parameter in embodiment 1) and numerically benchmarked against the parallel R, G and B data from the VADC.
In step 704, delay-consistency synchronization is performed according to the benchmarking differences of the three parallel channels R, G and B (which can be understood as the R-channel, G-channel and B-channel benchmarking differences): if the benchmarking differences are all within the acceptable range, no adjustment is needed; if the benchmarking difference of a certain video channel exceeds the range, the video output phase of that single channel in the VADC configuration parameters (i.e., the parameters of the ADC decoder described above, which can also be understood as the decoding parameters in embodiment 1) is adjusted separately, and the parallel data of that video channel is adjusted in the FPGA (for example, the input clock phase is adjusted or the reference beat is increased or decreased) until all three channels meet the benchmarking requirement. Specifically:
When the R-channel benchmarking difference, the G-channel benchmarking difference and the B-channel benchmarking difference are all outside the preset difference range, the clock phase parameter or the reference beat parameter is adjusted so as to adjust the reference clock, until at least one of the R-channel, G-channel and B-channel benchmarking differences is within the preset difference range.
When at least one of the R-channel, G-channel and B-channel benchmarking differences is within the preset difference range and the benchmarking difference of another channel is outside the preset difference range, the output phase parameter of that channel is adjusted until the R-channel, G-channel and B-channel benchmarking differences are all within the preset difference range; the output phase parameter of a channel is one of the decoding parameters used when the first target video test sequence is obtained by decoding the first original video test sequence.
In step 705, the VGA test sequence is changed and the flow returns to step 702, until all original video test sequences have been benchmarked.
In step 706, the input video source is changed from the original video test sequence to the actual video input, the VADC is reset by the FPGA, and the VADC is initialized with the calibrated VADC configuration parameters.
As shown in fig. 8, the secondary optimization configuration specifically includes:
In step 801, the output signals of the VADC, including the parallel R, G and B data, the pixel clock and the line/field synchronization signals (or data enable signal), are received by the video format real-time monitoring unit within the FPGA. For a VADC on which the polarity of the output synchronization signals cannot be set, the polarity of the line and field synchronization signals must first be judged: the high-level periods of the line and field synchronization signals can be counted with the pixel clock to obtain h1count and v1count, and the polarity of the line and field synchronization signals of the VADC output video is judged from h1count and v1count.
In step 802, according to the line and field synchronization polarity of the VADC output video, the input video signal is sampled with a clock identical to the VADC output pixel clock to obtain the line effective pixel count (hcount), field effective line count (vcount), line total pixel count (total_hcount), field total line count (total_vcount), line sync header width (hs_cnt) and field sync header width (vs_cnt). The line effective pixel count, the line total pixel count and the line sync header width can be understood as the line effective parameter values in embodiment 1; the field effective line count, the field total line count and the field sync header width can be understood as the field effective parameter values in embodiment 1.
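Conceptually, step 802 just counts pixel-clock samples between synchronization edges: active pixels per line, total pixels per line, and the width of the sync pulse, with the corresponding field-direction counts obtained the same way at line rate. The behavioural sketch below measures hcount, total_hcount and hs_cnt from one sampled line, assuming an active-high hsync and an explicit data-enable signal; it is a software model, not the FPGA logic.

```python
def measure_line_timing(hsync, de):
    """Measure line timing from per-pixel-clock samples of one full line.

    hsync, de: equal-length lists of 0/1 samples (active-high assumed).
    Returns (hcount, total_hcount, hs_cnt).
    """
    total_hcount = len(hsync)          # pixel clocks per line
    hcount = sum(de)                   # clocks with data enable asserted
    hs_cnt = sum(hsync)                # width of the sync header in clocks
    return hcount, total_hcount, hs_cnt

# Toy example: a 12-clock line with a 2-clock sync header and 8 active pixels.
hsync = [1, 1] + [0] * 10
de    = [0, 0, 0, 0] + [1] * 8
print(measure_line_timing(hsync, de))   # -> (8, 12, 2)
```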
In step 803, the video format pattern recognition unit inside the FPGA matches the line effective parameter values and the field effective parameter values obtained in step 802 against the built-in VESA standard video format library; if the matching succeeds, the corresponding configuration parameters in the VESA standard video ADC configuration parameter library are called to perform the secondary optimization configuration of the VADC.
In step 804, if no matching VESA standard video mode is found in step 803, the video format mode library in the external programmable read-write storage unit is read and pattern matching is performed again; if an existing video format is matched, the corresponding parameters in its ADC configuration parameter library are called to perform the secondary optimization configuration of the VADC.
In step 805, if no existing video mode is matched in step 804 either, the current video mode is added to the video format mode library as a new video format, the VADC channel setting parameters are adjusted manually from the upper computer until the video quality meets the user requirement, and the manually adjusted VADC channel setting parameters are finally added to the ADC configuration parameter library.
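The two-level lookup of steps 803-805 amounts to the cascade sketched below. The library, host, and VADC interfaces shown are hypothetical placeholders for the FPGA-internal VESA library, the external programmable read-write storage unit, the upper computer, and the decoder; `measured` stands for the timing parameters obtained in step 802 (hcount, vcount, total_hcount, total_vcount, hs_cnt, vs_cnt).

```python
# Hypothetical sketch of the format-matching cascade in steps 803-805; none of the
# method names below are taken from the patent or from a real VADC driver.

def match_format(measured, library, tolerance=2):
    """Return the first library entry whose timing parameters all fall within tolerance."""
    for fmt in library.formats():
        if all(abs(measured[key] - fmt.timing[key]) <= tolerance for key in measured):
            return fmt
    return None

def secondary_optimize(measured, vesa_library, external_library, vadc, host):
    fmt = match_format(measured, vesa_library)          # step 803: built-in VESA library
    if fmt is None:
        fmt = match_format(measured, external_library)  # step 804: external mode library
    if fmt is not None:
        params = fmt.adc_parameters                     # entry from the ADC configuration parameter library
    else:
        # Step 805: unknown format - register it, tune the channel settings manually
        # from the upper computer, then store the result for future automatic matching.
        params = host.manual_adjust_until_acceptable()
        external_library.add_format(measured, params)
    vadc.apply(params)                                  # secondary optimization configuration
```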
As shown in fig. 9, the video format scaling or frame rate adjustment specifically includes:
in step 901, the current-display-terminal optimal-resolution checking unit inside the FPGA reads the EDID (Extended Display Identification Data) information of the video receiving end and determines the optimal display resolution of the video receiving end.
In step 902, according to the optimal display resolution obtained in step 901, the video timing parameters for that resolution are called from the built-in VESA standard video format library of the FPGA and used as the output video format.
In step 903, the input video is normalized and shaped by the FPGA internal video resolution conversion and frame rate adjustment unit, and the video output driver or the equalization unit drives the video receiving end, thereby improving the user experience.
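The EDID-based resolution determination of steps 901-902 can be sketched as follows. The byte layout follows the publicly documented EDID 1.3/1.4 structure (first detailed timing descriptor, bytes 54-71 of block 0); the function name and the assumption that the 128-byte EDID block has already been read from the receiving end over DDC are illustrative only. In step 902 the returned resolution would then be looked up in the VESA format library, and in step 903 the scaler and frame-rate converter are configured accordingly.

```python
# Sketch of step 901: derive the preferred (optimal) display mode from a 128-byte
# EDID block 0 assumed to have been read from the video receiving end over DDC.

def preferred_mode_from_edid(edid: bytes):
    dtd = edid[54:72]                              # first detailed timing descriptor (18 bytes)
    pixel_clock_khz = (dtd[0] | (dtd[1] << 8)) * 10
    h_active = dtd[2] | ((dtd[4] & 0xF0) << 4)     # horizontal active pixels
    h_blank = dtd[3] | ((dtd[4] & 0x0F) << 8)
    v_active = dtd[5] | ((dtd[7] & 0xF0) << 4)     # vertical active lines
    v_blank = dtd[6] | ((dtd[7] & 0x0F) << 8)
    # Frame rate follows from the full timing: pixel clock / (total pixels per frame).
    frame_rate = (pixel_clock_khz * 1000) / ((h_active + h_blank) * (v_active + v_blank))
    return h_active, v_active, round(frame_rate)
```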
Fig. 10 illustrates the display effect of the video source of the active graphics image processing apparatus on a newly added display terminal before and after the video adaptive scaling and shaping processing method is performed: before the method is performed, the video cannot be effectively displayed on the newly added display terminal; after the method is performed, the video is displayed normally on the newly added display terminal.
Fig. 11 illustrates the display effect of the video source of the newly added graphics image processing device on the active display terminal before and after the video adaptive scaling and shaping processing method is performed: before the method is performed, the video cannot be effectively displayed on the active display terminal; after the method is performed, the video is displayed normally on the active display terminal.
Example 3:
fig. 12 is a schematic structural diagram of a video adaptive scaling and shaping apparatus according to an embodiment of the present invention. The video adaptive scaling reshaping processing apparatus of the present embodiment includes one or more processors 21 and a memory 22. Fig. 12 illustrates an example of one processor 21.
The processor 21 and the memory 22 may be connected by a bus or other means, and fig. 12 illustrates the connection by a bus as an example.
The memory 22, as a non-volatile readable and writable storage medium, can be used to store non-volatile software programs and non-volatile computer-executable programs, such as the program corresponding to the video adaptive scaling and shaping processing method in embodiment 1. The processor 21 executes the video adaptive scaling and shaping processing method by running the non-volatile software programs and instructions stored in the memory 22.
The memory 22 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 22 may optionally include memory located remotely from the processor 21, which may be connected to the processor 21 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The program instructions/modules are stored in the memory 22 and when executed by the one or more processors 21, perform the video adaptive scaling shaping processing methods of embodiments 1 and 2 described above.
It should be noted that, since the apparatus and system embodiments are based on the same concept as the method embodiments of the present invention, the specific details of the information interaction and execution processes between their modules and units can be found in the description of the method embodiments and are not repeated here.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods of the embodiments may be implemented by related hardware instructed by a program, and the program may be stored on a computer-readable storage medium, which may include: Read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The above description is intended to be illustrative of the preferred embodiment of the present invention and should not be taken as limiting the invention, but rather, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.

Claims (10)

1. A video adaptive scaling shaping processing method is characterized by comprising the following steps:
decoding an original video source, and optimizing decoding parameters according to a target video source obtained by decoding, so that an optimal target video source is obtained by decoding according to the optimized final decoding parameters;
the optimization of the decoding parameters comprises the steps of carrying out primary optimization configuration on the decoding parameters according to a target video source obtained by decoding, enabling the time delay of each channel data in the target video source obtained by decoding according to the primary decoding parameters after the primary optimization configuration to be consistent,
selecting a corresponding configuration parameter library, and performing secondary optimization configuration on the primary decoding parameters to decode according to the final decoding parameters subjected to secondary optimization configuration to obtain a target video source with minimum deformation relative to the original video source; the target video source with consistent time delay of each channel data and minimum deformation relative to the original video source is the optimal target video source;
and after the optimal target video source with the consistent time delay of each channel data is obtained through decoding, according to the optimal display resolution and the optimal frame rate of the display equipment, adjusting the resolution and the frame rate of the optimal target video source so as to match the display equipment.
2. The method according to claim 1, wherein the performing the primary optimization configuration on the decoding parameters according to the decoded target video source, so that the time delays of the channel data in the target video source decoded according to the primary decoding parameters after the primary optimization configuration are consistent, specifically comprises:
respectively decoding original video test sequences of different resolutions;
in the decoding process, the decoding parameters are adjusted to make the time delay among the channel data of the target video test sequence obtained by decoding consistent;
taking decoding parameters used when the time delay among the channel data of the target video test sequence obtained by decoding is consistent as standard decoding parameters under corresponding resolution;
and selecting the standard decoding parameters under the resolution as the primary decoding parameters after primary optimization configuration according to the resolution of the original video source.
3. The method of claim 2, wherein in the decoding process, the decoding parameters are adjusted to make the time delays of the channel data of the target video test sequence obtained by decoding consistent, and the method specifically comprises:
acquiring a first target video test sequence obtained by decoding a first original video test sequence with a first resolution;
acquiring channel data of an R channel, channel data of a G channel, channel data of a B channel and a pixel clock from the first target video test sequence;
obtaining a reference clock according to the pixel clock and the clock phase parameter;
according to the reference clock, stamping reference data onto the first original video test sequence according to a reference beat parameter;
carrying out numerical calibration of the channel data of the R channel against the reference data to obtain an R channel calibration difference value; carrying out numerical calibration of the channel data of the G channel against the reference data to obtain a G channel calibration difference value; and carrying out numerical calibration of the channel data of the B channel against the reference data to obtain a B channel calibration difference value;
and adjusting the decoding parameters according to the R channel calibration difference value, the G channel calibration difference value and the B channel calibration difference value, so that the R channel calibration difference value, the G channel calibration difference value and the B channel calibration difference value are within a preset difference value range.
4. The method of claim 3, wherein the adjusting the decoding parameters according to the R channel calibration difference value, the G channel calibration difference value and the B channel calibration difference value so that the R channel calibration difference value, the G channel calibration difference value and the B channel calibration difference value are within a preset difference value range specifically comprises:
when the R channel calibration difference value, the G channel calibration difference value and the B channel calibration difference value are all located outside a preset difference value range, adjusting a clock phase parameter or a reference beat parameter so as to adjust the reference clock until at least one of the R channel calibration difference value, the G channel calibration difference value and the B channel calibration difference value is located within the preset difference value range;
when at least one of the R channel calibration difference value, the G channel calibration difference value and the B channel calibration difference value is within a preset difference value range and the calibration difference value of the corresponding channel is outside the preset difference value range, adjusting the output phase parameters of the corresponding channel until the R channel calibration difference value, the G channel calibration difference value and the B channel calibration difference value are within the preset difference value range; the output phase parameter of the channel is one of decoding parameters used when a first target video test sequence is obtained by decoding a first original video test sequence.
5. The method according to claim 1, wherein the performing the secondary optimization configuration on the primary decoding parameters, so as to decode according to the final decoding parameters after the secondary optimization configuration to obtain a target video source with minimum deformation relative to the original video source, specifically comprises:
according to a target video source obtained by decoding, acquiring channel data of an R channel, channel data of a G channel, channel data of a B channel, a pixel clock, a line synchronization signal and a field synchronization signal from the target video source;
judging the polarity of the horizontal synchronizing signal and the polarity of the field synchronizing signal;
sampling an input video source by using a homologous clock of the pixel clock according to the polarity of the line synchronizing signal and the polarity of the field synchronizing signal;
obtaining line effective parameter values and field effective parameter values of the input video source according to sampling data obtained by sampling;
using the line effective parameter values and the field effective parameter values to carry out video format matching in a corresponding video format library;
and performing secondary optimization configuration on the primary decoding parameters by using the configuration parameters of the video format obtained by matching.
6. The method according to claim 5, wherein the using the line effective parameter values and the field effective parameter values to perform video format matching in a corresponding video format library specifically comprises:
performing video format matching in a standard video format library corresponding to the target video source; and if no corresponding video format can be matched,
reading an external video format library in the external equipment, and performing video format matching in the external video format library; and if the current video format can not be matched with the existing video format library, adding the current video format to an external video format library of the external equipment.
7. The method of claim 5, wherein the line effective parameter values comprise one or more of line effective pixel values, line total pixel values and line sync header widths;
the field effective parameter values include one or more of the number of field effective lines, the number of field total lines, and the field sync header width.
8. The method according to any one of claims 1 to 7, wherein the adjusting the resolution and the frame rate of the optimal target video source according to the optimal display resolution and the optimal frame rate of a display device to match the display device specifically comprises:
acquiring the optimal resolution and the optimal frame rate of the display device according to the extended display identification data (EDID) of the display device;
and carrying out video scaling on an optimal target video source according to the optimal resolution, and adjusting the frame rate of the optimal target video source according to the optimal frame rate to obtain the optimal target video source with the optimal resolution and the optimal frame rate.
9. The method of any of claims 1-7, wherein the original video source has a resolution of one or more of 640 x 480, 800 x 600, 1024 x 768, 1280 x 720, 1280 x 1024, 1440 x 900, 1600 x 1200, 1920 x 1080.
10. A video adaptive scaling reshaping apparatus, comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor for performing the method of video adaptive scaling and shaping as claimed in any of claims 1-9.
CN202211395399.9A 2022-11-09 2022-11-09 Video self-adaptive scaling shaping processing method and device Pending CN115914749A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211395399.9A CN115914749A (en) 2022-11-09 2022-11-09 Video self-adaptive scaling shaping processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211395399.9A CN115914749A (en) 2022-11-09 2022-11-09 Video self-adaptive scaling shaping processing method and device

Publications (1)

Publication Number Publication Date
CN115914749A true CN115914749A (en) 2023-04-04

Family

ID=86490674

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211395399.9A Pending CN115914749A (en) 2022-11-09 2022-11-09 Video self-adaptive scaling shaping processing method and device

Country Status (1)

Country Link
CN (1) CN115914749A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination