CN113286057A - High-definition environment-friendly video processing method and system

High-definition environment-friendly video processing method and system

Info

Publication number
CN113286057A
Authority
CN
China
Prior art keywords
data
video
image
decoding
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110373240.6A
Other languages
Chinese (zh)
Inventor
白金涛
王月娥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainan Xinyun Technology Co ltd
Original Assignee
Hainan Xinyun Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hainan Xinyun Technology Co ltd
Priority to CN202110373240.6A
Publication of CN113286057A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/14: Picture signal circuitry for video frequency region
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/04: Synchronising
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/14: Picture signal circuitry for video frequency region
    • H04N5/21: Circuitry for suppressing or minimising disturbance, e.g. moiré or halo

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention provides a high-definition surround-view video processing method and system, and relates to the field of video data processing. The high-definition surround-view video processing method comprises the following steps: selecting a region of interest in the video image, the pixel width of the region of interest falling within a preset range, and oversampling the region of interest in the video image; receiving the oversampled data, recovering the oversampled data into parallel recovered data, performing data decoding on the parallel recovered data, and filtering; and acquiring the filtered vertical edge features and fusing the vertical edge features of each channel to form a composite feature image. The method can greatly reduce the error rate of video pixel data and improve the stability of video control signals. In addition, the invention also provides a high-definition surround-view video processing system, which comprises an oversampling module, a decoding and filtering module and an output module.

Description

High-definition surround-view video processing method and system
Technical Field
The invention relates to the field of video data processing, and in particular to a high-definition surround-view video processing method and system.
Background
A DVI (digital video interface) or HDMI (high-definition multimedia interface) link based on transition-minimized differential signaling has one clock channel (channel C) that carries the pixel clock and three color channels (channels 0 to 2) that carry the serial data of red (R), green (G) and blue (B). Each color channel converts the original 8-bit digital video signal into a transition-minimized 10-bit serial sequence using TMDS coding, and during the line/field blanking intervals the color channels carry special 10-bit control data. The usable clock frequency is 25 to 165 MHz, and the data rate of each color channel is ten times the clock frequency; that is, each color channel transmits ten serial bits in every clock period. The DVI or HDMI receiving end must use this relationship to recover the ten-bit serial data of each color channel, decode the original 8-bit pixel data from the recovered data, and extract the pixel data enable signal (DE), the line synchronization signal (HSync) and the field synchronization signal (VSync).
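To make the encoding relationship above concrete, the following Python sketch models the standard DVI/HDMI TMDS decoding rule for one 10-bit data word (bit 9 indicates whether the encoder inverted the byte, bit 8 selects XOR or XNOR decoding). This is a software reference model of the published TMDS rule, not the hardware decoding circuit claimed by the invention, and the bit-order convention used here is an assumption.

```python
def tmds_decode_10b(word: int) -> int:
    """Decode one 10-bit TMDS data word into the original 8-bit pixel value.

    Software reference sketch of the standard DVI/HDMI rule; the patent's
    decoder is a hardware circuit, and the bit numbering here (d[i] = bit i
    of the integer) is only a modelling convention.
    """
    d = [(word >> i) & 1 for i in range(10)]
    if d[9]:                          # bit 9 set: the encoder inverted the data byte
        d[:8] = [b ^ 1 for b in d[:8]]
    q = [0] * 8
    q[0] = d[0]
    for i in range(1, 8):
        if d[8]:                      # bit 8 set: XOR encoding was used
            q[i] = d[i] ^ d[i - 1]
        else:                         # bit 8 clear: XNOR encoding was used
            q[i] = 1 - (d[i] ^ d[i - 1])
    return sum(bit << i for i, bit in enumerate(q))
```

Decoding the ten bits of each color channel in every clock period in this way yields the original 8-bit R, G and B pixel data described above.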
A conventional digital video decoding circuit has an excessively high error rate when recovering the oversampled data. The conventional phase-jump circuit is based on cycling between four phases, so some oversampled data is lost or some data is selected twice. The conventional data detection circuit decides the error, left-shift and right-shift signals and the finally recovered data based only on the currently input 4-bit data, so its error rate is very high. The frame synchronization unit in a conventional digital video decoding circuit performs frame synchronization once in every clock period, which makes the frame-synchronized data unstable and produces excessive glitches in the extracted video pixel enable signal DE. A conventional digital video decoding circuit also lacks a channel synchronization unit and a video control signal filtering unit, both of which are necessary in practice: the channel synchronization unit removes the data skew that arises between channels during high-speed transmission, and the video control signal filtering unit removes the glitches in the extracted video control signals.
In an image recognition system, target positioning is the prerequisite of the whole recognition module, and its positioning accuracy largely determines the final recognition performance of the system. High-definition video has a large viewing range, a large amount of data to process and a more complex background; existing positioning methods designed for standard-definition images assume a simple background and a single target, and are difficult to apply directly to high-definition images. Positioning is the biggest difference between processing high-definition and standard-definition images, and given the requirements on real-time performance and accuracy, the problem of real-time positioning in high-definition video must be solved.
Disclosure of Invention
The aim of the invention is to provide a high-definition surround-view video processing method based on a digital video interface decoding circuit. The decoding circuit can recover the ten-bit serial data of each color channel, decode the original 8-bit pixel data from the recovered data, and extract the pixel data enable signal, line synchronization signal and field synchronization signal, which greatly reduces the error rate of the video pixel data and improves the stability of the video control signals.
Another aim of the present invention is to provide a high-definition surround-view video processing system capable of running the high-definition surround-view video processing method.
The embodiment of the invention is realized by the following steps:
in a first aspect, an embodiment of the present application provides a high-definition surround-view video processing method, which includes: selecting a region of interest in a video image, the pixel width of the region of interest falling within a preset range, and oversampling the region of interest in the video image; receiving the oversampled data, recovering the oversampled data into parallel recovered data, performing data decoding on the parallel recovered data, and filtering; and acquiring the filtered vertical edge features and fusing the vertical edge features of each channel to form a composite feature image.
In some embodiments of the present invention, receiving the oversampled data and recovering it into parallel recovered data includes: receiving the recovered data output by the oversampled-data recovery, and performing frame synchronization on the recovered data using the special encoding rule that HDMI applies in the video blanking interval.
In some embodiments of the present invention, the above further includes: and receiving the ring-through data after frame synchronization output by frame synchronization, and outputting the ring-through data after clock synchronization.
In some embodiments of the present invention, performing data decoding on the parallel recovered data includes: receiving the channel-synchronized data, and decoding the per-clock data of each channel into the original video pixel data using the HDMI decoding rule.
In some embodiments of the invention, the filtering includes: receiving the video control signals and filtering out glitch signals.
In some embodiments of the present invention, the above further includes: setting the aspect ratio and the sizes of the video image as a plurality of matching templates in the synthesized characteristic image, acquiring the corresponding image position in the synthesized characteristic image through convolution operation by using each matching template, and restoring the corresponding image position to the video image.
In some embodiments of the present invention, the above further includes: and in the image positions acquired by each matching template, if the image positions are overlapped, the positioned image positions are obtained after the image positions are subjected to de-overlapping according to the convolution operation result.
In some embodiments of the present invention, the above further includes: if the image positions do not overlap, the plurality of image positions are reserved as the positioned image positions.
In a second aspect, an embodiment of the present application provides a high-definition surround-view video processing system, which includes: an oversampling module, configured to select a region of interest in a video image, the pixel width of the region of interest falling within a preset range, and to oversample the region of interest in the video image; a decoding and filtering module, configured to receive the oversampled data, recover the oversampled data into parallel recovered data, perform data decoding on the parallel recovered data, and filter; and an output module, configured to acquire the filtered vertical edge features and fuse the vertical edge features of each channel to form a composite feature image.
In some embodiments of the invention, the system includes: at least one memory for storing computer instructions; and at least one processor in communication with the memory, wherein the at least one processor, when executing the computer instructions, causes the system to implement the oversampling module, the decoding and filtering module and the output module.
Compared with the prior art, the embodiment of the invention has at least the following advantages or beneficial effects:
the digital video interface decoding circuit and the method can recover ten-bit serial data of each color channel, can decode original 8-bit pixel data from the recovered data and extract a pixel data enable signal, a line synchronization signal and a field synchronization signal, greatly reduce the error rate of video pixel data and improve the stability of a video control signal. The method has the advantages that the processing speed can be improved without space change and image coding, the background interference is reduced by selecting the region of interest, the processing speed can be improved, the edge is subjected to differential operation without binarization processing, the purpose of rapid detection is also realized, and the positioning precision and the robustness are greatly improved by fusing color information.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a schematic step diagram of a high-definition surround-view video processing method according to an embodiment of the present invention;
fig. 2 is a detailed step diagram of a high-definition surround-view video processing method according to an embodiment of the present invention;
fig. 3 is a schematic block diagram of a high-definition surround-view video processing system according to an embodiment of the present invention.
Reference numerals: 10 - oversampling module; 20 - decoding and filtering module; 30 - output module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the individual features of the embodiments can be combined with one another without conflict.
Example 1
Referring to fig. 1, fig. 1 is a schematic diagram illustrating the steps of a high-definition surround-view video processing method according to an embodiment of the present invention, as follows:
s100, selecting an interested area in the video image, wherein the width of the pixel of the interested area reaches a preset range, and oversampling the interested area in the video image;
in some embodiments, a region of interest roi (regionointerval) is selected. In the actual image positioning module, the size of the image in the image is affected by the installation distance and the size of the acquired scene. The ROI is an image region selected from the image, which is delineated for further processing, and can be used to reduce processing time and increase accuracy.
In some embodiments, in a practical positioning scene, the pixel width of the target is generally 100 to 300 pixels. In the present invention a width range of 72 to 600 pixels is provided, and the user can set the positioning range as desired. Given that the camera is fixed, the target can only appear in a certain area of the image and reach a detectable pixel size. The user can therefore set the ROI according to the actual application scene, so that only a specific area of the high-definition picture is processed; this removes a large amount of background interference and allows candidate regions of different sizes to be extracted quickly and effectively.
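As a minimal illustration (not part of the patent text), a fixed rectangular ROI can be cut out of each frame before any further processing; the coordinate names and the example numbers below are assumptions for a fixed-camera installation.

```python
import numpy as np

def crop_roi(frame: np.ndarray, x: int, y: int, w: int, h: int) -> np.ndarray:
    """Return the user-configured region of interest of one video frame.

    Only this sub-image is handed to the later edge and template-matching
    steps, which removes most of the background and shrinks the data volume.
    """
    img_h, img_w = frame.shape[:2]
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(img_w, x + w), min(img_h, y + h)
    return frame[y0:y1, x0:x1]

# Hypothetical example: with a fixed camera, targets of 72-600 pixel width
# only ever appear in the lower half of a 1920x1080 frame.
# roi = crop_roi(frame, x=0, y=540, w=1920, h=540)
```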
Step S110, receiving the oversampled data, recovering the oversampled data into parallel recovered data, performing data decoding on the parallel recovered data, and filtering;
In some embodiments, the 40 bits of data per clock cycle formed by oversampling the HDMI serial data with a multiphase clock from the analog front end are received, and the 40-bit data are recovered into parallel ten-bit recovered data. The ten-bit recovered data of two clock cycles output by the quadruple-oversampling data recovery unit are then received to form 20-bit data, and frame synchronization of the ten-bit recovered data is performed using the special encoding rule that HDMI applies in the video blanking interval; that is, it is determined which group of ten consecutive bits within the 20-bit data is the HDMI ten-bit word produced by the digital video interface encoder from the 8-bit data of each chroma channel in one complete clock cycle. The pixel data enable signal DE is generated at the same time: DE high marks the active video interval and DE low marks the line/field blanking interval.
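The sketch below illustrates one way such frame synchronization can be modelled in software, assuming the 20 recovered bits of two clock cycles are available as an integer: the four standard TMDS blanking-interval control characters are searched for at every possible 10-bit alignment, and the alignment at which they appear marks the word boundary (a real frame synchronization unit would require the same alignment over many consecutive blanking words before locking, and would drive DE low while control characters are present). The constants are the control words as written in the DVI specification; their serial bit order and the exact (C1, C0) mapping should be treated as assumptions here.

```python
from typing import Optional

# The four 10-bit control characters transmitted during the line/field
# blanking interval, written MSB-first; values map (loosely) to (C1, C0).
TMDS_CTL_WORDS = {
    0b1101010100: (0, 0),
    0b0010101011: (0, 1),
    0b0101010100: (1, 0),
    0b1010101011: (1, 1),
}

def find_word_boundary(bits20: int) -> Optional[int]:
    """Return the bit offset of an aligned 10-bit word within a 20-bit window
    (two recovered clock cycles), or None if no control character is seen."""
    for offset in range(11):               # 11 possible alignments of a 10-bit word
        word = (bits20 >> offset) & 0x3FF
        if word in TMDS_CTL_WORDS:
            return offset                  # blanking interval found -> DE would be low
    return None
```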
Step S120, acquiring the filtered vertical edge features and fusing the vertical edge features of each channel to form a composite feature image.
In some embodiments, a horizontal difference operation is performed on each channel of the down-sampled image to obtain the vertical edge features of each channel image;
wherein, the horizontal difference calculation formula in each channel image is as follows:
edgeIm(i,j)=pucIm(i,j)-pucIm(i-1,j)
edgeIm is the vertical edge feature of the down-sampled channel, and pucIm is the down-sampled channel image; (i, j) denotes the current pixel position in the channel, and (i-1, j) denotes the previous pixel position in row j.
The composite feature image is obtained by:
edgeMerge=edgeY+|edgeU-edgeV|
edgeMerge is the composite feature image, and edgeY, edgeU and edgeV are the vertical edge features of the Y, U and V channels, respectively.
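Assuming the down-sampled ROI is available as separate Y, U and V channel arrays (the names and the use of NumPy are illustrative), the two formulas translate directly into code:

```python
import numpy as np

def vertical_edges(channel: np.ndarray) -> np.ndarray:
    """edgeIm(i, j) = pucIm(i, j) - pucIm(i - 1, j): difference with the
    previous pixel in the same row, which responds to vertical edges."""
    ch = channel.astype(np.int16)              # avoid uint8 wrap-around on subtraction
    edges = np.zeros_like(ch)
    edges[:, 1:] = ch[:, 1:] - ch[:, :-1]      # horizontal difference along each row
    return edges

def composite_feature(y: np.ndarray, u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """edgeMerge = edgeY + |edgeU - edgeV|."""
    return vertical_edges(y) + np.abs(vertical_edges(u) - vertical_edges(v))
```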
Example 2
Referring to fig. 2, fig. 2 is a detailed step diagram of a high-definition surround-view video processing method according to an embodiment of the present invention, as follows:
and step S200, receiving the recovery data output by the over-sampling data, and synchronizing the frames of the recovery data by using a special encoding rule of the HDMI in a video blanking interval.
Step S210, receiving the frame-synchronized ring-through data outputted by the frame synchronization, and outputting the ring-through data after clock synchronization.
Step S220, receiving the circular synchronization data, and decoding the data per clock of each channel into the original video pixel data by using the HDMI decoding rule.
In step S230, the video control signal is received and the glitch signal is filtered.
And step S240, setting the aspect ratio and the sizes of the video image as a plurality of matching templates in the synthesized characteristic image, acquiring the corresponding image position in the synthesized characteristic image through convolution operation by using each matching template, and restoring the corresponding image position to the video image.
And step S250, if the image positions are overlapped in the image positions acquired by each matching template, the positioned image positions are obtained after the image positions are subjected to de-overlapping according to the convolution operation result.
In step S260, if the image positions do not overlap, the plurality of image positions are retained as the positioned image positions.
In some embodiments, the frame-synchronized data of the three chrominance channels are received, and the data of the three channels are output after clock synchronization; because the different HDMI channels may acquire data skew during transmission or during oversampled-data recovery, channel synchronization is required. The channel-synchronized data output by the channel synchronization unit are received, and the ten-bit data of each channel per clock are decoded into the original 8-bit video pixel data using the HDMI decoding rule. The video control signals (DE, HSync and VSync) are received and their glitches filtered: glitches below three clock cycles are filtered out, while long pulses above three clock cycles are passed through without filtering.
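The control-signal filtering can be modelled per pixel clock as below; this is a behavioural sketch only (the threshold of three cycles follows the text above, and treating pulses of three or more cycles as passing is an assumption), not the hardware filtering unit.

```python
def filter_glitches(samples, min_stable: int = 3):
    """Filter a binary control signal (DE, HSync or VSync), one sample per
    pixel clock: the output only changes level once the new level has been
    seen for `min_stable` consecutive cycles, so 1- or 2-cycle glitches are
    suppressed while longer pulses pass (with a few cycles of latency)."""
    if not samples:
        return []
    out_level, run = samples[0], 0
    filtered = []
    for s in samples:
        if s == out_level:
            run = 0                       # input agrees with the current output level
        else:
            run += 1
            if run >= min_stable:         # new level has persisted long enough
                out_level, run = s, 0
        filtered.append(out_level)
    return filtered

# Example: the 2-cycle glitch disappears, the 5-cycle pulse survives.
# filter_glitches([0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0])
```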
Because the shooting distance and angle of the camera vary, the size of the target in the image varies, while its aspect ratio stays at about 1:3. The template is therefore set according to this ratio to 8 pixels in height and 24 pixels in width. In the present invention a scale factor with values of 3 to 25 is set, giving 23 levels in total. This forms 23 levels of matching templates, with a minimum height of 3 × 8 = 24 pixels, a maximum height of 25 × 8 = 200 pixels, a minimum width of 3 × 24 = 72 pixels and a maximum width of 25 × 24 = 600 pixels. In actual use only a few levels are needed, for example levels 3 to 6, corresponding to pixel heights of 24 to 48 and pixel widths of 72 to 144.
For the composite feature image edgeMerge, the image is divided into a plurality of rectangular frames larger than the matching template; the width and height of each rectangular frame are twice the template width and height respectively. A convolution operation is performed in each rectangular frame with the matching template to obtain a convolution result; that is, the convolution B' is computed in the current rectangular frame B using the template W:
B' = B ⊗ W    (3)
The local maximum of the convolution result is found in each rectangular frame; when this value is greater than a threshold Tplate, it is recorded as the position feature value Feature and the position (i, j) is taken as a candidate position. Since several targets may exist in the same image, every candidate position is located and the located position is mapped back into the video image. Matching templates of different sizes are convolved in turn to obtain the corresponding feature values Feature and candidate positions, and the candidate positions are mapped back into the video image.
The feature values obtained with templates of different sizes are then normalized, i.e. each feature value is divided by the size of its template, forming the new feature value Newfeature. For the same target, templates of different sizes all yield feature values, so the candidate positions overlap. If the candidate positions located by templates of different sizes have overlapping areas, the normalized feature values Newfeature are compared and the candidate position with the largest Newfeature is selected as the finally located position.
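A simplified sketch of the matching stage is given below, under stated assumptions: the template is an all-ones box (so the convolution reduces to a box sum of edge energy, computed here with an integral image), the search walks a plain grid rather than the patent's rectangular frames of twice the template size, and the threshold Tplate and overlap criterion are placeholder values. It is meant only to show how the scale levels, the feature normalization and the overlap resolution fit together.

```python
import numpy as np

def match_templates(edge_merge: np.ndarray, scales=range(3, 7),
                    base_h: int = 8, base_w: int = 24, t_plate: float = 1e4):
    """Return candidate detections (row, col, h, w, newfeature) on the
    composite feature image for box templates at several scale levels."""
    img = np.abs(edge_merge).astype(np.float64)
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)           # integral image for O(1) box sums

    def box_sum(r, c, h, w):
        return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

    candidates = []
    for s in scales:                               # e.g. levels 3..6 -> 24x72 .. 48x144
        h, w = s * base_h, s * base_w
        for r in range(0, img.shape[0] - h, h):
            for c in range(0, img.shape[1] - w, w):
                feature = box_sum(r, c, h, w)      # "Feature" at this position
                if feature > t_plate:
                    # divide by the template size -> "Newfeature", comparable across scales
                    candidates.append((r, c, h, w, feature / (h * w)))
    return candidates

def resolve_overlaps(candidates, overlap_thresh: float = 0.3):
    """Among overlapping candidates keep the one with the largest Newfeature."""
    def iou(a, b):
        r1, c1, h1, w1, _ = a
        r2, c2, h2, w2, _ = b
        ih = max(0, min(r1 + h1, r2 + h2) - max(r1, r2))
        iw = max(0, min(c1 + w1, c2 + w2) - max(c1, c2))
        inter = ih * iw
        return inter / float(h1 * w1 + h2 * w2 - inter) if inter else 0.0

    kept = []
    for cand in sorted(candidates, key=lambda c: c[4], reverse=True):
        if all(iou(cand, k) < overlap_thresh for k in kept):
            kept.append(cand)
    return kept
```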
Example 3
Referring to fig. 3, fig. 3 is a schematic module diagram of a high-definition surround-view video processing system according to an embodiment of the present invention, as follows:
the oversampling module 10 is configured to select a region of interest in the video image, the pixel width of the region of interest falling within a preset range, and to oversample the region of interest in the video image;
the decoding and filtering module 20 is configured to receive the oversampled data, recover the oversampled data into parallel recovered data, perform data decoding on the parallel recovered data, and filter;
and the output module 30 is configured to acquire the filtered vertical edge features and fuse the vertical edge features of each channel to form a composite feature image.
Also included are a memory, a processor, and a communication interface, which are electrically connected, directly or indirectly, to each other to enable transmission or interaction of data. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The memory may be used to store software programs and modules, and the processor may execute various functional applications and data processing by executing the software programs and modules stored in the memory. The communication interface may be used for communicating signaling or data with other node devices.
The Memory may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor may be an integrated circuit chip having signal processing capabilities. The Processor may be a general-purpose Processor including a Central Processing Unit (CPU), a Network Processor (NP), etc.; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components.
It will be appreciated that the configuration shown in fig. 3 is merely illustrative and may include more or fewer components than shown in fig. 3, or have a different configuration than shown in fig. 3. The components shown in fig. 3 may be implemented in hardware, software, or a combination thereof.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In summary, the present application provides a high-definition surround-view video processing method and system, which can recover the ten-bit serial data of each color channel, decode the original 8-bit pixel data from the recovered data, and extract the pixel data enable signal, line synchronization signal and field synchronization signal, thereby greatly reducing the error rate of the video pixel data and improving the stability of the video control signals. The positioning method needs no spatial transformation or image coding, which raises the processing speed; selecting the region of interest reduces background interference and further raises the processing speed; the edges are obtained by a difference operation without binarization, which also achieves fast detection; and fusing the color information greatly improves the positioning accuracy and robustness.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (10)

1. A high-definition surround-view video processing method, characterized by comprising:
selecting a region of interest in the video image, the pixel width of the region of interest falling within a preset range, and oversampling the region of interest in the video image;
receiving the oversampled data, recovering the oversampled data into parallel recovered data, performing data decoding on the parallel recovered data, and filtering;
and acquiring the filtered vertical edge features and fusing the vertical edge features of each channel to form a composite feature image.
2. The high-definition surround-view video processing method of claim 1, wherein receiving the oversampled data and recovering it into parallel recovered data comprises:
receiving the recovered data output by the oversampled-data recovery, and performing frame synchronization on the recovered data using the special encoding rule that HDMI applies in the video blanking interval.
3. The high-definition surround-view video processing method of claim 2, further comprising:
receiving the frame-synchronized channel data output by the frame synchronization, and outputting the channel data after clock synchronization.
4. The high-definition surround-view video processing method of claim 1, wherein performing data decoding on the parallel recovered data comprises:
receiving the channel-synchronized data, and decoding the per-clock data of each channel into the original video pixel data using the HDMI decoding rule.
5. The high-definition surround-view video processing method of claim 1, wherein the filtering comprises:
receiving the video control signals and filtering out glitch signals.
6. The high-definition surround-view video processing method of claim 1, further comprising:
setting a plurality of matching templates in the composite feature image according to the aspect ratio and sizes of the target in the video image, obtaining the corresponding image position in the composite feature image by convolution with each matching template, and mapping the corresponding image position back to the video image.
7. The high-definition surround-view video processing method of claim 6, further comprising:
among the image positions obtained by the matching templates, if image positions overlap, obtaining the positioned image position after removing the overlap according to the convolution results.
8. The high-definition surround-view video processing method of claim 7, further comprising:
if the image positions do not overlap, retaining the plurality of image positions as positioned image positions.
9. A high-definition surround-view video processing system, characterized by comprising:
an oversampling module, configured to select a region of interest in the video image, the pixel width of the region of interest falling within a preset range, and to oversample the region of interest in the video image;
a decoding and filtering module, configured to receive the oversampled data, recover the oversampled data into parallel recovered data, perform data decoding on the parallel recovered data, and filter;
and an output module, configured to acquire the filtered vertical edge features and fuse the vertical edge features of each channel to form a composite feature image.
10. The high-definition surround-view video processing system of claim 9, comprising:
at least one memory for storing computer instructions;
and at least one processor in communication with the memory, wherein the at least one processor, when executing the computer instructions, causes the system to implement the oversampling module, the decoding and filtering module and the output module.
CN202110373240.6A 2021-04-07 2021-04-07 High-definition environment-friendly video processing method and system Pending CN113286057A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110373240.6A CN113286057A (en) 2021-04-07 2021-04-07 High-definition environment-friendly video processing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110373240.6A CN113286057A (en) 2021-04-07 2021-04-07 High-definition environment-friendly video processing method and system

Publications (1)

Publication Number Publication Date
CN113286057A true CN113286057A (en) 2021-08-20

Family

ID=77276418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110373240.6A Pending CN113286057A (en) 2021-04-07 2021-04-07 High-definition environment-friendly video processing method and system

Country Status (1)

Country Link
CN (1) CN113286057A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104144344A (en) * 2013-05-10 2014-11-12 成都国腾电子技术股份有限公司 Digital video interface decoding circuit and method
CN104780329A (en) * 2014-01-14 2015-07-15 南京视威电子科技股份有限公司 Multi-picture separator capable of playing high-definition and standard-definition videos based on FPGA and multi-picture separation method based on FPGA
CN105374005A (en) * 2014-08-11 2016-03-02 Arm有限公司 Data processing systems
CN105512649A (en) * 2016-01-22 2016-04-20 大连楼兰科技股份有限公司 Method for positioning high-definition video real-time number plate based on color space


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210820)