CN108377353B - Video processing method applied to embedded system - Google Patents

Video processing method applied to embedded system

Info

Publication number
CN108377353B
Authority
CN
China
Prior art keywords
image
image data
frame
synthesizing
combined
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810105365.9A
Other languages
Chinese (zh)
Other versions
CN108377353A (en)
Inventor
黄艺山
王宇
朱宏
宋利伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
XIAMEN LENZ COMMUNICATION Inc
Original Assignee
XIAMEN LENZ COMMUNICATION Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by XIAMEN LENZ COMMUNICATION Inc
Priority to CN201810105365.9A
Publication of CN108377353A
Application granted
Publication of CN108377353B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0117Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving conversion of the spatial resolution of the incoming video signal
    • H04N7/012Conversion between an interlaced and a progressive signal

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Systems (AREA)

Abstract

The invention discloses a video processing method applied to an embedded system. The method synthesizes odd-field image data and even-field image data into one frame of image, obtains all of the luminance signals from half of the luminance signals of that frame, and then combines all of the luminance signals with all of the color difference signals of the frame to synthesize a complete image.

Description

Video processing method applied to embedded system
Technical Field
The invention relates to the field of video processing, in particular to a video processing method applied to an embedded system.
Background
A moving picture is composed of a series of consecutive still pictures, each of which is called a frame, and the number of still pictures per second is called the frame rate (fps). Interlaced scanning divides the pixels of one frame into two fields, one formed by the odd lines and one by the even lines, and scans these two fields alternately.
Conventional digital cameras cannot continuously capture progressive images because of limits on hardware speed and buffer memory size, so they capture interlaced images instead; an interlaced image carries half the information of a progressive one, which nearly halves the required hardware speed and buffer memory. However, the two fields of a frame are captured at different moments, so a perfect result can never be obtained simply by weaving them together. For example, with a digital camera capturing sixty fields per second, the first field is captured at 1/60 s and the second at 2/60 s. If the subject does not move, the two fields combine into a seemingly perfect image; but if the subject moves, the contents of the two fields differ considerably and the combined image shows a jagged, sawtooth-like effect.
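The weaving operation itself is simple; the sketch below, in C, is a minimal illustration of it (it is not taken from the patent, and the `weave_fields` name, the buffer layout, and the convention that the odd field supplies the top line are all assumptions). It also makes the cause of the artifact visible: adjacent lines of the woven frame come from fields captured at different instants.

```c
/*
 * Illustrative sketch (not from the patent): weave an odd field and an even
 * field, each holding height/2 lines, into one width x height frame.
 * Assumption: the odd field carries the top line of the frame.
 */
#include <stdint.h>
#include <string.h>

void weave_fields(const uint8_t *odd_field, const uint8_t *even_field,
                  uint8_t *frame, int width, int height)
{
    for (int line = 0; line < height; line++) {
        const uint8_t *src = (line % 2 == 0)
            ? odd_field  + (size_t)(line / 2) * width   /* lines 0, 2, 4, ... */
            : even_field + (size_t)(line / 2) * width;  /* lines 1, 3, 5, ... */
        memcpy(frame + (size_t)line * width, src, (size_t)width);
    }
}
```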
Existing embedded devices have very limited capability for floating-point reading and computation. Many software algorithms in existing schemes achieve good image-processing results, but the weak computing power of embedded devices means they cannot be implemented there. The available video de-interlacing algorithms therefore fail to meet the requirement, the video always shows sawtooth or horizontal-stripe defects, and its visual quality is poor.
Disclosure of Invention
The present invention aims to solve the above problems of the prior art and provides a video processing method applied to an embedded system that eliminates the sawtooth and horizontal-stripe artifacts.
The method comprises the following steps:
step S1, alternately scanning line by line to obtain odd-field image data and even-field image data;
step S2, synthesizing the odd-field image data obtained by line-by-line scanning at the previous moment and the even-field image data obtained by line-by-line scanning at the current moment into one frame of image;
step S3, processing half of the luminance signals of the synthesized frame to obtain all of the luminance signals;
step S4, combining all of the luminance signals with all of the color difference signals of the synthesized frame into a complete image.
In this embodiment, preferably, in step S3, half of the luminance signals of the synthesized frame are taken and processed with a quadratic linear interpolation algorithm to obtain all of the luminance signals.
In step S2, the even-field image data obtained by line-by-line scanning at the previous moment and the odd-field image data obtained by line-by-line scanning at the current moment may instead be synthesized into one frame of image.
The invention synthesizes the odd-field and even-field signals, extracts half of the luminance signals, and processes that half with a quadratic linear interpolation algorithm to obtain all of the luminance signals; processing only half of the luminance signals speeds up the algorithm. The color difference signals and the luminance signals are then combined into a complete picture, which eliminates the sawtooth and horizontal stripes. Compared with the prior art, the video processing method applied to an embedded system overcomes the limitation that embedded devices, with their poor floating-point reading and computation capability, cannot effectively remove video sawtooth or horizontal stripes: the software algorithm is applied to only half of the luminance signals to obtain all of them, which greatly improves the processing speed, while the color difference signals of the original image are preserved, so the image retains the authenticity of the original data to the greatest extent and the sawtooth and horizontal-stripe problems are effectively solved.
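A minimal sketch of steps S3 and S4 follows, under assumptions the patent does not state: the image is held in planar YUV form, the "half of the luminance signals" are the luminance lines belonging to one field, the quadratic linear interpolation is read here as a simple vertical linear interpolation between the neighbouring kept lines, and the function and parameter names are hypothetical. Only integer additions and a division by two are used, which suits the weak floating-point capability of embedded devices; the color difference signals of the synthesized frame are passed through unchanged.

```c
/*
 * Illustrative sketch of step S3 (assumptions: planar luminance plane of a
 * woven frame, height >= 2, names are hypothetical).
 * keep_parity = 0 keeps lines 0, 2, 4, ...; keep_parity = 1 keeps 1, 3, 5, ...
 * The lines of the other parity are rebuilt by vertical linear interpolation.
 */
#include <stdint.h>
#include <stddef.h>

void rebuild_luma(uint8_t *luma, int width, int height, int keep_parity)
{
    for (int line = 0; line < height; line++) {
        if ((line & 1) == keep_parity)
            continue;                                   /* kept as scanned */

        const uint8_t *above = (line > 0)          ? luma + (size_t)(line - 1) * width : NULL;
        const uint8_t *below = (line < height - 1) ? luma + (size_t)(line + 1) * width : NULL;
        uint8_t *dst = luma + (size_t)line * width;

        for (int x = 0; x < width; x++) {
            if (above && below)
                dst[x] = (uint8_t)(((int)above[x] + (int)below[x] + 1) / 2);
            else
                dst[x] = above ? above[x] : below[x];   /* frame border: copy */
        }
    }
}
```

Because every rebuilt line depends only on the kept lines directly above and below it, the interpolation can run in place on the woven luminance plane; for step S4 the rebuilt luminance plane is simply paired with the frame's unchanged color difference planes.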
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the invention without limiting it. In the drawings:
Fig. 1 is a schematic diagram of the video processing method applied to an embedded system according to the present invention.
Detailed Description
In order to make the technical problems to be solved, the technical solutions, and the advantageous effects of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
Example 1
The invention provides a video processing method applied to an embedded system; its principle is shown in fig. 1.
The method specifically comprises the following steps:
step S1, alternately scanning line by line to obtain odd-field image data and even-field image data;
step S2, synthesizing the odd-field image data obtained by line-by-line scanning at the previous moment and the even-field image data obtained by line-by-line scanning at the current moment into one frame of image;
step S3, processing half of the luminance signals of the synthesized frame to obtain all of the luminance signals;
step S4, combining all of the luminance signals with all of the color difference signals of the synthesized frame into a complete image.
In this embodiment of the present invention, all of the luminance signals in step S3 are preferably obtained by processing half of the luminance signals of the synthesized frame with a quadratic linear interpolation algorithm; it should be noted that those skilled in the art may also obtain all of the luminance signals in other ways.
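As a usage illustration only, the hypothetical driver below strings the two sketches above together into the pipeline of this example (steps S2 to S4); the field buffers, dimensions, and planar layout are assumptions, and the patent does not prescribe this code.

```c
/*
 * Hypothetical driver for one output frame, reusing weave_fields() and
 * rebuild_luma() from the earlier sketches. All names are illustrative.
 */
#include <stdint.h>

void weave_fields(const uint8_t *odd_field, const uint8_t *even_field,
                  uint8_t *frame, int width, int height);
void rebuild_luma(uint8_t *luma, int width, int height, int keep_parity);

void process_frame(const uint8_t *odd_y, const uint8_t *even_y,
                   uint8_t *out_y, int width, int height)
{
    /* Step S2: weave the two luminance fields into one frame. */
    weave_fields(odd_y, even_y, out_y, width, height);

    /* Step S3: keep only the luminance lines of one field (half of the
     * luminance signals) and rebuild the other lines by interpolation. */
    rebuild_luma(out_y, width, height, /* keep_parity = */ 0);

    /* Step S4: the rebuilt luminance plane, together with the frame's
     * color difference planes (woven the same way and left unchanged),
     * forms the complete output image. */
}
```

For Example 2 the same driver applies with the field order swapped, i.e. the even field scanned at the previous moment is woven with the odd field scanned at the current moment.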
Example 2
The invention provides a video processing method applied to an embedded system, which specifically comprises the following steps:
step S1, alternately scanning line by line to obtain odd-field image data and even-field image data;
step S2, synthesizing the even-field image data obtained by line-by-line scanning at the previous moment and the odd-field image data obtained by line-by-line scanning at the current moment into one frame of image;
step S3, processing half of the luminance signals of the synthesized frame with a quadratic linear interpolation algorithm to obtain all of the luminance signals;
step S4, combining all of the luminance signals with all of the color difference signals of the synthesized frame into a complete image.
The above description presents preferred embodiments of the invention, but it should be understood that the invention is not limited to these embodiments, which should not be viewed as excluding other embodiments. Modifications made by those skilled in the art in light of this disclosure, using knowledge that is well known in the art, are also to be considered within the scope of this invention.

Claims (2)

1. A video processing method applied to an embedded system is characterized by comprising the following steps:
step S1, alternately scanning line by line to obtain odd-field image data and even-field image data;
step S2, synthesizing the odd-field image data obtained by line-by-line scanning at the previous moment and the even-field image data obtained by line-by-line scanning at the current moment into one frame of image;
step S3, processing half of the luminance signals of the synthesized frame with a quadratic linear interpolation algorithm to obtain all of the luminance signals;
step S4, combining all of the luminance signals with all of the color difference signals of the synthesized frame into a complete image.
2. A video processing method applied to an embedded system is characterized by comprising the following steps:
step S1, alternately scanning line by line to obtain odd-field image data and even-field image data;
step S2, synthesizing the even-field image data obtained by line-by-line scanning at the previous moment and the odd-field image data obtained by line-by-line scanning at the current moment into one frame of image;
step S3, processing half of the luminance signals of the synthesized frame with a quadratic linear interpolation algorithm to obtain all of the luminance signals;
step S4, combining all of the luminance signals with all of the color difference signals of the synthesized frame into a complete image.
CN201810105365.9A 2018-02-02 2018-02-02 Video processing method applied to embedded system Active CN108377353B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810105365.9A CN108377353B (en) 2018-02-02 2018-02-02 Video processing method applied to embedded system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810105365.9A CN108377353B (en) 2018-02-02 2018-02-02 Video processing method applied to embedded system

Publications (2)

Publication Number Publication Date
CN108377353A CN108377353A (en) 2018-08-07
CN108377353B true CN108377353B (en) 2020-06-12

Family

ID=63017176

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810105365.9A Active CN108377353B (en) 2018-02-02 2018-02-02 Video processing method applied to embedded system

Country Status (1)

Country Link
CN (1) CN108377353B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101060640A (en) * 2006-02-02 2007-10-24 三星电子株式会社 Apparatus and methods for processing video signals
WO2008051343A3 (en) * 2006-10-23 2008-07-03 Lsi Corp Reduced memory and bandwidth motion adaptive video deinterlacing
CN103475838A (en) * 2013-06-21 2013-12-25 青岛海信信芯科技有限公司 Deinterlacing method based on edge self adaption

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8405672B2 (en) * 2009-08-24 2013-03-26 Samsung Display Co., Ltd. Supbixel rendering suitable for updating an image with a new portion

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101060640A (en) * 2006-02-02 2007-10-24 三星电子株式会社 Apparatus and methods for processing video signals
WO2008051343A3 (en) * 2006-10-23 2008-07-03 Lsi Corp Reduced memory and bandwidth motion adaptive video deinterlacing
CN101529923A (en) * 2006-10-23 2009-09-09 Lsi公司 Reduced memory and bandwidth motion adaptive video deinterlacing
CN103475838A (en) * 2013-06-21 2013-12-25 青岛海信信芯科技有限公司 Deinterlacing method based on edge self adaption

Also Published As

Publication number Publication date
CN108377353A (en) 2018-08-07

Similar Documents

Publication Publication Date Title
US7652721B1 (en) Video interlacing using object motion estimation
US6118488A (en) Method and apparatus for adaptive edge-based scan line interpolation using 1-D pixel array motion detection
US5465119A (en) Pixel interlacing apparatus and method
US8139123B2 (en) Imaging device and video signal generating method employed in imaging device
US8749703B2 (en) Method and system for selecting interpolation as a means of trading off judder against interpolation artifacts
WO2009145201A1 (en) Image processing device, image processing method, and imaging device
US8749699B2 (en) Method and device for video processing using a neighboring frame to calculate motion information
US20160094803A1 (en) Content adaptive telecine and interlace reverser
US20100177239A1 (en) Method of and apparatus for frame rate conversion
US7130347B2 (en) Method for compressing and decompressing video data in accordance with a priority array
US6947094B2 (en) Image signal processing apparatus and method
US20060061690A1 (en) Unit for and method of sharpness enchancement
CN104820966B (en) Asynchronous many video super-resolution methods of registration deconvolution during a kind of sky
JP2008148315A (en) Method and apparatus for reconstructing image
US20030218621A1 (en) Method and system for edge-adaptive interpolation for interlace-to-progressive conversion
JPH01318376A (en) Method of suppressing static picture flicker and method of detecting movement of displayed object
CN101662681B (en) A method of determining field dominance in a sequence of video frames
KR20060135667A (en) Image format conversion
JP2005318623A (en) Film mode extrapolation method, film mode detector, and motion compensation apparatus
CN108377353B (en) Video processing method applied to embedded system
WO2017101348A1 (en) Method and device for deinterlacing interlaced videos
US20040066466A1 (en) Progressive conversion of interlaced video based on coded bitstream analysis
KR20030019244A (en) Methods and apparatus for providing video still frame and video capture features from interlaced video signals
JP2006332904A (en) Contour emphasizing circuit
KR20060132877A (en) Method and apparatus for deinterlacing of video using motion compensated temporal interpolation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant