WO2021119968A1 - Video fusion method and apparatus, and storage medium - Google Patents

Video fusion method and apparatus, and storage medium

Info

Publication number
WO2021119968A1
WO2021119968A1 (application PCT/CN2019/125801, CN2019125801W)
Authority
WO
WIPO (PCT)
Prior art keywords
video
signal
component
data
fusion
Prior art date
Application number
PCT/CN2019/125801
Other languages
French (fr)
Chinese (zh)
Inventor
杨剑 (Yang Jian)
Original Assignee
威创集团股份有限公司 (Vtron Group Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 威创集团股份有限公司 (Vtron Group Co., Ltd.)
Priority to PCT/CN2019/125801 (WO2021119968A1)
Priority to CN201980003225.3A (CN111095919B)
Publication of WO2021119968A1

Classifications

    • H04N7/01 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/234309 Reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements, by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/440218 Reformatting operations of video signals for household redistribution, storage or real-time display, by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4

Definitions

  • the present invention relates to the technical field of image processing, in particular to a video fusion method, device and storage medium.
  • FPGA: Field Programmable Gate Array
  • Video fusion technology is a branch of virtual reality technology, and it can also be said to be a development stage of virtual reality.
  • Video fusion technology refers to the fusion of one or more image sequence videos about a certain scene or model collected by a video acquisition device with a virtual scene related to it to generate a new virtual scene or model about this scene.
  • growing display demands place increasingly high requirements on the FPGA used for signal processing, such as bandwidth and signal quality, and the complexity of the logic processing also increases accordingly. Therefore, to better display the information carried by the video, it is essential to improve the fusion of video images processed by the FPGA and to improve the quality of the fused video while guaranteeing picture quality and real-time performance.
  • the embodiments of the present invention provide a video fusion method, device, and storage medium, which address the technical problem that edge abnormalities caused by color-format changes in existing video fusion processing lead to poor fused-video quality.
  • a video fusion method includes the following steps:
  • the video signal is an RGB video signal
  • the processing step of obtaining the video data includes:
  • the YC444 signal having the Y component and the C component is converted into a YC422 video signal, and the data in the YC422 video signal is the processed video data.
  • if the three components R, G, and B of the RGB video signal are all within a preset value range of the controller, the luminance Y component value of the converted video signal is set to 0;
  • if the Y component value of the YC444 signal is already 0, its lowest bit is set to 1; otherwise the Y component value of the YC444 signal keeps its original value and is output;
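A minimal sketch of this luminance-keying rule, assuming 8-bit components and a hypothetical controller key range (the integer BT.601 luma approximation is an assumption, not taken from the patent):

```python
def process_luma(r, g, b, key_range):
    """Return the processed Y component for one pixel.

    key_range: ((r_lo, r_hi), (g_lo, g_hi), (b_lo, b_hi)) -- the
    controller's preset value range (hypothetical representation).
    """
    # Integer full-range BT.601 luma approximation (one common RGB->Y formula).
    y = (77 * r + 150 * g + 29 * b) >> 8
    (r_lo, r_hi), (g_lo, g_hi), (b_lo, b_hi) = key_range
    if r_lo <= r <= r_hi and g_lo <= g <= g_hi and b_lo <= b <= b_hi:
        return 0   # pixel falls in the key range: mark it with Y = 0
    if y == 0:
        return 1   # naturally zero luma: set the lowest bit so it is not keyed
    return y       # otherwise keep the original value

# Example: a hypothetical pure-blue key range
key = ((0, 10), (0, 10), (245, 255))
```

The lowest-bit trick keeps keyed pixels (Y = 0) distinguishable from genuinely black pixels at the cost of one least-significant bit of luma.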
  • the controller controls whether the converted YC444 signal outputs its Y component directly or outputs the Y component after the above luminance processing;
  • the YC444 signal has odd-numbered and even-numbered points. If the Y component of an odd-numbered point of the YC444 signal is 0, the C component of that point takes the average of its own C component and the C component of the next point. If the Y component of an even-numbered point is 0, the C component takes the average of its own C component and the C component of the previous point. When the Y component is not 0, odd and even points are not distinguished, and the C component takes the average of its own C component and the C component of the next point.
  • the processing step of obtaining the video fusion data includes:
  • if the Y component value of the YC422 video signal is 0 and the control signal sent by the controller is 0, the YC422 video signal data whose Y component value is 0 is removed; the remaining YC422 video signal data whose Y component is not 0 is the processed YC422 video signal data. If the Y component value in the YC422 video signal is 0 and the control signal sent by the controller is 1, the YC422 video signal data whose Y component value is not 0 is removed; the remaining YC422 video signal data whose Y component is 0 is the processed YC422 video signal data.
  • fusion processing is performed, under the control of the controller, on the processed YC422 video signal data and the RGB signal data to obtain the video fusion data.
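The removal/retention logic above amounts to a per-pixel keying decision. A minimal sketch, assuming the control values 0/1 select which side of the key survives (the function name and the merge rule are illustrative, not from the patent):

```python
def fuse_pixel(y, fg_rgb, bg_rgb, control):
    """Select the foreground or background pixel based on the Y key.

    control = 0: drop keyed (Y == 0) foreground pixels, i.e. show the
                 background there and the foreground everywhere else.
    control = 1: keep only keyed pixels from the foreground.
    (This interpretation of the control values is an assumption.)
    """
    keyed = (y == 0)
    if control == 0:
        return bg_rgb if keyed else fg_rgb
    return fg_rgb if keyed else bg_rgb
```

Applied over a whole frame, this yields the fused RGB output described in the claims.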
  • the present invention also provides a video fusion device, which includes a controller, and a to-be-fused video signal processing module and a video fusion processing module that are respectively connected to the controller;
  • the to-be-fused video signal processing module is used to perform brightness and chroma processing on the to-be-fused video signal to obtain video data;
  • the video fusion processing module is used to perform fusion processing on the video data to obtain video fusion data
  • the controller is used to control whether the video fusion data is fused with the video.
  • the to-be-fused video signal processing module includes a video signal acquisition unit, a first signal conversion unit, a brightness processing unit, a first selection unit, a second signal conversion unit, and a first output unit;
  • the video signal to be fused obtained by the video signal acquisition unit is an RGB video signal
  • the video signal acquisition unit is respectively connected to the first signal conversion unit and the brightness processing unit
  • the first signal conversion unit is connected to the first selection unit
  • the brightness processing unit is further connected to the first selection unit
  • the first selection unit is further connected to the second signal conversion unit
  • the second signal conversion unit is connected to the first output unit
  • the controller is respectively connected to the brightness processing unit and the first selection unit
  • the first output unit is connected to the video fusion processing module.
  • the first signal conversion unit is used to convert the RGB video signal into a YC444 signal
  • the brightness processing unit uses the brightness Y component for signal processing
  • the controller controls the first selection unit to output a YC444 signal.
  • the second signal conversion unit is used to convert the YC444 signal with Y component and C component into YC422 video signal
  • the YC422 video signal is sent from the first output unit to the video fusion processing module.
  • the video fusion processing module includes a third signal conversion unit, a video fusion processing unit, a second selection unit, and a second output unit;
  • the first output unit is respectively connected to the third signal conversion unit and the video fusion processing unit; the third signal conversion unit is also connected to the second selection unit; the video fusion processing unit is also connected to the second selection unit; the second selection unit is also connected to the second output unit; and the controller is also connected to the video fusion processing unit and the second selection unit respectively;
  • the third signal conversion unit converts the YC422 video signal into an RGB signal
  • the video fusion processing unit removes or retains the YC422 video signal data whose Y component value is 0
  • the controller controls whether the second selection unit selects the processed YC422 video signal data and the RGB signal for fusion processing to obtain the video fusion data; the second output unit outputs the RGB signal after video fusion.
  • the present invention also provides a storage medium, including a processor and a memory;
  • the memory is used to store program code and transmit the program code to the processor
  • the processor is configured to execute the aforementioned video fusion method according to the instructions in the program code.
  • the video fusion method obtains the video signal to be fused and the video; performs brightness and chrominance processing on the video signal to obtain the processed video data; performs fusion processing on the video data to obtain the video fusion data; and fuses the video fusion data with the video. The brightness and chroma of the YC422-format video in the video signal are processed, and the brightness data is used to separate out the required fused video data.
  • because data loss occurs for the chroma component during the RGB-to-YC conversion, special processing is performed on the odd and even points; and based on the YCbCr chrominance and brightness processing, the quality of the fused content signal is effectively improved.
  • under transmission and processing in the lower-bandwidth YC422 video format, this method solves the abnormal-edge problem caused by the change of video color format, effectively improves the quality of the fused video, and solves the technical problem that edge abnormalities caused by color-format changes in existing video fusion processing lead to poor fused-video quality;
  • the to-be-fused video signal processing module of the video fusion device exploits the fact that the brightness component loses no data during the conversion of RGB signals into YC, and separates the required fusion data through the brightness Y data; because the chrominance components do lose data during the RGB-to-YC conversion, special processing is performed on the odd and even points, and the video signal carrying the video data is then output.
  • the video signal of the output video data is input to the video fusion processing module.
  • the video fusion processing module, in the process of converting the YC signal into the RGB signal, processes the data for which the three components R, G, and B of the converted RGB signal fall within the range set in the controller.
  • the video fusion data is fused with other videos to obtain a high-quality fused video, which improves the quality of the fused video and solves the technical problem that edge abnormalities caused by color-format changes in existing video fusion processing lead to poor fused-video quality.
  • FIG. 1 is a flowchart of steps of a video fusion method according to an embodiment of the present invention
  • FIG. 2 is a flowchart of steps of video data processing in a video fusion method according to an embodiment of the present invention
  • FIG. 3 is a flowchart of steps of video fusion data processing in a video fusion method according to an embodiment of the present invention
  • Fig. 4 is a frame diagram of a video fusion device according to an embodiment of the present invention.
  • the described RGB video signal is a video signal in the RGB format
  • the YC422 video signal is a video signal in the YC422 format
  • the RGB signal is a video signal in the RGB format
  • the YC444 signal is a video signal in the YC444 format.
  • the letter Y represents the brightness (luminance) signal
  • the letter C represents the color-difference (chrominance) signal
  • the letters YC represent the composite signal of brightness and color difference.
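For concreteness, a brightness/color-difference converter of this kind might use the full-range BT.601 matrix; the patent does not specify the coefficients, so this is only one common choice:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr for 8-bit components (one common matrix)."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return round(y), round(cb), round(cr)

def ycbcr_to_rgb(y, cb, cr):
    """Inverse full-range BT.601 mapping; rounding keeps the round trip nearly lossless."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return round(r), round(g), round(b)
```

Note that Y survives the round trip exactly, while Cb/Cr incur rounding error; this is the asymmetry the method exploits by keying on the brightness component.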
  • the embodiments of the present application provide a video fusion method, device, and storage medium, which address the technical problem that edge abnormalities caused by color-format changes in existing video fusion processing lead to poor fused-video quality.
  • FIG. 1 is a flowchart of the steps of the video fusion method according to the embodiment of the present invention.
  • an embodiment of the present invention provides a video fusion method, which includes the following steps:
  • the video signal to be fused is an RGB video signal.
  • in step S2 of the embodiment of the present invention, the RGB video signal is converted into a YC444 signal, luminance and chrominance processing is performed on the YC444 signal in the YCbCr domain to obtain the processed video data, and the processed YC444 video data is converted into a YC422 video signal for output.
  • step S3 in the embodiment of the present invention the YC422 video signal is converted into an RGB signal.
  • when the three components R, G, and B of the converted RGB signal are within the range set in the controller, the Y component value in the YC422 video signal is set to 0; the YC422 video signal data whose Y component value is 0 is removed or retained, to obtain the processed YC422 video signal data; and the processed YC422 video signal data and the RGB signal data are fused to obtain the video fusion data.
  • step S4 in the embodiment of the present invention the controller controls the video fusion data to be fused with other videos to obtain a fused video.
  • the video fusion method obtains the video signal to be fused and the video; performs brightness and chroma processing on the video signal to obtain the processed video data; performs fusion processing on the video data to obtain the video fusion data; and fuses the video fusion data with the video. The brightness and chroma of the YC422-format video in the video signal are processed, and the brightness data, which suffers no data loss during the RGB-YC conversion, is used to separate out the required fused video data; because the chrominance component does lose data during the RGB-YC conversion, special processing is performed on the odd and even points; and based on the YCbCr chrominance and brightness processing, the quality of the fused content signal is effectively improved.
  • under transmission and processing in the lower-bandwidth YC422 video format, this method solves the edge-abnormality problem caused by the color-format change of the fused video, effectively improves the quality of the fused video, and solves the technical problem that edge abnormalities caused by color-format changes in existing video fusion processing lead to poor fused-video quality.
  • Fig. 2 is a flowchart of steps of video data processing in a video fusion method according to an embodiment of the present invention.
  • the video signal is an RGB video signal.
  • the processing steps for obtaining video data include:
  • the YC444 signal with the Y component and the C component is converted into a YC422 video signal, and the data in the YC422 video signal is the processed video data.
  • step S21 the RGB video signal is converted into a YC444 signal
  • in step S22, if the three components R, G, and B of the RGB video signal are within the preset value range of the controller, the Y component value of the video signal is set to 0. If the Y component value of the YC444 signal is 0, the lowest bit is set to 1; in other cases the Y component value of the YC444 signal keeps its original value.
  • the Y component has 8 bits of data, and the lowest bit is the 0th bit of the Y component.
  • when the control value is 0, the controller controls the converted YC444 signal to output its Y component directly; when the control value is 1, the controller controls the converted YC444 signal to output the Y component after the luminance processing above. The YC444 signal also outputs the C component. The YC444 signal has odd and even points: if the Y component of an odd-numbered point of the YC444 signal is 0, the C component of that point takes the average of its own C component and the C component of the next point;
  • if the Y component of an even-numbered point is 0, the C component of the YC444 signal takes the average of its own C component and the C component of the previous point;
  • when the Y component is not 0, odd and even points are not distinguished, and the C component in the YC444 signal takes the average of its own C component and the C component of the next point.
  • for example, the YC444 signal has a resolution of 1920*1080@60Hz, with 1920 pixels per row: the first pixel is an odd point, the second an even point, the third an odd point, the fourth an even point, and so on. When the Y component of the second pixel is 0, the C component of the second pixel is the average of the C components of the first and second pixels; when the Y component of the third pixel is 0, the C component of the third pixel is the average of the C components of the third and fourth pixels. When the Y component is not 0, odd and even points are not distinguished, and the C component of a point is the average of its own C component and that of the next point.
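The parity rule in the example above can be sketched for one row as follows (a hypothetical helper; the boundary clamp at the row end is an assumption the patent does not specify):

```python
def yc444_to_yc422_chroma(ys, cs):
    """Parity-aware chroma handling for one row of a YC444 signal.

    ys, cs: per-pixel Y and C (Cb or Cr) values of the row.
    Pixel 1 is an odd point, pixel 2 an even point, and so on
    (1-based parity, as in the 1920-pixel-row example).
    """
    n = len(ys)
    out = []
    for i in range(n):
        odd = (i % 2 == 0)          # index 0 is the 1st pixel -> odd point
        if ys[i] == 0 and odd:
            j = min(i + 1, n - 1)   # keyed odd point: average with next point
        elif ys[i] == 0:
            j = i - 1               # keyed even point: average with previous point
        else:
            j = min(i + 1, n - 1)   # Y != 0: always average with next point
        out.append((cs[i] + cs[j]) / 2)
    return out
```

This biases the averaging away from keyed neighbors, which is what suppresses the abnormal chroma edges after YC422 subsampling.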
  • in step S2 of the embodiment of the present invention, the brightness data, which loses no data during the conversion of the RGB video signal into the YC signal, is used to separate the required fused video data; because the chrominance components do lose data during the conversion, special processing is applied to the odd and even points of the YC444 signal. Based on the above brightness and chrominance processing of the video signal, the quality of the fused video signal is effectively improved, thereby improving the quality of the fused video.
  • in the course of video signal processing, it also solves the fused-video quality problem caused by chrominance loss in the processed video signal.
  • Fig. 3 is a flowchart of steps of video fusion data processing in the video fusion method according to an embodiment of the present invention.
  • the processing step of obtaining video fusion data includes:
  • the fusion processing is performed according to the YC422 video signal data and the RGB signal data processed by the controller to obtain the video fusion data.
  • in step S3, the YC422 video signal is converted into an RGB signal in the process of obtaining the video fusion data.
  • when the three components R, G, and B of the converted RGB signal are within the range set in the controller, the Y component value in the YC422 video signal is set to 0; the YC422 video signal data whose Y component value is 0 is then removed or retained, to obtain the processed YC422 video signal data.
  • step S4 in the embodiment of the present invention the obtained video fusion data is fused with other videos to obtain a video with high quality after fusion.
  • Fig. 4 is a frame diagram of a video fusion device according to an embodiment of the present invention.
  • an embodiment of the present invention provides a video fusion device, including a controller 10, and a to-be-fused video signal processing module 20 and a video fusion processing module 30 that are respectively connected to the controller 10;
  • the to-be-fused video signal processing module 20 is used to perform brightness and chroma processing on the to-be-fused video signal to obtain video data;
  • the video fusion processing module 30 is configured to perform fusion processing on video data to obtain video fusion data
  • the controller 10 is used to control whether the video fusion data is fused with the video.
  • the to-be-fused video signal processing module 20 exploits the fact that the brightness component loses no data during the conversion of RGB signals into YC, and separates the required fused data through the brightness Y data; because the chrominance components do lose data during the RGB-to-YC conversion, special processing is performed on the odd and even points, and the video signal carrying the video data is then output.
  • the video signal of the output video data is input to the video fusion processing module 30.
  • the video fusion processing module 30 performs the process of converting the YC signal into the RGB signal.
  • the to-be-fused video signal processing module 20 in the embodiment of the present invention includes a video signal acquisition unit 21, a first signal conversion unit 22, a brightness processing unit 23, a first selection unit 24, a second signal conversion unit 25, and a first output unit 26;
  • the video signal acquisition unit 21 acquires the video signal to be fused as an RGB video signal.
  • the video signal acquisition unit 21 is respectively connected to the first signal conversion unit 22 and the brightness processing unit 23, and the first signal conversion unit 22 is respectively connected to the first selection unit 24.
  • the brightness processing unit 23 is also connected to the first selection unit 24, the first selection unit 24 is also connected to the second signal conversion unit 25, the second signal conversion unit 25 is connected to the first output unit 26, the controller 10 is respectively connected to the brightness processing unit 23 and the first selection unit 24, and the first output unit 26 is connected to the video fusion processing module 30.
  • the video signal acquisition unit 21 is mainly used to acquire the RGB video signal of the video
  • the first signal conversion unit 22 is used to convert the RGB video signal into a YC444 signal
  • the brightness processing unit 23 uses the brightness Y component for signal processing, and the controller 10 controls the first selection unit 24 to output YC444 signals with Y and C components
  • the second signal conversion unit 25 is used to convert the YC444 signals with Y and C components into YC422 video signals
  • the YC422 video signals are delivered from the first output unit 26 to the video fusion processing module 30.
  • the brightness processing unit 23 sets the Y component value of the video signal to 0; when the Y component value of the YC444 signal output by the first signal conversion unit 22 is 0, its lowest bit is set to 1; otherwise the Y component value of the YC444 signal is maintained and output as from the first signal conversion unit 22. The first selection unit 24 selects which Y component of the YC444 signal to output, under the control of the controller 10.
  • when the control value is 0, the first selection unit outputs the Y component of the YC444 signal from the first signal conversion unit 22; when the control value is 1, it outputs the Y component processed by the brightness processing unit 23.
  • after the Y component is selected, the first selection unit 24 also outputs the chrominance component CbCr (C component) of the YC444 signal output by the first signal conversion unit 22. The second signal conversion unit 25 converts the YC444 signal into a YC422 signal: when the Y component of an odd point in the YC444 signal is 0, the chrominance component CbCr (C component) takes the average of its own C component and the C component of the next point; when the Y component of an even point in the YC444 signal is 0, it takes the average of its own C component and the C component of the previous point; when the Y component is not 0, odd and even points are not distinguished and the chrominance component CbCr (C component) takes the average of its own C component and the C component of the next point;
  • the signal output by the first output unit 26 is the YC422 signal, and this YC422 signal is used as the input signal of the video fusion processing module 30.
  • the video fusion processing module 30 in the embodiment of the present invention includes a third signal conversion unit 31, a video fusion processing unit 32, a second selection unit 33, and a second output unit 34. The first output unit 26 is respectively connected to the third signal conversion unit 31 and the video fusion processing unit 32; the third signal conversion unit 31 is also connected to the second selection unit 33; the video fusion processing unit 32 is also connected to the second selection unit 33; and the second selection unit 33 is also connected to the second output unit 34
  • the controller 10 is also connected to the video fusion processing unit 32 and the second selection unit 33 respectively.
  • the third signal conversion unit 31 converts the YC422 video signal into an RGB signal
  • the video fusion processing unit 32 removes or retains the YC422 video signal data whose value is 0 when the Y component value in the YC422 signal is 0, to obtain a processed YC422 video signal
  • the controller 10 controls the second selection unit 33 whether to select the processed YC422 video signal data and the RGB signal for fusion processing to obtain the video fusion data
  • the second output unit 34 outputs the RGB signal after video fusion.
  • the YC422 signal output by the first output unit 26 is transmitted to the signal input end of the video fusion processing module 30; the third signal conversion unit 31 converts the YC422 signal at the input into an RGB video signal, and the video fusion processing unit 32 performs fusion data processing.
  • when the Y component at the signal input terminal of the video fusion processing module 30 is 0 and the control signal of the controller 10 is 0, the data whose Y component is 0 is removed from the YC422 signal, keeping the data in the YC422 signal whose Y component is not 0.
  • the second selection unit 33 is a selector that chooses whether to perform fusion processing, controlled by the controller 10: 0 means no fusion processing, and 1 means fusion processing is performed; the second output unit 34 outputs the fused RGB signal.
  • the setting range is determined by the video data to be fused in actual application.
  • the embodiment of the present invention provides a storage medium including a processor and a memory
  • the memory is used to store the program code and transmit the program code to the processor
  • the processor is configured to execute the aforementioned video fusion method according to the instructions in the program code.
  • the processor is configured to execute the steps in the foregoing embodiment of the video fusion method according to the instructions in the program code, such as steps S1 to S4 shown in FIG. 1. Alternatively, when the processor executes the computer program, the functions of the modules/units in the foregoing device embodiments, such as the functions of the controller 10 and the modules 20 and 30 shown in FIG. 4, are realized.
  • the computer program may be divided into one or more modules/units, and the one or more modules/units are stored in the memory and executed by the processor to complete the application.
  • One or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program in the terminal device.
  • the computer program can be divided to include the controller 10 and the to-be-fused video signal processing module 20 and the video fusion processing module 30 respectively connected to the controller 10:
  • the to-be-fused video signal processing module 20 is used to perform brightness and chroma processing on the to-be-fused video signal to obtain video data;
  • the video fusion processing module 30 is used to perform fusion processing on video data to obtain video fusion data;
  • the controller 10 is used to control whether the video fusion data is fused with the video.
  • the terminal device can be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the terminal device may include, but is not limited to, a processor and a memory. Those skilled in the art can understand that this does not constitute a limitation on the terminal device; it may include more or fewer components than shown in the figure, combine certain components, or use different components. For example, the terminal device may also include input and output devices, network access equipment, a bus, etc.
  • the so-called processor can be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory may be an internal storage unit of the terminal device, such as the hard disk or memory of the terminal device.
  • the memory can also be an external storage device of the terminal device, such as a plug-in hard disk equipped on the terminal device, a smart memory card (Smart Media Card, SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), etc.
  • the memory may also include both an internal storage unit of the terminal device and an external storage device.
  • the memory is used to store computer programs and other programs and data required by the terminal device.
  • the memory can also be used to temporarily store data that has been output or will be output.
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are merely illustrative. For example, the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • the technical solution of the present invention, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product, and the computer software product is stored in a storage medium.
  • a computer device which can be a personal computer, a server, or a network device, etc.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disk, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Of Color Television Signals (AREA)
  • Image Processing (AREA)

Abstract

A video fusion method and apparatus, and a storage medium. The method comprises: acquiring a video signal to be fused and a video; performing brightness and chrominance processing on the video signal to obtain processed video data; performing fusion processing on the video data to obtain video fusion data; and fusing the video fusion data with the video. The brightness and chrominance of the YC422-format video in the video signal are processed: exploiting the fact that the brightness component loses no data during the RGB-YC conversion process, the video data to be fused is separated out by means of the brightness data; since the chrominance component loses data during the RGB-YC conversion process, special processing is applied to odd and even points; and the YCbCr chrominance and brightness are processed so that the signal quality of the fused content is effectively improved. The method can solve the problem of edge abnormalities caused by color-format changes in a fused video under relatively low-bandwidth YC422 video-format transmission and processing, thereby effectively improving the quality of the fused video.

Description

Video fusion method, device and storage medium
Technical field
The present invention relates to the technical field of image processing, and in particular to a video fusion method, device and storage medium.
Background art
FPGA (Field Programmable Gate Array) is the product of further development on the basis of programmable devices such as PAL and GAL. It emerged as a semi-custom circuit in the field of application-specific integrated circuits (ASIC), remedying the shortcomings of fully custom circuits while overcoming the drawback of the limited number of gate circuits in the earlier programmable devices.
Video fusion technology is a branch of virtual reality technology, and can also be regarded as a development stage of virtual reality. It refers to fusing one or more image-sequence videos of a scene or model, collected by video acquisition devices, with a related virtual scene to generate a new virtual scene or model of that scene.
With the advancement of computer technology, computer image processing has developed by leaps and bounds in recent years; it has been successfully applied to almost all imaging-related fields and plays a very important role. 70% of the information humans receive is visual, and image information is an important medium and means for transmitting information.
In consumer electronics, municipal, transportation, military and other environments that require video display, the scale of the signals to be processed keeps growing, and the requirements on functionality and image quality keep rising. This places ever higher demands, such as bandwidth and signal quality, on the signal processing performed by the FPGA that handles the video images, and the complexity of the logic processing rises accordingly. Therefore, in order to better present the displayed video information, it is extremely necessary, on the premise of ensuring picture quality and real-time performance, to improve the FPGA's handling of video fusion and to raise the quality of the fused video.
Therefore, in view of the above situation, how to avoid the poor fused-video quality caused by the loss of chrominance data during video fusion has become an important technical problem to be urgently solved by those skilled in the art.
Summary of the invention
The embodiments of the present invention provide a video fusion method, device, and storage medium, which are used to solve the technical problem that edge abnormalities caused by color-format changes during existing video fusion processing lead to poor fused-video quality.
In order to achieve the foregoing objectives, the embodiments of the present invention provide the following technical solutions:
A video fusion method includes the following steps:
S1. Obtain the video signal to be fused and the video;
S2. Perform brightness and chrominance processing on the video signal to obtain processed video data;
S3. Perform fusion processing on the video data to obtain video fusion data;
S4. Fuse the video fusion data with the video.
Preferably, the video signal is an RGB video signal, and in step S2, the processing steps for obtaining the video data include:
S21. Perform signal conversion on the RGB video signal to convert it into a YC444 signal;
S22. Perform luminance Y-component processing on the luminance in the YC444 signal, process the chrominance in the YC444 signal, and output the Y component and the C component;
S23. Convert the processed YC444 signal carrying the Y component and the C component into a YC422 video signal; the data in the YC422 video signal is the processed video data.
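The conversion chain S21–S23 can be sketched in Python as follows. The full-range BT.601 conversion coefficients are an assumption, since the patent does not specify a conversion matrix, and the helper names are illustrative:

```python
def rgb_to_yc444(r, g, b):
    """S21: one 8-bit RGB pixel -> (Y, Cb, Cr); full-range BT.601 coefficients (assumed)."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return round(y), round(cb), round(cr)

def yc444_to_yc422(row):
    """S23: halve chroma horizontally; each pixel pair shares one averaged Cb/Cr."""
    out = []
    for i in range(0, len(row) - 1, 2):
        (y0, cb0, cr0), (y1, cb1, cr1) = row[i], row[i + 1]
        out.append((y0, y1, (cb0 + cb1) // 2, (cr0 + cr1) // 2))
    return out
```

Every pixel keeps its own Y, which is why the later steps can use Y as a lossless per-pixel marker, while Cb/Cr are shared per pair.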
Preferably, in step S22, when the three components R, G, and B of the RGB video signal are within the preset value range of the controller, the Y-component value of the video signal is set to 0 after the luminance Y-component processing;
If the Y-component value of the YC444 signal is 0, its lowest bit is set to 1; in all other cases, the Y-component value of the YC444 signal is output unchanged;
If the Y-component value in the RGB video signal is 0, the controller controls the converted YC444 signal to output the Y component;
If the Y-component value in the RGB video signal is 1, the controller controls the converted YC444 signal to output the Y component after the luminance Y-component processing;
The YC444 signal has odd points and even points. If the Y component of the YC444 signal at an odd point is 0, the C component of the YC444 signal takes the average of its own C component and the C component of the next point; if the Y component of the YC444 signal at an even point is 0, the C component takes the average of its own C component and the C component of the previous point; when the Y component is not 0, without distinguishing odd and even points, the C component takes the average of its own C component and the C component of the next point.
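The odd/even-point chroma rule above can be sketched as follows. Pixel positions count from 1, as in the worked example later in the description; letting a boundary pixel without the required neighbour keep its own C is an assumption:

```python
def adjust_chroma(points):
    """points: list of (Y, C) pairs for one row; positions count from 1.
    Odd point with Y == 0: average C with the next point's C.
    Even point with Y == 0: average C with the previous point's C.
    Y != 0: average C with the next point's C, regardless of parity."""
    out = []
    for i, (y, c) in enumerate(points):
        pos = i + 1                                  # the patent numbers pixels from 1
        if y == 0 and pos % 2 == 1 and i + 1 < len(points):
            c = (c + points[i + 1][1]) // 2          # odd point: pair with next
        elif y == 0 and pos % 2 == 0:
            c = (c + points[i - 1][1]) // 2          # even point: pair with previous
        elif y != 0 and i + 1 < len(points):
            c = (c + points[i + 1][1]) // 2          # Y != 0: pair with next
        out.append((y, c))
    return out
```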
Preferably, in step S3, the processing steps for obtaining the video fusion data include:
S31. Perform signal conversion on the YC422 video signal in the video data to convert it into an RGB signal;
S32. If the Y-component value of the YC422 video signal is 0 and the control signal sent by the controller is 0, remove the YC422 video signal data whose Y-component value is 0; the YC422 video signal data whose Y component is not 0 is then the processed YC422 video signal data. If the Y-component value of the YC422 video signal is 0 and the control signal sent by the controller is 1, remove the YC422 video signal data whose Y-component value is not 0; the YC422 video signal data whose Y component is 0 is then the processed YC422 video signal data.
Preferably, in step S3, under the control of the controller, the processed YC422 video signal data is fused with the data of the RGB signal to obtain the video fusion data.
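The selection of step S32 and the fusion that follows can be sketched as a per-pixel mask. The function name and the representation of a decoded pixel as a (Y, RGB) pair are illustrative assumptions, not taken from the patent:

```python
def fuse(fg_pixels, bg_pixels, control):
    """fg_pixels: (y, rgb) pairs decoded from the YC422 signal;
    bg_pixels: RGB pixels of the video being fused into.
    control == 0: keep foreground data whose Y != 0 (the Y == 0 data is removed);
    control == 1: keep only the Y == 0 foreground data.
    Removed foreground pixels are replaced by the background video."""
    out = []
    for (y, rgb_fg), rgb_bg in zip(fg_pixels, bg_pixels):
        keep = (y != 0) if control == 0 else (y == 0)
        out.append(rgb_fg if keep else rgb_bg)
    return out
```

Because step S22 reserved Y = 0 as a marker (setting the lowest bit of any genuine zero luma), the Y test alone decides which pixels belong to the fused content.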
The present invention also provides a video fusion device, which includes a controller, and a to-be-fused video signal processing module and a video fusion processing module that are each connected to the controller;
The to-be-fused video signal processing module is used to perform brightness and chrominance processing on the video signal to be fused to obtain video data;
The video fusion processing module is used to perform fusion processing on the video data to obtain video fusion data;
The controller is used to control whether the video fusion data is fused with the video.
Preferably, the to-be-fused video signal processing module includes a video signal acquisition unit, a first signal conversion unit, a brightness processing unit, a first selection unit, a second signal conversion unit, and a first output unit;
The video signal to be fused acquired by the video signal acquisition unit is an RGB video signal. The video signal acquisition unit is connected to the first signal conversion unit and the brightness processing unit respectively; the first signal conversion unit is connected to the first selection unit and the brightness processing unit respectively; the brightness processing unit is also connected to the first selection unit; the first selection unit is also connected to the second signal conversion unit; the second signal conversion unit is connected to the first output unit; the controller is connected to the brightness processing unit and the first selection unit respectively; and the first output unit is connected to the video fusion processing module.
Preferably, the first signal conversion unit is used to convert the RGB video signal into a YC444 signal; the brightness processing unit performs signal processing on the luminance Y component; the controller controls the first selection unit to output a YC444 signal carrying the Y component and the C component; the second signal conversion unit is used to convert the YC444 signal carrying the Y component and the C component into a YC422 video signal; and the YC422 video signal is sent from the first output unit to the video fusion processing module.
Preferably, the video fusion processing module includes a third signal conversion unit, a video fusion processing unit, a second selection unit, and a second output unit;
The first output unit is connected to the third signal conversion unit and the video fusion processing unit respectively; the third signal conversion unit is also connected to the second selection unit; the video fusion processing unit is also connected to the second selection unit; the second selection unit is also connected to the second output unit; and the controller is also connected to the video fusion processing unit and the second selection unit respectively;
The third signal conversion unit converts the YC422 video signal into an RGB signal. When the Y-component value in the YC422 signal is 0, the video fusion processing unit removes or retains the YC422 video signal data whose value is 0 to obtain the processed YC422 video signal data. The controller controls whether the second selection unit selects the processed YC422 video signal data for fusion with the RGB signal to obtain the video fusion data, and the second output unit outputs the fused RGB signal.
The present invention also provides a storage medium, including a processor and a memory;
The memory is used to store program code and transmit the program code to the processor;
The processor is configured to execute the video fusion method described above according to the instructions in the program code.
It can be seen from the above technical solutions that the embodiments of the present invention have the following advantages:
1. The video fusion method obtains the video signal to be fused and the video; performs brightness and chrominance processing on the video signal to obtain processed video data; performs fusion processing on the video data to obtain video fusion data; and fuses the video fusion data with the video. The brightness and chrominance of the YC422-format video in the video signal are processed: exploiting the fact that the brightness loses no data during RGB-YC conversion, the required fusion video data is separated out through the brightness data; since the chrominance component suffers data loss during RGB-YC conversion, special processing is applied to the odd and even points; and based on the processing of YCbCr chrominance and brightness, the signal quality of the fused content is effectively improved. Under relatively low-bandwidth YC422 video-format transmission and processing, the method can solve the edge abnormalities caused by color-format changes in the fused video and effectively improve the fused-video quality, solving the technical problem that edge abnormalities caused by color-format changes during existing video fusion processing lead to poor fused-video quality;
2. In the video fusion device, the to-be-fused video signal processing module exploits the fact that the brightness loses no data when the RGB signal is converted into YC, and separates out the required fusion data through the brightness Y data; because the chrominance component suffers data loss when the RGB signal is converted into YC, special processing is applied to the odd and even points, and a video signal carrying the video data is output. This video signal is input to the video fusion processing module. During the YC-to-RGB conversion, when the three components R, G, and B of the converted RGB signal are within the range set by the controller, the video fusion processing module sets the Y-component value in the YC422 video signal to 0 and, when the Y-component value in the YC422 video signal is 0, removes or retains the YC422 video signal data whose value is 0, thereby obtaining the video fusion data. Through the controller, the obtained video fusion data is fused with other videos to obtain a high-quality fused video, improving the quality of the fused video and solving the technical problem that edge abnormalities caused by color-format changes during existing video fusion processing lead to poor fused-video quality.
Description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative labor.
FIG. 1 is a flowchart of the steps of the video fusion method according to an embodiment of the present invention;
FIG. 2 is a flowchart of the steps of the video data processing in the video fusion method according to an embodiment of the present invention;
FIG. 3 is a flowchart of the steps of the video fusion data processing in the video fusion method according to an embodiment of the present invention;
FIG. 4 is a block diagram of the video fusion device according to an embodiment of the present invention.
Detailed description of the embodiments
In order to make the objectives, features, and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the embodiments described below are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative labor shall fall within the protection scope of the present invention.
In the specific embodiments of the present invention, the RGB video signal is a video signal in the RGB format, the YC422 video signal is a video signal in the YC422 format, the RGB signal is a video signal in the RGB format, and the YC444 signal is a video signal in the YC444 format. The letter Y denotes the luminance signal, the letter C denotes the color-difference signal, and the letters YC denote the composite luminance and color-difference signal.
The embodiments of the present application provide a video fusion method, device, and storage medium, which are used to solve the technical problem that edge abnormalities caused by color-format changes during existing video fusion processing lead to poor fused-video quality.
An embodiment of the present invention provides a video fusion method. FIG. 1 is a flowchart of the steps of the video fusion method according to an embodiment of the present invention.
As shown in FIG. 1, an embodiment of the present invention provides a video fusion method, which includes the following steps:
S1. Obtain the video signal to be fused and the video;
S2. Perform brightness and chrominance processing on the video signal to obtain processed video data;
S3. Perform fusion processing on the video data to obtain video fusion data;
S4. Fuse the video fusion data with the video.
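The steps S1–S4 above can be outlined end to end as a key-based overlay. Treating the controller's preset RGB range as a single exact key colour, and representing frames as flat pixel lists, are simplifying assumptions made only for this sketch:

```python
def fuse_frames(signal_frame, video_frame, key=(0, 0, 0)):
    """S1: the two inputs are the signal to be fused and the target video.
    S2: pixels of the signal whose RGB matches the key get Y = 0 (the fusion marker).
    S3/S4: marked pixels are removed and the target video shows through."""
    out = []
    for p_sig, p_vid in zip(signal_frame, video_frame):
        y_is_zero = (p_sig == key)               # simplified luma processing
        out.append(p_vid if y_is_zero else p_sig)
    return out
```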
It should be noted that the video signal to be fused is an RGB video signal.
In step S2 of the embodiment of the present invention, the RGB video signal is converted into a YC444 signal, YCbCr brightness and chrominance processing is performed on the YC444 signal to obtain the processed video data, and the video data of the processed YC444 signal is converted into a YC422 video signal for output.
It should be noted that, exploiting the fact that the brightness loses no data during RGB-YC conversion, the required fusion video data is separated out through the brightness data; since the chrominance component suffers data loss during RGB-YC conversion, special processing is applied to the odd and even points, effectively improving the quality of the fused video signal.
In step S3 of the embodiment of the present invention, the YC422 video signal is converted into an RGB signal. When the three components R, G, and B of the RGB signal are within the range set by the controller, the Y-component value in the YC422 video signal is set to 0; when the Y-component value in the YC422 video signal is 0, the YC422 video signal data whose value is 0 is removed or retained to obtain the processed YC422 video signal data, and the processed YC422 video signal data is fused with the data of the RGB signal to obtain the video fusion data.
In step S4 of the embodiment of the present invention, the controller controls the video fusion data to be fused with other videos to obtain the fused video.
The video fusion method provided by the present invention obtains the video signal to be fused and the video; performs brightness and chrominance processing on the video signal to obtain processed video data; performs fusion processing on the video data to obtain video fusion data; and fuses the video fusion data with the video. The brightness and chrominance of the YC422-format video in the video signal are processed: exploiting the fact that the brightness loses no data during RGB-YC conversion, the required fusion video data is separated out through the brightness data; since the chrominance component suffers data loss during RGB-YC conversion, special processing is applied to the odd and even points; and based on the processing of YCbCr chrominance and brightness, the signal quality of the fused content is effectively improved. Under relatively low-bandwidth YC422 video-format transmission and processing, the method can solve the edge abnormalities caused by color-format changes in the fused video and effectively improve the fused-video quality, thereby solving the technical problem that edge abnormalities caused by color-format changes during existing video fusion processing lead to poor fused-video quality.
FIG. 2 is a flowchart of the steps of the video data processing in the video fusion method according to an embodiment of the present invention.
As shown in FIG. 2, in an embodiment of the present invention, the video signal is an RGB video signal. In step S2, the processing steps for obtaining the video data include:
S21. Perform signal conversion on the RGB video signal to convert it into a YC444 signal;
S22. Perform luminance Y-component processing on the luminance in the YC444 signal, process the chrominance in the YC444 signal, and output the Y component and the C component;
S23. Convert the processed YC444 signal carrying the Y component and the C component into a YC422 video signal; the data in the YC422 video signal is the processed video data.
It should be noted that in step S21 the RGB video signal is converted into a YC444 signal. In step S22, when the three components R, G, and B of the RGB video signal are within the preset value range of the controller, the Y-component value of the video signal is set to 0 after the luminance Y-component processing. If the Y-component value of the YC444 signal is 0, its lowest bit is set to 1; in all other cases, the Y-component value of the YC444 signal is output unchanged. For example, the Y component has 8 bits of data, and the lowest bit is bit 0 of the Y component: if the value of the 8-bit data is 8'b00000000, setting the lowest bit to 1 yields 8'b00000001. If the Y-component value in the RGB video signal is 0, the controller controls the converted YC444 signal to output the Y component; if the Y-component value in the RGB video signal is 1, the controller controls the converted YC444 signal to output the Y component after the luminance Y-component processing. The YC444 signal outputs the C component.
The YC444 signal has odd points and even points. If the Y component of the YC444 signal at an odd point is 0, the C component of the YC444 signal takes the average of its own C component and the C component of the next point; if the Y component at an even point is 0, the C component takes the average of its own C component and the C component of the previous point; when the Y component is not 0, without distinguishing odd and even points, the C component takes the average of its own C component and the C component of the next point. For example, for a YC444 signal at 1920*1080@60Hz resolution, a row has 1920 pixels: the 1st pixel is an odd point, the 2nd pixel is an even point, the 3rd is an odd point, the 4th is an even point, and so on. When the Y component of the 2nd pixel is 0, the C component of the 2nd pixel takes the average of the C component of the 1st pixel and the C component of the 2nd pixel; when the Y component of the 3rd pixel is 0, the C component of the 3rd pixel takes the average of the C component of the 3rd pixel and the C component of the 4th pixel. When the Y component is not 0, without distinguishing odd and even points, the C component takes the average of its own C component and the C component of the next point.
Thus, in step S2 of this embodiment of the present invention, the method exploits the fact that luminance data is not lost when an RGB video signal is converted into a YC signal, and uses the luminance Y data to separate out the video data to be fused; the odd and even points of the YC444 signal are given special processing because the chrominance components suffer data loss during the conversion. Processing the luminance and chrominance of the video signal in this way effectively improves the quality of the fused video signal and hence of the fused video, and the processing in step S2 at the same time addresses the fusion-quality problems caused by the chrominance loss of the processed video signal.
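The luminance flagging and parity-aware chroma handling of steps S21 to S23 can be sketched as follows. This is a minimal illustration, not the patented implementation: the conversion matrix (full-range BT.601), the helper names, and the `in_range` predicate are assumptions, since the patent leaves the actual RGB range and conversion coefficients to the application.

```python
import numpy as np

# Assumed full-range BT.601 RGB -> YCbCr conversion for illustration only;
# the patent does not specify the conversion coefficients.
def rgb_to_yc444(rgb):
    r, g, b = (rgb[..., k].astype(float) for k in range(3))
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.clip(np.stack([y, cb, cr], axis=-1), 0, 255).astype(np.uint8)

def process_row(rgb_row, in_range):
    """Steps S21-S23 for one pixel row; in_range(r, g, b) -> bool stands in
    for the controller's preset RGB range test (an assumption here)."""
    yc = rgb_to_yc444(rgb_row)                      # S21: RGB -> YC444
    y = yc[:, 0].copy()
    # A converted Y that happens to be 0 gets its lowest bit set to 1,
    # reserving Y == 0 exclusively as the "to be fused" flag.
    y[y == 0] |= 1
    # Pixels whose R, G, B fall within the preset range are flagged Y = 0.
    flag = np.array([in_range(*px) for px in rgb_row])
    y[flag] = 0
    # S22: parity-aware chroma averaging (1st pixel is an "odd point").
    n = len(y)
    c = yc[:, 1:].astype(float)
    c_out = c.copy()
    for i in range(n):
        odd = (i % 2 == 0)
        if y[i] == 0 and not odd:        # even point flagged: average with previous
            c_out[i] = (c[i - 1] + c[i]) / 2
        elif i + 1 < n:                  # otherwise: average with next
            c_out[i] = (c[i] + c[i + 1]) / 2
    # S23: YC444 -> YC422, keeping one Cb/Cr pair per two pixels.
    return y, c_out[0::2].astype(np.uint8)
```

Because the lowest-bit rule keeps every naturally black pixel at Y = 1, the later fusion stage can treat Y == 0 as an unambiguous marker of flagged pixels.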
FIG. 3 is a flowchart of the video fusion data processing steps of the video fusion method according to an embodiment of the present invention.
As shown in FIG. 3, in an embodiment of the present invention, the processing steps for obtaining the video fusion data in step S3 include:
S31. Perform signal conversion on the YC422 video signal in the video data, converting it into an RGB signal;
S32. If the Y component value of the YC422 video signal is 0 and the control signal issued by the controller is 0, remove the YC422 video signal data whose Y component is 0; the YC422 video signal data whose Y component is not 0 is the processed YC422 video signal data. If the Y component value of the YC422 video signal is 0 and the control signal issued by the controller is 1, remove the YC422 video signal data whose Y component is not 0; the YC422 video signal data whose Y component is 0 is the processed YC422 video signal data.
It should be noted that the processed YC422 video signal data is fused, under the controller's control, with the data of the RGB signal to obtain the video fusion data. In step S3, obtaining the video fusion data involves a YC-to-RGB signal conversion: when the R, G and B components of the converted RGB signal fall within the range set by the controller, the Y component value of the YC422 video signal is set to 0, and the YC422 video signal data whose Y component is 0 is either removed or retained, yielding the processed YC422 video signal data.
In step S4 of this embodiment of the present invention, the obtained video fusion data is fused with other video to obtain a high-quality fused video.
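The data selection of step S32 can be sketched as follows, assuming the Y and C samples are held in NumPy arrays and the controller issues a single control bit; the array and function names are illustrative, not taken from the patent.

```python
import numpy as np

def select_fusion_data(y, c, control_bit):
    """Step S32: control_bit == 0 drops the flagged (Y == 0) data and keeps
    the rest; control_bit == 1 keeps only the flagged (Y == 0) data."""
    keep = (y != 0) if control_bit == 0 else (y == 0)
    return y[keep], c[keep]

# Tiny worked example: pixels 0 and 2 were flagged Y = 0 upstream.
y = np.array([0, 17, 0, 200], dtype=np.uint8)
c = np.array([[128, 128], [90, 140], [128, 128], [60, 200]], dtype=np.uint8)
y_bg, c_bg = select_fusion_data(y, c, control_bit=0)   # non-flagged pixels
y_fg, c_fg = select_fusion_data(y, c, control_bit=1)   # flagged pixels only
```

Either selection yields the "processed YC422 video signal data" that is subsequently fused with the RGB signal.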
Embodiment 2:
FIG. 4 is a block diagram of a video fusion apparatus according to an embodiment of the present invention.
As shown in FIG. 4, an embodiment of the present invention provides a video fusion apparatus, comprising a controller 10, and a to-be-fused video signal processing module 20 and a video fusion processing module 30 each connected to the controller 10;
the to-be-fused video signal processing module 20 is configured to perform luminance and chrominance processing on the video signal to be fused to obtain video data;
the video fusion processing module 30 is configured to perform fusion processing on the video data to obtain video fusion data;
the controller 10 is configured to control whether the video fusion data is fused with the video.
It should be noted that the to-be-fused video signal processing module 20 exploits the fact that luminance data is not lost when an RGB signal is converted into a YC signal, and separates out the required fusion data through the luminance Y data; because the chrominance components do suffer data loss during the RGB-to-YC conversion, the module applies the special odd/even-point processing and then outputs the video signal carrying the video data. This output video signal is input to the video fusion processing module 30. During its YC-to-RGB conversion, when the R, G and B components of the converted RGB signal fall within the range set by the controller, the video fusion processing module 30 sets the Y component value of the YC422 video signal to 0, removes or retains the YC422 video signal data whose Y component is 0 to obtain the processed YC422 video signal data, and fuses the processed YC422 video signal data with the data of the RGB signal to obtain the video fusion data. The controller then fuses the resulting video fusion data with other video to obtain a high-quality fused video. This improves the quality of the fused video and solves the technical problem in existing video fusion processing whereby edge abnormalities caused by color-format changes degrade the quality of the fused video.
In this embodiment of the present invention, the to-be-fused video signal processing module 20 comprises a video signal acquisition unit 21, a first signal conversion unit 22, a luminance processing unit 23, a first selection unit 24, a second signal conversion unit 25 and a first output unit 26;
the video signal to be fused acquired by the video signal acquisition unit 21 is an RGB video signal; the video signal acquisition unit 21 is connected to the first signal conversion unit 22 and to the luminance processing unit 23; the first signal conversion unit 22 is connected to the first selection unit 24 and to the luminance processing unit 23; the luminance processing unit 23 is also connected to the first selection unit 24; the first selection unit 24 is also connected to the second signal conversion unit 25; the second signal conversion unit 25 is connected to the first output unit 26; the controller 10 is connected to the luminance processing unit 23 and to the first selection unit 24; and the first output unit 26 is connected to the video fusion processing module 30.
It should be noted that the video signal acquisition unit 21 is mainly used to acquire the RGB video signal of the video; the first signal conversion unit 22 is used to convert the RGB video signal into a YC444 signal; the luminance processing unit 23 performs signal processing using the luminance Y component; the controller 10 controls the first selection unit 24 to output a YC444 signal having the Y component and the C component; the second signal conversion unit 25 is used to convert the YC444 signal having the Y component and the C component into a YC422 video signal; and the YC422 video signal is delivered from the first output unit 26 to the video fusion processing module 30. Specifically, when the R, G and B components of the signal from the video signal acquisition unit 21 fall within the range set by the controller 10, the luminance processing unit 23 sets the Y component value of the video signal to 0; when the Y component value of the YC444 signal output by the first signal conversion unit 22 is 0, its lowest bit is set to 1, and in all other cases the Y component value of the YC444 signal is kept and output by the first signal conversion unit 22. The first selection unit 24 performs the output selection for the Y component of the YC444 signal under the control of the controller 10: when the Y component value in the YC444 signal is 0, the Y component of the YC444 signal output by the first signal conversion unit 22 is used; when the Y component value in the RGB video signal is 1, the Y component value processed by the luminance processing unit 23 is output. The chrominance components CbCr (C component) output by the first selection unit 24 come from the YC444 signal output of the first signal conversion unit 22. The second signal conversion unit 25 converts the YC444 signal into a YC422 signal: when the Y component of an odd point in the YC444 signal is 0, the chrominance component CbCr (C component) is taken as the average of its own C component and that of the next point; when the Y component of an even point is 0, it is taken as the average of its own C component and that of the previous point; and when the Y component in the YC444 signal is not 0, regardless of parity, it is taken as the average of its own C component and that of the next point. The signal output by the first output unit 26 is the YC422 signal, which serves as the input signal of the video fusion processing module 30.
In this embodiment of the present invention, the video fusion processing module 30 comprises a third signal conversion unit 31, a video fusion processing unit 32, a second selection unit 33 and a second output unit 34; the first output unit 26 is connected to the third signal conversion unit 31 and to the video fusion processing unit 32; the third signal conversion unit 31 is also connected to the second selection unit 33; the video fusion processing unit 32 is also connected to the second selection unit 33; the second selection unit 33 is also connected to the second output unit 34; and the controller 10 is also connected to the video fusion processing unit 32 and to the second selection unit 33. The third signal conversion unit 31 converts the YC422 video signal into an RGB signal; the video fusion processing unit 32 removes or retains, according to whether the Y component value in the YC422 signal is 0, the YC422 video signal data whose value is 0 to obtain the processed YC422 video signal data; the controller 10 controls whether the second selection unit 33 selects the processed YC422 video signal data for fusion processing with the RGB signal to obtain the video fusion data; and the second output unit 34 outputs the fused RGB signal.
It should be noted that the YC422 signal output by the first output unit 26 is transmitted to the signal input of the video fusion processing module 30; the third signal conversion unit 31 converts the YC422 signal at that input into an RGB video signal, and the video fusion processing unit 32 performs the fusion data processing. When the Y component at the input of the video fusion processing module 30 is 0 and the control signal of the controller 10 is 0, the data whose Y component in the YC422 signal is 0 is removed and the data whose Y component is non-zero is kept; when the control signal of the controller 10 is 1, the data whose Y component in the YC422 signal is 0 is kept and the data whose Y component is non-zero is removed. The second selection unit 33 is a selector that chooses whether to perform fusion processing, under the control of the controller 10: 0 means no fusion processing, 1 means fusion processing. The second output unit 34 outputs the fused RGB signal. The range within which the R, G and B components are set in the controller 10 is determined by the video data to be fused in the actual application.
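The behavior of the second selection unit 33 can be sketched as follows. This is an assumed model for illustration only: the patent specifies the 0/1 fusion toggle, while the overlay-at-flagged-pixels semantics, array shapes and names are hypotheses consistent with the Y == 0 flagging described above.

```python
import numpy as np

def second_selection(background_rgb, fusion_rgb, y_flag, fuse_bit):
    """Assumed model of selection unit 33: fuse_bit 0 passes the background
    RGB through unchanged (no fusion processing); fuse_bit 1 overwrites the
    pixels flagged Y == 0 with the processed fusion source."""
    if fuse_bit == 0:
        return background_rgb
    out = background_rgb.copy()
    out[y_flag == 0] = fusion_rgb[y_flag == 0]
    return out
```

Keeping the selector a pure 0/1 switch matches the controller interface described above, with the flagged Y values carrying all the per-pixel fusion information.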
Embodiment 3:
An embodiment of the present invention provides a storage medium, comprising a processor and a memory;
the memory is configured to store program code and transmit the program code to the processor;
the processor is configured to execute the video fusion method described above according to the instructions in the program code.
It should be noted that the processor is configured to execute, according to the instructions in the program code, the steps of the foregoing video fusion method embodiment, such as steps S1 to S4 shown in FIG. 1; alternatively, when the processor executes the computer program, it implements the functions of the modules/units in the foregoing apparatus embodiments, such as the functions of the controller 10 and the modules 20 and 30 shown in FIG. 4.
Exemplarily, the computer program may be divided into one or more modules/units, which are stored in the memory and executed by the processor to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program in the terminal device. For example, the computer program may be divided into the controller 10, and the to-be-fused video signal processing module 20 and the video fusion processing module 30 each connected to the controller 10:
the to-be-fused video signal processing module 20 is configured to perform luminance and chrominance processing on the video signal to be fused to obtain video data;
the video fusion processing module 30 is configured to perform fusion processing on the video data to obtain video fusion data;
the controller 10 is configured to control whether the video fusion data is fused with the video.
The terminal device may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. The terminal device may include, but is not limited to, a processor and a memory. Those skilled in the art will understand that this does not constitute a limitation on the terminal device, which may include more or fewer components than shown, or combine certain components, or use different components; for example, the terminal device may also include input/output devices, network access devices, a bus, and the like.
The processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may be an internal storage unit of the terminal device, such as the terminal device's hard disk or internal memory. The memory may also be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card provided on the terminal device. Further, the memory may include both an internal storage unit and an external storage device of the terminal device. The memory is used to store the computer program and the other programs and data required by the terminal device, and may also be used to temporarily store data that has been or will be output.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, apparatuses and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solution of the present invention, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage media include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A video fusion method, characterized by comprising the following steps:
    S1. acquiring a video signal to be fused and a video;
    S2. performing luminance and chrominance processing on the video signal to obtain processed video data;
    S3. performing fusion processing on the video data to obtain video fusion data;
    S4. fusing the video fusion data with the video.
  2. The video fusion method according to claim 1, characterized in that the video signal is an RGB video signal, and in step S2 the processing steps for obtaining the video data comprise:
    S21. performing signal conversion on the RGB video signal, converting it into a YC444 signal;
    S22. performing luminance Y component processing on the luminance in the YC444 signal and processing the chrominance in the YC444 signal, outputting a Y component and a C component;
    S23. converting the processed YC444 signal having the Y component and the C component into a YC422 video signal, the data in the YC422 video signal being the processed video data.
  3. The video fusion method according to claim 2, characterized in that in step S22, when the R, G and B components of the RGB video signal fall within a preset value range of a controller, the luminance Y component processing sets the Y component value of the video signal to 0;
    if the Y component value of the YC444 signal is 0, its lowest bit is set to 1; in all other cases the Y component value of the YC444 signal keeps its original value on output;
    if the Y component value in the RGB video signal is 0, the controller causes the converted YC444 signal to output the Y component;
    if the Y component value in the RGB video signal is 1, the controller causes the converted YC444 signal to output the Y component produced by the luminance Y component processing;
    the YC444 signal is divided into odd points and even points: if the Y component of the YC444 signal at an odd point is 0, the C component of the YC444 signal is taken as the average of its own C component and the C component of the next point; if the Y component at an even point is 0, the C component is taken as the average of its own C component and the C component of the previous point; and when the Y component is not 0, regardless of parity, the C component of the YC444 signal is taken as the average of its own C component and the C component of the next point.
  4. The video fusion method according to claim 1, characterized in that in step S3 the processing steps for obtaining the video fusion data comprise:
    S31. performing signal conversion on the YC422 video signal in the video data, converting it into an RGB signal;
    S32. if the Y component value of the YC422 video signal is 0 and the control signal issued by the controller is 0, removing the YC422 video signal data whose Y component value is 0, the YC422 video signal data whose Y component is not 0 being the processed YC422 video signal data; and if the Y component value of the YC422 video signal is 0 and the control signal issued by the controller is 1, removing the YC422 video signal data whose Y component value is not 0, the YC422 video signal data whose Y component is 0 being the processed YC422 video signal data.
  5. The video fusion method according to claim 4, characterized in that in step S3 the processed YC422 video signal data is fused, under the control of the controller, with the data of the RGB signal to obtain the video fusion data.
  6. A video fusion apparatus, characterized by comprising a controller, and a to-be-fused video signal processing module and a video fusion processing module each connected to the controller;
    the to-be-fused video signal processing module being configured to perform luminance and chrominance processing on a video signal to be fused to obtain video data;
    the video fusion processing module being configured to perform fusion processing on the video data to obtain video fusion data;
    the controller being configured to control whether the video fusion data is fused with the video.
  7. The video fusion apparatus according to claim 6, characterized in that the to-be-fused video signal processing module comprises a video signal acquisition unit, a first signal conversion unit, a luminance processing unit, a first selection unit, a second signal conversion unit and a first output unit;
    the video signal to be fused acquired by the video signal acquisition unit being an RGB video signal; the video signal acquisition unit being connected to the first signal conversion unit and to the luminance processing unit; the first signal conversion unit being connected to the first selection unit and to the luminance processing unit; the luminance processing unit also being connected to the first selection unit; the first selection unit also being connected to the second signal conversion unit; the second signal conversion unit being connected to the first output unit; the controller being connected to the luminance processing unit and to the first selection unit; and the first output unit being connected to the video fusion processing module.
  8. The video fusion apparatus according to claim 7, characterized in that the first signal conversion unit is configured to convert the RGB video signal into a YC444 signal; the luminance processing unit performs signal processing using the luminance Y component; the controller controls the first selection unit to output a YC444 signal having a Y component and a C component; the second signal conversion unit is configured to convert the YC444 signal having the Y component and the C component into a YC422 video signal; and the YC422 video signal is delivered from the first output unit to the video fusion processing module.
  9. The video fusion device according to claim 8, wherein the video fusion processing module comprises a third signal conversion unit, a video fusion processing unit, a second selection unit, and a second output unit;
    The first output unit is connected to the third signal conversion unit and to the video fusion processing unit respectively; the third signal conversion unit is further connected to the second selection unit; the video fusion processing unit is further connected to the second selection unit; the second selection unit is further connected to the second output unit; and the controller is further connected to the video fusion processing unit and to the second selection unit respectively;
    The third signal conversion unit converts the YC422 video signal into an RGB signal; when the Y component value in the YC422 signal is 0, the video fusion processing unit removes or retains the YC422 video signal data whose value is 0, obtaining processed YC422 video signal data; the controller controls whether the second selection unit selects the processed YC422 video signal data to be fused with the RGB signal, obtaining video fusion data; and the second output unit outputs the fused RGB signal.
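The fusion rule in claim 9, where overlay pixels whose Y component is 0 are dropped so the background shows through and all other pixels are kept, amounts to luma keying. Below is a minimal sketch under the same assumed full-range BT.601 conversion; the names `yc_to_rgb` and `luma_key_fuse` and the clamping behavior are illustrative, not taken from the patent.

```python
def yc_to_rgb(pixel):
    """Convert a full-range (Y, Cb, Cr) pixel back to RGB using the
    BT.601 inverse matrix (assumed coefficients), clamped to 0..255."""
    y, cb, cr = pixel
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, round(v)))
    return (clamp(r), clamp(g), clamp(b))

def luma_key_fuse(background_rgb, overlay_yc):
    """Luma-key fusion: an overlay pixel with Y == 0 is treated as
    transparent (the background RGB pixel is kept); any other overlay
    pixel is converted back to RGB and replaces the background."""
    return [bg if ov[0] == 0 else yc_to_rgb(ov)
            for bg, ov in zip(background_rgb, overlay_yc)]
```

One caveat of keying on Y == 0 alone is that genuinely black overlay content is also dropped, which is consistent with the claim making "remove or retain" a controller-driven choice rather than a fixed rule.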
  10. A storage medium, comprising a processor and a memory;
    The memory is configured to store program code and to transmit the program code to the processor;
    The processor is configured to execute the video fusion method according to any one of claims 1 to 5 in accordance with instructions in the program code.
PCT/CN2019/125801 2019-12-17 2019-12-17 Video fusion method and apparatus, and storage medium WO2021119968A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/125801 WO2021119968A1 (en) 2019-12-17 2019-12-17 Video fusion method and apparatus, and storage medium
CN201980003225.3A CN111095919B (en) 2019-12-17 2019-12-17 Video fusion method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/125801 WO2021119968A1 (en) 2019-12-17 2019-12-17 Video fusion method and apparatus, and storage medium

Publications (1)

Publication Number Publication Date
WO2021119968A1 2021-06-24

Family

ID=70400245

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/125801 WO2021119968A1 (en) 2019-12-17 2019-12-17 Video fusion method and apparatus, and storage medium

Country Status (2)

Country Link
CN (1) CN111095919B (en)
WO (1) WO2021119968A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102572475A (en) * 2010-12-17 2012-07-11 Microsoft Corporation Stereo 3D video support in computing devices
CN105005963A (en) * 2015-06-30 2015-10-28 Chongqing Survey Institute Multi-camera images stitching and color homogenizing method
CN105635602A (en) * 2015-12-31 2016-06-01 Tianjin University System for mosaicing videos by adopting brightness and color cast between two videos and adjustment method thereof
CN106570850A (en) * 2016-10-12 2017-04-19 Chengdu Xiwei Technology Co., Ltd. Image fusion method
US20170323481A1 (en) * 2015-07-17 2017-11-09 Bao Tran Systems and methods for computer assisted operation
CN108449569A (en) * 2018-03-13 2018-08-24 Chongqing Virtual Reality Technology Co., Ltd. Virtual meeting method, system, device, computer installation and storage medium
CN109981983A (en) * 2019-03-26 2019-07-05 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Augmented reality image processing method, device, electronic equipment and storage medium
CN110147162A (en) * 2019-04-17 2019-08-20 Jiangsu University A kind of reinforced assembly teaching system and its control method based on fingertip characteristic

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8798171B2 (en) * 2010-06-28 2014-08-05 Richwave Technology Corp. Video transmission by decoupling color components
CN110363732A (en) * 2018-04-11 2019-10-22 Hangzhou Hikvision Digital Technology Co., Ltd. A kind of image fusion method and its device


Also Published As

Publication number Publication date
CN111095919B (en) 2021-10-08
CN111095919A (en) 2020-05-01


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19956979

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19956979

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 02/02/2023)
