WO2021119968A1 - Video fusion method and apparatus, and storage medium - Google Patents
Video fusion method and apparatus, and storage medium
- Publication number
- WO2021119968A1 (PCT/CN2019/125801)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- signal
- component
- data
- fusion
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234309—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440218—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
Definitions
- the present invention relates to the technical field of image processing, in particular to a video fusion method, device and storage medium.
- FPGA: Field-Programmable Gate Array
- Video fusion technology is a branch of virtual reality technology, and it can also be said to be a development stage of virtual reality.
- Video fusion technology refers to fusing one or more image-sequence videos of a certain scene or model, collected by a video acquisition device, with a related virtual scene to generate a new virtual scene or model of that scene.
- ever higher requirements are placed on the FPGA used for signal processing, such as bandwidth and signal quality, and the complexity of the logic processing rises accordingly. Therefore, to better present the information displayed by the video, it is essential to improve how the FPGA fuses video images and to raise the quality of the fused video while guaranteeing picture quality and real-time performance.
- the embodiments of the present invention provide a video fusion method, apparatus, and storage medium, which solve the technical problem that edge abnormalities caused by color-format changes in the existing video fusion process lead to poor quality of the fused video.
- a video fusion method includes the following steps:
- the video signal is an RGB video signal
- the processing step of obtaining the video data includes:
- the YC444 signal having the Y component and the C component is converted into a YC422 video signal, and the data in the YC422 video signal is the processed video data.
- if the R, G, and B components of the RGB video signal are all within the controller's preset value range, the luminance processing sets the Y component value of the converted video signal to 0;
- if the Y component value of the YC444 signal is already 0, its lowest bit is set to 1; otherwise the Y component value of the YC444 signal keeps its original value and is output;
- the controller controls whether the converted YC444 signal outputs its Y component directly or outputs the Y component after the luminance processing;
- the YC444 signal has odd-numbered and even-numbered points. If the Y component of the YC444 signal at an odd point is 0, the C component of the YC444 signal is taken as the average of that point's C component and the C component of the next point. If the Y component of the YC444 signal at an even point is 0, the C component is taken as the average of that point's C component and the C component of the previous point. When the Y component is not 0, odd and even points are not distinguished, and the C component of the YC444 signal is taken as the average of that point's C component and the C component of the next point.
- the processing step of obtaining the video fusion data includes:
- if the Y component value of the YC422 video signal is 0 and the control signal sent by the controller is 0, the YC422 video signal data whose Y component value is 0 is removed, and the remaining data whose Y component is not 0 is the processed YC422 video signal data; if the Y component value in the YC422 video signal is 0 and the control signal sent by the controller is 1, the data whose Y component value is not 0 is removed, and the remaining data whose Y component is 0 is the processed YC422 video signal data.
- fusion processing is performed on the processed YC422 video signal data and the RGB signal data, under the control of the controller, to obtain the video fusion data.
- the present invention also provides a video fusion device, which includes a controller and a video signal processing module to be fused and a video fusion processing module connected to the controller respectively;
- the to-be-fused video signal processing module is used to perform brightness and chroma processing on the to-be-fused video signal to obtain video data;
- the video fusion processing module is used to perform fusion processing on the video data to obtain video fusion data
- the controller is used to control whether the video fusion data is fused with the video.
- the to-be-fused video signal processing module includes a video signal acquisition unit, a first signal conversion unit, a brightness processing unit, a first selection unit, a second signal conversion unit, and a first output unit;
- the video signal to be fused obtained by the video signal acquisition unit is an RGB video signal
- the video signal acquisition unit is respectively connected to the first signal conversion unit and the brightness processing unit
- the first signal conversion unit is connected to the first selection unit
- the brightness processing unit is further connected to the first selection unit
- the first selection unit is further connected to the second signal conversion unit
- the second signal conversion unit is connected to the first output unit
- the controller is respectively connected to the brightness processing unit and the first selection unit
- the first output unit is connected to the video fusion processing module.
- the first signal conversion unit is used to convert the RGB video signal into a YC444 signal
- the brightness processing unit uses the brightness Y component for signal processing
- the controller controls the first selection unit to output a YC444 signal.
- the second signal conversion unit is used to convert the YC444 signal with Y component and C component into YC422 video signal
- the YC422 video signal is sent from the first output unit to the video fusion processing module.
- the video fusion processing module includes a third signal conversion unit, a video fusion processing unit, a second selection unit, and a second output unit;
- the first output unit is respectively connected to the third signal conversion unit and the video fusion processing unit, the third signal conversion unit is also connected to the second selection unit, the video fusion processing unit is also connected to the second selection unit, the second selection unit is also connected to the second output unit, and the controller is also connected to the video fusion processing unit and the second selection unit respectively;
- the third signal conversion unit converts the YC422 video signal into an RGB signal
- the video fusion processing unit removes or retains the YC422 video signal data whose Y component value is 0
- the controller controls whether the second selection unit selects the processed YC422 video signal data and the RGB signal for fusion processing to obtain the video fusion data; the second output unit outputs the RGB signal after video fusion.
- the present invention also provides a storage medium, including a processor and a memory;
- the memory is used to store program code and transmit the program code to the processor
- the processor is configured to execute the aforementioned video fusion method according to the instructions in the program code.
- the video fusion method obtains the video signal to be fused and the video; performs brightness and chrominance processing on the video signal to obtain the processed video data; performs fusion processing on the video data to obtain the video fusion data; and fuses the video fusion data with the video;
- the brightness and chrominance of the YC422-format video in the video signal are processed, and because luminance loses no data during RGB/YC conversion, the required fused video data is separated by the luminance data;
- the chrominance components do lose data during the RGB/YC conversion process, so special processing is performed on the odd and even points; based on the YCbCr chrominance and brightness processing, the quality of the fused content signal is effectively improved.
- under transmission and processing in the lower-bandwidth YC422 video format, this method solves the abnormal-edge problem caused by changes of the video color format, effectively improves the quality of the fused video, and resolves the technical problem that edge abnormalities caused by color-format changes in the existing video fusion process lead to poor quality of the fused video;
- the to-be-fused video signal processing module of the video fusion device exploits the fact that luminance data is not lost when RGB signals are converted to YC, separating the required fusion data by the luminance Y data; because the chrominance components do lose data during the RGB-to-YC conversion, special processing is performed on the odd and even points, and the video signal carrying the video data is then output.
- the video signal of the output video data is input to the video fusion processing module.
- in the course of converting the YC signal to the RGB signal, when the R, G, and B components of the converted RGB signal fall within the range set in the controller, the video fusion processing module sets the Y component of the YC422 video signal to 0 and removes or retains the data whose Y component is 0 to obtain the processed YC422 video signal data.
- the video fusion data is fused with other videos to obtain a high-quality fused video, which improves the quality of the fused video and solves the technical problem that edge abnormalities caused by color-format changes in the existing video fusion process lead to poor quality of the fused video.
- FIG. 1 is a flowchart of steps of a video fusion method according to an embodiment of the present invention
- FIG. 2 is a flowchart of steps of video data processing in a video fusion method according to an embodiment of the present invention
- FIG. 3 is a flowchart of steps of video fusion data processing in a video fusion method according to an embodiment of the present invention
- Fig. 4 is a frame diagram of a video fusion device according to an embodiment of the present invention.
- the described RGB video signal is a video signal in the RGB format
- the YC422 video signal is a video signal in the YC422 format
- the RGB signal is a video signal in the RGB format
- the YC444 signal is a video signal in the YC444 format.
- the letter Y represents the luminance signal
- the letter C represents the color-difference signal
- YC represents the composite signal of luminance and color difference.
- the embodiments of the present application provide a video fusion method, apparatus, and storage medium, which solve the technical problem that edge abnormalities caused by color-format changes in the existing video fusion process lead to poor quality of the fused video.
- FIG. 1 is a flowchart of the steps of the video fusion method according to the embodiment of the present invention.
- an embodiment of the present invention provides a video fusion method, which includes the following steps:
- the video signal to be fused is an RGB video signal.
- in step S2 of the embodiment of the present invention, the RGB video signal is converted into a YC444 signal, luminance and chrominance (YCbCr) processing is performed on the YC444 signal to obtain the processed video data, and the processed YC444 video data is converted into a YC422 video signal for output.
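- the text does not give the numerical form of the RGB-to-YC444 conversion; a minimal per-pixel sketch, assuming full-range BT.601 coefficients (an assumption, since the patent only names YCbCr processing), might look like:

```python
def rgb_to_yc444(r, g, b):
    """Convert one 8-bit RGB pixel to YCbCr 4:4:4 (full-range BT.601 assumed)."""
    y  = round( 0.299   * r + 0.587   * g + 0.114   * b)
    cb = round(-0.1687  * r - 0.3313  * g + 0.5     * b + 128)
    cr = round( 0.5     * r - 0.4187  * g - 0.0813  * b + 128)
    clamp = lambda v: max(0, min(255, v))  # keep components in 8-bit range
    return clamp(y), clamp(cb), clamp(cr)
```

- in an FPGA implementation such a conversion would typically be fixed-point multiplier-adder logic; the floating-point form above is only for illustration.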
- in step S3 of the embodiment of the present invention, the YC422 video signal is converted into an RGB signal.
- when the R, G, and B components of the converted RGB signal fall within the controller's preset range, the Y component value of the YC422 video signal is set to 0; the YC422 video signal data whose Y component value is 0 is then removed or retained to obtain the processed YC422 video signal data, and the processed YC422 video signal data and the RGB signal data are fused to obtain the video fusion data.
- in step S4 of the embodiment of the present invention, the controller controls the video fusion data to be fused with other videos to obtain a fused video.
- the video fusion method obtains the video signal to be fused and the video; performs brightness and chrominance processing on the video signal to obtain the processed video data; and performs fusion processing on the video data to obtain the video fusion data;
- the video fusion data is fused with the video. The brightness and chrominance of the YC422-format video in the video signal are processed: luminance loses no data during the RGB/YC conversion process, so the required fused video data is separated by the luminance data, while the chrominance components do lose data during the RGB/YC conversion, so special processing is performed on the odd and even points; based on the YCbCr chrominance and brightness processing, the quality of the fused content signal is effectively improved.
- under transmission and processing in the lower-bandwidth YC422 video format, this method solves the edge-abnormality problem caused by the color-format change of the fused video, effectively improves the quality of the fused video, and resolves the technical problem that edge abnormalities caused by color-format changes in the existing video fusion process lead to poor quality of the fused video.
- Fig. 2 is a flowchart of steps of video data processing in a video fusion method according to an embodiment of the present invention.
- the video signal is an RGB video signal.
- the processing steps for obtaining video data include:
- the YC444 signal with the Y component and the C component is converted into a YC422 video signal, and the data in the YC422 video signal is the processed video data.
- in step S21, the RGB video signal is converted into a YC444 signal
- in step S22, if the R, G, and B components of the RGB video signal are all within the controller's preset value range, the Y component value of the converted video signal is set to 0; if the Y component value of the YC444 signal is already 0, its lowest bit is set to 1; in all other cases the Y component value of the YC444 signal keeps its original value.
- the Y component has 8 bits of data, and the lowest bit is the 0th bit of the Y component.
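- the step-S22 luminance rule can be sketched per pixel as follows; the `key_range` representation of the controller's preset value range (per-channel low/high bounds) is a hypothetical choice, since the text does not define how the range is encoded:

```python
def key_luma(r, g, b, y, key_range):
    """Apply the step-S22 luminance rule to one pixel.

    key_range: ((r_lo, r_hi), (g_lo, g_hi), (b_lo, b_hi)) -- hypothetical
               encoding of the controller's preset value range.
    y:         the 8-bit Y component from the RGB->YC444 conversion.
    """
    in_range = all(lo <= c <= hi
                   for c, (lo, hi) in zip((r, g, b), key_range))
    if in_range:
        return 0        # pixel falls in the keyed range: mark it with Y = 0
    if y == 0:
        return y | 1    # Y happened to be 0 naturally: set the lowest bit to 1
    return y            # otherwise keep the original Y value
```

- setting the lowest bit of a naturally black pixel to 1 reserves the value Y = 0 exclusively as the transparency marker for the later fusion stage, at the cost of a one-level luminance error invisible in practice.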
- the controller selects whether the converted YC444 signal outputs its Y component directly or outputs the Y component after the luminance processing, and the YC444 signal also outputs its C component. The YC444 signal has odd-numbered and even-numbered points. If the Y component of the YC444 signal at an odd point is 0, the C component of the YC444 signal is taken as the average of that point's C component and the C component of the next point.
- if the Y component at an even point is 0, the C component of the YC444 signal is taken as the average of that point's C component and the C component of the previous point.
- when the Y component is not 0, odd and even points are not distinguished, and the C component of the YC444 signal is taken as the average of that point's C component and the C component of the next point.
- take, for example, a YC444 signal with a resolution of 1920*1080@60Hz, where each row has 1920 pixels: the first pixel is an odd point, the second an even point, the third odd, the fourth even, and so on. When the Y component of the second pixel is 0, the C component of the second pixel is the average of the C components of the first and second pixels; when the Y component of the third pixel is 0, the C component of the third pixel is the average of the C components of the third and fourth pixels. When the Y component is not 0, odd and even points are not distinguished, and the C component of a point is taken as the average of its C component and the next point's C component.
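- the parity-based chroma averaging above can be sketched for one row and one chroma plane; this computes the averaged C value per point as the text describes, and leaves the actual 4:2:2 decimation (keeping every other C sample) aside as a separate step:

```python
def yc444_to_yc422_row(ys, cs):
    """Average one row of YC444 chroma using the odd/even-point rule.

    ys, cs: per-pixel Y values and one chroma plane (Cb or Cr).
    Pixel index 0 is the first ("odd") point, index 1 the second
    ("even") point, and so on.
    """
    out = []
    n = len(ys)
    for i in range(n):
        odd = (i % 2 == 0)          # 1st, 3rd, ... pixels are odd points
        if ys[i] == 0 and not odd:
            j = i - 1               # keyed even point: average with previous
        else:
            j = min(i + 1, n - 1)   # otherwise: average with next point
        out.append((cs[i] + cs[j]) // 2)
    return out
```

- averaging toward the neighbour on the correct side of a keyed point avoids pulling the chroma of a transparent pixel across the overlay edge, which is the edge artifact the method targets.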
- in step S2 of the embodiment of the present invention, the fact that luminance data is not lost during the conversion of the RGB video signal to the YC signal is exploited, and the required fused video data is separated by the luminance Y data; because the chrominance components do lose data during the conversion, the odd and even points of the YC444 signal receive special processing. Based on the above brightness and chrominance processing of the video signal, the quality of the fused video signal is effectively improved, thereby improving the quality of the fused video.
- in the course of video signal processing, this also solves the fused-video quality problem caused by chrominance loss in the processed video signal.
- Fig. 3 is a flowchart of steps of video fusion data processing in the video fusion method according to an embodiment of the present invention.
- the processing step of obtaining video fusion data includes:
- the fusion processing is performed, under the control of the controller, on the processed YC422 video signal data and the RGB signal data to obtain the video fusion data.
- in step S3, in the process of obtaining the video fusion data, the YC422 video signal is converted into an RGB signal;
- when the R, G, and B components of the converted RGB signal are within the range set in the controller, the Y component value in the YC422 video signal is set to 0, and the YC422 video signal data whose Y component value is 0 is removed or retained to obtain the processed YC422 video signal data.
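- the step-S3 check can be sketched as follows, assuming the inverse of the same full-range BT.601 conversion and the same hypothetical `key_range` encoding of the controller's set range as above:

```python
def yc_to_rgb(y, cb, cr):
    """Inverse full-range BT.601 conversion for one pixel (assumed form)."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r), clamp(g), clamp(b)

def key_after_conversion(y, cb, cr, key_range):
    """If the converted R, G, B all fall inside the controller's range,
    force this pixel's Y to 0 so the fusion stage can identify it.
    key_range is a hypothetical ((lo, hi), (lo, hi), (lo, hi)) tuple."""
    rgb = yc_to_rgb(y, cb, cr)
    if all(lo <= c <= hi for c, (lo, hi) in zip(rgb, key_range)):
        return 0
    return y
```

- performing the range test on the converted RGB values, rather than on YCbCr directly, lets the preset range be specified in the same color space the user sees.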
- in step S4 of the embodiment of the present invention, the obtained video fusion data is fused with other videos to obtain a high-quality fused video.
- Fig. 4 is a frame diagram of a video fusion device according to an embodiment of the present invention.
- an embodiment of the present invention provides a video fusion device, including a controller 10, and a video signal processing module 20 and a video fusion processing module 30 to be fused that are respectively connected to the controller 10;
- the to-be-fused video signal processing module 20 is used to perform brightness and chroma processing on the to-be-fused video signal to obtain video data;
- the video fusion processing module 30 is configured to perform fusion processing on video data to obtain video fusion data
- the controller 10 is used to control whether the video fusion data is fused with the video.
- the to-be-fused video signal processing module 20 exploits the fact that luminance data is not lost when RGB signals are converted to YC, separating the required fusion data by the luminance Y data; because the chrominance components do lose data during the RGB-to-YC conversion, special processing is performed on the odd and even points, and the video signal carrying the video data is then output.
- the video signal of the output video data is input to the video fusion processing module 30.
- in the process of converting the YC signal to the RGB signal, when the R, G, and B components of the converted RGB signal fall within the range set in the controller, the video fusion processing module 30 sets the Y component of the YC422 video signal to 0 and removes or retains the data whose Y component is 0.
- the to-be-fused video signal processing module 20 in the embodiment of the present invention includes a video signal acquisition unit 21, a first signal conversion unit 22, a brightness processing unit 23, a first selection unit 24, a second signal conversion unit 25, and a first output unit 26;
- the video signal acquisition unit 21 acquires the video signal to be fused as an RGB video signal.
- the video signal acquisition unit 21 is respectively connected to the first signal conversion unit 22 and the brightness processing unit 23, and the first signal conversion unit 22 is connected to the first selection unit 24.
- the brightness processing unit 23 is also connected to the first selection unit 24, the first selection unit 24 is also connected to the second signal conversion unit 25, the second signal conversion unit 25 is connected to the first output unit 26, the controller 10 is respectively connected to the brightness processing unit 23 and the first selection unit 24, and the first output unit 26 is connected to the video fusion processing module 30.
- the video signal acquisition unit 21 is mainly used to acquire the RGB video signal of the video
- the first signal conversion unit 22 is used to convert the RGB video signal into a YC444 signal
- the brightness processing unit 23 uses the brightness Y component for signal processing
- the controller 10 controls the first selection unit 24 to output YC444 signals with Y and C components
- the second signal conversion unit 25 is used to convert the YC444 signals with Y and C components into YC422 video signals
- the YC422 video signals are delivered from the first output unit 26 to the video fusion processing module 30.
- when the R, G, and B components are within the preset range, the brightness processing unit 23 sets the Y component value of the video signal to 0; when the Y component value of the YC444 signal output by the first signal conversion unit 22 is 0, its lowest bit is set to 1; otherwise the Y component value of the YC444 signal from the first signal conversion unit 22 is maintained and output. The first selection unit 24 selects which Y component of the YC444 signal is output, under the control of the controller 10.
- under the control of the controller 10, the first selection unit 24 outputs either the Y component of the YC444 signal from the first signal conversion unit 22 or the Y component processed by the brightness processing unit 23.
- after outputting the Y component, the first selection unit 24 also outputs the chrominance component CbCr (C component) produced by the first signal conversion unit 22 for the YC444 signal; the second signal conversion unit 25 converts the YC444 signal into a YC422 signal: when the Y component of an odd point in the YC444 signal is 0, the chrominance component CbCr (C component) is taken as the average of that point's C component and the next point's C component; when the Y component of an even point in the YC444 signal is 0, the chrominance component CbCr (C component) is taken as the average of that point's C component and the previous point's C component.
- when the Y component is not 0, odd and even points are not distinguished, and the chrominance component CbCr (C component) is taken as the average of that point's C component and the next point's C component;
- the signal output by the first output unit 26 is the YC422 signal, and this YC422 signal serves as the input signal of the video fusion processing module 30.
- the video fusion processing module 30 in the embodiment of the present invention includes a third signal conversion unit 31, a video fusion processing unit 32, a second selection unit 33, and a second output unit 34; the first output unit 26 is respectively connected to the third signal conversion unit 31 and the video fusion processing unit 32, the third signal conversion unit 31 is also connected to the second selection unit 33, the video fusion processing unit 32 is also connected to the second selection unit 33, and the second selection unit 33 is also connected to the second output unit 34
- the controller 10 is also connected to the video fusion processing unit 32 and the second selection unit 33 respectively.
- the third signal conversion unit 31 converts the YC422 video signal into an RGB signal
- the video fusion processing unit 32 removes or retains the YC422 video signal data whose Y component value is 0, to obtain a processed YC422 video signal
- the controller 10 controls whether the second selection unit 33 selects the processed YC422 video signal data and the RGB signal for fusion processing to obtain the video fusion data
- the second output unit 34 outputs the fused RGB signal.
- the YC422 signal output by the first output unit 26 is transmitted to the signal input end of the video fusion processing module 30 and enters the module; the third signal conversion unit 31 converts the incoming YC422 signal into an RGB video signal, and the video fusion processing unit 32 performs the fusion data processing.
- when the Y component at the signal input terminal of the video fusion processing module 30 is 0 and the control signal of the controller 10 is 0, the YC422 signal data whose Y component is 0 is removed, and the data in the YC422 signal whose Y component is not 0 is kept.
- the second selection unit 33 is a selector that decides whether to perform fusion processing, controlled by the controller 10: 0 means no fusion processing, and 1 means fusion processing; the second output unit 34 outputs the fused RGB signal.
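- the per-pixel selection can be sketched as follows; this follows one reading of the retention rule (control 0 keeps overlay data whose Y is not 0, control 1 keeps data whose Y is 0), and the function and parameter names are illustrative rather than taken from the text:

```python
def fuse_pixel(fg_y, fg_rgb, bg_rgb, control):
    """Per-pixel sketch of the fusion selection.

    fg_y:    Y component of the overlay (YC422) pixel after the keying step
    fg_rgb:  the same overlay pixel converted to RGB
    bg_rgb:  the pixel of the other video being fused
    control: the controller signal selecting which data is retained
    """
    # control 0: data whose Y is 0 was removed, so only Y != 0 overlays;
    # control 1: data whose Y is not 0 was removed, so only Y == 0 overlays.
    keep_overlay = (fg_y != 0) if control == 0 else (fg_y == 0)
    return fg_rgb if keep_overlay else bg_rgb
```

- because Y = 0 was reserved as the transparency marker earlier in the pipeline, this single comparison is all the fusion stage needs per pixel, which suits a streaming FPGA datapath.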
- the setting range is determined by the video data to be fused in actual application.
- the embodiment of the present invention provides a storage medium including a processor and a memory
- the memory is used to store the program code and transmit the program code to the processor
- the processor is configured to execute the aforementioned video fusion method according to the instructions in the program code.
- the processor is configured to execute the steps in the foregoing embodiment of the video fusion method according to the instructions in the program code, such as steps S1 to S4 shown in FIG. 1. Alternatively, when the processor executes the computer program, it realizes the functions of the modules/units in the foregoing device embodiments, such as the controller 10, the to-be-fused video signal processing module 20, and the video fusion processing module 30 shown in FIG. 4.
- the computer program may be divided into one or more modules/units, and the one or more modules/units are stored in the memory and executed by the processor to complete the application.
- One or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program in the terminal device.
- for example, the computer program can be divided into the controller 10, and the to-be-fused video signal processing module 20 and the video fusion processing module 30 that are each connected to the controller 10:
- the to-be-fused video signal processing module 20 is used to perform brightness and chroma processing on the to-be-fused video signal to obtain video data;
- the video fusion processing module 30 is used to perform fusion processing on video data to obtain video fusion data;
- the controller 10 is used to control whether the video fusion data is fused with the video.
- the terminal device can be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
- the terminal device may include, but is not limited to, a processor and a memory. Those skilled in the art will understand that this does not limit the terminal device, which may include more or fewer components than shown in the figure, combine certain components, or use different components; for example, the terminal device may also include input and output devices, network access equipment, a bus, and so on.
- the so-called processor can be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), an off-the-shelf field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on.
- the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
- the memory may be an internal storage unit of the terminal device, such as the hard disk or memory of the terminal device.
- the memory can also be an external storage device of the terminal device, such as a plug-in hard disk equipped on the terminal device, a smart memory card (Smart Media Card, SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), etc.
- the memory may also include both an internal storage unit of the terminal device and an external storage device.
- the memory is used to store computer programs and other programs and data required by the terminal device.
- the memory can also be used to temporarily store data that has been output or will be output.
- the disclosed system, device, and method can be implemented in other ways.
- the device embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
- if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
- the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to make a computer device (which can be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the method described in each embodiment of the present invention.
- the aforementioned storage media include: a U disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and other media that can store program code.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Processing Of Color Television Signals (AREA)
- Image Processing (AREA)
Abstract
Description
Claims (10)
- A video fusion method, characterized by comprising the following steps: S1. Obtain the video signal to be fused and the video; S2. Perform brightness and chroma processing on the video signal to obtain processed video data; S3. Perform fusion processing on the video data to obtain video fusion data; S4. Fuse the video fusion data with the video.
- The video fusion method according to claim 1, wherein the video signal is an RGB video signal, and in step S2 the processing steps for obtaining the video data comprise: S21. Perform signal conversion on the RGB video signal to convert it into a YC444 signal; S22. Perform luminance Y component processing on the luminance in the YC444 signal and process the chrominance in the YC444 signal, outputting a Y component and a C component; S23. Convert the processed YC444 signal having the Y component and the C component into a YC422 video signal, the data in the YC422 video signal being the processed video data.
- The video fusion method according to claim 2, wherein in step S22, when the three components R, G, and B of the RGB video signal are within the preset value range of the controller, the Y component value of the video signal is set to 0 after the luminance Y component processing; if the Y component value of the YC444 signal is 0, its lowest bit is set to 1; otherwise the Y component value of the YC444 signal is output unchanged; if the Y component value in the RGB video signal is 0, the controller controls the converted YC444 signal to output the Y component; if the Y component value in the RGB video signal is 1, the controller controls the converted YC444 signal to output the Y component after the luminance Y component processing; odd points and even points are defined on the YC444 signal: if the Y component of the YC444 signal at an odd point is 0, the C component of the YC444 signal takes the C component of the next point as the average of the C components of the two points; if the Y component of the YC444 signal at an even point is 0, the C component of the YC444 signal takes the C component of the previous point as the average of the C components of the two points; when the Y component is not 0, regardless of odd or even points, the C component of the YC444 signal takes the C component of the next point as the average of the C components of the two points.
- The video fusion method according to claim 1, wherein in step S3 the processing steps for obtaining the video fusion data comprise: S31. Perform signal conversion on the YC422 video signal in the video data to convert it into an RGB signal; S32. If the Y component value in the YC422 video signal is 0 and the control signal issued by the controller is 0, remove the YC422 video signal data whose Y component value is 0 to obtain the YC422 video signal data whose Y component is not 0, which is the processed YC422 video signal data; if the Y component value in the YC422 video signal is 0 and the control signal issued by the controller is 1, remove the YC422 video signal data whose Y component value is not 0 to obtain the YC422 video signal data whose Y component is 0, which is the processed YC422 video signal data.
- The video fusion method according to claim 4, wherein in step S3, the processed YC422 video signal data and the data of the RGB signal are subjected to fusion processing under the control of the controller to obtain the video fusion data.
- A video fusion apparatus, characterized by comprising a controller, and a to-be-fused video signal processing module and a video fusion processing module each connected to the controller; the to-be-fused video signal processing module is used to perform brightness and chroma processing on the video signal to be fused to obtain video data; the video fusion processing module is used to perform fusion processing on the video data to obtain video fusion data; the controller is used to control whether the video fusion data is fused with the video.
- The video fusion apparatus according to claim 6, wherein the to-be-fused video signal processing module comprises a video signal acquisition unit, a first signal conversion unit, a brightness processing unit, a first selection unit, a second signal conversion unit, and a first output unit; the video signal to be fused obtained by the video signal acquisition unit is an RGB video signal; the video signal acquisition unit is connected to the first signal conversion unit and the brightness processing unit respectively; the first signal conversion unit is connected to the first selection unit and the brightness processing unit respectively; the brightness processing unit is also connected to the first selection unit; the first selection unit is also connected to the second signal conversion unit; the second signal conversion unit is connected to the first output unit; the controller is connected to the brightness processing unit and the first selection unit respectively; and the first output unit is connected to the video fusion processing module.
- The video fusion apparatus according to claim 7, wherein the first signal conversion unit is used to convert the RGB video signal into a YC444 signal; the brightness processing unit uses the luminance Y component for signal processing; the controller controls the first selection unit to output a YC444 signal having a Y component and a C component; the second signal conversion unit is used to convert the YC444 signal having the Y component and the C component into a YC422 video signal; and the YC422 video signal is transmitted from the first output unit to the video fusion processing module.
- The video fusion apparatus according to claim 8, wherein the video fusion processing module comprises a third signal conversion unit, a video fusion processing unit, a second selection unit, and a second output unit; the first output unit is connected to the third signal conversion unit and the video fusion processing unit respectively; the third signal conversion unit is also connected to the second selection unit; the video fusion processing unit is also connected to the second selection unit; the second selection unit is also connected to the second output unit; the controller is also connected to the video fusion processing unit and the second selection unit respectively; the third signal conversion unit converts the YC422 video signal into an RGB signal; when the Y component value in the YC422 signal is 0, the video fusion processing unit removes or retains the YC422 video signal data whose value is 0 to obtain the processed YC422 video signal data; the controller controls whether the second selection unit selects the processed YC422 video signal data and the RGB signal for fusion processing to obtain the video fusion data; and the second output unit outputs the fused RGB signal.
- A storage medium, characterized by comprising a processor and a memory; the memory is used to store program code and transmit the program code to the processor; and the processor is configured to execute the video fusion method according to any one of claims 1 to 5 according to the instructions in the program code.
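The per-pixel rules in claims 2 and 3 (keying a preset RGB range to Y = 0, protecting genuinely zero luminance by setting its lowest bit, and averaging the C component toward a neighbouring point during 4:4:4 to 4:2:2 downsampling) can be illustrated with a Python sketch. The BT.601 luma weights, the 1-based point numbering for odd/even parity, and the array layout are assumptions for illustration and are not specified by the claims.

```python
import numpy as np

def key_luma(rgb, lo, hi):
    """Claims 2-3: derive Y and force Y = 0 inside the controller's preset RGB range."""
    r, g, b = rgb[..., 0].astype(float), rgb[..., 1].astype(float), rgb[..., 2].astype(float)
    y = np.round(0.299 * r + 0.587 * g + 0.114 * b).astype(np.int32)  # assumed BT.601 luma
    in_range = np.all((rgb >= lo) & (rgb <= hi), axis=-1)
    y[(y == 0) & ~in_range] |= 1   # genuine zero luma outside the range: set lowest bit to 1
    y[in_range] = 0                # keyed pixels carry Y = 0 as the transparency flag
    return y

def downsample_c(y, c):
    """4:4:4 -> 4:2:2 C component, steering the average away from keyed (Y == 0) points."""
    out = np.empty_like(c, dtype=float)
    n = len(c)
    for i in range(n):                       # points numbered from 1, as in the claim
        point_no = i + 1
        if y[i] == 0 and point_no % 2 == 1:  # odd point keyed: use the next point's C
            j = min(i + 1, n - 1)
        elif y[i] == 0:                      # even point keyed: use the previous point's C
            j = max(i - 1, 0)
        else:                                # Y != 0: pair with the next point regardless
            j = min(i + 1, n - 1)
        out[i] = (c[i] + c[j]) / 2           # average of the two points' C components
    return out
```

The lowest-bit trick keeps Y = 0 unambiguous as a transparency marker: any pixel whose real luminance happens to be 0 is nudged to 1, so only keyed pixels carry exactly 0 downstream.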
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201980003225.3A CN111095919B (en) | 2019-12-17 | 2019-12-17 | Video fusion method and device and storage medium |
PCT/CN2019/125801 WO2021119968A1 (en) | 2019-12-17 | 2019-12-17 | Video fusion method and apparatus, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/125801 WO2021119968A1 (en) | 2019-12-17 | 2019-12-17 | Video fusion method and apparatus, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021119968A1 true WO2021119968A1 (en) | 2021-06-24 |
Family
ID=70400245
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/125801 WO2021119968A1 (en) | 2019-12-17 | 2019-12-17 | Video fusion method and apparatus, and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111095919B (en) |
WO (1) | WO2021119968A1 (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102572475A (en) * | 2010-12-17 | 2012-07-11 | 微软公司 | Stereo 3D video support in computing devices |
CN105005963A (en) * | 2015-06-30 | 2015-10-28 | 重庆市勘测院 | Multi-camera images stitching and color homogenizing method |
CN105635602A (en) * | 2015-12-31 | 2016-06-01 | 天津大学 | System for mosaicing videos by adopting brightness and color cast between two videos and adjustment method thereof |
CN106570850A (en) * | 2016-10-12 | 2017-04-19 | 成都西纬科技有限公司 | Image fusion method |
US20170323481A1 (en) * | 2015-07-17 | 2017-11-09 | Bao Tran | Systems and methods for computer assisted operation |
CN108449569A (en) * | 2018-03-13 | 2018-08-24 | 重庆虚拟实境科技有限公司 | Virtual meeting method, system, device, computer installation and storage medium |
CN109981983A (en) * | 2019-03-26 | 2019-07-05 | Oppo广东移动通信有限公司 | Augmented reality image processing method, device, electronic equipment and storage medium |
CN110147162A (en) * | 2019-04-17 | 2019-08-20 | 江苏大学 | A kind of reinforced assembly teaching system and its control method based on fingertip characteristic |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8798171B2 (en) * | 2010-06-28 | 2014-08-05 | Richwave Technology Corp. | Video transmission by decoupling color components |
CN110363732A (en) * | 2018-04-11 | 2019-10-22 | 杭州海康威视数字技术股份有限公司 | A kind of image interfusion method and its device |
2019
- 2019-12-17 WO PCT/CN2019/125801 patent/WO2021119968A1/en active Application Filing
- 2019-12-17 CN CN201980003225.3A patent/CN111095919B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN111095919A (en) | 2020-05-01 |
CN111095919B (en) | 2021-10-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107680056B (en) | Image processing method and device | |
CN107682667B (en) | Video processor and multi-signal source pre-monitoring method | |
CN108337496B (en) | White balance processing method, processing device, processing equipment and storage medium | |
JP7359521B2 (en) | Image processing method and device | |
US20190080674A1 (en) | Systems and methods for combining video and graphic sources for display | |
CN108345559B (en) | Virtual reality data input device and virtual reality equipment | |
US9239697B2 (en) | Display multiplier providing independent pixel resolutions | |
CN114998122A (en) | Low-illumination image enhancement method | |
WO2023010755A1 (en) | Hdr video conversion method and apparatus, and device and computer storage medium | |
CN110930932B (en) | Display screen correction method and system | |
US11146770B2 (en) | Projection display apparatus and display method | |
WO2021119968A1 (en) | Video fusion method and apparatus, and storage medium | |
CN107948652B (en) | Method and equipment for image conversion | |
CN114245027B (en) | Video data hybrid processing method, system, electronic equipment and storage medium | |
EP1993293A1 (en) | System and method of image processing | |
US8929666B2 (en) | Method and apparatus of generating a multi-format template image from a single format template image | |
CN116489431B (en) | Input/output signal processing method, device, equipment and storage medium | |
CN114266696A (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
US20130300774A1 (en) | Image processing method | |
WO2021259191A1 (en) | Video mirror processing method and apparatus, photographic device, and readable storage medium | |
TWI754863B (en) | Image capturing device and method | |
CN107197287A (en) | A kind of video recorded broadcast method and apparatus based on arm processors | |
CN113395502B (en) | Cross chromaticity reduction method based on different color space sampling formats | |
WO2023035943A1 (en) | Color card generating method, image processing method, and apparatuses, and readable storage medium | |
CN109688333B (en) | Color image acquisition method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19956979 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19956979 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 02/02/2023) |
|
Ref document number: 19956979 Country of ref document: EP Kind code of ref document: A1 |