WO2023184850A1 - Video image processing method and device, equipment and storage medium - Google Patents

Video image processing method and device, equipment and storage medium

Info

Publication number
WO2023184850A1
Authority
WO
WIPO (PCT)
Prior art keywords
optical parameter
video image
character
unit
display
Prior art date
Application number
PCT/CN2022/115939
Other languages
English (en)
French (fr)
Inventor
冯学铨
Original Assignee
晶晨半导体(上海)股份有限公司
Priority date
Filing date
Publication date
Application filed by 晶晨半导体(上海)股份有限公司
Publication of WO2023184850A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/44 Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/445 Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
    • H04N5/44504 Circuit details of the additional information generator, e.g. details of the character or graphics signal generator, overlay mixing circuits
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628 Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/91 Television signal processing therefor
    • H04N5/917 Television signal processing therefor for bandwidth reduction

Definitions

  • Embodiments of the present invention relate to the field of image processing technology, and in particular, to a video image processing method and device, equipment and storage medium.
  • Video image processing technology is widely used in intelligent video image analysis technology.
  • OSD (On-Screen Display) is a technique for superimposing graphics on a displayed picture, commonly used in devices such as CRT/LCD monitors, video recorders and DVD players. The superimposed graphics allow users to obtain additional information, such as station logos, trademarks, subtitles, system menus, etc.
  • The problem addressed by the embodiments of the present invention is to provide a video image processing method and device, equipment and storage medium that can reduce the CPU load, the amount of computation and the bandwidth, and that can also help reduce the complexity of the chip design and the chip area occupied.
  • In order to solve the above problem, an embodiment of the present invention provides a video image processing method, which includes: obtaining an area to be superimposed in a video image, where the area to be superimposed includes multiple display blocks; using an optical parameter statistics unit to perform optical parameter statistical processing on each display block to obtain optical parameter statistical values, wherein the optical parameter statistics unit is a hardware unit; judging, based on the difference between the optical parameter statistical value and the optical parameter value of the target display character, whether the target display character needs to be adjusted; when the difference is less than or equal to a preset threshold, adjusting the target display character to obtain a superimposed character, wherein the difference between the optical parameter value of the superimposed character and the optical parameter statistical value of the corresponding display block is greater than the preset threshold; and performing character superposition processing on the area to be superimposed, the character superposition processing including: superimposing the superimposed character and the corresponding display block.
  • Correspondingly, an embodiment of the present invention also provides a video image processing device, including: an acquisition unit for acquiring an area to be superimposed in a video image, where the area to be superimposed includes a plurality of display blocks; an optical parameter statistics unit for performing optical parameter statistical processing on each display block to obtain optical parameter statistical values, the optical parameter statistics unit being a hardware unit; a judgment unit for judging, based on the difference between the optical parameter statistical value and the optical parameter value of the target display character, whether the target display character needs to be adjusted; a character adjustment unit for adjusting the target display character to obtain a superimposed character when the difference is less than or equal to a preset threshold, wherein the difference between the optical parameter value of the superimposed character and the optical parameter statistical value of the corresponding display block is greater than the preset threshold; and a character superposition unit for performing character superposition processing on the area to be superimposed, the character superposition unit including a first subunit for superimposing the superimposed character and the corresponding display block.
  • an embodiment of the present invention also provides a device, including at least one memory and at least one processor.
  • The memory stores one or more computer instructions, and the one or more computer instructions are executed by the processor to implement the video image processing method provided by the embodiment of the present invention.
  • embodiments of the present invention also provide a storage medium that stores one or more computer instructions, and the one or more computer instructions are used to implement the video image processing method provided by the embodiment of the present invention.
  • In the video image processing method provided by the embodiment of the present invention, an optical parameter statistics unit is used to perform optical parameter statistical processing on each display block to obtain optical parameter statistical values, wherein the optical parameter statistics unit is a hardware unit; based on the difference between the optical parameter statistical value and the optical parameter value of the target display character, it is determined whether the target display character needs to be adjusted; and when the difference is less than or equal to the preset threshold, the target display character is adjusted. Therefore, the embodiment of the present invention completes the statistics of local optical parameters of the video image by adding a small number of hardware units, without using software for optical parameter statistical processing, which correspondingly reduces the CPU load, the amount of computation and the bandwidth; and since the changes to the hardware modules are small, it also helps reduce the complexity of the chip design and the chip area occupied.
  • In the video image processing device provided by the embodiment of the present invention, the optical parameter statistics unit performs optical parameter statistical processing on each display block to obtain optical parameter statistical values, the optical parameter statistics unit being a hardware unit; the judgment unit determines, based on the difference between the optical parameter statistical value and the optical parameter value of the target display character, whether the target display character needs to be adjusted; and when the difference is less than or equal to the preset threshold, the character adjustment unit adjusts the target display character. Therefore, the embodiment of the present invention completes the statistics of local optical parameters of the video image by adding a small number of hardware units, without using software for optical parameter statistical processing, which correspondingly reduces the CPU load, the amount of computation and the bandwidth; and since the changes to the hardware modules are small, it also helps reduce the complexity of the chip design and the chip area occupied.
  • Figure 1 is a flow chart corresponding to an embodiment of the video image processing method of the present invention
  • Figure 2 shows the area to be superimposed in a frame of video image
  • Figure 3 is a functional block diagram of an embodiment of the video image processing device of the present invention.
  • Figure 4 is a hardware structure diagram of a device provided by an embodiment of the present invention.
  • At present, there are two common ways to implement such overlay processing. One is a pure software implementation: the CPU reads the area to be superimposed in the video image and divides it into multiple display blocks; brightness statistics are then performed on each display block to obtain brightness statistical values; and, based on the difference between the brightness statistical value and the brightness value of the target display character, it is determined whether each display block needs brightness adjustment processing. The pure software implementation offers high flexibility, but when the areas to be superimposed are numerous or large, it generates a large amount of bandwidth and CPU computation.
  • The other is a hardware implementation: the video overlay module in the chip integrates the brightness adjustment function, and the CPU only needs to configure the corresponding brightness adjustment settings for the video overlay module. The hardware implementation basically does not require the participation of the CPU, but each video overlay processing module requires a corresponding brightness adjustment unit, which increases the complexity of the chip design and the chip area occupied. In addition, with the hardware implementation it is not easy to reduce the cost of video image processing, and the flexibility and functionality cannot be expanded.
  • embodiments of the present invention provide a video image processing method.
  • Referring to Figure 1, a schematic flow chart of an embodiment of the video image processing method of the present invention is shown.
  • the video image processing method includes the following basic steps:
  • Step S1: obtain the area to be superimposed in the video image, where the area to be superimposed includes multiple display blocks;
  • Step S2: use an optical parameter statistics unit to perform optical parameter statistical processing on each display block to obtain optical parameter statistical values, wherein the optical parameter statistics unit is a hardware unit;
  • Step S3: based on the difference between the optical parameter statistical value and the optical parameter value of the target display character, determine whether the target display character needs to be adjusted;
  • Step S4: when the difference is less than or equal to the preset threshold, adjust the target display character to obtain a superimposed character, where the difference between the optical parameter value of the superimposed character and the optical parameter statistical value of the corresponding display block is greater than the preset threshold;
  • Step S5: perform character superposition processing on the area to be superimposed; the character superposition processing includes step S51: superimposing the superimposed characters and the corresponding display blocks.
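  • For illustration only, the following minimal Python sketch models how steps S1 to S5 could fit together in software; in the embodiments described here the statistics of step S2 are produced by a hardware optical parameter statistics unit, and every name in the sketch (block_luma_stat, overlay_region, PRESET_THRESHOLD, etc.) is a hypothetical stand-in rather than anything defined by the patent. The sketch assumes a single-channel 8-bit luma frame and one 32*32 display block per character.

```python
import numpy as np

PRESET_THRESHOLD = 48          # hypothetical brightness threshold (0-255 scale)
BLOCK_W, BLOCK_H = 32, 32      # display-block size matching a 32*32 character

def block_luma_stat(block: np.ndarray) -> float:
    """Step S2 (software stand-in for the hardware statistics unit):
    return the mean luma of all pixels in one display block."""
    return float(block.mean())

def overlay_region(frame: np.ndarray, region_xy, char_bitmaps, char_luma=255):
    """Steps S1-S5: overlay character bitmaps onto the display blocks of the
    area to be superimposed inside `frame` (single-channel 8-bit luma image).
    Blocks are assumed to lie in one horizontal row starting at region_xy."""
    x0, y0 = region_xy
    out = frame.copy()
    for idx, bitmap in enumerate(char_bitmaps):          # one block per character
        bx = x0 + idx * BLOCK_W
        block = out[y0:y0 + BLOCK_H, bx:bx + BLOCK_W]    # S1: display block
        stat = block_luma_stat(block)                    # S2: block statistic
        if abs(stat - char_luma) <= PRESET_THRESHOLD:    # S3: too similar?
            draw_luma = 255 - char_luma                  # S4: invert -> superimposed char
        else:
            draw_luma = char_luma                        # S52: use character as-is
        block[bitmap > 0] = draw_luma                    # S51: superimpose onto block
    return out
```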
  • In the method, an optical parameter statistics unit is used to perform optical parameter statistical processing on each display block to obtain optical parameter statistical values, wherein the optical parameter statistics unit is a hardware unit; based on the difference between the optical parameter statistical value and the optical parameter value of the target display character, it is determined whether optical parameter adjustment processing needs to be performed on the target display character; and when it is determined that adjustment is required, the optical parameter adjustment processing is performed on the target display character. Therefore, the embodiment of the present invention completes the statistics of local optical parameters of the video image by adding a small number of hardware units, without using software for optical parameter statistical processing, which correspondingly reduces the CPU load, the amount of computation and the bandwidth; and since the changes to the hardware modules are small, it also helps reduce the complexity of the chip design and the chip area occupied.
  • Step S1: obtain an area to be superimposed in the video image, where the area to be superimposed includes multiple display blocks.
  • the area to be superimposed is an area in the original video image where characters need to be superimposed.
  • Specifically, content such as menus, prompt information, dates, addresses, icons (such as logos), camera information, etc. can be superimposed in the area to be superimposed, so that additional display information (e.g., text, images, etc.) can be provided on the basis of the video picture.
  • The area to be superimposed includes multiple display blocks, so that the subsequent optical parameter statistics unit can perform optical parameter statistics on each display block respectively.
  • Specifically, the position information of the area to be superimposed, and the position information and size of each display block, are obtained based on the size of the superimposed characters and the position information of the superimposed characters in the original video image.
  • As shown in Figure 2, the area to be superimposed includes six display blocks A, B, C, D, E and F.
  • the shape of the display block is a rectangle.
  • the size and number of the display blocks can be flexibly set according to actual needs, and the sizes of the display blocks can be the same or different.
  • the step of obtaining the area to be superimposed and the display block includes: obtaining the area to be superimposed in the video image; and performing area division processing on the area to be superimposed to obtain the multiple display blocks. That is to say, the area to be superimposed is first obtained, and then the area to be superimposed is divided into a plurality of display blocks.
  • Specifically, the area to be superimposed is divided based on the size of the superimposed characters to obtain the multiple display blocks. For example, if the size of a superimposed character is 32*32, the size of a display block may be 32*32, or 16*32.
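  • As an illustration of the division described above, the following sketch (an assumption for illustration, not part of the patent) splits the area to be superimposed into display blocks whose size is derived from the superimposed character size:

```python
def divide_into_blocks(region_x, region_y, region_w, region_h,
                       char_w=32, char_h=32):
    """Divide the area to be superimposed into display blocks sized to the
    superimposed characters; returns (x, y, w, h) for each block."""
    blocks = []
    for by in range(region_y, region_y + region_h, char_h):
        for bx in range(region_x, region_x + region_w, char_w):
            blocks.append((bx, by, char_w, char_h))
    return blocks

# Example: a 192x32 region holds six 32x32 blocks (A..F as in Figure 2).
print(len(divide_into_blocks(0, 0, 192, 32)))   # -> 6
```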
  • the display block corresponds to one or more superimposed characters.
  • the video image processing method also includes: performing post processing on the video image, and the post processing includes: cropping the video image, and outputting the cropped image.
  • the video image is cropped to retain only the required image content, thereby confirming the final output image size.
  • the cropped image includes multiple rows of sub-images.
  • the step of cropping processing can also be omitted.
  • obtaining the area to be superimposed in the video image includes: obtaining the area to be superimposed in the cropped image.
  • Only by cropping the processed video image can the size of the final output video image be confirmed. The information of the area to be superimposed can then be obtained based on the size of the final output image, which ensures the position accuracy of the area to be superimposed. Moreover, the optical parameter statistical processing can be performed directly on the optical parameter values of the pixels at each position output by the cropping process, without fetching data from DDR memory and without generating additional DDR reads and writes, which in turn helps reduce the amount of computation, the CPU load and the bandwidth.
  • post-processing modules are usually used to post-process video images.
  • The optical parameter statistics unit can be added to the post-processing module and is easily designed as an online module, which accordingly reduces the modifications to existing hardware modules, reduces the complexity of the chip design, and helps save chip area.
  • the post-processing further includes: before cropping the video image, scaling the video image and outputting multiple video images with different resolutions.
  • cropping is performed on multiple video images with different resolutions to output the cropped images with different resolutions.
  • the scaling step may be omitted.
  • Take the application of the video image processing method to a surveillance camera scenario as an example. In this scenario, the post-processing also includes: performing occlusion processing on the video image before the scaling process.
  • Through the occlusion processing, areas in the video image that are not expected to be exposed (for example, areas that may expose personal privacy) can be blocked based on actual requirements.
  • areas of the video image that are not expected to be exposed can be blocked with solid color areas.
  • the occlusion processing step may be omitted.
  • the video image processing method further includes: after acquiring the area to be superimposed and the display block, configuring the information of the display block to the optical parameter statistics unit.
  • the information of the display block is configured to the optical parameter statistics unit, so that the subsequent optical parameter statistics unit can perform optical parameter statistical processing on the display block based on the configured information of the display block.
  • the information of the display blocks includes position information and size information of each display block.
  • the CPU may obtain the information of the area to be superimposed and the display block, and then configure the information of the display block to the optical parameter statistics unit.
  • Specifically, after acquiring the information of the area to be superimposed and the display blocks, the CPU stores this information in a register, so that the optical parameter statistics unit can obtain the information of the area to be superimposed and the display blocks from the register.
  • the register may be set inside the post-processing module.
  • Step S2: use the optical parameter statistics unit to perform optical parameter statistical processing on each display block to obtain optical parameter statistical values, wherein the optical parameter statistics unit is a hardware unit.
  • Because the optical parameter statistics unit is a hardware unit, this embodiment completes the statistics of local optical parameters of the video image by adding a small number of hardware units; there is no need to use software for optical parameter statistical processing, which accordingly reduces the CPU load, the amount of computation and the bandwidth, and since the changes to the hardware modules are small, it also helps reduce the complexity of the chip design and the chip area occupied.
  • performing optical parameter statistical processing on each display block includes: obtaining optical parameter statistical values of all pixels in each display block.
  • the statistical value of the optical parameters of the pixels may be the sum of the optical parameters of all pixels, or may also be the average value of the optical parameters of all pixels.
  • In this embodiment, the optical parameter is brightness, and the optical parameter statistics unit is a brightness statistics unit; accordingly, performing optical parameter statistical processing on each display block includes: using the brightness statistics unit to perform brightness statistical processing on each display block to obtain brightness statistical values.
  • the brightness statistical value may be the sum of brightness of all pixels in the current display block, or may be the average brightness of all pixels in the current display block.
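  • For illustration, the two statistics mentioned above (sum or average of the brightness of all pixels in a display block) could be computed as in the sketch below; in the embodiments this work is performed by the hardware brightness statistics unit, and the function names are hypothetical:

```python
import numpy as np

def brightness_sum(block: np.ndarray) -> int:
    """Sum of the brightness (luma) of all pixels in the display block."""
    return int(block.astype(np.uint32).sum())

def brightness_mean(block: np.ndarray) -> float:
    """Average brightness of all pixels in the display block."""
    return float(block.mean())
```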
  • The optical parameter being brightness and the optical parameter statistics unit being a brightness statistics unit are given only as an example; the implementation of the optical parameter statistics unit is not limited to this. In other embodiments, the optical parameter may be chromaticity, and the optical parameter statistics unit may accordingly be a chromaticity statistics unit.
  • the chromaticity statistics unit is used to perform chromaticity statistical processing on each display block to obtain chromaticity statistical values. By obtaining the chromaticity statistical value, each display block can be compared with the chromaticity value of the corresponding target display character to determine whether the target display character needs to be adjusted.
  • the optical parameter statistics unit is a hardware unit.
  • the optical parameter statistics unit may be a logic circuit.
  • In some embodiments, the video image processing method further includes: in different time periods, selecting one cropped image from the multiple cropped images and outputting it to the optical parameter statistics unit. In this way, in different time periods the optical parameter statistics unit can perform optical parameter statistical processing on the display blocks in cropped images of different resolutions, thereby enabling time-division multiplexing of the optical parameter statistics unit. Accordingly, there is no need to provide a separate optical parameter statistics unit for each cropped image, which is beneficial to reducing the chip area occupied and thereby saving costs.
  • the post-processing module reads and processes images line by line. There is a corresponding line count inside the post-processing module. After processing the corresponding number of lines, the processing of one frame of image is completed.
  • the cropped image includes multiple rows of sub-images that are sequentially output; the multiple rows of sub-images are sequentially input to the optical parameter statistics unit.
  • Accordingly, the step of using the optical parameter statistics unit to perform optical parameter statistical processing on each display block includes: determining whether the current row sub-image contains a display block; and, if so, the optical parameter statistics unit performing optical parameter statistical processing on the display blocks in the current row sub-image. Because the cropped image is divided into multiple rows of sub-images that are input to the optical parameter statistics unit in sequence, it is necessary to determine whether the current row sub-image contains a display block, so that only the display blocks in the row sub-images that actually contain them are processed, which reduces the amount of computation and ensures that the optical parameter statistics unit correctly performs optical parameter statistical processing on the display blocks.
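  • A software model of this row-by-row behaviour might look like the following sketch, in which per-block accumulators are updated only for the rows that intersect a display block; the class and its fields are assumptions made for illustration and do not describe the actual register layout of the hardware unit:

```python
class RowWiseBlockStats:
    """Accumulate per-block luma sums as rows of the cropped image arrive."""

    def __init__(self, blocks):
        # blocks: list of (x, y, w, h) display-block rectangles
        self.blocks = blocks
        self.sums = [0] * len(blocks)
        self.counts = [0] * len(blocks)

    def feed_row(self, row_index, row_pixels):
        """Process one row sub-image; rows outside every block are skipped."""
        for i, (x, y, w, h) in enumerate(self.blocks):
            if y <= row_index < y + h:               # row intersects this block?
                segment = row_pixels[x:x + w]
                self.sums[i] += sum(int(p) for p in segment)
                self.counts[i] += len(segment)

    def means(self):
        """Per-block average brightness once a frame has been fed."""
        return [s / c if c else 0.0 for s, c in zip(self.sums, self.counts)]
```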
  • the post-processing also includes: after performing optical parameter statistical processing on each of the display blocks, performing mirror/flip processing on the cropped image, and outputting the processed image.
  • the step of mirror flipping processing can be omitted.
  • Step S3: based on the difference between the optical parameter statistical value and the optical parameter value of the target display character, determine whether the target display character needs to be adjusted.
  • the target display characters are characters that need to be superimposed and displayed on each display block in the area to be superimposed, so that additional display information can be provided based on the video picture.
  • By judging whether the target display character needs to be adjusted based on the difference between the optical parameter statistical value and the optical parameter value of the target display character, it can be ensured that the target display character is displayed normally on each display block.
  • the target display characters may include menus, prompt information, dates, addresses, icons (such as logos), camera information, etc.
  • the display block corresponds to one or more of the target display characters. Therefore, based on the difference between the optical parameter statistical value of the current display block and the optical parameter value of the corresponding target display character, it is determined whether the target display character corresponding to the current block needs to be adjusted.
  • In this embodiment, the optical parameter is brightness. Therefore, based on the difference between the brightness statistical value of the current display block and the brightness value of the corresponding target display character, it is determined whether the target display character corresponding to the current display block needs to be adjusted.
  • In other embodiments, the optical parameter may also be another type of optical parameter, such as chromaticity. Accordingly, based on the difference between the chromaticity statistical value of the current display block and the chromaticity value of the corresponding target display character, it is determined whether the target display character corresponding to the current display block needs to be adjusted.
  • Specifically, the step S3 of determining whether the target display character needs to be adjusted includes: determining whether the difference is greater than the preset threshold; when the difference is less than or equal to the preset threshold, judging that the target display character needs to be adjusted; and when the difference is greater than the preset threshold, judging that the target display character does not need to be adjusted.
  • The difference between the optical parameter statistical value and the optical parameter value of the target display character is compared with the preset threshold. When the difference is less than or equal to the preset threshold, it means that the optical parameter difference between the display block and the target display character is too small; if the display block and the corresponding target display character were directly superimposed, the target display character would not be displayed normally on the display block. Therefore, it is determined that the target display character needs to be adjusted so that it can be displayed normally on the display block.
  • When the difference is greater than the preset threshold, it means that the optical parameter values of the display block and the target display character differ sufficiently, so that the display block and the corresponding target display character can be directly superimposed and the target display character can be displayed normally with the display block as the background.
  • The preset threshold is set based on empirical values or user preferences.
  • Step S4: when the difference is less than or equal to the preset threshold, adjust the target display character to obtain a superimposed character. The difference between the optical parameter value of the superimposed character and the optical parameter statistical value of the corresponding display block is greater than the preset threshold.
  • When the difference is less than or equal to the preset threshold, it means that the optical parameter difference between the display block and the target display character is too small, and the target display character needs to be adjusted to obtain the superimposed character. In this way, after the superimposed character and the corresponding display block are subsequently superimposed, the target display character can be displayed normally on the display block.
  • the superimposed characters are used to be subsequently superimposed with the corresponding display block to display the target display character on the display block.
  • Specifically, the step of adjusting the target display character includes: adjusting the optical parameters of the target display character to obtain the superimposed character, so that the difference between the optical parameter value of the superimposed character and the optical parameter statistical value of the corresponding display block is greater than the preset threshold. In this way, the optical parameter values of the obtained superimposed character and the corresponding display block differ sufficiently, so that after the superimposed character and the display block are subsequently superimposed, the superimposed character can be displayed normally on the display block.
  • In this embodiment, the optical parameter is brightness, which is taken as an example for explanation.
  • adjusting the target display character may include: performing color inversion processing on the target display character.
  • The target display characters are usually eye-catching black or white characters. Color inversion, that is, turning white target display characters into black superimposed characters and black target display characters into white superimposed characters, is not only simple to perform but also enables the target display characters to be clearly displayed in the video image after the superimposed characters and the display blocks are superimposed.
  • the step of performing color inversion processing on the target display character includes bitwise inversion processing.
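  • As a simple illustration of bitwise inversion on an 8-bit grayscale character bitmap (an assumed representation; the embodiments do not fix a pixel format), inverting every pixel turns a white character into a black superimposed character:

```python
import numpy as np

def invert_character(char_bitmap: np.ndarray) -> np.ndarray:
    """Bitwise inversion of an 8-bit grayscale character bitmap:
    white (255) pixels become black (0) and vice versa."""
    return np.bitwise_not(char_bitmap)          # equivalent to 255 - value for uint8

# Example: a white 32x32 glyph becomes a black superimposed character.
white_char = np.full((32, 32), 255, dtype=np.uint8)
black_char = invert_character(white_char)
assert black_char.max() == 0
```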
  • In other embodiments, the optical parameter may also be another type of optical parameter, such as chromaticity. When the difference between the chromaticity statistical value of the current display block and the chromaticity value of the corresponding target display character is less than or equal to the preset threshold, the target display character corresponding to the current display block is adjusted. Specifically, adjusting the target display character corresponding to the current display block includes: performing color adjustment processing on the target display character to obtain the superimposed character, where the difference between the chromaticity value of the superimposed character and the chromaticity statistical value of the corresponding display block is greater than the preset threshold.
  • performing color adjustment processing on the target display characters corresponding to the current display block may include: adjusting the white balance and color richness of the target display characters.
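  • Purely as an illustration of such a color adjustment (white balance and color richness are mentioned above, but no formula is prescribed), the sketch below applies hypothetical per-channel white-balance gains and a saturation factor to an RGB character bitmap:

```python
import numpy as np

def adjust_character_color(char_rgb: np.ndarray,
                           wb_gains=(1.1, 1.0, 0.9),
                           saturation=1.5) -> np.ndarray:
    """Adjust white balance (per-channel gains) and color richness (saturation)
    of an RGB character bitmap, returning the adjusted superimposed character."""
    img = char_rgb.astype(np.float32)
    img *= np.array(wb_gains, dtype=np.float32)          # white-balance gains
    gray = img.mean(axis=2, keepdims=True)               # per-pixel luma proxy
    img = gray + (img - gray) * saturation               # boost color richness
    return np.clip(img, 0, 255).astype(np.uint8)
```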
  • Step S5: perform character superposition processing on the area to be superimposed, so that the target display character is displayed in the area to be superimposed.
  • the character superimposition processing includes step S51: superimposing the superimposed characters and the corresponding display blocks.
  • the superimposed characters are target display characters after adjustment, so that the optical parameter values of the superimposed characters and the optical parameter statistical values of the corresponding display blocks have sufficient differences.
  • In this way, the target display characters can be clearly displayed on the display blocks and are easily identifiable, thereby ensuring that the additional content information on the video picture is displayed normally and optimizing the user experience.
  • The character superposition processing also includes step S52: when it is judged that adjustment processing is not needed, superimposing the target display character and the corresponding display block.
  • When there is no need to perform adjustment processing, the target display character can be directly superimposed on the corresponding display block and displayed normally, so that the target display character remains clear and easy to identify.
  • the video image processing method further includes: Step S6: Obtain a superimposed canvas based on the superimposed characters and the target display characters.
  • the overlay canvas is used to overlay the current video image.
  • the overlay canvas includes the overlay characters and their corresponding position information, and the target display characters and their corresponding position information.
  • Accordingly, performing character superposition processing on the area to be superimposed includes: superimposing the video image and the overlay canvas.
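  • A minimal sketch of superimposing such an overlay canvas onto the video image is given below; the mask-based representation of the canvas is an assumption made for illustration:

```python
import numpy as np

def apply_overlay_canvas(frame: np.ndarray,
                         canvas: np.ndarray,
                         mask: np.ndarray) -> np.ndarray:
    """Superimpose an overlay canvas onto the video image.
    frame and canvas are same-sized images; mask has shape (H, W) and is
    non-zero wherever the canvas holds superimposed or target display characters."""
    out = frame.copy()
    out[mask > 0] = canvas[mask > 0]
    return out
```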
  • the present invention also provides a video image processing device.
  • Figure 3 is a functional block diagram of an embodiment of the video image processing device of the present invention.
  • The video image processing device includes: an acquisition unit 10 for acquiring an area to be superimposed in the video image, where the area to be superimposed includes a plurality of display blocks; an optical parameter statistics unit 20 for performing optical parameter statistical processing on each display block to obtain optical parameter statistical values, the optical parameter statistics unit being a hardware unit; a judgment unit 30 for determining, based on the difference between the optical parameter statistical value and the optical parameter value of the target display character, whether the target display character needs to be adjusted; a character adjustment unit 40 for adjusting the target display character to obtain a superimposed character when the difference is less than or equal to the preset threshold, where the difference between the optical parameter value of the superimposed character and the optical parameter statistical value of the corresponding display block is greater than the preset threshold; and a character superposition unit 50 for performing character superposition processing on the area to be superimposed, the character superposition unit including a first subunit 501 for superimposing the superimposed character and the corresponding display block.
  • In the device, the optical parameter statistics unit 20 performs optical parameter statistical processing on each display block to obtain optical parameter statistical values, the optical parameter statistics unit 20 being a hardware unit; the judgment unit 30 determines, based on the difference between the optical parameter statistical value and the optical parameter value of the target display character, whether the target display character needs to be adjusted; and when the difference is less than or equal to the preset threshold, the character adjustment unit 40 adjusts the target display character. Therefore, this embodiment completes the statistics of local optical parameters of the video image by adding a small number of hardware units, without using software for optical parameter statistical processing, which correspondingly reduces the CPU load, the amount of computation and the bandwidth; and since the changes to the hardware modules are small, it also helps reduce the complexity of the chip design and the chip area occupied.
  • the acquisition unit 10 acquires the area to be superimposed in the original video image and multiple display blocks therein.
  • the area to be superimposed is an area in the video image where characters need to be superimposed.
  • Specifically, content such as menus, prompt information, dates, addresses, icons (such as logos), camera information, etc. can be superimposed in the area to be superimposed, so that additional display information (e.g., text, images, etc.) can be provided on the basis of the video picture.
  • the area to be superimposed includes multiple display blocks, so that the optical parameter statistics unit 20 can perform optical parameter statistics on each display block respectively.
  • the acquisition unit 10 acquires the position information of the area to be superimposed, and the position information and size of the display block.
  • the acquisition unit 10 obtains the position information of the area to be superimposed, and the position information and size of the display block based on the size of the superimposed characters and the position information of the superimposed characters in the original video image.
  • As shown in Figure 2, the area to be superimposed includes six display blocks A, B, C, D, E and F.
  • the shape of the display block is a rectangle.
  • the size and number of the display blocks can be flexibly set according to actual needs, and the sizes of the display blocks can be the same or different.
  • the acquisition unit 10 acquires the area to be superimposed in the video image, and performs area division processing on the area to be superimposed to obtain the multiple display blocks. That is to say, the acquisition unit 10 first acquires the area to be superimposed, and then divides the area to be superimposed into a plurality of display blocks.
  • Specifically, the acquisition unit 10 performs area division processing on the area to be superimposed based on the size of the superimposed characters to obtain the multiple display blocks. For example, if the size of a superimposed character is 32*32, the size of a display block may be 32*32, or 16*32.
  • the display block corresponds to one or more superimposed characters.
  • The video image processing device further includes: a post-processing module (not shown) for post-processing the video image; the post-processing module includes: a cropping processing unit 60, used to crop the video image and output the cropped image.
  • the video image is cropped to retain only the required image content, thereby confirming the final output image size.
  • the cropped image includes multiple rows of sub-images.
  • the post-processing module may also omit the cropping processing unit.
  • the acquisition unit 10 is used to acquire the area to be superimposed in the cropped image.
  • the size of the final output video image can be confirmed by cropping the video image.
  • The acquisition unit 10 can obtain the information of the area to be superimposed based on the size of the final output image, which ensures the position accuracy of the area to be superimposed.
  • Moreover, the optical parameter statistics unit 20 can perform optical parameter statistical processing directly on the optical parameter values of the pixels at each position output by the cropping processing unit 60, without fetching data from DDR memory and without additional DDR reads and writes, which in turn helps reduce the amount of computation, the CPU load and the bandwidth.
  • The optical parameter statistics unit 20 can be added to the post-processing module and can easily be designed as an online module, which accordingly reduces the changes to existing hardware modules, reduces the complexity of the chip design, and helps save chip area.
  • the cropping processing unit 60 outputs multiple cropped images with different resolutions.
  • In some embodiments, the post-processing module further includes: a scaling unit (not shown) for scaling the video image and outputting multiple video images with different resolutions to the cropping processing unit 60.
  • the scaling unit scales the video image and outputs multiple video images with different resolutions, thereby meeting the requirements for different resolutions in different scenarios.
  • the cropping processing unit 60 performs cropping processing on multiple video images with different resolutions, respectively, to output the multiple cropped images with different resolutions.
  • the scaling unit may be omitted from the post-processing module.
  • Take the application of the video image processing device to a surveillance camera scenario as an example. In this scenario, the post-processing module further includes: an occlusion unit (not shown) for performing occlusion processing on the video image and outputting the occluded image to the scaling unit.
  • The occlusion unit performs occlusion processing on the video image, so that areas of the video image that are not expected to be exposed (for example, areas in the image that may expose personal privacy) can be blocked based on actual needs.
  • areas of the video image that are not expected to be exposed can be blocked with solid color areas.
  • the occlusion processing step may be omitted.
  • the video image processing device further includes: an information configuration unit 80 , configured to configure the information of the area to be superimposed and the display block obtained by the acquisition unit 10 to the optical parameter statistics unit 20 .
  • the information configuration unit 80 configures the information of the display block to the optical parameter statistics unit 20, so that the optical parameter statistics unit 20 can perform optical parameter statistical processing on the display block based on the configured information of the display block.
  • the information of the display blocks includes position information and size information of each display block.
  • the information configuration unit 80 may be a CPU, which acquires the information of the area to be superimposed and the display block, and then configures the information of the display block to the optical parameter statistics unit 20 .
  • Specifically, the CPU stores the information of the area to be superimposed and the display blocks in a register, so that the optical parameter statistics unit 20 can obtain the information of the area to be superimposed and the display blocks from the register.
  • the register can be set inside the post-processing module to facilitate interaction between software and hardware.
  • the optical parameter statistics unit 20 is used to perform optical parameter statistical processing on the display block and obtain the optical parameter statistical value, so that the judgment unit 30 can judge whether the target needs to be processed based on the difference between the optical parameter statistical value and the optical parameter value of the target display character. Display characters are adjusted.
  • The optical parameter statistics unit 20 is a hardware unit. Therefore, in this embodiment, a small number of hardware units are added to the video image processing device to complete the statistics of local optical parameters of the video image, without using software for optical parameter statistical processing, which correspondingly reduces the CPU load, the amount of computation and the bandwidth; and since the changes to the hardware modules are small, it also helps reduce the complexity of the chip design and the chip area occupied.
  • the optical parameter statistics unit 20 obtains the statistical values of all pixels in each display block to implement optical parameter statistical processing.
  • the statistical value of the optical parameters of the pixel may be the sum of the optical parameters of all pixels, or may also be the average value of the optical parameters of all the pixels.
  • In this embodiment, the optical parameter is brightness, and the optical parameter statistics unit 20 is a brightness statistics unit; the brightness statistics unit is used to perform brightness statistical processing on each display block to obtain brightness statistical values.
  • the brightness statistical value may be the sum of brightness of all pixels in the current display block, or may be the average brightness of all pixels in the current display block.
  • The optical parameter being brightness and the optical parameter statistics unit 20 being a brightness statistics unit are given only as an example; the implementation of the optical parameter statistics unit 20 is not limited to this. In other embodiments, the optical parameter may be chromaticity, and the optical parameter statistics unit may accordingly be a chromaticity statistics unit.
  • the chromaticity statistics unit is used to perform chromaticity statistical processing on each display block to obtain chromaticity statistical values. By obtaining the chromaticity statistical value, each display block can also be compared with the chromaticity value of the corresponding target display character, thereby determining whether the target display character needs to be adjusted.
  • the optical parameter statistics unit 20 is a hardware unit.
  • the optical parameter statistics unit may be a logic circuit.
  • the post-processing module includes a register, and the optical parameter statistical values are stored in the register.
  • the cropping processing unit 60 is used to crop the video image and output a plurality of cropped images with different resolutions.
  • the video image processing device further includes: a multiplexer 90 for selecting one cropped image from multiple cropped images in different time periods and outputting it to the optical parameter statistics unit 20 .
  • The multiplexer 90 selects one cropped image from the multiple cropped images and outputs it to the optical parameter statistics unit, so that in different time periods the optical parameter statistics unit 20 can perform optical parameter statistical processing on the display blocks in cropped images of different resolutions, thereby enabling time-division multiplexing of the optical parameter statistics unit 20. Accordingly, there is no need to provide a separate optical parameter statistics unit 20 for each cropped image, which is beneficial to reducing the chip area occupied and thereby saving costs.
  • the post-processing module reads and processes the image line by line. There is a corresponding line count inside the post-processing module. After processing the corresponding number of lines, the processing of one frame of image is completed.
  • the cropped image includes multiple rows of sub-images that are output sequentially; and the multiple rows of sub-images are sequentially input to the optical parameter statistics unit 20 .
  • the optical parameter statistics unit 20 includes: a display block determiner (not shown), used to determine whether the current row sub-image contains the display block; an optical parameter statistician (not shown) , used to perform optical parameter statistical processing on the display blocks in the current row sub-image when the current row sub-image contains the display block.
  • Since the cropped image is input to the optical parameter statistics unit 20 row by row, the display block determiner is required to determine whether the current row sub-image contains a display block, so that only the display blocks in the row sub-images that actually contain them are processed, which reduces the amount of computation and ensures that the optical parameter statistics unit correctly performs optical parameter statistical processing on the display blocks.
  • In some embodiments, the post-processing module also includes: a mirror flip unit 70, which is used to perform mirror flip processing on the cropped image after the optical parameter statistics unit performs optical parameter statistical processing on each display block, and outputs the processed image.
  • the mirror flipping unit can be omitted.
  • the determination unit 30 determines whether the target display character needs to be adjusted based on the difference between the optical parameter statistical value and the optical parameter value of the target display character.
  • the target display characters are characters that need to be superimposed and displayed on each display block in the area to be superimposed, so that additional display information can be provided based on the video picture.
  • By judging whether the target display character needs to be adjusted based on the difference between the optical parameter statistical value and the optical parameter value of the target display character, it can be ensured that the target display character is displayed normally on each display block.
  • the target display characters may include menus, prompt information, dates, addresses, icons (such as logos), camera information, and other contents.
  • the display block corresponds to one or more of the target display characters. Therefore, the determination unit 30 determines whether the target display character corresponding to the current display block needs to be adjusted based on the difference between the optical parameter statistical value of the current display block and the corresponding optical parameter value of the target display character.
  • In this embodiment, the optical parameter is brightness. Therefore, the judgment unit 30 determines, based on the difference between the brightness statistical value of the current display block and the brightness value of the corresponding target display character, whether the target display character corresponding to the current display block needs to be adjusted.
  • In other embodiments, the optical parameter may also be another type of optical parameter, such as chromaticity. Accordingly, the judgment unit determines, based on the difference between the chromaticity statistical value of the current display block and the chromaticity value of the corresponding target display character, whether the target display character corresponding to the current display block needs to be adjusted.
  • Specifically, the judgment unit 30 is used to judge whether the difference is greater than the preset threshold; when the difference is less than or equal to the preset threshold, it is judged that adjustment processing is required; when the difference is greater than the preset threshold, it is judged that adjustment processing is not required.
  • Specifically, the judgment unit 30 compares the difference between the optical parameter statistical value and the optical parameter value of the target display character with the preset threshold. When the difference is less than or equal to the preset threshold, it indicates that the optical parameter difference between the display block and the target display character is too small; if the display block and the corresponding target display character were directly superimposed, the target display character would not be displayed normally on the display block. Therefore, it is determined that the target display character needs to be adjusted so that it can be displayed normally on the display block.
  • When the difference is greater than the preset threshold, it means that there is a sufficient difference in optical parameters between the display block and the target display character, so that the display block and the corresponding target display character can be directly superimposed and the target display character can be displayed normally with the display block as the background.
  • The preset threshold is set based on empirical values or user preferences.
  • In some embodiments, the judgment unit 30 is used to retrieve the optical parameter statistical values from the register.
  • When the difference is less than or equal to the preset threshold, the character adjustment unit 40 performs adjustment processing on the target display character to obtain the superimposed character. The difference between the optical parameter value of the superimposed character and the optical parameter statistical value of the corresponding display block is greater than the preset threshold, so that after the character superposition unit 50 superimposes the superimposed character and the corresponding display block, the target display character can be displayed normally on the display block.
  • the superimposed characters are used to be subsequently superimposed with the corresponding display block to display the target display character on the display block.
  • the character adjustment unit 40 is used to adjust the optical parameters of the target display character to obtain a superimposed character.
  • the difference between the optical parameter value of the superimposed character and the optical parameter statistical value of the corresponding display block is greater than the preset threshold.
  • In this way, after the character superposition unit 50 superimposes the superimposed character and the corresponding display block, the superimposed character can be displayed normally on the display block.
  • In this embodiment, the optical parameter is brightness, which is taken as an example for explanation.
  • the character adjustment unit 40 adjusting the target display character may include: performing color inversion processing on the target display character.
  • The target display characters are usually eye-catching black or white characters. Color inversion, that is, turning white target display characters into black superimposed characters and black target display characters into white superimposed characters, is not only simple to perform but also enables the target display characters to be clearly displayed in the video image after the superimposed characters and the display blocks are superimposed.
  • Specifically, the adjustment performed by the character adjustment unit on the target display character includes bitwise inversion processing.
  • In other embodiments, the optical parameter may also be another type of optical parameter, such as chromaticity. When the difference between the chromaticity statistical value of the current display block and the chromaticity value of the corresponding target display character is less than or equal to the preset threshold, the character adjustment unit adjusts the target display character corresponding to the current display block. Specifically, the character adjustment unit performs color adjustment processing on the target display character to obtain the superimposed character, where the difference between the chromaticity value of the superimposed character and the chromaticity statistical value of the corresponding display block is greater than the preset threshold.
  • the character adjustment unit performing color adjustment processing on the target display character corresponding to the current display block may include: adjusting the white balance and color richness of the target display character, etc.
  • the character superposition unit 50 performs character superposition processing on the area to be superimposed, so as to display the target display character in the area to be superimposed.
  • the first subunit 501 is used to perform superimposition processing on each of the display blocks based on the superimposed characters.
  • the superimposed characters are target display characters after adjustment, so that the optical parameter values of the superimposed characters and the optical parameter statistical values of the corresponding display blocks have sufficient differences.
  • In this way, the target display characters can be clearly displayed on the display blocks and are easily identifiable, thereby ensuring that the additional content information on the video picture is displayed normally and optimizing the user experience.
  • the character superimposing unit 50 also includes: a second subunit 502, used to superimpose the target display character and the corresponding display block when it is determined that adjustment processing is not required.
  • When there is no need to perform adjustment processing, the target display character can be directly superimposed on the corresponding display block and displayed normally, so that the target display character remains clear and easy to identify.
  • In a specific implementation, after the superimposed characters are obtained and before the character superposition processing is performed, an overlay canvas is obtained based on the superimposed characters and the target display characters.
  • The overlay canvas is used to be superimposed with the current video image.
  • In this embodiment, the overlay canvas includes the superimposed characters and their corresponding position information, as well as the target display characters and their corresponding position information.
  • Accordingly, the character superposition unit 50 superimposes the video image and the overlay canvas; one possible canvas layout is sketched below.
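To make the canvas step concrete, the sketch below shows one possible layout for such an overlay canvas, a list of character entries with their positions and chosen luminance, and how it could be composited onto a luminance plane. The structure and field names are illustrative assumptions, not definitions from the patent.

```c
/* Hypothetical overlay-canvas layout and a simple compositing pass. */
#include <stdint.h>
#include <stddef.h>

typedef struct {
    const uint8_t *mask;   /* gw x gh stroke mask of the character */
    int gw, gh;            /* glyph size in pixels */
    int x, y;              /* position of the character in the frame */
    uint8_t luma;          /* luminance to draw (adjusted or original) */
} canvas_entry_t;

typedef struct {
    const canvas_entry_t *entries;
    size_t count;
} overlay_canvas_t;

/* Superimpose the canvas onto an 8-bit luminance frame. */
static void composite_canvas(uint8_t *frame, int stride,
                             const overlay_canvas_t *canvas)
{
    for (size_t i = 0; i < canvas->count; i++) {
        const canvas_entry_t *e = &canvas->entries[i];
        for (int r = 0; r < e->gh; r++)
            for (int c = 0; c < e->gw; c++)
                if (e->mask[r * e->gw + c])
                    frame[(e->y + r) * stride + (e->x + c)] = e->luma;
    }
}
```

In this layout, entries whose character needed no adjustment simply carry the original luminance value.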
  • To solve the foregoing problem, an embodiment of the present invention further provides a device, which implements the video image processing method provided by the embodiments of the present invention by loading the above method in the form of a program.
  • An optional hardware structure of the device provided by the embodiment of the present invention may be as shown in FIG. 4, including: at least one processor 01, at least one communication interface 02, at least one memory 03 and at least one communication bus 04.
  • The number of the processor 01, the communication interface 02, the memory 03 and the communication bus 04 is at least one, and the processor 01, the communication interface 02 and the memory 03 communicate with each other through the communication bus 04.
  • the communication interface 02 may be an interface of a communication module used for network communication, such as an interface of a GSM module.
  • the processor 01 may be a central processing unit CPU, or an application specific integrated circuit ASIC (Application Specific Integrated Circuit), or one or more integrated circuits configured to implement embodiments of the present invention.
  • Optionally, the memory 03 may include high-speed RAM memory, and may also include non-volatile memory, such as at least one disk memory.
  • the memory 03 stores one or more computer instructions, which are executed by the processor 01 to implement the video image processing method provided by the embodiment of the present invention.
  • It should be noted that the above terminal device may also include other components (not shown) that are not necessarily required by the disclosure of the embodiments of the present invention; since these other components may not be necessary for understanding the disclosure, the embodiments of the present invention do not introduce them one by one.
  • embodiments of the present invention also provide a storage medium that stores one or more computer instructions, and the one or more computer instructions are used to implement the video image processing method described in the embodiments of the present invention.
  • The storage medium is a computer-readable storage medium, and the storage medium may be a read-only memory (ROM), a random access memory (RAM), a USB flash drive (U disk), a mobile hard disk, a magnetic disk, an optical disk, or another medium capable of storing program code.
  • embodiments of the invention described above are combinations of elements and features of the invention. Unless otherwise mentioned, the elements or features may be considered optional. Each element or feature may be practiced without being combined with other elements or features. Additionally, embodiments of the invention may be constructed by combining some of the elements and/or features. The order of operations described in embodiments of the invention may be rearranged. Some configurations of any embodiment may be included in another embodiment, and may be replaced with corresponding configurations of another embodiment. It is obvious to those skilled in the art that claims that do not have an explicit reference relationship with each other among the appended claims may be combined with embodiments of the present invention, or may be included as new claims in amendments after the application is filed.
  • the embodiments of the present invention can be implemented by various means such as hardware, firmware, software or a combination thereof.
  • In a hardware configuration, the method according to the exemplary embodiments of the present invention may be implemented through one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and the like.
  • In a firmware or software configuration, the embodiments of the present invention can be implemented in the form of modules, procedures, functions, and the like.
  • the software code may be stored in the memory unit and executed by the processor.
  • the memory unit may be located internally or externally to the processor and may send data to and receive data from the processor via various known means.

Abstract

A video image processing method and apparatus, a device, and a storage medium. The apparatus includes: an acquisition unit, configured to acquire a region to be superimposed in a video image, the region to be superimposed including a plurality of display blocks (S1); an optical parameter statistics unit, which performs optical parameter statistical processing on the display blocks to obtain optical parameter statistical values, the optical parameter statistics unit being a hardware unit (S2); a judgment unit, configured to judge, based on the difference between an optical parameter statistical value and the optical parameter value of a target display character, whether adjustment processing is needed (S3); a character adjustment unit, configured to, when the difference is less than or equal to a preset threshold, adjust the target display character to obtain a superimposed character, the difference between the optical parameter value of the superimposed character and the optical parameter statistical value of the corresponding display block being greater than the preset threshold (S4); and a first subunit, configured to superimpose the superimposed character and the corresponding display block (S51). The embodiments of the present invention reduce CPU load, computation and bandwidth, and also reduce chip design complexity and chip area.

Description

视频图像处理方法及装置、设备以及存储介质 技术领域
本发明实施例涉及图像处理技术领域,尤其涉及一种视频图像处理方法及装置、设备以及存储介质。
背景技术
视频图像处理技术在智能视频图像分析技术中得到广泛应用。
其中,On-Screen Display(屏幕显示)简称OSD,是指一个图像叠加在显示屏上的图像,通常被应用到CRT/LCD显示器、录像机和DVD播放器的显示器荧幕中产生一些特殊的字形或图形,让使用者得到一些讯息,如台标、商标、台词,系统菜单等等。
但是,目前视频图像处理的CPU负载和带宽较高,且相关芯片的设计复杂度高,芯片占用的面积较大。
技术问题
本发明实施例解决的问题是提供一种视频图像处理方法及装置、设备以及存储介质,降低CPU负载、运算量及带宽,还有利于降低芯片设计的复杂度和芯片占用面积。
技术解决方案
为解决上述问题,本发明实施例提供一种视频图像处理方法,包括:获取视频图像中的待叠加区域,所述待叠加区域包括多个显示块;利用光学参数统计单元,对各个所述显示块进行光学参数统计处理,获得光学参数统计值;其中,所述光学参数统计单元为硬件单元;基于所述光学参数统计值和目标显示字符光学参数值之间的差值,判断是否需要对所述目标显示字符进行调整处理;在当所述差值小于或等于预设阈值时,对所述目标显示字符进行调整处理,获得叠加字符,所述叠加字符的光学参数值与对应显示块的光学参数统计值之间的差值,大于所述预设阈值;对所述待叠加区域进行字符叠加处理,所 述字符叠加处理包括:将所述叠加字符与对应的显示块进行叠加。
相应的,本发明实施例还提供一种视频图像处理装置,包括:获取单元,用于获取视频图像中的待叠加区域,所述待叠加区域包括多个显示块;光学参数统计单元,用于对所述各个显示块进行光学参数统计处理,获得光学参数统计值;所述光学参数统计单元为硬件单元;判断单元,用于基于所述光学参数统计值和目标显示字符光学参数值之间的差值,判断是否需要对所述目标显示字符进行调整处理;字符调整单元,用于在当所述差值小于或等于预设阈值时,对所述目标显示字符进行调整处理,获得叠加字符,所述叠加字符的光学参数值与对应显示块的光学参数统计值之间的差值,大于所述预设阈值;字符叠加单元,用于对所述待叠加区域进行字符叠加处理;所述字符叠加单元包括第一子单元,用于将所述叠加字符与对应的显示块进行叠加。
相应的,本发明实施例还提供一种设备,包括至少一个存储器和至少一个处理器,所述存储器存储有一条或多条计算机指令,其中,所述一条或多条计算机指令被所述处理器执行以实现本发明实施例提供的视频图像处理方法。
相应的,本发明实施例还提供一种存储介质,所述存储介质存储有一条或多条计算机指令,所述一条或多条计算机指令用于实现本发明实施例提供的视频图像处理方法。
有益效果
与现有技术相比,本发明实施例的技术方案具有以下优点:本发明实施例提供的视频图像处理方法,利用光学参数统计单元,对各个所述显示块进行光学参数统计处理,获得光学参数统计值;其中,所述光学参数统计单元为硬件单元;基于所述光学参数统计值和目标显示字符光学参数值之间的差值,判断是否需要对目标显示字符进行调整处理;在当所述差值小于或等于预设阈值时,对所述目标显示字符进行调整处理;因此,本发明实施例通过增加小量硬件单元来完成对视频图像局部光学参数的统计,无需利用软件进行光学参数统计处理, 相应降低了CPU负载、运算量及带宽,并且对硬件模块的改动小,还有利于降低芯片设计的复杂度和芯片占用面积。
本发明实施例提供的视频图像处理装置中,光学参数统计单元对各个所述显示块进行光学参数统计处理,获得光学参数统计值;所述光学参数统计单元为硬件单元;判断单元基于所述光学参数统计值和目标显示字符光学参数值之间的差值,判断是否需要对目标显示字符进行调整处理;亮度调整单元在当所述差值小于或等于预设阈值时,对所述目标显示字符进行调整处理;因此,本发明实施例通过增加小量硬件单元来完成对视频图像局部光学参数的统计,无需利用软件进行光学参数统计处理,相应降低了CPU负载、运算量及带宽,并且对硬件模块的改动小,还有利于降低芯片设计的复杂度和芯片占用面积。
附图说明
图1是本发明视频图像处理方法一实施例对应的流程图;
图2示出了一帧视频图像中的待叠加区域;
图3是本发明视频图像处理装置一实施例的功能框图;
图4为本发明一实施例所提供的设备的硬件结构图。
本发明的实施方式
由背景技术可知,目前视频图像处理的CPU负载和带宽较高,且相关芯片的设计复杂度高,芯片占用的面积较大。
具体地,目前通常有两种方法进行视频图像处理。
一种是纯软件的实现方法,CPU读取视频图像中的待叠加区域,并将所述待叠加区域分割为多个显示块,之后对每一个显示块的进行亮度统计,获得亮度统计值,再基于所述亮度统计值和目标显示字符亮度值之间的差值,判断各个显示块是否需要进行亮度调整处理。
纯软件的实现方法的灵活性较高,但是,在当待叠加区域的位置 较多或面积较大时,会产生较大的带宽和CPU运算量。
另一种是硬件的实现方法,在芯片内的视频叠加模块已集成亮度调整处理功能,CPU仅需对所述视频叠加模块配置相应的亮度调整处理设置。硬件的实现方法基本不需要CPU参与,但是,一个视频叠加处理模块需要一个对应的亮度调整单元,这会增加芯片设计的复杂度和芯片占用的面积,并且,硬件的实现方法容易降低视频图像处理的灵活性且无法进行功能的扩展。
因此,亟需一种视频图像处理方法,在保证芯片的设计复杂度较低、芯片占用面积较小的同时,还能够降低CPU负载和带宽。
为了解决所述技术问题,本发明实施例提供一种视频图像处理方法。参考图1,示出了本发明视频图像处理方法一实施例的流程示意图。
本实施例中,所述视频图像处理方法包括以下基本步骤:
步骤S1:获取视频图像中的待叠加区域,所述待叠加区域包括多个显示块;
步骤S2:利用光学参数统计单元,对各个所述显示块进行光学参数统计处理,获得光学参数统计值;其中,所述光学参数统计单元为硬件单元;
步骤S3:基于所述光学参数统计值和目标显示字符光学参数值之间的差值,判断是否需要对目标显示字符进行调整处理;
步骤S4:在当所述差值小于或等于预设阈值时,对所述目标显示字符进行调整处理,获得叠加字符,所述叠加字符的光学参数值与对应显示块的光学参数统计值之间的差值,大于所述预设阈值;
步骤S5:对所述待叠加区域进行字符叠加处理;所述字符叠加处理包括步骤S51:将所述叠加字符与对应的显示块进行叠加。
所述视频图像处理方法中,利用光学参数统计单元,对各个所述 显示块进行光学参数统计处理,获得光学参数统计值;其中,所述光学参数统计单元为硬件单元;基于所述光学参数统计值和目标显示字符光学参数值之间的差值,判断是否需要对目标显示字符进行光学参数调整处理;在判断需要进行光学参数调整处理时,对所述目标显示字符进行光学参数调整处理;因此,本发明实施例通过增加小量硬件单元来完成对视频图像局部光学参数的统计,无需利用软件进行光学参数统计处理,相应降低了CPU负载、运算量及带宽,并且对硬件模块的改动小,还有利于降低芯片设计的复杂度和芯片占用面积。
为使本发明实施例的上述目的、特征和优点能够更为明显易懂,下面结合附图对本发明的具体实施例做详细的说明。
参考图1,步骤S1:获取视频图像中的待叠加区域,所述待叠加区域包括多个显示块。所述待叠加区域为所述原始视频图像中需要叠加字符的区域。
具体地,基于实际需求,可以在所述待叠加区域中叠加例如菜单、提示信息、日期、地址、图标(如:logo)、摄像机信息等内容,从而在视频画面的基础上,能够提供额外的显示信息。
所述待叠加区域包括多个显示块,从而后续亮度统计单元能够分别对各个显示块进行亮度统计。
具体地,本实施例中,获取所述待叠加区域的位置信息,以及所述显示块的位置信息和大小。在具体实施例中,依据叠加字符的大小、以及叠加字符在原始视频图像中的位置信息,获取所述待叠加区域的位置信息,以及所述显示块的位置信息和大小。
结合参考图2,示出了一帧视频图像中的待叠加区域M。如图2所示,作为一种实施例,所述待叠加区域包括6个显示块A、B、C、D、E和F。本实施例中,所述显示块的形状为矩形。
在具体实施中,可以根据实际的需求,灵活设置所述显示块的大小和数量,显示块的大小可以相同,也可以不同。
作为一种示例,获取所述待叠加区域和所述显示块的步骤包括:获取视频图像中的所述待叠加区域;对所述待叠加区域进行区域划分处理,获得所述多个显示块。也就是说,先获取所述待叠加区域,再将所述待叠加区域分割成多个所述显示块。
在具体实施中,基于叠加字符的大小,对所述待叠加区域进行区域划分处理,获得所述多个显示块。例如:叠加字符的大小为32*32,则显示块的大小为32*32,或者16*32。
相应地,本实施例中,所述显示块与一个或多个叠加字符相对应。
需要说明的是,本实施例中,所述视频图像处理方法还包括:对所述视频图像进行后处理(post processing),所述后处理包括:对所述视频图像进行裁剪处理,输出裁剪后图像。
对所述视频图像进行裁剪处理,从而仅保留所需的图像内容,进而确认最终输出的图像大小。本实施例中,所述裁剪后图像包括多行子图像。
在其他实施例中,还可以省去所述裁剪处理的步骤。
相应地,本实施例中,获取视频图像中的待叠加区域包括:获取所述裁剪后图像中的待叠加区域。
通过裁剪处理后的视频图像,才能确认最终输出的视频图像大小,通过获取所述裁剪后图像中的待叠加区域,从而能够在最终输出的图像大小的基础上,获取待叠加区域的信息,相应能够保证待叠加区域的位置准确性。
而且,在图像处理领域中,对视频图像进行后处理的过程中,通常均会获取图像中各个位置处的像素信息,例如:各个像素的光学参数值。通过在进行后处理中的裁剪处理后,进行光学参数统计处理,从而能够直接基于裁剪处理输出的各个位置的像素的光学参数值进行光学参数统计处理,而无需向DDR调用数据,相应无需产生额外的DDR读写,进而有利于减少运算量、降低CPU负载和带宽。
此外,在图像处理领域中,通常利用后处理模块对视频图像进行后处理,相应地,所述光学参数统计单元可以附加在所述后处理模块中,且容易设计为在线模块,相应减小了对现有硬件模块的改动,进而降低了芯片设计的复杂度,还有利于节约芯片面积。
需要说明的是,本实施例中,对所述视频图像进行裁剪处理的步骤中,输出多个分辨率不同的所述裁剪后图像。
具体地,本实施例中,所述后处理还包括:在对所述视频图像进行裁剪处理之前,对所述视频图像进行缩放处理,输出多个分辨率不同的视频图像。
对视频图像进行缩放处理,输出多个分辨率不同的视频图像,从而满足不同场景下对于不同分辨率的需求。
相应地,在对所述视频图像进行缩放处理之后,对多个分辨率不同的视频图像分别进行裁剪处理,以输出所述多个分辨率不同的裁剪后图像。
在其他实施例中,还可以省去所述缩放处理的步骤。
还需要说明的是,本实施例中,以所述视频图像处理方法应用于监控摄像场景为示例进行说明,所述后处理还包括:在进行缩放处理之前,对所述视频图像进行遮挡(cover)处理。
对所述视频图像进行遮挡处理,从而基于实际的需求,对视频图像中不期望暴露出的区域(例如:图像中可能暴露个人隐私的区域)进行遮挡。
具体地,可以用纯色区域对视频图像中不期望暴露出的区域进行遮挡。
在其他实施例中,基于实际的需求,还可以省去所述遮挡处理的步骤。
本实施例中,所述视频图像处理方法还包括:在获取所述待叠加 区域和所述显示块之后,将所述显示块的信息配置至所述光学参数统计单元。
将所述显示块的信息配置至所述光学参数统计单元,从而后续光学参数统计单元能够基于所配置的显示块的信息,对显示块进行光学参数统计处理。
本实施例中,所述显示块的信息包括各个显示块的位置信息和大小信息。
在具体实施中,可以是CPU获取所述待叠加区域和所述显示块的信息,之后将所述显示块的信息配置至所述光学参数统计单元。
具体地,所述CPU获取所述待叠加区域和所述显示块的信息之后,将所述待叠加区域和所述显示块的信息存储在寄存器中,以便光学参数统计单元从所述寄存器中获取所述待叠加区域和所述显示块的信息。
在具体实施中,所述寄存器可以设置在后处理模块内部。
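The paragraphs above describe the CPU writing the position and size of each display block into a register so that the hardware optical parameter statistics unit can use them. The C sketch below models that configuration step in software; the register layout, the field packing and the names stat_unit_regs_t and configure_stat_unit are hypothetical, introduced only for illustration and not taken from the patent.

```c
/* Hypothetical software model of configuring the hardware statistics unit
 * with the display-block list. Register layout and names are invented. */
#include <stdint.h>
#include <stddef.h>

#define MAX_DISPLAY_BLOCKS 16

typedef struct {
    uint16_t x, y;   /* top-left corner of the block in the cropped image */
    uint16_t w, h;   /* block size in pixels */
} display_block_t;

/* Assumed memory-mapped register file of the statistics unit. */
typedef struct {
    volatile uint32_t block_count;
    volatile uint32_t block_pos[MAX_DISPLAY_BLOCKS];   /* x in high 16 bits, y in low */
    volatile uint32_t block_size[MAX_DISPLAY_BLOCKS];  /* w in high 16 bits, h in low */
} stat_unit_regs_t;

static void configure_stat_unit(stat_unit_regs_t *regs,
                                const display_block_t *blocks, size_t n)
{
    if (n > MAX_DISPLAY_BLOCKS)
        n = MAX_DISPLAY_BLOCKS;
    for (size_t i = 0; i < n; i++) {
        regs->block_pos[i]  = ((uint32_t)blocks[i].x << 16) | blocks[i].y;
        regs->block_size[i] = ((uint32_t)blocks[i].w << 16) | blocks[i].h;
    }
    regs->block_count = (uint32_t)n;   /* written last so the unit sees a consistent list */
}
```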
继续参考图1,步骤S2:利用光学参数统计单元,对各个显示块进行光学参数统计处理,获得光学参数统计值;其中,光学参数统计单元为硬件单元。
对显示块进行光学参数统计处理,获得光学参数统计值,以便后续能够基于光学参数统计值和目标显示字符光学参数值之间的差值,判断是否需要对目标显示字符进行调整处理。
所述光学参数统计单元为硬件单元,因此,本实施例通过增加小量硬件单元来完成对视频图像局部光学参数的统计,无需利用软件进行光学参数统计处理,相应降低了CPU负载、运算量及带宽,并且对硬件模块的改动小,还有利于降低芯片设计的复杂度和芯片占用面积。
本实施例中,对各个显示块进行光学参数统计处理包括:获得各个显示块中所有像素的光学参数统计值。其中,所述像素的光学参数 统计值可以为所有像素的光学参数和,或者也可以为所有像素的光学参数平均值。
作为一实施例,所述光学参数为亮度;所述光学参数统计单元为亮度统计单元;对各个所述显示块进行光学参数统计处理包括:利用所述亮度统计单元,对各个所述显示块进行亮度统计处理,获得亮度统计值。
其中,所述亮度统计值可以为当前显示块所有像素的亮度和,也可以为当前显示块所有像素的亮度平均值。
需要说明的是,本实施例中,所述光学参数为亮度,所述光学参数统计单元为亮度统计单元仅作为一种示例,光学参数统计单元的实施方式不仅限于此。
例如:所述光学参数可以为色度,所述光学参数统计单元还可以为色度统计单元,所述色度统计单元用于对各个显示块进行色度统计处理,获得色度统计值。通过获得色度统计值,后续也能够将各个显示块与对应的目标显示字符的色度值进行比较,从而判断是否需要对目标显示字符进行调整处理。
所述光学参数统计单元为硬件单元。作为一实施例,所述光学参数统计单元可以为逻辑电路。
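The statistic defined above, the sum or the average of the luminance of all pixels in a display block, can be modelled in software as follows. In the embodiment this work is done by the hardware statistics unit (for example a logic circuit); the C code below, with its assumed 8-bit luminance plane, only illustrates what the unit computes.

```c
/* Software model of the per-block luminance statistic (sum or average). */
#include <stdint.h>

static uint32_t block_luma_sum(const uint8_t *frame, int stride,
                               int x, int y, int w, int h)
{
    uint32_t sum = 0;
    for (int r = 0; r < h; r++)
        for (int c = 0; c < w; c++)
            sum += frame[(y + r) * stride + (x + c)];
    return sum;
}

static uint8_t block_luma_mean(const uint8_t *frame, int stride,
                               int x, int y, int w, int h)
{
    return (uint8_t)(block_luma_sum(frame, stride, x, y, w, h)
                     / (uint32_t)(w * h));
}
```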
本实施例中,所述视频图像处理方法还包括:在不同时间段内,从多个裁剪后图像中选择一个裁剪后图像,输出至所述光学参数统计单元。
在不同时间段内,从多个裁剪后图像中选择一个裁剪后图像,输出至所述光学参数统计单元,从而在不同时间段内,光学参数统计单元能够对不同分辨率的裁剪后图像中的显示块进行光学参数统计处理,进而能够实现对光学参数统计单元的分时复用,相应无需对多个裁剪后图像分别设置对应的光学参数统计单元,进而有利于减小芯片所占用的面积,进而节约成本。
在具体实施中,后处理模块按行对图像进行读入及处理,所述后处理模块内部有相应的行计数,在处理完相应的行数后,即完成一帧图像的处理。
相应地,本实施例中,所述裁剪后图像包括依次输出的多行子图像;多行子图像依次输入至光学参数统计单元。
相应地,本实施例中,利用光学参数统计单元,对各个所述显示块进行光学参数统计处理的步骤包括:判断当前行子图像是否包含所述显示块;如果是,则所述光学参数统计单元对当前行子图像中的显示块进行光学参数统计处理。
由于所述裁剪后图像被分割成多行子图像,多行子图像依次输入至光学参数统计单元,因此,需要对当前行子图像是否包含所述显示块进行判断,从而能够仅对包含有显示块的当前行子图像中的显示块进行判断,进而减少运算量,并且保证光学参数统计单元对显示块正确进行光学参数统计处理。
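The paragraphs above describe a line-based mode of operation: the cropped image reaches the statistics unit row by row, and only rows that contain a configured display block are accumulated. Below is a minimal C sketch of that streaming accumulation, with a running sum per block and a flag raised once the block's last row has been processed; the data structures are assumptions made for this illustration.

```c
/* Streaming (row-by-row) accumulation of per-block luminance sums,
 * mirroring the line-counter behaviour described above. */
#include <stdint.h>
#include <stddef.h>

typedef struct {
    int x, y, w, h;        /* block geometry in the cropped image */
    uint32_t sum;          /* running luminance sum */
    int done;              /* set once the last row of the block has passed */
} block_acc_t;

/* Called once per incoming image row. */
static void accumulate_row(block_acc_t *blocks, size_t n_blocks,
                           const uint8_t *row, int row_index)
{
    for (size_t i = 0; i < n_blocks; i++) {
        block_acc_t *b = &blocks[i];
        if (row_index < b->y || row_index >= b->y + b->h)
            continue;                    /* this row does not contain the block */
        for (int c = 0; c < b->w; c++)
            b->sum += row[b->x + c];
        if (row_index == b->y + b->h - 1)
            b->done = 1;                 /* statistic for this block is ready */
    }
}
```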
需要说明的是,所述后处理还包括:在对各个所述显示块进行光学参数统计处理之后,对所述裁剪后图像进行镜像翻转(mirror/flip)处理,输出处理后图像。在其他实施例中,所述镜像翻转处理的步骤还可以省去。
继续参考图1,步骤S3:基于所述光学参数统计值和目标显示字符光学参数值之间的差值,判断是否需要对所述目标显示字符进行调整处理。
其中,目标显示字符为需要在待叠加区域的各个显示块上叠加并显示的字符,从而在视频画面的基础上,能够提供额外的显示信息。
通过基于所述光学参数统计值和所述目标显示字符光学参数值之间的差值,对是否需要对所述目标显示字符进行调整处理进行判断,从而能够保证目标显示字符能够在各个显示块上正常显示。
具体地,所述目标显示字符可以包括菜单、提示信息、日期、地 址、图标(如:logo)、摄像机信息等内容。
本实施例中,所述显示块与一个或多个所述目标显示字符相对应。因此,基于当前显示块的光学参数统计值与对应的目标显示字符光学参数值之间的差值,判断是否需要对当前块对应的目标显示字符进行调整处理。
本实施例中,所述光学参数为亮度,因此,基于当前块的亮度统计值与对应的目标显示字符光学亮度值之间的差值,判断是否需要对当前显示块对应的目标显示字符进行调整处理。
在其他实施例中,所述光学参数还可以为其他类型的光学参数,例如:色度,相应地,基于当前块的色度统计值与对应的目标显示字符光学色度值之间的差值,判断是否需要对当前显示块对应的目标显示字符进行调整处理。
具体地,本实施例中,基于所述光学参数统计值和目标显示字符光学参数值之间的差值,判断是否需要对所述目标显示字符进行调整处理的步骤S3包括:判断所述差值是否大于或等于预设阈值;在当所述差值小于或等于所述预设阈值时,判断需要对目标显示字符进行调整处理;在当所述差值大于所述预设阈值时,判断无需对目标显示字符进行调整处理。
将所述光学参数统计值与所述目标显示字符光学参数值的差值,和所述预设阈值进行比较,在当所述差值小于或等于所述预设阈值时,则说明所述显示块与所述目标显示字符之间的光学参数差异过小,若直接将所述显示块和对应的目标显示字符进行叠加,则将无法正常在所述显示块上显示所述目标显示字符,因此,判断需要对目标显示字符进行调整处理,以便能够正常在显示块上显示所述目标显示字符。
在当所述差值大于所述预设阈值时,则说明所述显示块与所述目标显示字符之间的光学参数值具有足够的差异,以便在将所述显示块与对应的目标显示字符进行叠加后,目标显示字符能够在作为背景的显示块上正常显示。
在具体实施中,所述预设阈值基于经验值或用户喜好设置。
继续参考图1,步骤S4:在当所述差值小于或等于预设阈值时,对所述目标显示字符进行调整处理,获得叠加字符,所述叠加字符与对应显示块的光学参数统计值之间的差值,大于所述预设阈值。
在当所述差值小于或等于预设阈值时,则说明所述显示块与所述目标显示字符之间的光学参数差异过小,需要对目标显示字符进行调整处理,获得所述叠加字符,从而后续对所述叠加字符和对应的显示块进行叠加后,能够正常在显示块上显示所述目标显示字符。
所述叠加字符用于后续与对应的显示块进行叠加,以在所述显示块上显示目标显示字符。
本实施例中,对所述目标显示字符进行调整处理的步骤包括:调整所述目标显示字符的所述光学参数,获得叠加字符,使得所述叠加字符的光学参数值与对应的显示块的光学参数统计值之间的差值,大于所述预设阈值。
也就是说,在对目标显示字符进行调整处理之后,所获得叠加字符与对应的显示块之间的光学参数值具有足够的差异,以使后续对所述叠加字符和所述显示块进行叠加后,能够在所述显示块上正常显示所述叠加字符。
本实施例中,以光学参数为亮度为示例进行说明。在具体实施中,对所述目标显示字符进行调整处理可以包括:对所述目标显示字符进行颜色取反处理。
在具体实施中,目标显示字符通常为黑色或白色的醒目字符,通过进行颜色取反处理,也就是说,将白色的目标显示字符变成黑色的叠加字符,将黑色的目标显示字符变成白色的叠加字符,不仅操作简单,而且还能够使后续将叠加字符与显示块进行叠加后,能够在视频图像中清晰地显示所述目标相似字符。
在其他实施例中,当目标显示字符不是黑色或白色的字符时,例 如:32位(bit)或16位的颜色,对目标显示字符进行颜色取反处理的步骤包括按位取反处理。
在另一些实施例中,所述光学参数还可以为其他类型的光学参数,例如:色度,相应地,在当前显示块的色度统计值与对应的目标显示字符色度值之间的差值小于或等于预设阈值时,对当前块对应的目标显示字符进行调整处理。对当前显示块对应的目标显示字符进行调整处理包括:对当前显示块对应的目标显示字符进行色彩调整处理,获得叠加字符,所述叠加字符的色度值与对应的显示块的色度统计值之间的差值大于预设阈值。作为示例,对当前显示块对应的目标显示字符进行色彩调整处理可以包括:调整所述目标显示字符的白平衡和色彩丰富度等。
继续参考图1,步骤S5:对所述待叠加区域进行字符叠加处理。对所述待叠加区域进行字符叠加处理,以便在待叠加区域显示所述目标显示字符。
本实施例中,所述字符叠加处理包括步骤S51:将所述叠加字符与对应的显示块进行叠加。
所述叠加字符为经过调整处理后的目标显示字符,从而叠加字符的光学参数值与对应的显示块的光学参数统计值具有足够的差异,相应地,在将所述叠加字符与对应的显示块进行叠加后,目标显示字符能够在所述显示块上清楚地显示且容易辨别,进而保证在视频画面上额外显示的内容信息能够正常显示,优化了用户体验和感受度。
需要说明的是,本实施例中,在当所述差值大于预设阈值时,判断无需进行调整处理;所述字符叠加处理还包括步骤S52:在判断无需进行调整处理时,将所述目标显示字符与对应的显示块进行叠加。
也就是说,在无需进行调整处理时,将目标显示字符与对应的显示块直接叠加时,便能够在显示块上正常的显示所述目标显示字符,使得所述目标显示字符是清楚且容易辨别的。
在具体实施中,在获得叠加字符后,在进行字符叠加处理之前,所述视频图像处理方法还包括:步骤S6:基于所述叠加字符和目标显示字符,获得叠加画布(canvas)。所述叠加画布用于与当前的视频图像进行叠加。
本实施例中,所述叠加画布包括所述叠加字符及其对应的位置信息、以及所述目标显示字符及其对应的位置信息。
相应地,对叠加区域进行字符叠加处理包括:将所述视频图像与所述叠加画布进行叠加。
相应的,本发明还提供一种视频图像处理装置。图3是本发明视频图像处理装置一实施例的功能框图。
参考图3,所述视频图像处理装置包括:获取单元10,用于获取视频图像中的待叠加区域,所述待叠加区域包括多个显示块;光学参数统计单元20,用于对所述各个显示块进行光学参数统计处理,获得光学参数统计值;所述光学参数统计单元为硬件单元;判断单元30,用于基于所述光学参数统计值和目标显示字符光学参数值之间的差值,判断是否需要对所述目标显示字符进行调整处理;字符调整单元40,用于在当所述差值小于或等于预设阈值时,对所述目标显示字符进行调整处理,获得叠加字符,所述叠加字符的光学参数值与对应显示块的光学参数统计值之间的差值,大于所述预设阈值;字符叠加单元50,用于对所述待叠加区域进行字符叠加处理;所述字符叠加单元包括第一子单元501,用于基于所述叠加字符,对各个所述显示块进行叠加。
所述视频图像处理装置中,光学参数统计单元20对各个所述显示块进行光学参数统计处理,获得光学参数统计值;所述光学参数统计单元20为硬件单元;判断单元30基于所述光学参数统计值和目标显示字符光学参数值之间的差值,判断是否需要对目标显示字符进行调整处理;字符调整单元40在当所述差值小于或等于预设阈值时,对所述目标显示字符进行调整处理;因此,本实施例通过增加小量硬 件单元来完成对视频图像局部光学参数的统计,无需利用软件进行光学参数统计处理,相应降低了CPU负载、运算量及带宽,并且对硬件模块的改动小,还有利于降低芯片设计的复杂度和芯片占用面积。
获取单元10获取原始视频图像中的待叠加区域及其中的多个显示块。
所述待叠加区域为所述视频图像中需要叠加字符的区域。
具体地,基于实际需求,可以在所述待叠加区域中叠加例如菜单、提示信息、日期、地址、图标(如:logo)、摄像机信息等内容,从而在视频画面的基础上,能够提供额外的显示信息。
所述待叠加区域包括多个显示块,从而光学参数统计单元20能够分别对各个显示块进行光学参数统计。
具体地,本实施例中,获取单元10获取所述待叠加区域的位置信息,以及所述显示块的位置信息和大小。在具体实施中,获取单元10依据叠加字符的大小、以及叠加字符在原始视频图像中的位置信息,获取所述待叠加区域的位置信息,以及所述显示块的位置信息和大小。
结合参考图2,示出了一帧视频图像中的待叠加区域M。如图2所示,作为一种实施例,所述待叠加区域包括6个显示块A、B、C、D、E和F。本实施例中,所述显示块的形状为矩形。
在具体实施中,可以根据实际的需求,灵活设置所述显示块的大小和数量,显示块的大小可以相同,也可以不同。
作为一种示例,获取单元10获取视频图像中的所述待叠加区域,并对所述待叠加区域进行区域划分处理,获得所述多个显示块。也就是说,获取单元10先获取所述待叠加区域,再将所述待叠加区域分割成多个所述显示块。
在具体实施中,获取单元10基于叠加字符的大小,对所述待叠加区域进行区域划分处理,获得所述多个显示块。例如:叠加字符的 大小为32*32,则显示块的大小为32*32,或者16*32。
相应地,本实施例中,所述显示块与一个或多个叠加字符相对应。
本实施例中,所述视频图像处理装置还包括:后处理模块(图未示),用于对所述视频图像进行后处理(post processing);所述后处理模块包括:裁剪处理单元60,用于对所述视频图像进行裁剪处理,输出裁剪后图像。
对所述视频图像进行裁剪处理,从而仅保留所需的图像内容,进而确认最终输出的图像大小。本实施例中,所述裁剪后图像包括多行子图像。
在其他实施例中,所述后处理模块还可以省去所述裁剪处理单元。
相应地,所述获取单元10用于获取所述裁剪后图像中的待叠加区域。
通过裁剪处理后的视频图像,才能确认最终输出的视频图像大小,获取单元10通过获取所述裁剪后图像中的待叠加区域,从而能够在最终输出的图像大小的基础上,获取待叠加区域的信息,相应能够保证待叠加区域的位置准确性。
而且,在图像处理领域中,在后处理模块对视频图像进行后处理的过程中,通常均会获取图像中各个位置处的像素信息,例如:各个像素的光学参数值。通过在进行后处理模块中的裁剪处理单元60后,利用光学参数统计单元20进行光学参数统计处理,从而光学参数统计单元20能够直接基于裁剪处理单元60输出的各个位置的像素的光学参数值进行光学参数统计处理,而无需向DDR调用数据,相应无需产生额外的DDR读写,进而有利于减少运算量、降低CPU负载和带宽。
此外,所述光学参数统计单元20可以附加在所述后处理模块中,且容易设计为在线模块,相应减小了对现有硬件模块的改动,进而降低了芯片设计的复杂度,还有利于节约芯片面积。
需要说明的是,本实施例中,所述裁剪处理单元60输出多个分辨率不同的所述裁剪后图像。
具体地,本实施例中,所述后处理模块还包括:缩放单元(图未示),用于对所述视频图像进行缩放处理,输出多个分辨率不同的视频图像至所述裁剪处理单元60。缩放单元对视频图像进行缩放处理,输出多个分辨率不同的视频图像,从而满足不同场景下对于不同分辨率的需求。
相应地,在缩放单元对所述视频图像进行缩放处理之后,裁剪处理单元60对多个分辨率不同的视频图像分别进行裁剪处理,以输出所述多个分辨率不同的裁剪后图像。
在其他实施例中,所述后处理模块中还可以省去所述缩放单元。
还需要说明的是,本实施例中,以所述视频图像处理装置应用于监控摄像场景为示例进行说明,所述后处理模块还包括:遮挡单元(图未示),用于对所述视频图像进行遮挡(cover)处理,输出遮挡后图像至缩放单元。
遮挡单元对所述视频图像进行遮挡处理,从而基于实际的需求,对视频图像中不期望暴露出的区域(例如:图像中可能暴露个人隐私的区域)进行遮挡。
具体地,可以用纯色区域对视频图像中不期望暴露出的区域进行遮挡。
在其他实施例中,基于实际的需求,还可以省去所述遮挡处理的步骤。
本实施例中,所述视频图像处理装置还包括:信息配置单元80,用于将所述获取单元10获取所述待叠加区域和所述显示块的信息配置至所述光学参数统计单元20。
信息配置单元80将所述显示块的信息配置至所述光学参数统计单元20,从而光学参数统计单元20能够基于所配置的显示块的信息, 对显示块进行光学参数统计处理。
本实施例中,所述显示块的信息包括各个显示块的位置信息和大小信息。
在具体实施中,所述信息配置单元80可以为CPU,CPU获取所述待叠加区域和所述显示块的信息,之后将所述显示块的信息配置至所述光学参数统计单元20。
具体地,所述CPU获取所述待叠加区域和所述显示块的信息之后,将所述待叠加区域和所述显示块的信息存储在寄存器中,以便光学参数统计单元20从所述寄存器中获取所述待叠加区域和所述显示块的信息。
在具体实施中,所述寄存器可以设置在后处理模块内部,从而方便软件与硬件之间进行交互。
光学参数统计单元20用于对显示块进行光学参数统计处理,获得光学参数统计值,以便判断单元30能够基于光学参数统计值和目标显示字符光学参数值之间的差值,判断是否需要对目标显示字符进行调整处理。
所述光学参数统计单元20为硬件单元,因此,本实施例通过在视频图像处理装置中增加小量硬件单元来完成对视频图像局部光学参数的统计,无需利用软件进行光学参数统计处理,相应降低了CPU负载、运算量及带宽,并且对硬件模块的改动小,还有利于降低芯片设计的复杂度和芯片占用面积。
本实施例中,光学参数统计单元20获得各个显示块中所有像素的统计值,以实现光学参数统计处理。其中,所述像素的光学参数统计值可以为所有像素的光学参数和,或者也可以为所有像素的光学参数平均值。
作为一实施例,所述光学参数为亮度;所述光学参数统计单元20为亮度统计单元,所述亮度统计单元用于对各个所述显示块进行 亮度统计处理,获得亮度统计值。
其中,所述亮度统计值可以为当前显示块所有像素的亮度和,也可以为当前显示块所有像素的亮度平均值。
需要说明的是,本实施例中,所述光学参数为亮度,所述光学参数统计单元20为亮度统计单元仅作为一种示例,所述光学参数统计单元20的实施方式不仅限于此。
例如:所述光学参数可以为色度,所述光学参数统计单元还可以为色度统计单元,所述色度统计单元用于对各个显示块进行色度统计处理,获得色度统计值。通过获得色度统计值,也能够将各个显示块与对应的目标显示字符的色度值进行比较,从而判断是否需要对目标显示字符进行调整处理。
所述光学参数统计单元20为硬件单元。作为一实施例,所述光学参数统计单元可以为逻辑电路。
本实施例中,所述后处理模块包括寄存器,所述光学参数统计值存储于所述寄存器内。
本实施例中,所述裁剪处理单元60,用于对所述视频图像进行裁剪处理,输出多个分辨率不同的所述裁剪后图像。
相应地,所述视频图像处理装置还包括:多路选择器90,用于在不同时间段内,从多个裁剪后图像中选择一个裁剪后图像,输出至光学参数统计单元20。
在不同时间段内,多路选择器90从多个裁剪后图像中选择一个裁剪后图像,输出至所述光学参数统计单元,从而在不同时间段内,光学参数统计单元20能够对不同分辨率的裁剪后图像中的显示块进行光学参数统计处理,进而能够实现对光学参数统计单元20的分时复用,相应无需对多个裁剪后图像分别设置对应的光学参数统计单元20,进而有利于减小芯片所占用的面积,进而节约成本。
在具体实施中,后处理模块按行对图像进行读入及处理,所述后 处理模块内部有相应的行计数,在处理完相应的行数后,即完成一帧图像的处理。
相应地,本实施例中,所述裁剪后图像包括依次输出的多行子图像;多行子图像依次输入至光学参数统计单元20。
相应地,本实施例中,所述光学参数统计单元20包括:显示块判断器(未示出),用于判断当前行子图像是否包含所述显示块;光学参数统计器(未示出),用于在当前行子图像包含所述显示块时,对当前行子图像中的显示块进行光学参数统计处理。
由于所述裁剪后图像被分割成多行子图像,多行子图像依次输入至光学参数统计单元20,因此,需要显示块判断器对当前行子图像是否包含所述显示块进行判断,从而能够仅对包含有显示块的当前行子图像中的显示块进行判断,进而减少运算量,并且保证光学参数统计单元对显示块正确进行光学参数统计处理。
需要说明的是,所述后处理模块还包括:镜像翻转单元70,用于在光学参数统计单元对各个所述显示块进行光学参数统计处理之后,对所述裁剪后图像进行镜像翻转处理,输出处理后图像。
在其他实施例中,所述镜像翻转单元还可以省去。
判断单元30基于所述光学参数统计值和目标显示字符光学参数值之间的差值,判断是否需要对所述目标显示字符进行调整处理。
其中,目标显示字符为需要在待叠加区域的各个显示块上叠加并显示的字符,从而在视频画面的基础上,能够提供额外的显示信息。
通过基于所述光学参数统计值和所述目标显示字符光学参数值之间的差值,对是否需要对所述目标显示字符进行调整处理进行判断,从而能够保证目标显示字符能够在各个显示块上正常显示。
具体地,所述目标显示字符可以包括菜单、提示信息、日期、地址、图标(如:logo)、摄像机信息等内容。
本实施例中,所述显示块与一个或多个所述目标显示字符相对应。因此,判断单元30基于当前显示块的光学参数统计值与对应的目标显示字符光学参数值之间的差值,判断是否需要对当前显示块对应的目标显示字符进行调整处理。
本实施例中,所述光学参数为亮度,因此,判断单元30基于当前块的亮度统计值与对应的目标显示字符光学亮度值之间的差值,判断是否需要对当前显示块对应的目标显示字符进行调整处理。
在其他实施例中,所述光学参数还可以为其他类型的光学参数,例如:色度,相应地,判断单元基于当前块的色度统计值与对应的目标显示字符光学色度值之间的差值,判断是否需要对当前显示块对应的目标显示字符进行调整处理。
具体地,本实施例中,所述判断单元30用于判断所述差值是否大于或等于预设阈值;在当所述差值小于或等于所述预设阈值时,判断需要进行调整处理;在当所述差值大于所述预设阈值时,判断无需进行调整处理。
判断单元30将所述光学参数统计值与所述目标显示字符光学参数值的差值,和所述预设阈值进行比较,在当所述差值小于或等于所述预设阈值时,则说明所述显示块与所述目标显示字符之间的光学参数差异过小,若直接将所述显示块和对应的目标显示字符进行叠加,则将无法正常在所述显示块上显示所述目标显示字符,因此,判断需要对目标显示字符进行调整处理,以便能够正常在显示块上显示所述目标显示字符。
在当所述差值大于所述预设阈值时,则说明所述显示块与所述目标显示字符之间的光学参数具有足够的差异,以便在将所述显示块与对应的目标显示字符进行叠加后,目标显示字符能够在作为背景的显示块上正常显示。
在具体实施中,所述预设阈值基于经验值或用户喜好设置。
本实施例中,判断单元30,用于从寄存器内调用所述光学参数统计信息。
字符调整单元40在当所述差值小于或等于预设阈值时,对目标显示字符进行调整处理,获得所述叠加字符,所述叠加字符的光学参数值与对应显示块的光学参数统计值之间的差值,大于所述预设阈值,从而字符叠加单元50对叠加字符和对应的显示块进行叠加后,能够正常在显示块上显示所述目标显示字符。
所述叠加字符用于后续与对应的显示块进行叠加,以在显示块上显示目标显示字符。
本实施例中,字符调整单元40用于调整所述目标显示字符的光学参数,获得叠加字符,所述叠加字符的光学参数值与对应的显示块的光学参数统计值之间的差值,大于所述预设阈值。
也就是说,在对目标显示字符进行调整处理之后,所获得叠加字符与对应的显示块之间的光学参数具有足够的差异,以使字符叠加单元50对所述叠加字符和所述显示块进行叠加后,能够在所述显示块上正常显示所述叠加字符。
本实施例中,以所述光学参数为亮度为示例进行说明。在具体实施中,字符调整单元40对所述目标显示字符进行调整处理可以包括:对所述目标显示字符进行颜色取反处理。
在具体实施中,所述目标显示字符通常为黑色或白色的醒目字符,通过进行颜色取反处理,也就是说,将白色的目标显示字符变成黑色的叠加字符,将黑色的目标显示字符变成白色的叠加字符,不仅操作简单,而且还能够使后续将叠加字符与显示块进行叠加后,能够在视频图像中清晰地显示所述目标相似字符。
在其他实施例中,当目标显示字符不是黑色或白色的字符时,例如:32位(bit)或16位的颜色,字符调整单元对目标显示字符进行调整处理包括按位取反处理。
在另一些实施例中,所述光学参数还可以为其他类型的光学参数,例如:色度,相应地,在当前显示块的色度统计值与对应的目标显示字符色度值之间的差值小于或等于预设阈值时,字符调整单元对当前块对应的目标显示字符进行调整处理。具体地,字符调整单元对当前显示块对应的目标显示字符进行色彩调整处理,获得叠加字符,所述叠加字符的色度值与对应的显示块的色度统计值之间的差值大于预设阈值。作为示例,字符调整单元对当前显示块对应的目标显示字符进行色彩调整处理可以包括:调整所述目标显示字符的白平衡和色彩丰富度等。
字符叠加单元50对所述待叠加区域进行字符叠加处理,以便在待叠加区域显示所述目标显示字符。
本实施例中,第一子单元501,用于基于所述叠加字符,对各个所述显示块进行叠加处理。
所述叠加字符为经过调整处理后的目标显示字符,从而叠加字符的光学参数值与对应的显示块的光学参数统计值具有足够的差异,相应地,在将所述叠加字符与对应的显示块进行叠加后,目标显示字符能够在所述显示块上清楚地显示且容易辨别,进而保证在视频画面上额外显示的内容信息能够正常显示,优化了用户体验和感受度。
本实施例中,字符叠加单元50还包括:第二子单元502,用于在判断无需进行调整处理时,将所述目标显示字符与对应的显示块进行叠加。
也就是说,在无需进行调整处理时,将目标显示字符与对应的显示块直接叠加时,便能够在显示块上正常的显示所述目标显示字符,使得所述目标显示字符是清楚且容易辨别的。
在具体实施中,基于所述叠加字符和目标显示字符,获得叠加画布(canvas)。所述叠加画布用于与当前的视频图像进行叠加。
本实施例中,所述叠加画布包括所述叠加字符及其对应的位置信 息、以及所述目标显示字符及其对应的位置信息。
相应地,字符叠加单元50将所述视频图像与叠加画布进行叠加。
为了解决所述问题,本发明实施例还提供一种设备,该设备可以通过装载程序形式的上述视频图像处理方法,以实现本发明实施例提供的视频图像处理方法。
本发明实施例提供的设备的一种可选硬件结构可以如图4所示,包括:至少一个处理器01,至少一个通信接口02,至少一个存储器03和至少一个通信总线04。
在本发明实施例中,处理器01、通信接口02、存储器03、通信总线04的数量为至少一个,且处理器01、通信接口02、存储器03通过通信总线04完成相互间的通信。
可选的,通信接口02可以为用于进行网络通信的通信模块的接口,如GSM模块的接口。
可选的,处理器01可能是中央处理器CPU,或者是特定集成电路ASIC(Application Specific Integrated Circuit),或者是被配置成实施本发明实施例的一个或多个集成电路。
可选的,存储器03可能包含高速RAM存储器,也可能还包括非易失性存储器(non-volatile memory),例如至少一个磁盘存储器。
其中,存储器03存储一条或多条计算机指令,所述一条或多条计算机指令被处理器01执行以实现本发明实施例提供的视频图像处理方法。
需要说明的是,上述的实现终端设备还可以包括与本发明实施例公开内容可能并不是必需的其他器件(未示出);鉴于这些其他器件对于理解本发明实施例公开内容可能并不是必需,本发明实施例对此不进行逐一介绍。
相应地,本发明实施例还提供一种存储介质,所述存储介质存储 有一条或多条计算机指令,所述一条或多条计算机指令用于实现本发明实施例所述的视频图像处理方法。
所述存储介质为计算机可读存储介质,存储介质可以为只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、U盘、移动硬盘、磁盘或光盘等各种可以存储程序代码的介质。
上述本发明的实施方式是本发明的元件和特征的组合。除非另外提及,否则所述元件或特征可被视为选择性的。各个元件或特征可在不与其它元件或特征组合的情况下实践。另外,本发明的实施方式可通过组合部分元件和/或特征来构造。本发明的实施方式中所描述的操作顺序可重新排列。任一实施方式的一些构造可被包括在另一实施方式中,并且可用另一实施方式的对应构造代替。对于本领域技术人员而言明显的是,所附权利要求中彼此没有明确引用关系的权利要求可组合成本发明的实施方式,或者可在提交本申请之后的修改中作为新的权利要求包括。
本发明的实施方式可通过例如硬件、固件、软件或其组合的各种手段来实现。在硬件配置方式中,根据本发明示例性实施方式的方法可通过一个或更多个专用集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理器件(DSPD)、可编程逻辑器件(PLD)、现场可编程门阵列(FPGA)、处理器、控制器、微控制器、微处理器等来实现。
在固件或软件配置方式中,本发明的实施方式可以模块、过程、功能等形式实现。软件代码可存储在存储器单元中并由处理器执行。存储器单元位于处理器的内部或外部,并可经由各种己知手段向处理器发送数据以及从处理器接收数据。
虽然本发明披露如上,但本发明并非限定于此。任何本领域技术人员,在不脱离本发明的精神和范围内,均可作各种更动与修改,因此本发明的保护范围应当以权利要求所限定的范围为准。

Claims (20)

  1. 一种视频图像处理方法,其特征在于,包括:
    获取视频图像中的待叠加区域,所述待叠加区域包括多个显示块;
    利用光学参数统计单元,对各个所述显示块进行光学参数统计处理,获得光学参数统计值;其中,所述光学参数统计单元为硬件单元;
    基于所述光学参数统计值和目标显示字符光学参数值之间的差值,判断是否需要对所述目标显示字符进行调整处理;
    在当所述差值小于或等于预设阈值时,对所述目标显示字符进行调整处理,获得叠加字符,所述叠加字符的光学参数值与对应显示块的光学参数统计值之间的差值,大于所述预设阈值;
    对所述待叠加区域进行字符叠加处理,所述字符叠加处理包括:将所述叠加字符与对应的显示块进行叠加。
  2. 如权利要求1所述的视频图像处理方法,其特征在于,所述视频图像处理方法还包括:在获取所述待叠加区域和所述显示块之后,在利用光学参数统计单元,对各个所述显示块进行光学参数统计处理之前,将所述显示块的信息配置至所述光学参数统计单元。
  3. 如权利要求1所述的视频图像处理方法,其特征在于,所述视频图像处理方法还包括:对所述视频图像进行后处理,所述后处理包括:对所述视频图像进行裁剪处理,输出裁剪后图像;
    获取视频图像中的待叠加区域包括:获取所述裁剪后图像中的待叠加区域。
  4. 如权利要求3所述的视频图像处理方法,其特征在于,所述裁剪后图像包括多行子图像;多行子图像依次输入至光学参数统计单元;
    利用光学参数统计单元,对各个所述显示块进行亮度统计处理的步骤包括:判断当前行子图像是否包含所述显示块;如果是,则所述 光学参数统计单元对当前行子图像中的显示块进行亮度统计处理。
  5. 如权利要求2、3或4所述的视频图像处理方法,其特征在于,对所述视频图像进行裁剪处理的步骤中,输出多个分辨率不同的所述裁剪后图像;
    所述视频图像处理方法还包括:在不同时间段内,从多个裁剪后图像中选择一个裁剪后图像,输出至所述光学参数统计单元。
  6. 如权利要求1所述的视频图像处理方法,其特征在于,所述光学参数为亮度;
    所述光学参数统计单元为亮度统计单元;对各个所述显示块进行光学参数统计处理包括:利用所述亮度统计单元,对各个所述显示块进行亮度统计处理,获得亮度统计值;
    基于所述光学参数统计值和目标显示字符光学参数值之间的差值,判断是否需要对所述目标显示字符进行调整处理包括:基于所述亮度统计值和目标显示字符亮度值之间的差值,判断是否需要对所述目标显示字符进行亮度调整处理。
  7. 如权利要求6所述的视频图像处理方法,其特征在于,对所述目标显示字符进行调整处理的步骤包括:对所述目标显示字符进行颜色取反处理。
  8. 如权利要求1所述的视频图像处理方法,其特征在于,所述光学参数为色度;
    所述光学参数统计单元为色度统计单元;对各个所述显示块进行光学参数统计处理包括:利用所述色度统计单元,对各个所述显示块进行色度统计处理,获得色度统计值;
    基于所述光学参数统计值和目标显示字符光学参数值之间的差值,判断是否需要对所述目标显示字符进行调整处理包括:基于所述色度统计值和目标显示字符色度值之间的差值,判断是否需要对所述目标显示字符进行色度调整处理;
    对所述目标显示字符进行调整处理包括:对当前显示块对应的目标显示字符进行色彩调整处理。
  9. 如权利要求1、6至8任一项所述的视频图像处理方法,其特征在于,在当所述差值大于预设阈值时,判断无需进行调整处理;所述字符叠加处理还包括:在判断无需进行调整处理时,将所述目标显示字符与对应的显示块进行叠加。
  10. 一种视频图像处理装置,其特征在于,包括:
    获取单元,用于获取视频图像中的待叠加区域,所述待叠加区域包括多个显示块;
    光学参数统计单元,用于对所述各个显示块进行光学参数统计处理,获得光学参数统计值;所述光学参数统计单元为硬件单元;
    判断单元,用于基于所述光学参数统计值和目标显示字符光学参数值之间的差值,判断是否需要对所述目标显示字符进行调整处理;
    字符调整单元,用于在当所述差值小于或等于预设阈值时,对所述目标显示字符进行调整处理,获得叠加字符,所述叠加字符的光学参数值与对应显示块的光学参数统计值之间的差值,大于所述预设阈值;
    字符叠加单元,用于对所述待叠加区域进行字符叠加处理;所述字符叠加单元包括第一子单元,用于将所述叠加字符与对应的显示块进行叠加。
  11. 如权利要求10所述的视频图像处理装置,其特征在于,所述视频图像处理装置还包括:信息配置单元,用于将所述获取单元输出的所述待叠加区域和所述显示块的信息配置至所述光学参数统计单元。
  12. 如权利要求10所述的视频图像处理装置,其特征在于,所述视频图像处理装置还包括:后处理模块,用于对所述视频图像进行后处理;所述后处理模块包括:裁剪处理单元,用于对所述视频图 像进行裁剪处理,输出裁剪后图像;
    所述获取单元,用于获取所述裁剪后图像中的待叠加区域。
  13. 如权利要求12所述的视频图像处理装置,其特征在于,所述裁剪后图像包括多行子图像;多行子图像依次输入至光学参数统计单元;
    所述光学参数统计单元包括:显示块判断器,用于判断当前行子图像是否包含所述显示块;统计器,用于在当前行子图像包含所述显示块时,对当前行子图像中的显示块进行光学参数统计处理。
  14. 如权利要求12或13所述的视频图像处理装置,其特征在于,所述裁剪处理单元,用于对所述视频图像进行裁剪处理,输出多个分辨率不同的所述裁剪后图像;
    所述视频图像处理装置还包括:多路选择器,用于在不同时间段内,从多个裁剪后图像中选择一个裁剪后图像,输出至所述光学参数统计单元。
  15. 如权利要求12所述的视频图像处理装置,其特征在于,所述后处理模块包括寄存器,所述光学参数统计值存储于所述寄存器内;
    所述判断单元,用于从所述寄存器内调用所述光学参数统计值。
  16. 如权利要求10所述的视频图像处理装置,其特征在于,所述光学参数为亮度;所述光学参数统计单元为亮度统计单元,所述亮度统计单元用于对各个所述显示块进行亮度统计处理,获得亮度统计值;
    或者,
    所述光学参数为色度;所述光学参数统计单元为色度统计单元;所述亮度统计单元用于对各个所述显示块进行色度统计处理,获得色度统计值。
  17. 如权利要求10所述的视频图像处理装置,其特征在于,所述字符 叠加单元还包括:第二子单元,用于在判断无需进行调整处理时,将所述目标显示字符与对应的显示块进行叠加。
  18. 如权利要求10所述的视频图像处理装置,其特征在于,所述光学参数统计单元包括逻辑电路。
  19. 一种设备,其特征在于,包括至少一个存储器和至少一个处理器,所述存储器存储有一条或多条计算机指令,其中,所述一条或多条计算机指令被所述处理器执行以实现如权利要求1-9任一项所述的视频图像处理方法。
  20. 一种存储介质,其特征在于,所述存储介质存储有一条或多条计算机指令,所述一条或多条计算机指令用于实现如权利要求1-9任一项所述的视频图像处理方法。
PCT/CN2022/115939 2022-03-31 2022-08-30 视频图像处理方法及装置、设备以及存储介质 WO2023184850A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210333869.2A CN114745520A (zh) 2022-03-31 2022-03-31 视频图像处理方法及装置、设备以及存储介质
CN202210333869.2 2022-03-31

Publications (1)

Publication Number Publication Date
WO2023184850A1 true WO2023184850A1 (zh) 2023-10-05

Family

ID=82280075

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/115939 WO2023184850A1 (zh) 2022-03-31 2022-08-30 视频图像处理方法及装置、设备以及存储介质

Country Status (2)

Country Link
CN (1) CN114745520A (zh)
WO (1) WO2023184850A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114745520A (zh) * 2022-03-31 2022-07-12 晶晨芯半导体(成都)有限公司 视频图像处理方法及装置、设备以及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006210978A (ja) * 2005-01-25 2006-08-10 Matsushita Electric Ind Co Ltd タイトル編集・再生装置
CN101299804A (zh) * 2008-05-28 2008-11-05 华为技术有限公司 字符叠加方法和装置
CN102209205A (zh) * 2011-06-14 2011-10-05 中国科学院长春光学精密机械与物理研究所 电视跟踪器中的视频叠加显示装置
CN109151549A (zh) * 2018-09-10 2019-01-04 天津市亚安科技有限公司 实现监控视频画面字符叠加反色显示系统及方法、处理器
CN111683213A (zh) * 2020-06-16 2020-09-18 中国北方车辆研究所 基于感兴趣区域灰度图像的自适应字符叠加系统及方法
CN114745520A (zh) * 2022-03-31 2022-07-12 晶晨芯半导体(成都)有限公司 视频图像处理方法及装置、设备以及存储介质

Also Published As

Publication number Publication date
CN114745520A (zh) 2022-07-12

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22934670

Country of ref document: EP

Kind code of ref document: A1