WO2022007787A1 - Image processing method and apparatus, device and medium

Info

Publication number
WO2022007787A1
Authority
WO
WIPO (PCT)
Prior art keywords
light source
source area
pixel
image
area
Prior art date
Application number
PCT/CN2021/104709
Other languages
French (fr)
Chinese (zh)
Inventor
华路延
Original Assignee
广州虎牙科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202010647193.5A external-priority patent/CN113920299A/en
Priority claimed from CN202011043175.2A external-priority patent/CN112153303B/en
Application filed by 广州虎牙科技有限公司 filed Critical 广州虎牙科技有限公司
Publication of WO2022007787A1 publication Critical patent/WO2022007787A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene

Definitions

  • the embodiments of the present application relate to the technical field of image processing, for example, to an image processing method, apparatus, device, and medium.
  • the brightness enhancement solution in the related art can only improve the overall brightness of the image and cannot adjust the light source display effect of the image or video.
  • Embodiments of the present application provide an image processing method, apparatus, device, and medium, so as to adjust the display effect of a light source in a video or image, and improve the degree of freedom and diversity of the display effect of the video or image.
  • an embodiment of the present application provides an image processing method, including: acquiring a light source model of a target image; and processing the light source area of the target image according to the light source model.
  • acquiring the light source model of the target image includes: acquiring the target image, and determining a light source area in the target image; building a light source model in the light source area;
  • processing the light source area of the target image according to the light source model includes: determining an opacity parameter of the light source area according to the light source model; and obtaining the display texture in the light source area according to the original image texture in the light source area, the preselected light source texture in the light source area, and the opacity parameter of the light source area.
  • acquiring the light source model of the target image includes: acquiring a light source model of an optical image, where the light source model represents a light source shape and a light source position in the optical image;
  • processing the light source area of the target image according to the light source model includes: obtaining light source information to be added according to the light source model and a preset mapping curve, where the light source information to be added represents the pixel added value of each pixel point in the light source area of the optical image, and the light source area is the image area determined by the light source shape and the light source position; and updating the light source area according to the light source information to be added.
  • an embodiment of the present application further provides an image processing device, the device including: a light source model acquisition module, configured to acquire a light source model of an optical image; and a light source area processing module, configured to process the light source area of the optical image according to the light source model.
  • the light source model acquisition module includes: a light source area determination unit, configured to acquire a target image and determine a light source area in the target image; a light source model construction unit, configured to construct a light source model in the light source area ;
  • the light source area processing module includes: an opacity parameter determination unit configured to determine an opacity parameter of the light source area according to the light source model; and a light source area adjustment unit configured to obtain the display texture in the light source area according to the original image texture in the light source area, the preselected light source texture in the light source area, and the opacity parameter of the light source area.
  • the light source model obtaining module is configured to obtain a light source model of an optical image; the light source model represents the light source shape and light source position in the optical image;
  • the light source area processing module includes: an information processing unit configured to obtain light source information to be added according to the light source model and a preset mapping curve, where the light source information to be added represents the pixel added value of each pixel point in the light source area, and the light source area is the image area determined by the light source shape and the light source position; and an image updating unit configured to update the pixel value of each pixel in the light source area according to the light source information to be added, so as to obtain the target processed image.
  • an embodiment of the present application further provides a computer device, the computer device including: one or more processors; and a memory configured to store one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method described in any embodiment.
  • an embodiment of the present application further provides a computer-readable storage medium storing a computer program, and when the program is executed by a processor, the image processing method described in any of the embodiments is implemented.
  • FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 2 is a flowchart of another image processing method provided by an embodiment of the present application.
  • FIG. 3 is an example diagram of identification of a light source area provided by an embodiment of the present application.
  • FIG. 5 is an exemplary diagram of a light source model provided by an embodiment of the present application.
  • FIG. 6 is an example diagram of a calculation parameter of an opacity parameter provided by an embodiment of the present application.
  • FIG. 7 is a flowchart of another image processing method provided by an embodiment of the present application.
  • FIG. 8 is a flowchart of another image processing method provided by an embodiment of the present application.
  • FIG. 9A is a schematic diagram of an optical image before processing by the image processing method provided by an embodiment of the present application.
  • FIG. 9B is a schematic diagram of a target processed image obtained after processing the optical image in FIG. 9A by using the image processing method provided by the embodiment of the present application.
  • FIG. 10 is a flowchart of a light source model for acquiring an optical image provided by an embodiment of the present application
  • FIG. 11 is a schematic diagram of the division of an optical image provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of obtaining transparency degree information according to an embodiment of the present application.
  • FIG. 14 is a flowchart of updating the pixel value of each pixel in the light source area according to the light source information to be added, and obtaining a target processing image, according to an embodiment of the present application;
  • FIG. 15 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application.
  • FIG. 16 is a schematic structural diagram of another image processing apparatus provided by an embodiment of the present application.
  • FIG. 17 is a schematic structural diagram of another image processing apparatus provided by an embodiment of the present application.
  • FIG. 18 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • FIG. 19 is a schematic structural diagram of an image processing device according to an embodiment of the present application.
  • FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present application.
  • the method may be performed by the image processing apparatus provided in any embodiment of the present application, and the apparatus may be composed of hardware and/or software, and may generally be integrated in computer equipment, such as an intelligent mobile terminal.
  • the image processing method provided in this embodiment includes the following steps.
  • the light source model is a model for simulating the shape of the illumination range of the light source in the target image.
  • the light source area may be determined according to a light source model, or the light source area in the target image may be determined by detecting the brightness value of each pixel point in the target image. For example, the brightness value of each pixel in the target image is obtained, the pixels whose brightness value is greater than a set brightness threshold are screened out, and the maximum connected area formed by these pixels is used as the light source area.
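  • As an illustration of the brightness-threshold approach described above, the following sketch (not part of the embodiment) assumes an 8-bit image and uses OpenCV's connected-components routine to keep the largest bright region as the light source area; the threshold value is an arbitrary example.

```python
# Illustrative sketch of the brightness-threshold approach described above.
# The threshold value and the use of OpenCV are assumptions, not part of the embodiment.
import cv2
import numpy as np

def detect_light_source_area(image_bgr: np.ndarray, brightness_threshold: int = 220) -> np.ndarray:
    """Return a boolean mask of the largest connected region of bright pixels."""
    # Brightness value of each pixel (grayscale of an 8-bit BGR image).
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    bright = (gray > brightness_threshold).astype(np.uint8)

    # Label connected regions of bright pixels and keep the largest one
    # as the light source area.
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(bright, connectivity=8)
    if num_labels <= 1:  # no bright pixels found
        return np.zeros(gray.shape, dtype=bool)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return labels == largest
```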
  • the image processing method provided by this embodiment obtains the light source model of the image and uses the light source model to process the light source area of the image, so as to adjust the light source display effect of the light source area in the image, thereby improving the light source display effect.
  • multimedia live broadcast has attracted people's attention because of its novel form and rich content.
  • in order to improve the live video effect, multimedia live broadcast software usually has a video retouching function. If the multimedia live broadcast software can provide rich lighting display effects, the live video effect will be improved.
  • the image re-illumination method driven by big data can achieve the effect of re-illuminating the outdoor input image at a specified time.
  • however, this method is complicated to implement, has a long image processing cycle, and is only suitable for image processing rather than video processing, such as real-time processing of live video; moreover, this method provides only a single relighting effect, which is not suitable for complex live scenes.
  • FIG. 2 is a flowchart of an image processing method provided by an embodiment of the present application. This embodiment can be applied to the case of adjusting the display effect of the light source in the image or video.
  • the method can be executed by the image processing apparatus provided in any embodiment of the present application, and the apparatus can be composed of hardware and/or software, and can generally be integrated in computer equipment, for example, an intelligent mobile terminal.
  • the image processing method provided by this embodiment includes the following steps.
  • the target image refers to an image that needs to be adjusted for the display effect of the light source, and the target image may be an image or a video image frame in a video.
  • the video image frame obtained in real time by the live video software is used as the target image.
  • the light source area refers to the area where the light is displayed in the target image when the light source illuminates the object.
  • the light source area in the target image may be determined by detecting the brightness value of each pixel point in all the pixel points in the target image. For example, the brightness value of each pixel in the target image is obtained, and the pixels whose brightness value is greater than the set brightness threshold are screened out, and the largest connected area formed by these pixels is used as the light source area.
  • determining the light source area in the target image includes: determining the light source area in the target image by using a light source detection model obtained by pre-training.
  • the training method of the light source detection model is as follows: an image sample set is obtained, the image sample set includes a large number of image samples, and each image sample is marked with a light source area.
  • the light source area is marked by the light source detection frame; the image sample set is trained by the target detection frame to obtain the light source detection model.
  • Inputting the target image into the light source detection model can output the information of the light source area of the target image, for example, outputting the coordinate information of the light source target frame used to identify the light source area.
  • the block 21 identifies the target image
  • the block 22 identifies the light source target frame
  • the area within the block 22 includes the light source area of the target image.
  • the light source model refers to a model for simulating the shape of the illumination range of the light source, for example, a circular light source model, or an elliptical light source model.
  • the light source model is constructed with the center point of the light source area as the center.
  • a light source model 23 is constructed at the center point in the light source area 22 , and the shape parameters of the light source model 23 can be determined according to the display range of the light source in the light source area 22 .
  • The opacity parameter is used to indicate how opaque the pixels in the area are. If the opacity parameter of a pixel is 0%, the pixel is completely transparent (that is, invisible); an opacity parameter of 100% means a completely opaque pixel (that is, the original pixel is displayed). An opacity parameter between 0% and 100% allows the pixel to show the background through it, as though through glass (translucency).
  • the opacity parameter can be expressed as a percentage or a real number from 0 to 1.
  • the opacity parameter of each pixel in all the pixels in the light source area is determined, that is, the transparency of the light source of each pixel in the light source area in the actual scene is simulated, and then the effect of light illuminating the object can be simulated.
  • set the opacity parameter of the pixels in the light source area that lie outside the light source model to 0% or 0, and set the opacity parameter of each pixel inside the light source model according to its pixel position within the light source model.
  • the original image texture refers to the original texture of the target image.
  • Pre-selected light source textures refer to pre-selected light source textures used to provide light sources of different colors to simulate different light source effects within the light source area of the target image.
  • the preselected light source texture in the light source area is implemented by filling with different pixel values. Assuming that the light source effect of normal light is simulated in the light source area, the pixel red, green and blue (RGB) values of all pixels of the preselected light source texture are (1,1,1).
  • the light source area is the superimposed area of the original image texture and the preselected light source texture, and the opacity parameter of the original image texture is 100% or 1, that is, it is completely opaque (the original pixels are displayed).
  • the opacity parameter of the preselected light source texture is consistent with the opacity parameter of the light source area determined according to the light source model.
  • the preselected light source texture is adjusted using the matching opacity parameter, and the adjustment result is superimposed with the original image texture, so as to obtain the display texture in the light source area.
  • the process of texture overlay may result in the overflow of the overlay result of pixel values, resulting in the inability to display the correct texture.
  • obtaining the display texture in the light source area according to the original image texture in the light source area, the preselected light source texture in the light source area, and the opacity parameter of the light source area includes: superimposing the pixel values of the original image texture in the light source area and the preselected light source texture in the light source area, and correcting the superposition result to obtain the superimposed texture in the light source area; adjusting the superimposed texture using the opacity parameter of the light source area to obtain a pending texture; and compensating the pending texture using the original image texture in the light source area to obtain the display texture in the light source area.
  • suppose the pixel RGB value of a pixel in the original image texture is D (D ≤ 1) and the pixel RGB value of the pixel in the preselected light source texture is S (S ≤ 1); after superimposing D and S and correcting the superposition result, the pixel RGB value of the pixel in the superimposed texture is T. The opacity parameter A of the pixel is used to adjust T, giving the pixel RGB value A*T of the pixel in the pending texture. Since the opacity parameter of the original image texture included in the superimposed texture is 100% or 1, the original pixel value should still be displayed, so the pixel RGB value D of the pixel in the original image texture is used to compensate the pixel RGB value A*T of the pixel in the pending texture, yielding the pixel RGB value F of the pixel in the display texture of the light source area. For example, D can be weighted by (1-A), and the result (1-A)*D is added to A*T as the compensation, so that F = A*T + (1-A)*D.
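  • The per-pixel blending described above can be summarized in the following sketch; the explicit sum F = A*T + (1-A)*D follows the compensation step just described, while treating the overflow correction as a clamp to 1 is an assumption made here for illustration.

```python
import numpy as np

def blend_light_source_pixel(D: np.ndarray, S: np.ndarray, A: np.ndarray) -> np.ndarray:
    """Blend original texture D with preselected light source texture S using opacity A.

    D and S are RGB values in [0, 1]; A is the per-pixel opacity parameter in [0, 1].
    """
    # Superimpose the two textures and correct the overflow (clamping to 1.0
    # is assumed here as the correction step).
    T = np.clip(D + S, 0.0, 1.0)
    # Adjust the superimposed texture with the opacity parameter, then compensate
    # with the original texture so that fully transparent pixels (A = 0) show D.
    F = A * T + (1.0 - A) * D
    return F
```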
  • the opacity parameter A of the pixel point is the opacity parameter of the pixel point in the light source area determined according to the light source model.
  • the pixel position of the pixel in the light source area is different, and the opacity parameter of the pixel is different.
  • S130 includes: obtaining the display texture in the light source area through a graphics processor (Graphics Processing Unit, GPU) according to the original image texture in the light source area, the preselected light source texture in the light source area, and the opacity parameter of the light source area.
  • multiple pixels in the light source area can be processed at the same time, so as to improve the rate of calculating the pixel RGB value F in the display texture of the multiple pixel points in the light source area.
  • a processed image corresponding to the target image can be obtained, in which the display effect of the light source in the light source area has been adjusted.
  • a light source model is constructed in the light source area, which can simulate the illumination range of light sources with different shapes; the opacity parameter of the light source area is determined according to the light source model, which can simulate the effect of light illuminating the object; and the display texture in the light source area is obtained according to the original image texture, the preselected light source texture and the opacity parameter of the light source area.
  • with different preselected light source textures and opacity parameters, the resulting display texture is different, which realizes the adjustment of the display effect of the light source in the image, greatly enriches the application scenes, and improves the degree of freedom and diversity of the image display effect.
  • the above technical solution is simple to implement and has a short processing period, and is suitable for application in online real-time video such as live video.
  • FIG. 4 is a flowchart of another image processing method provided by an embodiment of the present application. This embodiment is described on the basis of the above-mentioned embodiment, wherein constructing a light source model in the light source area includes: detecting the first length of the light source area in the horizontal direction and the second length in the vertical direction; When the first length is equal to the second length, in the light source area, take the center point of the light source area as the center and the first length as the diameter to construct a circular light source model; when the first length is not equal to the second length, In the light source area, with the center point of the light source area as the center, and the first length and the second length as the major and minor axes, an elliptical light source model is constructed.
  • the image processing method provided in this embodiment includes the following steps.
  • the first length of the light source area in the horizontal direction and the second length in the vertical direction are detected, that is, the area span of the light source area in the horizontal direction and the vertical direction is detected, and then the shape of the light source model can be determined.
  • the first length of the light source region in the horizontal direction and the second length in the vertical direction of the light source region are determined by detecting the luminance values of a plurality of pixel points.
  • the length of the line segment AB is the first length of the light source region in the horizontal direction
  • the length of the line segment CD is the second length of the light source region in the vertical direction.
  • a circular light source model can be constructed in the light source area, and the diameter of the light source model is the first length (or the second length).
  • the center of the light source model (that is, the light source focus) is set at the center point of the light source area, such as point O shown in FIG. 5, so that the constructed circular light source model can be expressed as (x - x1)² + (y - y1)² = (e/2)², where (x1, y1) are the coordinates of point O and e is the diameter (the first length or the second length) of the light source model.
  • an elliptical light source model can be constructed in the light source area, the axis length of the light source model in the horizontal direction is the first length, and the axis length in the vertical direction is the second length , the length of the long axis is the longer of the first length and the second length, and the length of the short axis is the shorter of the first length and the second length.
  • the center of the light source model (that is, the light source focus) is set at the center point of the light source area, such as point O shown in FIG. 5.
  • the light source model 23 constructed in this case is an elliptical light source model, and its expression is ((x - x1)/(c/2))² + ((y - y1)/(d/2))² = 1, where (x1, y1) are the coordinates of point O, c is the length of the line segment AB (i.e., the first length), and d is the length of the line segment CD (i.e., the second length).
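  • A minimal sketch of the light source model construction described above, assuming the model is represented by its center and axis lengths and membership is tested with the implicit ellipse equation (the circular case is simply c = d = e); the class and function names are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class LightSourceModel:
    """Elliptical (or circular) light source model centred at (x1, y1).

    c is the horizontal axis length (first length), d the vertical axis length
    (second length); c == d gives the circular case with diameter e = c.
    """
    x1: float
    y1: float
    c: float
    d: float

    def contains(self, x: float, y: float) -> bool:
        # Point (x, y) lies inside the model if the implicit equation value is <= 1:
        # ((x - x1) / (c / 2))**2 + ((y - y1) / (d / 2))**2 <= 1
        return ((x - self.x1) / (self.c / 2)) ** 2 + ((y - self.y1) / (self.d / 2)) ** 2 <= 1.0

def build_light_source_model(center, first_length, second_length):
    """Construct the model from the detected horizontal/vertical spans of the light source area."""
    x1, y1 = center
    return LightSourceModel(x1, y1, first_length, second_length)
```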
  • the center of the light source model may also be set at the center point of the light source target frame.
  • the first length and the second length are the range lengths of the light source area in the horizontal direction and the vertical direction in the light source target frame, respectively.
  • multiple light source models can be constructed to suit different image scenarios.
  • the opacity parameter of each pixel in the light source area is determined according to the light source model.
  • the opacity parameter of each pixel is related to the positional relationship between each pixel and the light source model.
  • determining the opacity parameter of the light source area according to the light source model includes: setting the opacity parameter of the pixels in the light source area outside the light source model to a first target constant value, where the first target constant value is used to indicate full transparency; and setting the opacity parameter of each pixel within the light source model according to the distance between the pixel and the center pixel of the light source model, where the smaller the distance between a pixel and the center pixel, the smaller the value of the opacity parameter of that pixel.
  • the pixels are divided into two types: those outside the light source model and those inside the light source model.
  • the opacity parameter of the pixels outside the light source model is set to the first target constant value representing full transparency, such as 0 or 0%, that is, the pixel values of these pixels are completely transparent;
  • the opacity parameter is related to the coordinate position of the pixel point.
  • the value of the opacity parameter of the pixel point closer to the center of the light source model is smaller, that is, the brighter the light source is displayed, and the opacity parameter of the pixel point in the light source model is in 0-1 or between 0%-100%.
  • the opacity parameter of a pixel within the light source model in the light source area can be calculated from the following quantities: A is the opacity parameter of the pixel; a is the distance from the pixel to the long axis or horizontal diameter of the light source model; b is the distance from the pixel to the short axis or vertical diameter of the light source model; a' is the distance from the pixel to the boundary of the light source model along the short axis direction or the vertical diameter direction; and b' is the distance from the pixel to the boundary of the light source model along the long axis direction or the horizontal diameter direction.
  • This embodiment provides a simplified calculation method of the opacity parameter.
  • the proportions between the pixel's distances to the horizontal and vertical diameters (or axes) of the light source model and its distances to the model boundary are calculated, and from them the value of the opacity parameter of the pixel is calculated to simulate the transparency of the pixel.
  • the light source target frame 22 is used to identify the light source area, and an elliptical light source model 23 is constructed in the light source target frame 22 (ie, the light source area).
  • the opacity parameter of any pixel point M outside the light source model 23 is set to 0 or 0%
  • the opacity parameter A of any pixel point N in the light source model 23 is calculated from the following distances:
  • NP1 is the distance from the pixel point N to the long axis of the light source model (that is, a)
  • NQ1 is the distance from the pixel point N to the short axis of the light source model (that is, b)
  • NP2 is the distance from the pixel point N to the boundary of the light source model along the short axis direction (i.e., a'), and NQ2 is the distance from the pixel point N to the boundary of the light source model along the long axis direction (i.e., b')
  • alternatively, for a circular light source model, NP1 is the distance from the pixel point N to the horizontal diameter of the light source model
  • NQ1 is the distance from the pixel point N to the vertical diameter of the light source model
  • NP2 is the distance from the pixel point N to the boundary of the light source model along the vertical diameter direction
  • NQ2 is the distance from the pixel point N to the boundary of the light source model along the horizontal diameter direction.
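  • The exact formula for the opacity parameter is not reproduced in this text; the sketch below is only an assumed, ratio-based stand-in that satisfies the stated behaviour (A = 0 at the center of the light source model, growing toward the boundary, bounded by 1).

```python
def opacity_parameter(a: float, b: float, a_prime: float, b_prime: float) -> float:
    """Illustrative opacity parameter A for a pixel inside the light source model.

    a       : distance from the pixel to the long axis / horizontal diameter
    b       : distance from the pixel to the short axis / vertical diameter
    a_prime : distance from the pixel to the model boundary along the short-axis (vertical) direction
    b_prime : distance from the pixel to the model boundary along the long-axis (horizontal) direction

    The embodiment's exact formula is not reproduced here; the ratio-based
    combination below is an assumption that merely reflects the stated behaviour
    (A = 0 at the centre, growing towards the boundary, clipped to [0, 1]).
    """
    u = a / (a + a_prime) if (a + a_prime) > 0 else 0.0  # normalised vertical offset
    v = b / (b + b_prime) if (b + b_prime) > 0 else 0.0  # normalised horizontal offset
    return min(1.0, (u ** 2 + v ** 2) ** 0.5)
```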
  • the method further includes: setting both the opacity parameter of the original image texture outside the light source area and the opacity parameter of the display texture in the light source area to a second target constant value, where the second target constant value is used to represent opacity; and generating a target processed image corresponding to the target image according to the original image texture outside the light source area, the opacity parameter of the original image texture outside the light source area, the display texture in the light source area, and the opacity parameter of the display texture in the light source area.
  • the target image is identified in the RGBA color space, R represents red (Red), G represents green (Green), B represents blue (Blue), and A represents an opacity parameter.
  • since the area outside the light source area in the target image does not need to be adjusted for the light source display effect, the area outside the light source area still displays the RGB texture of the original image.
  • the opacity parameter of the original image RGB texture outside the light source area and the opacity parameter of the display RGB texture in the light source area can be set to the second target constant value used to represent opacity, such as 1 or 100%.
  • the display effect texture obtained by combining the original image RGB texture outside the light source area and the display RGB texture in the light source area can be rendered by the pre-created GPU, and after the rendering is completed, the target processing image is generated for display.
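  • A CPU-side sketch of the final composition step described above (the actual embodiment renders through the GPU pipeline); it assumes float RGB values in [0, 1] and a boolean mask marking the light source area.

```python
import numpy as np

def compose_target_image(original_rgb: np.ndarray,
                         display_rgb: np.ndarray,
                         light_source_mask: np.ndarray) -> np.ndarray:
    """Assemble the target processed image in RGBA (float values in [0, 1] assumed).

    Pixels inside the light source area (mask == True) take the display texture,
    all other pixels keep the original image texture; the opacity (A) channel of
    both regions is set to the second target constant value 1.0 (fully opaque).
    """
    h, w, _ = original_rgb.shape
    rgba = np.empty((h, w, 4), dtype=np.float32)
    rgba[..., :3] = np.where(light_source_mask[..., None], display_rgb, original_rgb)
    rgba[..., 3] = 1.0  # second target constant value: opaque everywhere
    return rgba
```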
  • the adjustment of the display effect of the light source in the video or image is realized, the degree of freedom and diversity of the display effect of the video or image is improved, and it is suitable for scenes with various light sources changing.
  • the above technical solution can also be combined with the method of the GPU rendering pipeline, so that the change of the display effect of the light source is suitable for real-time video processing.
  • FIG. 7 is a flowchart of another image processing method provided by an embodiment of the present application. This embodiment provides an implementation manner on the basis of the above-mentioned embodiment.
  • the image processing method provided in this embodiment includes the following steps.
  • Video image frames are identified in the RGBA color space.
  • S320 Input the target image into a pre-trained light source detection model to obtain coordinate information of the light source target frame in the target image.
  • the light source focus of the light source model is determined, that is, the center point (x1, y1) of the light source target frame, and the light source model is constructed based on the light source focus (x1, y1).
  • A is the opacity parameter of the pixel point
  • a is the distance from the pixel point to the long axis or horizontal diameter of the light source model
  • b is the distance from the pixel point to the short axis or vertical diameter of the light source model
  • a' is the distance from the pixel point to the boundary of the light source model along the short axis direction or the vertical diameter direction
  • b' is the distance from the pixel point to the boundary of the light source model along the long axis direction or the horizontal diameter direction.
  • the opacity parameters of multiple pixels in the light source target frame determined in this step will participate in the processing of pixels in the light source area.
  • the pre-selected light source texture can display the light source effects of different colors by filling different pixel RGB values, and then simulate different light source effects, and can fill the corresponding pixel RGB values of different light source effects according to actual needs.
  • the light source area (or the area within the light source target frame) is the superimposed area of the original image texture and the preselected light source texture, and the pixel RGB values need to be processed.
  • the opacity parameters of multiple pixels in the light source area are determined by S340; the areas outside the light source area do not need to be processed and can display the original image texture, and the opacity parameter of these areas is set to 1.
  • the calculation rate of the superimposed texture in the light source area and the display texture in the light source area is improved.
  • FIG. 8 is a flowchart of another image processing method provided by an embodiment of the present application.
  • the image processing method may include the following steps.
  • the light source model characterizes the light source shape and light source location in the optical image.
  • the shape of the light source can be a square, a circle, an ellipse, a semi-circle, etc.
  • the "non-black pixels" in the optical image for example, the pixels whose RGB value (ie, the pixel RGB value) is not 0
  • the composed image area is fitted to obtain the light source shape.
  • the position of the light source can be represented by the pixel coordinates of the pixels in the optical image corresponding to the light source shape; alternatively, the optical image can be divided into grids, and the coordinate position corresponding to at least one grid containing "non-black pixels" is used as the light source position.
  • the light source information to be added represents the pixel added value of each pixel point in the light source area, and the light source area is an image area determined by the shape of the light source and the position of the light source.
  • the pixel added value may be determined according to the light source intensity and the RGB value to be added of the pixel point; the light source intensity may determine the brightness information of each pixel point in the target image, and the RGB value to be added of the pixel point may determine its color information in the target image.
  • a light source is simulated, and the light source information to be added corresponding to the simulated light source is added to the light source area, so that the image display information corresponding to the simulated light source (the light source information to be added) is added to the optical image; this is conducive to enriching the optical display information of the image and providing more visual presentations.
  • since the pixels of the optical image outside the light source area are all "black pixels" (for example, pixels whose RGB values are all 0), there is no light source at the positions of the pixels outside the light source area.
  • the image processing method is beneficial to enrich the optical display information of the image and provide more visual display effects.
  • FIG. 9A is a schematic diagram of an optical image before processing by the image processing method provided by the embodiment of the present application.
  • FIG. 9B is a schematic diagram of a target processed image obtained after processing the optical image in FIG. 9A by using the image processing method provided by the embodiment of the present application.
  • comparing the optical image with the target processed image, the two differ greatly in the image area with the light source (the light source area): the light source information to be added has been added to the light source area of the optical image, so that the target image has richer optical display information. That is to say, the image processing method provided by the embodiments of the present application provides more visual display effects for the image.
  • FIG. 10 is a flowchart of a light source model for acquiring an optical image provided by an embodiment of the present application.
  • S31 acquiring a light source model of the optical image, which may include the following steps.
  • the optical image may be meshed according to M*N to obtain a meshed image with M*N meshes, where M and N are both positive integers greater than or equal to 2.
  • FIG. 11 is a schematic diagram of dividing an optical image according to an embodiment of the present application. The optical image is divided into grids to obtain a grid image with 6*7 grids shown in FIG. 11 .
  • S312 Acquire at least one grid having a light source in the grid image.
  • the leg area of the "dog" shown in Fig. 11 is an image area with a light source (that is, the blank space outlined in Fig. 11), then the blank space in the grid image is regarded as a The above grid with light sources.
  • the position of the light source is the box coordinate position corresponding to the at least one grid having the light source in the grid image. If there is only one grid with a light source in the grid image, the box coordinate position of that grid is used as the light source position. If there are multiple grids with light sources in the grid image, the box coordinate positions corresponding to the multiple grids are taken as the light source positions, which covers two possible situations: in one case, if the multiple grids are continuously distributed, the multiple grids are integrated to obtain one light source position; in the other case, if the multiple grids are discretely distributed, the light source positions of the discretely distributed grid areas are obtained separately.
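  • A simple sketch of the grid-based light source localisation described above, assuming an M*N division and treating any grid cell containing non-zero pixels as a grid having a light source; merging continuously distributed cells into one position is omitted for brevity.

```python
import numpy as np

def light_source_grids(optical_image: np.ndarray, m: int, n: int):
    """Divide the image into an m*n grid and return the boxes of grids containing non-black pixels.

    A grid cell counts as "having a light source" if any pixel in it has a non-zero
    value; continuously distributed cells would then be merged into one light source
    position, while discretely distributed cells are kept as separate positions.
    """
    h, w = optical_image.shape[:2]
    boxes = []
    for i in range(m):
        for j in range(n):
            y0, y1 = i * h // m, (i + 1) * h // m
            x0, x1 = j * w // n, (j + 1) * w // n
            cell = optical_image[y0:y1, x0:x1]
            if np.any(cell > 0):  # "non-black pixels" present in this grid cell
                boxes.append((x0, y0, x1, y1))
    return boxes
```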
  • the area enclosed by the multiple grids may be a square area, but the shape of the light source may be a circle, a trapezoid, an ellipse, etc. in the square area.
  • the virtual light source corresponding to the optical image may be a point light source.
  • the point light source can be enhanced, and then the pixel value (light source information to be added) can be added to the light source area of the optical image, so as to improve the darker part in the optical image, which is beneficial to the optical image. Observation and recognition of target objects in images.
  • FIG. 12 is a flowchart of another image processing method provided by the embodiment of the present application.
  • S32 obtaining information about the light source to be added according to the light source model and the preset mapping curve, which may include the following steps.
  • S321 Match the transparency level information with a preset mapping curve to obtain the brightness information to be added of the light source area.
  • the transparency degree information represents the brightness information of the light source area
  • the brightness information to be added represents the brightness addition value of each pixel in the light source area.
  • the transparency degree information may be obtained according to the light source intensity corresponding to the optical image, and the light source intensity may be obtained as follows: obtain the length of a first line segment between any pixel point in the light source area and the center point of the light source area, and the length of a second line segment between the center point and an edge point lying on the extension line of the first line segment, where the edge point and the pixel point are located on the same side of the center point; divide the length of the first line segment by the length of the second line segment to obtain the light source intensity corresponding to that pixel point.
  • once the light source intensity of each pixel in the light source region (that is, the transparency of each pixel) is obtained, the above-mentioned transparency degree information is also obtained.
  • the color information to be added represents the color added value of each pixel in the light source area.
  • the light source change requirement may be identified according to different optical images, or may be set by the user through an operation instruction, for example, different RGB pixel values may be added to each pixel in the light source area.
  • the brightness and color of each pixel in the light source area of the optical image are adjusted using the light source information to be added to obtain the target processing image.
  • the visual data of the target processed image will be clearer and brighter, making the target object recognition in the image more accurate.
  • FIG. 13 is a schematic diagram of obtaining transparency degree information provided by an embodiment of the present application.
  • the light source area shown in FIG. 13 is a rectangular area, and the above-mentioned transparency degree information can be obtained in the following manner.
  • the first edge point is any pixel point on the boundary of the light source area.
  • the first distance is the distance between the center point O and the first edge point b; the second distance is the distance between the center point O and the first area point a; and the first area point a is any pixel on the line segment formed by the center point O and the first edge point b.
  • the first distance is Ob and the second distance is Oa.
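  • A minimal sketch of the intensity (transparency degree) computation described above, assuming the edge point on the extension of the center-to-pixel segment has already been found; the function name is illustrative only.

```python
import math

def light_source_intensity(pixel, center, edge_point) -> float:
    """Transparency degree (light source intensity) of a pixel in the light source area.

    `edge_point` is the boundary point of the light source area on the extension of
    the segment from `center` through `pixel`, on the same side of the center as
    the pixel. Per the description above, the intensity is the length of the
    center-to-pixel segment divided by the length of the center-to-edge segment.
    """
    d_pixel = math.dist(center, pixel)       # e.g. Oa in FIG. 13
    d_edge = math.dist(center, edge_point)   # e.g. Ob in FIG. 13
    return d_pixel / d_edge if d_edge > 0 else 0.0
```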
  • FIG. 14 is a flowchart of updating the pixel value of each pixel in the light source area according to the light source information to be added to obtain a target processing image provided by an embodiment of the present application.
  • the above S33, updating the pixel value of each pixel in the light source area according to the light source information to be added to obtain the target processed image, may include the following steps.
  • the first pixel is any pixel in the light source area.
  • the first pixel point may be the first area point a shown in FIG. 13 .
  • S332 Determine the first pixel addition value of the first pixel point according to the light source information to be added.
  • the first pixel addition value includes a first luminance addition value and a first color addition value.
  • the first color addition value may be (1, 1, 1); for each pixel in the light source area, setting different color addition values for each pixel can simulate different light source effects.
  • the first luminance addition value may be represented by the intensity of the light source.
  • suppose the first pixel value is V, the first color addition value is G, and the intermediate pixel value is H, where the intermediate pixel value H, the first color addition value G, and the first pixel value V are all less than or equal to 1; the first target pixel value I is then computed from these values.
  • each pixel can have the same U value, and some pixels in the light source area can also have different U values.
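  • The embodiment's formula for the first target pixel value I is not reproduced in this text; the sketch below assumes a blend analogous to the opacity blending of the earlier embodiment, with the intermediate value H taken as the clamped sum of V and G, and U acting as a per-pixel intensity weight.

```python
def update_pixel(V: float, G: float, U: float) -> float:
    """Illustrative update of one colour channel of a first pixel in the light source area.

    V : first pixel value, G : first color addition value, U : per-pixel light source
    intensity weight. The embodiment's exact formula for the first target pixel value I
    is not reproduced in this text; the blend below (intermediate value H clamped to 1,
    then mixed with V by U) is an assumption mirroring the opacity blending used in the
    earlier embodiment.
    """
    H = min(V + G, 1.0)          # intermediate pixel value (H <= 1)
    I = U * H + (1.0 - U) * V    # first target pixel value (assumed form)
    return I
```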
  • the light source information corresponding to the optical image can be adjusted, the optical display information of the image can be enriched, and more visual display effects can be provided.
  • the user can also manually set the newly added light source.
  • the coordinates, shape and color of the simulated light source can be set directly through an operation command, so as to add the set light source to the optical image, and the user can then view the target image.
  • adding a red light source to an optical image means adding a red color channel value to each pixel in the optical image to obtain the target processing image.
  • FIG. 15 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application. This embodiment may be applicable to the case of adjusting the display effect of the light source in the image or video, and the apparatus may be implemented by means of software and/or hardware, and may generally be integrated in computer equipment. As shown in FIG. 15 , the device has a light source model acquisition module 410 and a light source region processing module 420 .
  • the light source model obtaining module 410 is configured to obtain the light source model of the target image; the light source region processing module 420 is configured to process the light source region of the target image according to the light source model.
  • FIG. 16 is a schematic structural diagram of another image processing apparatus provided by an embodiment of the present application.
  • the light source model acquisition module 410 includes: a light source area determination unit 411 and a light source model construction unit 412
  • the light source area processing module 420 includes an opacity parameter determination unit 421 and a light source area adjustment unit 422 .
  • the light source area determination unit 411 is configured to acquire a target image and determine the light source area in the target image; the light source model construction unit 412 is configured to construct a light source model in the light source area; the opacity parameter determination unit 421 is configured to determine the opacity parameter of the light source area according to the light source model; and the light source area adjustment unit 422 is configured to obtain the display texture in the light source area according to the original image texture in the light source area, the preselected light source texture in the light source area, and the opacity parameter of the light source area.
  • a light source model is constructed in the light source area, which can simulate the illumination range of light sources with different shapes; the opacity parameter of the light source area is determined according to the light source model, which can simulate the effect of light illuminating the object; and the display texture in the light source area is obtained according to the original image texture, the preselected light source texture and the opacity parameter of the light source area.
  • with different preselected light source textures and opacity parameters, the resulting display texture is different, which realizes the adjustment of the display effect of the light source in the image, greatly enriches the application scenes, and improves the degree of freedom and diversity of the image display effect.
  • the above technical solution is simple to implement and has a short processing period, and is suitable for application in online real-time video such as live video.
  • the light source area adjustment unit 422 is configured to superimpose the pixel values of the original image texture in the light source area and the preselected light source texture in the light source area and correct the superposition result to obtain the superimposed texture in the light source area; adjust the superimposed texture using the opacity parameter of the light source area to obtain a pending texture; and compensate the pending texture using the original image texture in the light source area to obtain the display texture in the light source area.
  • the light source area determining unit 411 is configured to acquire a target image, and determine the light source area in the target image through a pre-trained light source detection model.
  • the light source model building unit 412 is configured to detect the first length of the light source area in the horizontal direction and the second length in the vertical direction; when the first length is equal to the second length, construct, in the light source area, a circular light source model with the center point of the light source area as the center and the first length as the diameter; and when the first length is not equal to the second length, construct, in the light source area, an elliptical light source model with the center point of the light source area as the center, the longer of the first length and the second length as the long axis, and the shorter of the first length and the second length as the short axis.
  • the opacity parameter determining unit 421 is configured to set the opacity parameter of all pixels in the light source area outside the light source model to a first target constant value, where the first target constant value is used to represent full transparency; and to set the opacity parameter of each pixel within the light source model in the light source area according to the distance between that pixel and the center pixel of the light source model, where the smaller the distance between a pixel in the light source model and the central pixel, the smaller the value of the opacity parameter of that pixel.
  • the opacity parameter determining unit 421 is configured to set the opacity parameter of each pixel point according to the distance between each pixel point within the light source model in the light source area and the central pixel point of the light source model in the following manner: the opacity parameter of each pixel point within the light source model in the light source area is calculated from the distances a, b, a' and b', where A is the opacity parameter of each pixel point, a is the distance from each pixel point to the long axis or horizontal diameter of the light source model, b is the distance from each pixel point to the short axis or vertical diameter of the light source model, a' is the distance from each pixel point to the boundary of the light source model along the short axis direction or the vertical diameter direction, and b' is the distance from each pixel point to the boundary of the light source model along the long axis direction or the horizontal diameter direction.
  • the light source area adjustment unit 422 is configured to obtain, through the GPU, the display texture in the light source area according to the original image texture in the light source area, the preselected light source texture in the light source area, and the opacity parameter of the light source area.
  • the above-mentioned device further includes: a target processed image generation module, configured to, after the light source area adjustment unit obtains the display texture in the light source area according to the original image texture in the light source area, the preselected light source texture in the light source area, and the opacity parameter of the light source area, set both the opacity parameter of the original image texture outside the light source area and the opacity parameter of the display texture in the light source area to the second target constant value, where the second target constant value is used to represent opacity; and to generate the target processed image corresponding to the target image according to the original image texture outside the light source area, the opacity parameter of the original image texture outside the light source area, the display texture in the light source area, and the opacity parameter of the display texture in the light source area.
  • FIG. 17 is a schematic structural diagram of another image processing apparatus provided by an embodiment of the present application.
  • the light source area processing module 420 may include: an information processing unit 423 and an image updating unit 424 .
  • the light source model acquisition module 410 is configured to acquire a light source model of an optical image.
  • the light source model characterizes the light source shape and light source location in the optical image.
  • the information processing unit 423 is configured to obtain the information of the light source to be added according to the light source model and the preset mapping curve.
  • the light source information to be added represents the pixel added value of each pixel point in all the pixel points in the light source area of the optical image, and the light source area is an image area determined by the shape of the light source and the position of the light source.
  • the image updating unit 424 is configured to update the pixel value of each pixel in the light source area according to the light source information to be added, so as to obtain the target processing image.
  • the information processing unit 423 is configured to match the transparency degree information with the preset mapping curve to obtain the brightness information to be added in the light source area, where the transparency degree information represents the brightness information of the light source area and the brightness information to be added represents the brightness added value of each pixel in the light source area; to obtain, in response to a light source change requirement, the color information to be added in the light source area, where the color information to be added represents the color added value of each pixel in the light source area; and to obtain the pixel added value of each pixel in the light source area of the optical image according to the brightness information to be added and the color information to be added.
  • the light source model determination module 410 , the information processing unit 423 and the image updating unit 424 may cooperate to implement the image processing method of Embodiment 1 or Embodiment 5 and possible sub-steps of the method.
  • the image processing apparatus provided by the embodiment of the present application can execute the image processing method provided by any embodiment of the present application, and has functional modules corresponding to the execution method.
  • FIG. 18 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • the computer device includes a processor 50, a memory 51, an input device 52 and an output device 53; the number of processors 50 in the computer device can be one or more, and one processor 50 is taken as an example in FIG. 18 ;
  • the processor 50, the memory 51, the input device 52 and the output device 53 in the computer equipment may be connected by a bus or in other ways. In FIG. 18, the connection by a bus is taken as an example.
  • the memory 51 can be used to store software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the image processing method in the embodiments of the present application (for example, the modules of the image processing apparatus shown in FIG. 15).
  • the processor 50 executes various functional applications and data processing of the computer device by running the software programs, instructions and modules stored in the memory 51 , ie, implements the above-mentioned image processing method.
  • the memory 51 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of computer equipment, and the like.
  • the memory 51 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • memory 51 may include memory located remotely from processor 50, which may be connected to a computer device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the input device 52 may be configured to receive input numerical or character information and to generate key signal input related to user settings and function control of the computer device.
  • the output device 53 may include a display device such as a display screen.
  • FIG. 19 is a schematic structural diagram of an image processing device provided by an embodiment of the present application.
  • the image processing device 10 includes a memory 11 , a processor 12 and a communication interface 13 .
  • the memory 11 , the processor 12 and the communication interface 13 are directly or indirectly electrically connected to each other to realize data transmission or interaction. For example, these elements may be electrically connected to each other through one or more communication buses or signal lines.
  • the memory 11 may be configured to store software programs and modules, such as program instructions/modules corresponding to the image processing methods provided in the embodiments of the present application, and the processor 12 executes various functions by executing the software programs and modules stored in the memory 11. applications and data processing.
  • the communication interface 13 can be used for signaling or data communication with other node devices.
  • the image processing apparatus 10 may have a plurality of communication interfaces 13 in the present application.
  • the memory 11 can be, but is not limited to, random access memory (Random Access Memory, RAM), read-only memory (Read Only Memory, ROM), programmable read-only memory (Programmable Read-Only Memory, PROM), erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), etc.
  • the processor 12 may be an integrated circuit chip with signal processing capability.
  • the processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the image processing device 10 may also implement a display function through a GPU, a display screen, and an application processor.
  • the GPU is a microprocessor for image processing, which connects the display screen and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 12 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the above-mentioned image processing device 10 can be, but is not limited to, a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an augmented reality (Augmented Reality, AR)/virtual reality (Virtual Reality, VR) device, a notebook computer, an ultra-mobile personal computer (Ultra-Mobile Personal Computer, UMPC), a netbook, a personal digital assistant (Personal Digital Assistant, PDA) or another terminal; the embodiments of the present application do not impose any restrictions on the specific type of the image processing device.
  • the structures illustrated in the embodiments of the present application do not constitute a limitation on the image processing apparatus 10 .
  • the image processing apparatus 10 may include more or less components than shown, or combine some components, or separate some components, or arrange different components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • Embodiments of the present application also provide a computer-readable storage medium storing a computer program, where the computer program is used to execute an image processing method when executed by a computer processor.
  • the method includes: acquiring a light source model of an optical image; and processing the light source area of the optical image according to the light source model.
  • the computer-readable storage medium storing the computer program provided by the embodiment of the present application is not limited to the above method operations, and can also perform related operations in the image processing method provided by any embodiment of the present application.
  • the present application can be implemented by means of software and general-purpose hardware, and can also be implemented by hardware. Based on this understanding, the technical solution of the present application can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium, such as a floppy disk, a ROM, a RAM, a flash memory (FLASH), a hard disk or an optical disc of a computer, and includes a plurality of instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods of the multiple embodiments of the present application.
  • the multiple units and modules included are only divided according to functional logic, but the division is not limited to the above, as long as the corresponding functions can be realized; the names of the multiple functional units are only for the convenience of distinguishing them from each other, and are not intended to limit the protection scope of the present application.

Abstract

Disclosed in the embodiments of the present invention are an image processing method and apparatus, a device and a medium. The method comprises: obtaining a light source model of a target image; and processing a light source region of the target image according to the light source model.

Description

Image processing method, apparatus, device and medium
This application claims priority to the Chinese patent application with application number 202010647193.5 filed with the China Patent Office on July 7, 2020 and to the Chinese patent application with application number 202011043175.2 filed with the China Patent Office on September 28, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the technical field of image processing, for example, to an image processing method, apparatus, device, and medium.
Background
With the progress of society and the development of the economy, the functions of mobile terminals for capturing and displaying images, in addition to Internet surfing, have attracted more and more attention from users, and users have put forward higher requirements for the image processing of mobile terminals.
When a mobile terminal captures an image in a scene with little light, the captured image is dark. The brightness enhancement solutions in the related art can only increase the brightness of the image as a whole and cannot properly adjust the light source display effect in an image or video.
Summary
Embodiments of the present application provide an image processing method, apparatus, device, and medium, so as to adjust the display effect of a light source in a video or image and improve the degree of freedom and diversity of the display effect of the video or image.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring a light source model of a target image; and
processing a light source area of the target image according to the light source model.
Optionally, acquiring the light source model of the target image includes: acquiring the target image, and determining a light source area in the target image; and constructing a light source model in the light source area.
Processing the light source area of the target image according to the light source model includes: determining an opacity parameter of the light source area according to the light source model; and obtaining a display texture in the light source area according to the original image texture in the light source area, the preselected light source texture in the light source area, and the opacity parameter of the light source area.
Optionally, acquiring the light source model of the target image includes: acquiring a light source model of an optical image, where the light source model represents a light source shape and a light source position in the optical image.
Processing the light source area of the target image according to the light source model includes: obtaining light source information to be added according to the light source model and a preset mapping curve, where the light source information to be added represents a pixel added value of each pixel point among all the pixel points in the light source area of the optical image, and the light source area is an image area determined by the light source shape and the light source position; and updating the pixel value of each pixel point in the light source area according to the light source information to be added to obtain a target processed image.
In a second aspect, an embodiment of the present application further provides an image processing apparatus, including: a light source model acquisition module, configured to acquire a light source model of an optical image; and a light source area processing module, configured to process a light source area of the optical image according to the light source model.
Optionally, the light source model acquisition module includes: a light source area determination unit, configured to acquire a target image and determine a light source area in the target image; and a light source model construction unit, configured to construct a light source model in the light source area.
The light source area processing module includes: an opacity parameter determination unit, configured to determine an opacity parameter of the light source area according to the light source model; and a light source area adjustment unit, configured to obtain a display texture in the light source area according to the original image texture in the light source area, the preselected light source texture in the light source area, and the opacity parameter of the light source area.
Optionally, the light source model acquisition module is configured to acquire a light source model of an optical image, where the light source model represents a light source shape and a light source position in the optical image.
The light source area processing module includes: an information processing unit, configured to obtain light source information to be added according to the light source model and a preset mapping curve, where the light source information to be added represents a pixel added value of each pixel point in the light source area, and the light source area is an image area determined by the light source shape and the light source position; and an image updating unit, configured to update the pixel value of each pixel point in the light source area according to the light source information to be added to obtain a target processed image.
In a third aspect, an embodiment of the present application further provides a computer device, including: one or more processors; and a memory configured to store one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method described in any embodiment.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium storing a computer program, where the program, when executed by a processor, implements the image processing method described in any embodiment.
Description of Drawings
In order to illustrate the technical solutions of the embodiments of the present application, the accompanying drawings used in the embodiments are briefly introduced below. The following drawings show only some embodiments of the present application and therefore should not be regarded as limiting the scope.
FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present application;
FIG. 2 is a flowchart of another image processing method provided by an embodiment of the present application;
FIG. 3 is an example diagram of the identification of a light source area provided by an embodiment of the present application;
FIG. 4 is a flowchart of another image processing method provided by an embodiment of the present application;
FIG. 5 is an example diagram of a light source model provided by an embodiment of the present application;
FIG. 6 is an example diagram of the calculation parameters of an opacity parameter provided by an embodiment of the present application;
FIG. 7 is a flowchart of another image processing method provided by an embodiment of the present application;
FIG. 8 is a flowchart of another image processing method provided by an embodiment of the present application;
FIG. 9A is a schematic diagram of an optical image before being processed by the image processing method provided by an embodiment of the present application;
FIG. 9B is a schematic diagram of a target processed image obtained after the optical image in FIG. 9A is processed by the image processing method provided by an embodiment of the present application;
FIG. 10 is a flowchart of acquiring a light source model of an optical image provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of the division of an optical image provided by an embodiment of the present application;
FIG. 12 is a flowchart of another image processing method provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of obtaining transparency degree information provided by an embodiment of the present application;
FIG. 14 is a flowchart of updating the pixel value of each pixel point in a light source area according to light source information to be added to obtain a target processed image provided by an embodiment of the present application;
FIG. 15 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application;
FIG. 16 is a schematic structural diagram of another image processing apparatus provided by an embodiment of the present application;
FIG. 17 is a schematic structural diagram of another image processing apparatus provided by an embodiment of the present application;
FIG. 18 is a schematic structural diagram of a computer device provided by an embodiment of the present application;
FIG. 19 is a schematic structural diagram of an image processing device provided by an embodiment of the present application.
Detailed Description
The present application will be described below with reference to the accompanying drawings and embodiments. The embodiments described here are only used to explain the present application, not to limit it. For ease of description, the drawings show only some, but not all, of the structures related to the present application.
Embodiment 1
FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present application. The method may be performed by the image processing apparatus provided in any embodiment of the present application; the apparatus may be composed of hardware and/or software and may generally be integrated in a computer device, which may be, for example, an intelligent mobile terminal.
As shown in FIG. 1, the image processing method provided in this embodiment includes the following steps.
S10. Acquire a light source model of a target image.
Optionally, the light source model is a model used to simulate the shape of the illumination range of a light source in the target image.
S20. Process a light source area of the target image according to the light source model.
Optionally, the light source area may be determined according to the light source model, or the light source area in the target image may be determined by detecting the brightness value of each pixel point among all the pixel points in the target image. For example, the brightness value of each pixel point in the target image is obtained, pixel points whose brightness values are greater than a set brightness threshold are screened out, and the largest connected area formed by these pixel points is used as the light source area.
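As a rough, non-authoritative illustration of this thresholding approach, the following Python sketch screens out bright pixels and keeps the largest connected region; the threshold value and the OpenCV-based implementation are assumptions rather than part of the patent text.

```python
import cv2
import numpy as np

def largest_bright_region(image_bgr, brightness_threshold=200):
    """Return a boolean mask of the largest connected area of bright pixels."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)        # per-pixel brightness
    bright = (gray > brightness_threshold).astype(np.uint8)   # screen out bright pixels
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(bright)
    if num_labels <= 1:                                        # no bright pixels found
        return np.zeros_like(bright, dtype=bool)
    # label 0 is the background; pick the largest remaining component by area
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return labels == largest
```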
In the image processing method provided by this embodiment, the light source model of an image is acquired and the light source area of the image is processed using the light source model, so that the light source display effect of the light source area in the image can be adjusted and thus improved.
Embodiment 2
With the continuous development of information technology, multimedia live streaming has attracted people's attention because of its novel form and rich content. In order to improve the live video effect, multimedia live streaming software usually has a video retouching function. If the multimedia live streaming software can provide rich lighting display effects, the live video effect will be improved.
An image relighting method driven by big data can achieve the effect of relighting an outdoor input image at a specified time. However, this method is complicated to implement and has a long image processing cycle; it is only suitable for processing images and not for processing videos, for example, real-time processing of live video. Moreover, this method provides a single relighting effect, which is not suitable for the complex environment of live streaming scenes or for the demand for display diversity in live streaming scenes.
FIG. 2 is a flowchart of an image processing method provided by an embodiment of the present application. This embodiment is applicable to the case of adjusting the display effect of a light source in an image or a video. The method may be executed by the image processing apparatus provided in any embodiment of the present application; the apparatus may be composed of hardware and/or software and may generally be integrated in a computer device, which may be, for example, an intelligent mobile terminal.
As shown in FIG. 2, the image processing method provided by this embodiment includes the following steps.
S110. Acquire a target image, and determine a light source area in the target image.
The target image refers to an image whose light source display effect needs to be adjusted; the target image may be a single image or a video image frame in a video. Optionally, a video image frame acquired in real time by live video software is used as the target image.
The light source area refers to the area in the target image where illumination is displayed when a light source illuminates an object.
Exemplarily, the light source area in the target image may be determined by detecting the brightness value of each pixel point among all the pixel points in the target image. For example, the brightness value of each pixel point in the target image is obtained, pixel points whose brightness values are greater than a set brightness threshold are screened out, and the largest connected area formed by these pixel points is used as the light source area.
As an optional implementation manner, determining the light source area in the target image includes: determining the light source area in the target image through a pre-trained light source detection model.
The training method of the light source detection model is as follows: an image sample set is obtained, where the image sample set includes a large number of image samples and each image sample is annotated with a light source area. Optionally, the light source area is annotated with a light source detection frame; the image sample set is trained with a target detection framework to obtain the light source detection model. Optionally, the image sample set is trained with the YOLO (You Only Look Once) target detection framework.
Inputting the target image into the light source detection model outputs the information of the light source area of the target image, for example, the coordinate information of the light source target frame used to identify the light source area. As shown in FIG. 3, block 21 identifies the target image, block 22 identifies the light source target frame, and the area within block 22 includes the light source area of the target image.
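For illustration only, the sketch below shows how such a detected light source target frame might be consumed in later steps. Here `box` stands for a hypothetical detector output in (x_min, y_min, x_max, y_max) pixel coordinates; the real model, its training and its output format are as described above and are not reproduced here.

```python
def light_source_box_center_and_extents(box):
    """Derive the frame center and its horizontal/vertical extents from a detected target frame.

    box: hypothetical detector output (x_min, y_min, x_max, y_max) in pixel coordinates.
    The extents approximate the first (horizontal) and second (vertical) lengths used later.
    """
    x_min, y_min, x_max, y_max = box
    center = ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)
    first_length = x_max - x_min    # horizontal span of the frame
    second_length = y_max - y_min   # vertical span of the frame
    return center, first_length, second_length
```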
S120. Construct a light source model in the light source area, and determine an opacity parameter of the light source area according to the light source model.
The light source model refers to a model for simulating the shape of the illumination range of a light source; for example, it may be a circular light source model or an elliptical light source model.
Optionally, the light source model is constructed with the center point of the light source area as its center. As shown in FIG. 3, a light source model 23 is constructed at the center point of the light source area 22, and the shape parameters of the light source model 23 may be determined according to the light source display range in the light source area 22.
The opacity parameter is used to indicate the degree of opacity of the pixel points in an area. If the opacity parameter of a pixel point is 0%, the pixel point is completely transparent (that is, invisible), while an opacity parameter of 100% means a completely opaque pixel point (that is, the original pixel is displayed). An opacity parameter between 0% and 100% allows the pixel point to show through the background, as if through glass (translucency). The opacity parameter can be expressed as a percentage or as a real number from 0 to 1.
The opacity parameter of each pixel point among all the pixel points in the light source area is determined according to the light source model, that is, the transparency of the light source at each pixel point in the light source area in an actual scene is simulated, so that the effect of light illuminating an object can be simulated. Optionally, the opacity parameter of the pixel points outside the light source model in the light source area is set to 0% or 0, and the opacity parameter of each pixel point inside the light source model in the light source area is set according to the pixel position of that pixel point in the light source model.
S130. Obtain a display texture in the light source area according to the original image texture in the light source area, the preselected light source texture in the light source area, and the opacity parameter of the light source area.
The original image texture refers to the original texture of the target image.
The preselected light source texture refers to a pre-selected light source texture, which is used to provide light sources of different colors so as to simulate different light source effects in the light source area of the target image.
Optionally, the preselected light source texture in the light source area is implemented by filling in different pixel values. Assuming that the light source effect of normal light is simulated in the light source area, the pixel red-green-blue (Red Green Blue, RGB) values of all the pixel points of the preselected light source texture are (1, 1, 1).
The light source area is the superimposed area of the original image texture and the preselected light source texture; the opacity parameter of the original image texture is 100% or 1, that is, completely opaque (the original pixels are displayed), and the opacity parameter of the preselected light source texture is consistent with the opacity parameter of the light source area determined according to the light source model.
Exemplarily, the preselected light source texture is adjusted using the matching opacity parameter, and the adjustment result is superimposed on the original image texture to obtain the display texture in the light source area. In this scheme, however, the superposition of pixel values during texture superimposition may overflow, so that the correct texture cannot be displayed.
As an optional implementation manner, obtaining the display texture in the light source area according to the original image texture in the light source area, the preselected light source texture in the light source area, and the opacity parameter of the light source area includes: superimposing the pixel values of the original image texture in the light source area and the preselected light source texture in the light source area, and correcting the superposition result to obtain a superimposed texture in the light source area; adjusting the superimposed texture using the opacity parameter of the light source area to obtain a pending texture; and compensating the pending texture using the original image texture in the light source area to obtain the display texture in the light source area.
Taking one pixel point in the light source area as an example, denote the pixel RGB value of this pixel point in the original image texture as D (D ≤ 1) and its pixel RGB value in the preselected light source texture as S (S ≤ 1), and superimpose S and D. During the superposition, the superposition sum of the pixel RGB values may be greater than 1, which causes the pixel value to overflow so that the correct pixel RGB value cannot be obtained. Therefore, the superposition result of S and D is corrected so that the superposition sum of the pixel RGB values is always less than or equal to 1. For example, if the superposition sum of the pixel RGB values of a pixel point is greater than 1, the superposition sum of the pixel RGB values of that pixel point is set to 1. That is, the pixel RGB value of this pixel point in the superimposed texture in the light source area is obtained as T = S + D, with T ≤ 1.
The pixel RGB value of this pixel point in the superimposed texture is adjusted using the opacity parameter A of the pixel point to obtain the pixel RGB value A*T of the pixel point in the pending texture. Since the opacity parameter of the original image texture included in the superimposed texture is 100% or 1, the original pixel value should be displayed; therefore, the pixel RGB value A*T of the pixel point in the pending texture needs to be compensated using the pixel RGB value D of the pixel point in the original image texture to obtain the pixel RGB value F of the pixel point in the display texture in the light source area.
Since the opacity parameter A of the pixel point has already been used to adjust the pixel RGB value of the pixel point, the result (1-A)*D of adjusting the pixel RGB value D of the pixel point in the original image texture with (1-A) can be used to compensate the pixel RGB value A*T of the pixel point in the pending texture.
That is, the pixel RGB value F of this pixel point in the display texture in the light source area is calculated as follows:
T = S + D, T ≤ 1;
F = A*T + (1-A)*D.
The opacity parameter A of the pixel point is the opacity parameter of that pixel point in the light source area determined according to the light source model. The opacity parameter of a pixel point differs according to its pixel position in the light source area.
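As a non-authoritative illustration of the formulas above, the following NumPy sketch applies them per pixel, assuming the original texture, the preselected light source texture and the opacity map are arrays with values normalized to [0, 1]:

```python
import numpy as np

def display_texture(original, light_texture, opacity):
    """Apply T = S + D (corrected so that T <= 1) and F = A*T + (1 - A)*D per pixel."""
    D = original.astype(np.float32)            # original image texture, RGB values in [0, 1]
    S = light_texture.astype(np.float32)       # preselected light source texture, RGB values in [0, 1]
    A = opacity.astype(np.float32)[..., None]  # opacity parameters, broadcast over the RGB channels
    T = np.minimum(S + D, 1.0)                 # superimposed texture, corrected against overflow
    return A * T + (1.0 - A) * D               # display texture inside the light source area
```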
In order to increase the processing speed of the target image, S130 includes: obtaining, through a graphics processing unit (Graphics Processing Unit, GPU), the display texture in the light source area according to the original image texture in the light source area, the preselected light source texture in the light source area, and the opacity parameter of the light source area.
Using the parallel processing capability of the GPU, multiple pixel points in the light source area can be processed at the same time, so as to increase the rate of calculating the pixel RGB values F of multiple pixel points in the display texture in the light source area.
Optionally, two GPUs are created, which are used to process the original image texture and the preselected light source texture respectively.
By combining the display texture in the light source area with the original image texture outside the light source area, a processed image corresponding to the target image can be obtained, in which the light source display effect of the light source area has been adjusted.
In the technical solution provided by the embodiment of the present application, after the light source area in the target image is determined, a light source model is constructed in the light source area, which can simulate light source illumination ranges of various shapes; determining the opacity parameter of the light source area according to the light source model can simulate the effect of light illuminating an object; and the obtained display texture in the light source area is determined according to the original image texture in the light source area, the preselected light source texture and the opacity parameter, so different preselected light source textures lead to different display textures in the light source area. In this way, the adjustment of the light source display effect in an image is realized, which greatly enriches application scenarios and improves the degree of freedom and diversity of the image display effect. Meanwhile, the above technical solution is simple to implement, has a short processing cycle, and is suitable for online real-time video such as live video.
Embodiment 3
FIG. 4 is a flowchart of another image processing method provided by an embodiment of the present application. This embodiment is described on the basis of the above embodiments, where constructing a light source model in the light source area includes: detecting a first length of the light source area in the horizontal direction and a second length of the light source area in the vertical direction; when the first length is equal to the second length, constructing, in the light source area, a circular light source model with the center point of the light source area as its center and the first length as its diameter; and when the first length is not equal to the second length, constructing, in the light source area, an elliptical light source model with the center point of the light source area as its center and the first length and the second length as its major and minor axes.
As shown in FIG. 4, the image processing method provided in this embodiment includes the following steps.
S210. Acquire a target image, and determine a light source area in the target image.
S220. Detect a first length of the light source area in the horizontal direction and a second length of the light source area in the vertical direction.
Detecting the first length of the light source area in the horizontal direction and the second length in the vertical direction, that is, detecting the area span of the light source area in the horizontal direction and in the vertical direction, makes it possible to determine the shape of the light source model.
Optionally, the first length of the light source area in the horizontal direction and the second length in the vertical direction are determined by detecting the brightness values of a plurality of pixel points. Referring to FIG. 5, for example, the length of the line segment AB is the first length of the light source area in the horizontal direction, and the length of the line segment CD is the second length of the light source area in the vertical direction.
S230. Determine whether the first length and the second length are equal; if the first length and the second length are equal, execute S240; if the first length and the second length are not equal, execute S250.
S240. In the light source area, construct a circular light source model with the center point of the light source area as its center and the first length as its diameter, and then execute S260.
When the first length is equal to the second length, a circular light source model can be constructed in the light source area, and the diameter of the light source model is the first length (or the second length). The center of the light source model (that is, the light source focus) is set at the center point of the light source area, such as point O shown in FIG. 5.
The expression of the circular light source model is as follows:
(x - x1)^2 + (y - y1)^2 = (e/2)^2
(x1, y1) are the coordinates of point O, and e is the diameter (the first length or the second length) of the light source model.
S250. In the light source area, construct an elliptical light source model with the center point of the light source area as its center and the first length and the second length as its major and minor axes, and then execute S260.
When the first length is not equal to the second length, an elliptical light source model can be constructed in the light source area. The axis length of the light source model in the horizontal direction is the first length and the axis length in the vertical direction is the second length; the length of the major axis is the longer of the first length and the second length, and the length of the minor axis is the shorter of the two. The center of the light source model (that is, the light source focus) is set at the center point of the light source area, such as point O shown in FIG. 5.
As shown in FIG. 5, the light source model 23 is a constructed elliptical light source model, whose expression is as follows:
(x - x1)^2/(c/2)^2 + (y - y1)^2/(d/2)^2 = 1
(x1, y1) are the coordinates of point O, c is the length of the line segment AB (that is, the first length), and d is the length of the line segment CD (that is, the second length).
When the light source area is identified by a light source target frame (generally a rectangular box), the center of the light source model may also be set at the center point of the light source target frame. In this case, the first length and the second length are respectively the range lengths of the light source area in the horizontal direction and in the vertical direction within the light source target frame.
In this step, multiple light source models can be constructed to suit different image scenes.
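For illustration only (not the patent's implementation), the sketch below constructs such a light source model from the detected first and second lengths, assuming the circle and ellipse are parameterized by their center and half-lengths as in the expressions above:

```python
def build_light_source_model(center, first_length, second_length):
    """Construct a circular or elliptical light source model from the region extents."""
    x1, y1 = center
    if first_length == second_length:
        # circle: (x - x1)^2 + (y - y1)^2 = (e/2)^2
        return {"shape": "circle", "center": (x1, y1), "radius": first_length / 2.0}
    # ellipse: (x - x1)^2/(c/2)^2 + (y - y1)^2/(d/2)^2 = 1
    return {
        "shape": "ellipse",
        "center": (x1, y1),
        "semi_axis_x": first_length / 2.0,   # half of the first (horizontal) length
        "semi_axis_y": second_length / 2.0,  # half of the second (vertical) length
    }
```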
S260. Determine the opacity parameter of the light source area according to the light source model.
In this step, the opacity parameter of each pixel point in the light source area is determined according to the light source model. The opacity parameter of each pixel point is related to the positional relationship between that pixel point and the light source model.
As an implementation manner, determining the opacity parameter of the light source area according to the light source model includes: setting the opacity parameter of the pixel points outside the light source model in the light source area to a first target constant value, where the first target constant value is used to represent full transparency; and setting the opacity parameter of each pixel point inside the light source model in the light source area according to the distance between that pixel point and the pixel point at the center of the light source model, where the smaller the distance between a pixel point inside the light source model and the center pixel point, the smaller the value of the opacity parameter of that pixel point.
In the light source area, in order to simulate the brightness distribution of the light source, the pixel points are divided into two types: pixel points outside the light source model and pixel points inside the light source model. The opacity parameter of the pixel points outside the light source model is set to the first target constant value representing full transparency, such as 0 or 0%, that is, the pixel values of these pixel points are fully transparent. The opacity parameter of a pixel point inside the light source model is related to the coordinate position of that pixel point: the closer a pixel point is to the center of the light source model, the smaller the value of its opacity parameter, that is, the brighter the light source is displayed; the opacity parameters of the pixel points inside the light source model are between 0 and 1 or between 0% and 100%.
As an optional implementation manner, the opacity parameter of a pixel point inside the light source model in the light source area can be calculated according to the following formula:
Figure PCTCN2021104709-appb-000003
where A is the opacity parameter of the pixel point; a is the distance from the pixel point to the major axis or the horizontal diameter of the light source model; b is the distance from the pixel point to the minor axis or the vertical diameter of the light source model; a′ is the distance from the pixel point to the boundary of the light source model along the minor axis direction or the vertical diameter direction; and b′ is the distance from the pixel point to the boundary of the light source model along the major axis direction or the horizontal diameter direction.
This implementation provides a simplified way to calculate the opacity parameter: the value of the opacity parameter of a pixel point is calculated according to the proportion of the distance from the pixel point to the major axis of the light source model and the proportion of the distance from the pixel point to the minor axis of the light source model, or according to the proportion of the distance from the pixel point to the horizontal diameter of the light source model and the proportion of the distance from the pixel point to the vertical diameter of the light source model, so as to simulate the transparency degree of that pixel point.
As shown in FIG. 6, the light source target frame 22 is used to identify the light source area, and an elliptical light source model 23 is constructed within the light source target frame 22 (that is, the light source area). In the light source target frame 22, the opacity parameter of any pixel point M outside the light source model 23 is set to 0 or 0%, and the opacity parameter A_N of any pixel point N inside the light source model 23 is calculated by the following formula:
Figure PCTCN2021104709-appb-000004
where NP1 is the distance from the pixel point N to the major axis of the light source model (that is, a), NQ1 is the distance from the pixel point N to the minor axis of the light source model (that is, b), NP2 is the distance from the pixel point N to the boundary of the light source model along the minor axis direction (that is, a′), and NQ2 is the distance from the pixel point N to the boundary of the light source model along the major axis direction (that is, b′).
If a circular light source model is constructed in the light source target frame 22 (that is, the light source display area), NP1 is the distance from the pixel point N to the horizontal diameter of the light source model, NQ1 is the distance from the target pixel point N to the vertical diameter of the light source model, NP2 is the distance from the target pixel point N to the boundary of the light source model along the vertical diameter direction, and NQ2 is the distance from the target pixel point N to the boundary of the light source model along the horizontal direction.
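To make the geometry concrete, the sketch below computes NP1, NQ1, NP2 and NQ2 for a point inside an axis-aligned ellipse whose major axis is assumed to be horizontal. How these distances are combined into A is defined by the patent's formula image, which is not reproduced here, so the `opacity_parameter` combination below (an average of the two distance ratios) is only an assumed placeholder for illustration.

```python
import math

def opacity_distances(point, center, semi_axis_x, semi_axis_y):
    """Distances of pixel N to the axes (a, b) and to the boundary along them (a', b')."""
    x, y = point
    x1, y1 = center
    a = abs(y - y1)   # NP1: distance to the major (horizontal) axis
    b = abs(x - x1)   # NQ1: distance to the minor (vertical) axis
    # half-chords of the ellipse through N, in the vertical and horizontal directions
    half_chord_v = semi_axis_y * math.sqrt(max(0.0, 1.0 - ((x - x1) / semi_axis_x) ** 2))
    half_chord_h = semi_axis_x * math.sqrt(max(0.0, 1.0 - ((y - y1) / semi_axis_y) ** 2))
    a_prime = half_chord_v - a   # NP2: remaining distance to the boundary along the minor-axis direction
    b_prime = half_chord_h - b   # NQ2: remaining distance to the boundary along the major-axis direction
    return a, b, a_prime, b_prime

def opacity_parameter(point, center, semi_axis_x, semi_axis_y):
    a, b, a_prime, b_prime = opacity_distances(point, center, semi_axis_x, semi_axis_y)
    if (a + a_prime) == 0.0 or (b + b_prime) == 0.0:
        return 0.0
    # Assumed combination for illustration only; the patent defines A through its own formula.
    return 0.5 * (a / (a + a_prime) + b / (b + b_prime))
```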
S270. Superimpose the pixel values of the original image texture in the light source area and the preselected light source texture in the light source area, and correct the superposition result to obtain a superimposed texture in the light source area.
S280. Adjust the superimposed texture using the opacity parameter of the light source area to obtain a pending texture, and compensate the pending texture using the original image texture in the light source area to obtain a display texture in the light source area.
After the display texture in the light source area is obtained according to the original image texture in the light source area, the preselected light source texture in the light source area, and the opacity parameter of the light source area, the method further includes: setting both the opacity parameter of the original image texture outside the light source area and the opacity parameter of the display texture in the light source area to a second target constant value, where the second target constant value is used to represent opacity; and generating a target processed image corresponding to the target image according to the original image texture outside the light source area, the opacity parameter of the original image texture outside the light source area, the display texture in the light source area, and the opacity parameter of the display texture in the light source area.
In this embodiment, the target image is identified in the RGBA color space, where R represents red (Red), G represents green (Green), B represents blue (Blue), and A represents the opacity parameter.
Since the area outside the light source area in the target image does not need any adjustment of the light source display effect, the area outside the light source area still displays the original RGB texture. After the display RGB texture in the light source area is obtained, the opacity parameter of the original RGB texture outside the light source area and the opacity parameter of the display RGB texture in the light source area can both be set to the second target constant value used to represent opacity, such as 1 or 100%.
By combining the original RGB texture outside the light source area and the display RGB texture in the light source area according to the opacity parameters, a target processed image corresponding to the target image can be obtained; the adjustment of the light source display effect has been completed in this target processed image.
The display effect texture obtained by combining the original RGB texture outside the light source area and the display RGB texture in the light source area can be rendered by a pre-created GPU, and after the rendering is completed, the target processed image is generated for display.
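A minimal CPU-side sketch of this final composition (an illustration rather than the GPU pipeline described above): the original texture is kept outside the light source area, the display texture replaces it inside, and both regions are treated as fully opaque.

```python
import numpy as np

def compose_target_image(original, display, light_source_mask):
    """Combine the original texture outside the light source area with the display texture inside it."""
    target = original.copy()
    target[light_source_mask] = display[light_source_mask]  # both regions use opacity 1 (fully opaque)
    return target
```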
For the parts not explained in this embodiment, please refer to the foregoing embodiments; details are not repeated here.
The above technical solution realizes the adjustment of the display effect of a light source in a video or image, improves the degree of freedom and diversity of the display effect of the video or image, and is applicable to scenes with various changing light sources. Meanwhile, the above technical solution can also be combined with the GPU rendering pipeline, so that the change of the light source display effect is suitable for real-time video processing.
Embodiment 4
FIG. 7 is a flowchart of another image processing method provided by an embodiment of the present application. This embodiment provides an implementation manner on the basis of the above embodiments.
As shown in FIG. 7, the image processing method provided in this embodiment includes the following steps.
S310. Acquire a video image frame in real time through live video software, and use the video image frame as the target image.
The video image frame is identified in the RGBA color space.
S320. Input the target image into a pre-trained light source detection model to obtain the coordinate information of the light source target frame in the target image.
S330. Create a light source model in the light source target frame according to the coordinate information of the light source target frame and the ranges of the light source area in the horizontal direction and the vertical direction within the light source target frame.
According to the coordinate information of the light source target frame, the light source focus of the light source model, that is, the center point (x1, y1) of the light source target frame, is determined, and the light source model is constructed based on the light source focus (x1, y1).
When the ranges of the light source area in the horizontal direction and the vertical direction within the light source target frame are equal, a circular light source model is constructed; when the ranges of the light source area in the horizontal direction and the vertical direction within the light source target frame are not equal, an elliptical light source model is constructed.
Taking the construction of an elliptical light source model as an example, the expression is as follows:
(x - x1)^2/(c/2)^2 + (y - y1)^2/(d/2)^2 = 1, where c ≠ d.
S340. Set the opacity parameter of the pixel points outside the light source model in the light source target frame to 0, and calculate the opacity parameters of the pixel points inside the light source model in the light source target frame according to the target formula.
The target formula is as follows:
Figure PCTCN2021104709-appb-000006
where A is the opacity parameter of the pixel point; a is the distance from the pixel point to the major axis or the horizontal diameter of the light source model; b is the distance from the pixel point to the minor axis or the vertical diameter of the light source model; a′ is the distance from the pixel point to the boundary of the light source model along the minor axis direction or the vertical diameter direction; and b′ is the distance from the pixel point to the boundary of the light source model along the major axis direction or the horizontal diameter direction.
此步骤确定的光源目标框中多个像素点的不透明度参数将参与对光源区域像素的处理过程。The opacity parameters of multiple pixels in the light source target frame determined in this step will participate in the processing of pixels in the light source area.
S350、通过预先创建的两个GPU分别渲染原图纹理和预选光源纹理。S350. Render the original image texture and the preselected light source texture through two pre-created GPUs respectively.
预选光源纹理通过填充不同的像素RGB值,可以显示不同的颜色的光源效果,进而模拟不同的光源效果,可以根据实际需要填充不同的光源效果相应的像素RGB值。The pre-selected light source texture can display the light source effects of different colors by filling different pixel RGB values, and then simulate different light source effects, and can fill the corresponding pixel RGB values of different light source effects according to actual needs.
S360、通过GPU将光源区域内的原图纹理与光源区域内的预选光源纹理进行像素值叠加,并对叠加结果进行修正,得到光源区域内的叠加纹理;使用光源区域的不透明度参数对叠加纹理进行调整,得到待定纹理,并使用光源区域内的原图纹理对待定纹理进行补偿,得到光源区域内的显示纹理。S360 , superimposing pixel values of the original image texture in the light source area and the preselected light source texture in the light source area through the GPU, and correcting the superposition result to obtain the superimposed texture in the light source area; use the opacity parameter of the light source area to superimpose the superimposed texture Make adjustments to obtain the pending texture, and use the original image texture in the light source area to compensate the pending texture to obtain the display texture in the light source area.
光源区域(或者光源目标框内的区域)为原图纹理和预选光源纹理的叠加区域,需要对像素RGB值进行处理,光源区域内的多个像素点的不透明度参数通过S340确定;目标图像中光源区域之外的区域无需进行处理,显示原图纹理即可,且这些区域的不透明度参数设置为1。The light source area (or the area within the light source target frame) is the superimposed area of the original image texture and the preselected light source texture, and the pixel RGB values need to be processed. The opacity parameters of multiple pixels in the light source area are determined by S340; The areas outside the light source area do not need to be processed, and the original image texture can be displayed, and the opacity parameter of these areas is set to 1.
在本步骤中,基于GPU的并行计算能力,提高了光源区域内的叠加纹理以及光源区域内的显示纹理的计算速率。In this step, based on the parallel computing capability of the GPU, the calculation rate of the superimposed texture in the light source area and the display texture in the light source area is improved.
以光源区域内的一个像素点为例,该像素点的像素RGB值的处理过程如下:T=S+D,T≤1;F=A*T+(1-A)*D;其中,A为像素点的不透明度参数;S为像素点在预选光源纹理中的像素RGB值,D为像素点在原图纹理中的像素RGB值,T为像素点在叠加纹理中的像素RGB值,F为像素点在显示纹理中的像素RGB值。Taking a pixel in the light source area as an example, the processing process of the pixel RGB value of the pixel is as follows: T=S+D, T≤1; F=A*T+(1-A)*D; where A is The opacity parameter of the pixel; S is the pixel RGB value of the pixel in the pre-selected light source texture, D is the pixel RGB value of the pixel in the original image texture, T is the pixel RGB value of the pixel in the superimposed texture, F is the pixel The RGB value of the pixel in the display texture.
S370. Set the opacity parameter of the original image texture outside the light source area and the opacity parameter of the display texture in the light source area both to 1, combine the original image texture outside the light source area with the display texture in the light source area, and generate the target processed image corresponding to the target image after rendering through the GPU pipeline.
本实施例未解释之处请参见前述实施例,在此不再赘述。For the unexplained part of this embodiment, please refer to the foregoing embodiment, which will not be repeated here.
实施例五Embodiment 5
With the progress of society and the development of the economy, the image capture and image display functions of mobile terminals, beyond basic Internet access, have attracted more and more attention from users, and users have put forward higher requirements for the image processing of mobile terminals. When a mobile terminal captures an image in a scene with little light, the captured image is dark. Although the related art provides brightness enhancement solutions, they can only raise the brightness of the image as a whole and cannot change the light source scene under which many images were captured; that is, the technical solutions in the related art cannot add light sources to visual data. Therefore, how to provide richer optical display information for visual data is a problem that needs to be solved urgently.
本申请实施例提供一种图像处理方法。请参见图8,图8为本申请实施例提供的另一种图像处理方法的流程图。The embodiments of the present application provide an image processing method. Please refer to FIG. 8 , which is a flowchart of another image processing method provided by an embodiment of the present application.
如图8所示,该图像处理方法可以包括以下步骤。As shown in FIG. 8 , the image processing method may include the following steps.
S31,获取光学图像的光源模型。S31, a light source model of the optical image is acquired.
The light source model characterizes the light source shape and the light source position in the optical image. The light source shape may be a square, a circle, an ellipse, a semicircle, and so on; for example, the image area composed of the "non-black pixels" in the optical image (that is, the pixels whose RGB values, i.e. pixel RGB values, are not 0) may be fitted to obtain the light source shape. The light source position may be represented by the pixel coordinates of each of the pixels in the optical image corresponding to the light source shape; alternatively, the optical image may be divided into grids, and the coordinate position corresponding to at least one grid that contains "non-black pixels" may be used as the light source position.
S32,根据光源模型和预设映射曲线,得到待添加光源信息。S32, according to the light source model and the preset mapping curve, obtain the information of the light source to be added.
该待添加光源信息表征光源区域中的每个像素点的像素添加值,光源区域为光源形状和光源位置确定的图像区域。The light source information to be added represents the pixel added value of each pixel point in the light source area, and the light source area is an image area determined by the shape of the light source and the position of the light source.
For example, the pixel addition value may be determined according to the light source intensity and the RGB value to be added for the pixel: the light source intensity determines the brightness information of each pixel in the target image, and the RGB value to be added determines the color information of each pixel in the target image. For instance, a light source may be simulated and the corresponding light source information to be added may be applied to the light source area, so that the image display information corresponding to the simulated light source (the light source information to be added) is added to the optical image. This helps enrich the optical display information of the image and provides more visual display effects.
S33,根据待添加光源信息更新光源区域中的每个像素点的像素值,得到目标图像。S33: Update the pixel value of each pixel in the light source area according to the light source information to be added to obtain a target image.
Since the pixels of the optical image outside the light source area are all "black pixels" (for example, pixels whose RGB values are all 0), there is no light source at the positions of the pixels outside the light source area. Using the image processing method provided by the embodiments of the present application helps enrich the optical display information of the image and provides more visual display effects.
To facilitate understanding of the processing effect of the image processing method provided by the above embodiments, please refer to FIG. 9A and FIG. 9B. FIG. 9A is a schematic diagram of an optical image before processing by the image processing method provided by the embodiment of the present application. FIG. 9B is a schematic diagram of the target processed image obtained after the optical image in FIG. 9A is processed by the image processing method provided by the embodiment of the present application. Comparing the optical image with the target processed image, the two differ greatly in the image area that has a light source (the light source area): the light source information to be added has been added to the light source area of the optical image, so that the target image has richer optical display information. That is to say, the image processing method provided by the embodiments of the present application provides more visual display effects for the image.
Since the image may contain target objects such as a complex natural environment or people, and these target objects may have received only a little illumination during image capture, the captured optical image is dark, which makes it inconvenient to observe and identify these target objects. On the basis of FIG. 8, this embodiment of the present application provides a possible implementation; please refer to FIG. 10, which is a flowchart of acquiring a light source model of an optical image provided by an embodiment of the present application. S31, acquiring the light source model of the optical image, may include the following steps.
S311,将光学图像进行网格划分得到网格图像。S311 , dividing the optical image into a grid to obtain a grid image.
例如,可以按照M*N对光学图像进行网格划分,得到具有M*N个网格的网格图像,M和N均为大于或等于2的正整数。如图11所示,图11为本申请实施例提供的一种光学图像的划分示意图,将光学图像进行网格划分,得到图11示出的具有6*7个网格的网格图像。For example, the optical image may be meshed according to M*N to obtain a meshed image with M*N meshes, where M and N are both positive integers greater than or equal to 2. As shown in FIG. 11 , FIG. 11 is a schematic diagram of dividing an optical image according to an embodiment of the present application. The optical image is divided into grids to obtain a grid image with 6*7 grids shown in FIG. 11 .
S312,获取网格图像中的具有光源的至少一个网格。S312: Acquire at least one grid having a light source in the grid image.
For example, continuing to refer to FIG. 11, the leg area of the "dog" shown in FIG. 11 is an image area with a light source (that is, the blank grid outlined in FIG. 11), so that blank grid in the grid image is taken as one of the above grids having a light source.
S313,将至少一个网格对应的方框坐标位置作为光源位置。S313, taking the coordinate position of the box corresponding to at least one grid as the light source position.
That is to say, the light source position is the box coordinate position corresponding to the at least one grid having a light source in the grid image. If there is only one grid with a light source in the grid image, the box coordinate position of that grid is used as the light source position. If there are multiple grids with light sources in the grid image, the box coordinate positions corresponding to these grids are used as the light source position, which covers two possible situations: if the grids are continuously distributed, they are integrated to obtain one light source position; if the grids are discretely distributed, the light source positions of the discretely distributed grid areas are obtained.
S314,检测至少一个网格得到光源形状。S314, at least one grid is detected to obtain the shape of the light source.
以多个网格连续分布为例,该多个网格围成的区域可能是一个正方形区域,但是光源形状可能是正方形区域中的一个圆形、梯形、椭圆形等。Taking the continuous distribution of multiple grids as an example, the area enclosed by the multiple grids may be a square area, but the shape of the light source may be a circle, a trapezoid, an ellipse, etc. in the square area.
S315,利用光源形状和光源位置,构建光源模型。S315 , constructing a light source model by using the light source shape and the light source position.
For example, if the light source shape determined by the grids is a perfect circle and the brightness gradually decreases from the center point of the light source area to its edge points, the virtual light source corresponding to the optical image may be a point light source. Using the image processing method provided by the embodiments of the present application, this point light source can be enhanced, and pixel values (the light source information to be added) can be added to the light source area of the optical image to improve the darker parts of the optical image, which facilitates the observation and recognition of the target objects in the optical image.
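A minimal sketch of the grid-based localization in S311 to S314 is given below, assuming a simple non-black-pixel test for deciding whether a grid contains a light source and a bounding box of the lit grids as the light source position; both choices, and all names, are illustrative assumptions rather than the detection actually used.

    import numpy as np

    def detect_light_source_region(image, m=6, n=7, threshold=0.0):
        """Illustrative grid-based light source localization (steps S311-S314).

        image: RGB array in [0, 1], shape (H, W, 3)
        Returns the bounding box (row0, col0, row1, col1) covering all grids
        that contain at least one "non-black" pixel, or None if no grid qualifies.
        """
        h, w, _ = image.shape
        lit_grids = []
        for i in range(m):
            for j in range(n):
                cell = image[i * h // m:(i + 1) * h // m, j * w // n:(j + 1) * w // n]
                if np.any(cell.sum(axis=-1) > threshold):   # grid has a light source
                    lit_grids.append((i, j))
        if not lit_grids:
            return None
        rows = [g[0] for g in lit_grids]
        cols = [g[1] for g in lit_grids]
        # box coordinate position of the lit grids, used here as the light source position (S313)
        return (min(rows) * h // m, min(cols) * w // n,
                (max(rows) + 1) * h // m, (max(cols) + 1) * w // n)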
在对光学图像进行像素值添加时,若单纯的对每个像素点进行同样的亮色和色彩添加,则会导致得到的图像模糊不清。本申请实施例在图8的基础上,给出一种可能的实现方式,请参见图12,图12为本申请实施例提供的另一种图像处理方法的流程图。S32:根据光源模型和预设映射曲线,得到待添加光源信息,可以包括如下步骤。When adding pixel values to an optical image, if the same brightness and color are simply added to each pixel, the resulting image will be blurred. On the basis of FIG. 8 , this embodiment of the present application provides a possible implementation manner. Please refer to FIG. 12 . FIG. 12 is a flowchart of another image processing method provided by the embodiment of the present application. S32 : obtaining information about the light source to be added according to the light source model and the preset mapping curve, which may include the following steps.
S321,将透亮程度信息与预设映射曲线进行匹配,得到光源区域的待添加亮度信息。S321: Match the transparency level information with a preset mapping curve to obtain the brightness information to be added of the light source area.
The transparency information characterizes the brightness information of the light source area, and the brightness information to be added characterizes the brightness addition value of each pixel in the light source area. For example, the transparency information may be obtained from the light source intensity corresponding to the optical image, and the light source intensity may be obtained as follows: obtain the length of the first line segment between any pixel in the light source area and the center point of the light source area, and the length of the second line segment between the center point and the edge point of the light source area that lies on the extension line of the first line segment, where the edge point and that pixel are on the same side of the center point; then divide the length of the first line segment by the length of the second line segment to obtain the light source intensity corresponding to that pixel. Once the light source intensity of each pixel in the light source area (that is, the transparency of each pixel) is obtained, the above transparency information is obtained.
S322,响应于光源改变需求,得到光源区域的待添加色彩信息。S322 , obtaining color information to be added in the light source area in response to the change requirement of the light source.
该待添加色彩信息表征光源区域中的每个像素点的色彩添加值。例如,该光源改变需求可以是根据不同的光学图像进行识别得到的,也可以是用户通过操作指令设置的,如,可以是为光源区域中的每个像素点添加不同的RGB像素值。The color information to be added represents the color added value of each pixel in the light source area. For example, the light source change requirement may be identified according to different optical images, or may be set by the user through an operation instruction, for example, different RGB pixel values may be added to each pixel in the light source area.
S323,根据待添加亮度信息和待添加色彩信息,得到所述每个像素点的像素添加值。S323, according to the brightness information to be added and the color information to be added, obtain the pixel addition value of each pixel point.
使用待添加光源信息对光学图像的光源区域中的每个像素点进行亮度和色彩进行调整,得到目标处理图像。相较于光学图像,该目标处理图像的视觉数据会更加清晰和明亮,使得对图像中的目标对象识别会更加准确。The brightness and color of each pixel in the light source area of the optical image are adjusted using the light source information to be added to obtain the target processing image. Compared with the optical image, the visual data of the target processed image will be clearer and brighter, making the target object recognition in the image more accurate.
针对于上述的透亮程度信息,给出一种可能的获取方式。请参见图13,图13为本申请实施例提供的一种获取透亮程度信息的示意图,图13示出的光源区域为矩形区域,上述的透亮程度信息可以通过以下方式获取。For the above-mentioned transparency information, a possible acquisition method is given. Please refer to FIG. 13 . FIG. 13 is a schematic diagram of obtaining transparency degree information provided by an embodiment of the present application. The light source area shown in FIG. 13 is a rectangular area, and the above-mentioned transparency degree information can be obtained in the following manner.
(1)获得光源区域的中心点O和第一边缘点b。(1) Obtain the center point O and the first edge point b of the light source area.
该第一边缘点为光源区域的边界上的任意一个像素点。The first edge point is any pixel point on the boundary of the light source area.
(2)获得第一距离与第二距离。(2) Obtain the first distance and the second distance.
The first distance is the distance between the center point O and the first edge point b, the second distance is the distance between the area center point O and the first area point a, and the first area point a is any pixel on the line segment formed by the center point O and the first edge point b. For example, continuing to refer to FIG. 13, the first distance is Ob and the second distance is Oa.
(3)将第二距离Oa除以第一距离Ob,得到第一区域点a的透亮程度信息。(3) Divide the second distance Oa by the first distance Ob to obtain the transparency information of the point a in the first area.
例如,以U表示第一区域点a的透亮程度,则U=Oa/Ob。For example, if U represents the degree of transparency of the point a in the first area, then U=Oa/Ob.
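For the rectangular light source area of FIG. 13, the ratio Oa/Ob can be computed per pixel as in the sketch below; the helper name and its parameters (rectangle center and half-extents) are illustrative assumptions.

    import math

    def transparency(center, point, half_w, half_h):
        """U = Oa / Ob for a pixel inside a rectangular light source area.

        Oa is the distance from the center O to the pixel a, and Ob is the
        distance from O to the boundary point b on the extension of Oa.
        """
        dx, dy = point[0] - center[0], point[1] - center[1]
        if dx == 0 and dy == 0:
            return 0.0                      # at the center itself, U is taken as 0
        # scale factor that carries point a to the rectangle boundary along the same ray
        s = min(half_w / abs(dx) if dx else math.inf,
                half_h / abs(dy) if dy else math.inf)
        Oa = math.hypot(dx, dy)
        Ob = s * Oa
        return Oa / Ob                      # equivalently 1 / s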
In S321, matching the transparency information with the preset mapping curve to obtain the brightness information to be added for the light source area, the brightness addition value of the first area point a can be determined using the following formula: U' = U^α, where U is the above transparency of the first area point a, α is the light source intensity determined by the preset mapping curve, α ≥ 0, and U' is the brightness addition value of the first area point a.
For example, if U = 0.04 and α = 2, then U' = 0.04^2.0 = 0.0016; if U = 0.04 and α = 0.5, then U' = 0.04^0.5 = 0.2.
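A one-line sketch of this mapping, with the two numerical cases above used as checks (the helper name is an assumption):

    def brightness_addition(U, alpha):
        """U' = U ** alpha, where alpha >= 0 is the light source intensity from the preset mapping curve."""
        return U ** alpha

    # brightness_addition(0.04, 2.0) -> 0.0016
    # brightness_addition(0.04, 0.5) -> 0.2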
Since directly adding the pixel addition value to the original image (the optical image) may cause errors in the visual data processing of the image, on the basis of FIG. 12 a possible implementation for updating the pixel value of each pixel in the light source area is given; please refer to FIG. 14, which is a flowchart, provided by an embodiment of the present application, of updating the pixel value of each pixel in the light source area according to the light source information to be added to obtain the target processed image. The above S33, updating the pixel value of each pixel in the light source area according to the light source information to be added to obtain the target image, may include the following steps.
S331,获得第一像素点的第一像素值。S331 , obtaining the first pixel value of the first pixel point.
该第一像素点为光源区域中的任意一个像素点。如,该第一像素点可以为图13示出的第一区域点a。The first pixel is any pixel in the light source area. For example, the first pixel point may be the first area point a shown in FIG. 13 .
S332,根据待添加光源信息确定第一像素点的第一像素添加值。S332: Determine the first pixel addition value of the first pixel point according to the light source information to be added.
该第一像素添加值包括第一亮度添加值和第一色彩添加值。如,该第一色彩添加值可以是(1,1,1);针对于光源区域中的每个像素点,为每个像素点设 置不同的色彩添加值可以模拟出不同的光源效果。例如,该第一亮度添加值可以用光源强度来表示。The first pixel addition value includes a first luminance addition value and a first color addition value. For example, the first color addition value may be (1, 1, 1); for each pixel in the light source area, setting different color addition values for each pixel can simulate different light source effects. For example, the first luminance addition value may be represented by the intensity of the light source.
S333,将第一像素值与第一色彩添加值相加,得到中间像素值。S333 , adding the first pixel value and the first color addition value to obtain an intermediate pixel value.
For example, let the first pixel value be V and the first color addition value be G; then the intermediate pixel value is H = V + G. In order to avoid "sudden bright spots" in the target image, the intermediate pixel value H, the first color addition value G, and the first pixel value V are all less than or equal to 1.
S334,在中间像素值小于或等于预设阈值的情况下,利用第一亮度添加值与中间像素值、第一像素值,得到第一像素点的第一目标像素值。S334 , when the intermediate pixel value is less than or equal to the preset threshold, obtain the first target pixel value of the first pixel point by using the first luminance addition value, the intermediate pixel value, and the first pixel value.
For example, when the intermediate pixel value H, the first color addition value G, and the first pixel value V are all less than or equal to 1, let the first brightness addition value be U'; then the first target pixel value I is:
I = U'*H + (1-U')*V
All pixels may share the same U' value, or some pixels in the light source area may have different U' values.
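Putting S333 and S334 together for a single pixel, with all values normalized to [0, 1] (the function name and the behavior when H exceeds the preset threshold are assumptions):

    def update_pixel(V, G, U_prime, threshold=1.0):
        """Target pixel value I for one pixel of the light source area.

        V: first pixel value, G: first color addition value,
        U_prime: first brightness addition value, all in [0, 1].
        """
        H = V + G                       # intermediate pixel value (S333)
        if H <= threshold:              # S334: blend when H does not exceed the preset threshold
            return U_prime * H + (1.0 - U_prime) * V
        return V                        # otherwise keep the original value (illustrative fallback)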
S335,在获取到光源区域中的每个像素点的目标像素值后,以所述第一目标像素值更新所述每个像素点的像素值,以得到目标图像。S335 , after acquiring the target pixel value of each pixel in the light source area, update the pixel value of each pixel with the first target pixel value to obtain a target image.
也就是说,可以通过每个像素点的光源强度和色彩添加值,模拟出不同的光源效果,有利于丰富图像的光学显示信息,提供更多的视觉展示效果。That is to say, different light source effects can be simulated through the light source intensity and color addition value of each pixel point, which is beneficial to enrich the optical display information of the image and provide more visual display effects.
Using the image processing method provided by the embodiments of the present application, the light source information corresponding to the optical image can be adjusted, the optical display information of the image can be enriched, and more visual display effects can be provided. In an optional embodiment, the user may also manually set a new light source: instead of detecting a light source, the coordinates, shape, and color of a simulated light source can be set directly through an operation instruction, the set light source is added to the optical image, and the user can then view the target image. For example, adding a red light source to the optical image means adding a red color-channel value to every pixel of the optical image to obtain the target processed image.
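For the manually set red light source mentioned above, a minimal sketch (NumPy and RGB channel order assumed) that adds a red color-channel value to every pixel of the optical image could look like this:

    import numpy as np

    def add_red_light(image, red_value=0.2):
        """Add a red channel value to every pixel; values stay clipped to [0, 1]."""
        out = image.astype(np.float32)
        out[..., 0] = np.clip(out[..., 0] + red_value, 0.0, 1.0)  # channel 0 assumed to be red
        return out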
实施例六Embodiment 6
图15是本申请实施例提供的一种图像处理装置的结构示意图。本实施例可适用于对图像或者视频中光源显示效果进行调整的情况,该装置可以采用软件和/或硬件的方式实现,并一般可集成在计算机设备中。如图15所示,该装置光源模型获取模块410和光源区域处理模块420。FIG. 15 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application. This embodiment may be applicable to the case of adjusting the display effect of the light source in the image or video, and the apparatus may be implemented by means of software and/or hardware, and may generally be integrated in computer equipment. As shown in FIG. 15 , the device has a light source model acquisition module 410 and a light source region processing module 420 .
光源模型获取模块410设置为获取目标图像的光源模型;光源区域处理模块420设置为根据所述光源模型对所述目标图像的光源区域进行处理。The light source model obtaining module 410 is configured to obtain the light source model of the target image; the light source region processing module 420 is configured to process the light source region of the target image according to the light source model.
图16是本申请实施例提供的另一种图像处理装置的结构示意图。如图16 所示,光源模型获取模块410包括:光源区域确定单元411和光源模型构建单元412,光源区域处理模块420包括:不透明参数确定单元421和光源区域调整单元422。FIG. 16 is a schematic structural diagram of another image processing apparatus provided by an embodiment of the present application. As shown in FIG. 16 , the light source model acquisition module 410 includes: a light source area determination unit 411 and a light source model construction unit 412 , and the light source area processing module 420 includes an opacity parameter determination unit 421 and a light source area adjustment unit 422 .
The light source area determination unit 411 is configured to acquire the target image and determine the light source area in the target image; the light source model construction unit 412 is configured to construct the light source model in the light source area; the opacity parameter determination unit 421 is configured to determine the opacity parameter of the light source area according to the light source model; and the light source area adjustment unit 422 is configured to obtain the display texture in the light source area according to the original image texture in the light source area, the preselected light source texture in the light source area, and the opacity parameter of the light source area.
In the technical solution provided by the embodiments of the present application, after the light source area in the target image is determined, a light source model is constructed in the light source area, which can simulate the illumination ranges of light sources of various shapes; determining the opacity parameter of the light source area according to the light source model can simulate the effect of light illuminating an object; and the resulting display texture in the light source area is determined according to the original image texture of the light source area, the preselected light source texture, and the opacity parameter, so that different preselected light source textures yield different display textures in the light source area. This realizes adjustment of the display effect of the light source in the image, greatly enriches application scenarios, and improves the degree of freedom and diversity of the image display effect. At the same time, the above technical solution is simple to implement and has a short processing period, and is therefore suitable for online real-time video such as live streaming.
In an optional implementation, the light source area adjustment unit 422 is configured to superimpose the pixel values of the original image texture in the light source area and the preselected light source texture in the light source area and correct the superposition result to obtain the superimposed texture in the light source area; adjust the superimposed texture using the opacity parameter of the light source area to obtain a pending texture; and compensate the pending texture with the original image texture in the light source area to obtain the display texture in the light source area.
在一种可选的实施方式中,光源区域确定单元411是设置为获取目标图像,并通过预先训练得到的光源检测模型,确定所述目标图像中的光源区域。In an optional implementation manner, the light source area determining unit 411 is configured to acquire a target image, and determine the light source area in the target image through a pre-trained light source detection model.
The light source model construction unit 412 is configured to detect the first length of the light source area in the horizontal direction and the second length of the light source area in the vertical direction; when the first length is equal to the second length, construct, within the light source area, a circular light source model centered at the center point of the light source area with the first length as the diameter; and when the first length is not equal to the second length, construct, within the light source area, an elliptical light source model centered at the center point of the light source area, with the longer of the first length and the second length as the long axis and the shorter of the two as the short axis.
The opacity parameter determination unit 421 is configured to set the opacity parameter of all pixels in the light source area outside the light source model to a first target constant value, where the first target constant value represents full transparency, and to set the opacity parameter of each pixel within the light source model in the light source area according to the distance between that pixel and the central pixel of the light source model; the smaller the distance between a pixel within the light source model and the central pixel, the smaller the value of that pixel's opacity parameter.
Optionally, the opacity parameter determination unit 421 is configured to set the opacity parameter of each pixel within the light source model in the light source area according to the distance between that pixel and the central pixel of the light source model in the following manner: the opacity parameter of each pixel within the light source model in the light source area is calculated according to the formula shown in Figure PCTCN2021104709-appb-000007, where A is the opacity parameter of the pixel, a is the distance from the pixel to the long axis or horizontal diameter of the light source model, b is the distance from the pixel to the short axis or vertical diameter of the light source model, a' is the distance from the pixel to the boundary of the light source model along the short-axis or vertical-diameter direction, and b' is the distance from the pixel to the boundary of the light source model along the long-axis or horizontal-diameter direction.
In a specific implementation, the light source area adjustment unit 422 is configured to obtain, through the GPU, the display texture in the light source area according to the original image texture in the light source area, the preselected light source texture in the light source area, and the opacity parameter of the light source area.
The above apparatus further includes a target processed image generation module configured to, after the light source area adjustment unit obtains the display texture in the light source area according to the original image texture in the light source area, the preselected light source texture in the light source area, and the opacity parameter of the light source area, set both the opacity parameter of the original image texture outside the light source area and the opacity parameter of the display texture in the light source area to a second target constant value, where the second target constant value represents opacity; and to generate the target processed image corresponding to the target image according to the original image texture outside the light source area, the opacity parameter of the original image texture outside the light source area, the display texture in the light source area, and the opacity parameter of the display texture in the light source area.
图17是本申请实施例提供的另一种图像处理装置的结构示意图。如图17所示,光源区域处理模块420可以包括:信息处理单元423和图像更新单元424。FIG. 17 is a schematic structural diagram of another image processing apparatus provided by an embodiment of the present application. As shown in FIG. 17 , the light source area processing module 420 may include: an information processing unit 423 and an image updating unit 424 .
The light source model acquisition module 410 is configured to acquire the light source model of an optical image. The light source model characterizes the light source shape and the light source position in the optical image.
信息处理单元423设置为根据光源模型和预设映射曲线,得到待添加光源信息。待添加光源信息表征光学图像的光源区域中的全部像素点中的每个像素点的像素添加值,光源区域为光源形状和光源位置确定的图像区域。The information processing unit 423 is configured to obtain the information of the light source to be added according to the light source model and the preset mapping curve. The light source information to be added represents the pixel added value of each pixel point in all the pixel points in the light source area of the optical image, and the light source area is an image area determined by the shape of the light source and the position of the light source.
图像更新单元424设置为根据待添加光源信息更新光源区域中的每个像素点的像素值,得到目标处理图像。The image updating unit 424 is configured to update the pixel value of each pixel in the light source area according to the light source information to be added, so as to obtain the target processing image.
In an optional embodiment, the information processing unit 423 is configured to match the transparency information with the preset mapping curve to obtain the brightness information to be added for the light source area, where the transparency information characterizes the brightness information of the light source area and the brightness information to be added characterizes the brightness addition value of each pixel in the light source area; obtain the color information to be added for the light source area in response to a light source change requirement, where the color information to be added characterizes the color addition value of each pixel in the light source area; and obtain the pixel addition value of each pixel in the light source area of the optical image according to the brightness information to be added and the color information to be added.
光源模型确定模块410、信息处理单元423和图像更新单元424可以协同实现实施例一或实施例五的图像处理方法及该方法的可能的子步骤。The light source model determination module 410 , the information processing unit 423 and the image updating unit 424 may cooperate to implement the image processing method of Embodiment 1 or Embodiment 5 and possible sub-steps of the method.
本申请实施例所提供的图像处理装置可执行本申请任意实施例所提供的图像处理方法,具备执行方法相应的功能模块。The image processing apparatus provided by the embodiment of the present application can execute the image processing method provided by any embodiment of the present application, and has functional modules corresponding to the execution method.
实施例七 Embodiment 7
图18是本申请实施例提供的一种计算机设备的结构示意图。如图18所示,该计算机设备包括处理器50、存储器51、输入装置52和输出装置53;计算机设备中处理器50的数量可以是一个或多个,图18中以一个处理器50为例;计算机设备中的处理器50、存储器51、输入装置52和输出装置53可以通过总线或其他方式连接,图18中以通过总线连接为例。FIG. 18 is a schematic structural diagram of a computer device provided by an embodiment of the present application. As shown in FIG. 18 , the computer device includes a processor 50, a memory 51, an input device 52 and an output device 53; the number of processors 50 in the computer device can be one or more, and one processor 50 is taken as an example in FIG. 18 ; The processor 50, the memory 51, the input device 52 and the output device 53 in the computer equipment may be connected by a bus or in other ways. In FIG. 18, the connection by a bus is taken as an example.
存储器51作为一种计算机可读存储介质,可用于存储软件程序、计算机可执行程序以及模块,如本申请实施例中的图像处理方法对应的程序指令/模块(例如,图15所示的图像处理装置中的光源模型获取模块410和光源区域处理模块420)。处理器50通过运行存储在存储器51中的软件程序、指令以及模块,从而执行计算机设备的多种功能应用以及数据处理,即实现上述的图像处理方法。As a computer-readable storage medium, the memory 51 can be used to store software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the image processing method in the embodiments of the present application (for example, the image processing method shown in FIG. 15 ). The light source model acquisition module 410 and the light source area processing module 420 in the device). The processor 50 executes various functional applications and data processing of the computer device by running the software programs, instructions and modules stored in the memory 51 , ie, implements the above-mentioned image processing method.
存储器51可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序;存储数据区可存储根据计算机设备的使用所创建的数据等。存储器51可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。在一些实例中,存储器51可包括相对于处理器50远程设置的存储器,这些远程存储器可以通过网络连接至计算机设备。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。The memory 51 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of computer equipment, and the like. The memory 51 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, memory 51 may include memory located remotely from processor 50, which may be connected to a computer device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
输入装置52可设置为接收输入的数字或字符信息,以及产生与计算机设备的用户设置以及功能控制有关的键信号输入。输出装置53可包括显示屏等显示设备。The input device 52 may be configured to receive input numerical or character information and to generate key signal input related to user settings and function control of the computer device. The output device 53 may include a display device such as a display screen.
实施例八Embodiment 8
An embodiment of the present application provides an image processing method, which is applied to an image processing device. Please refer to FIG. 19, which is a schematic structural diagram of an image processing device provided by an embodiment of the present application. The image processing device 10 includes a memory 11, a processor 12, and a communication interface 13. The memory 11, the processor 12, and the communication interface 13 are directly or indirectly electrically connected to each other to realize data transmission or interaction. For example, these elements may be electrically connected to each other through one or more communication buses or signal lines. The memory 11 may be configured to store software programs and modules, such as the program instructions/modules corresponding to the image processing method provided in the embodiments of the present application; the processor 12 executes various functional applications and data processing by running the software programs and modules stored in the memory 11. The communication interface 13 may be used for signaling or data communication with other node devices. In the present application, the image processing device 10 may have multiple communication interfaces 13.
存储器11可以是但不限于,随机存取存储器(Random Access Memory,RAM),只读存储器(Read Only Memory,ROM),可编程只读存储器(Programmable Read-Only Memory,PROM),可擦除只读存储器(Erasable Programmable Read-Only Memory,EPROM),电可擦除只读存储器(Electric Erasable Programmable Read-Only Memory,EEPROM)等。The memory 11 can be, but is not limited to, random access memory (Random Access Memory, RAM), read only memory (Read Only Memory, ROM), programmable read only memory (Programmable Read-Only Memory, PROM), erasable only memory Read memory (Erasable Programmable Read-Only Memory, EPROM), Electrical Erasable Programmable Read-Only Memory (Electric Erasable Programmable Read-Only Memory, EEPROM), etc.
处理器12可以是一种集成电路芯片,具有信号处理能力。该处理器可以是通用处理器,包括中央处理器(Central Processing Unit,CPU)、网络处理器(Network Processor,NP)等;还可以是数字信号处理器(Digital Signal Processing,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。The processor 12 may be an integrated circuit chip with signal processing capability. The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (Digital Signal Processing, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
该图像处理设备10还可以通过GPU,显示屏,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器12可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。The image processing device 10 may also implement a display function through a GPU, a display screen, and an application processor. The GPU is a microprocessor for image processing, which connects the display screen and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 12 may include one or more GPUs that execute program instructions to generate or alter display information.
上述的图像处理设备10可以是,但不限于手机、平板电脑、可穿戴设备、车载设备、增强现实(Augmented Reality,AR)/虚拟现实(Virtual Reality,VR)设备、笔记本电脑、超级移动个人计算机(Ultra-Mobile Personal Computer,UMPC)、上网本、个人数字助理(Personal Digital Assistant,PDA)等终端上,本申请实施例对图像处理设备的具体类型不作任何限制。The above-mentioned image processing device 10 can be, but is not limited to, a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, a super mobile personal computer (Ultra-Mobile Personal Computer, UMPC), netbook, personal digital assistant (Personal Digital Assistant, PDA) and other terminals, the embodiment of the present application does not impose any restrictions on the specific type of the image processing device.
本申请实施例示意的结构并不构成对图像处理设备10的限定。在本申请另一些实施例中,图像处理设备10可以包括比图示更多或更少的部件,或者组合一些部件,或者拆分一些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。The structures illustrated in the embodiments of the present application do not constitute a limitation on the image processing apparatus 10 . In other embodiments of the present application, the image processing apparatus 10 may include more or less components than shown, or combine some components, or separate some components, or arrange different components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
实施例十Embodiment 10
Embodiments of the present application also provide a computer-readable storage medium storing a computer program, where the computer program is used to execute an image processing method when executed by a computer processor. The method includes: acquiring a light source model of an optical image; and processing the light source area of the optical image according to the light source model.
本申请实施例所提供的存储有计算机程序的计算机可读存储介质,计算机程序不限于如上的方法操作,还可以执行本申请任意实施例所提供的图像处理方法中的相关操作。The computer-readable storage medium storing the computer program provided by the embodiment of the present application is not limited to the above method operations, and can also perform related operations in the image processing method provided by any embodiment of the present application.
From the above description of the embodiments, those skilled in the art can understand that the present application can be implemented by means of software together with general-purpose hardware, or by hardware. Based on this understanding, the technical solution of the present application can essentially be embodied in the form of a software product, and the computer software product can be stored in a computer-readable storage medium, such as a computer floppy disk, ROM, RAM, flash memory (FLASH), hard disk, or optical disc, and includes a plurality of instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods of the various embodiments of the present application.
In the above embodiments of the image processing apparatus, the units and modules included are only divided according to functional logic, but the division is not limited to the above as long as the corresponding functions can be realized; the names of the functional units are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of the present application.

Claims (20)

  1. 一种图像处理方法,包括:An image processing method, comprising:
    获取目标图像的光源模型;Obtain the light source model of the target image;
    根据所述光源模型对所述目标图像的光源区域进行处理。The light source area of the target image is processed according to the light source model.
  2. 如权利要求1所述的方法,其中,所述获取目标图像的光源模型,包括:获取所述目标图像,并确定所述目标图像中的光源区域;在所述光源区域内构建所述光源模型;The method according to claim 1, wherein the acquiring a light source model of the target image comprises: acquiring the target image and determining a light source area in the target image; constructing the light source model in the light source area ;
    wherein the processing of the light source area of the target image according to the light source model comprises: determining an opacity parameter of the light source area according to the light source model; and obtaining a display texture in the light source area according to an original image texture in the light source area, a preselected light source texture in the light source area, and the opacity parameter of the light source area.
  3. The method according to claim 2, wherein obtaining the display texture in the light source area according to the original image texture in the light source area, the preselected light source texture in the light source area, and the opacity parameter of the light source area comprises:
    将所述光源区域内的原图纹理与所述光源区域内的预选光源纹理进行像素值叠加,并对叠加结果进行修正,得到所述光源区域内的叠加纹理;superimposing the pixel value of the original image texture in the light source area and the preselected light source texture in the light source area, and correcting the superimposed result to obtain the superimposed texture in the light source area;
    使用所述光源区域的不透明度参数对所述叠加纹理进行调整,得到待定纹理;Using the opacity parameter of the light source area to adjust the superimposed texture to obtain a pending texture;
    使用所述光源区域内的原图纹理对所述待定纹理进行补偿,得到所述光源区域内的显示纹理。Compensating the pending texture by using the original image texture in the light source area to obtain a display texture in the light source area.
  4. 根据权利要求2所述的方法,其中,所述确定所述目标图像中的光源区域,包括:The method according to claim 2, wherein the determining the light source area in the target image comprises:
    通过预先训练得到的光源检测模型,确定所述目标图像中的光源区域。The light source area in the target image is determined by using the light source detection model obtained by pre-training.
  5. 根据权利要求4所述的方法,其中,所述在所述光源区域内构建所述光源模型,包括:The method according to claim 4, wherein the constructing the light source model in the light source area comprises:
    检测所述光源区域在水平方向上的第一长度,和所述光源区域在竖直方向上的第二长度;detecting a first length of the light source area in the horizontal direction, and a second length of the light source area in the vertical direction;
    在所述第一长度与所述第二长度相等的情况下,在所述光源区域内,以所述光源区域的中心点为中心,所述第一长度为直径,构建圆形的光源模型;In the case that the first length is equal to the second length, in the light source area, with the center point of the light source area as the center, and the first length as the diameter, a circular light source model is constructed;
    in a case where the first length is not equal to the second length, constructing, in the light source area, an elliptical light source model centered at the center point of the light source area, with the longer of the first length and the second length as the long axis and the shorter of the first length and the second length as the short axis.
  6. 根据权利要求5所述的方法,其中,所述根据所述光源模型确定所述光源区域的不透明度参数,包括:The method according to claim 5, wherein the determining the opacity parameter of the light source region according to the light source model comprises:
    设置所述光源区域中的所述光源模型之外的全部像素点的不透明度参数为第一目标常数值,所述第一目标常数值用于表示全透明;setting the opacity parameter of all the pixels in the light source area outside the light source model as a first target constant value, and the first target constant value is used to represent full transparency;
    根据所述光源区域中的所述光源模型之内的全部像素点中的每个像素点与所述光源模型的中心的像素点的距离的大小,设置所述每个像素点的不透明度参数;其中,所述光源模型之内的一个像素点与所述中心的像素点的距离越小,所述一个像素点的不透明度参数的值就越小。Set the opacity parameter of each pixel point according to the size of the distance between each pixel point in all the pixel points in the light source model in the light source area and the pixel point in the center of the light source model; Wherein, the smaller the distance between one pixel in the light source model and the central pixel, the smaller the value of the opacity parameter of the one pixel.
  7. The method according to claim 2, wherein obtaining the display texture in the light source area according to the original image texture in the light source area, the preselected light source texture in the light source area, and the opacity parameter of the light source area comprises:
    通过图形处理器GPU,根据所述光源区域内的原图纹理、所述光源区域内的预选光源纹理以及所述光源区域的不透明度参数,得到所述光源区域内的显示纹理。Through the graphics processor GPU, the display texture in the light source area is obtained according to the original image texture in the light source area, the preselected light source texture in the light source area, and the opacity parameter of the light source area.
  8. The method according to claim 2, after obtaining the display texture in the light source area according to the original image texture in the light source area, the preselected light source texture in the light source area, and the opacity parameter of the light source area, further comprising:
    setting the opacity parameter of the original image texture outside the light source area and the opacity parameter of the display texture in the light source area both to a second target constant value, the second target constant value being used to represent opacity;
    generating a target processed image corresponding to the target image according to the original image texture outside the light source area, the opacity parameter of the original image texture outside the light source area, the display texture in the light source area, and the opacity parameter of the display texture in the light source area.
  9. 如权利要求1所述的方法,其中,所述获取目标图像的光源模型包括:获取光学图像的光源模型;所述光源模型表征所述光学图像中的光源形状和光源位置;The method of claim 1, wherein the acquiring a light source model of the target image comprises: acquiring a light source model of an optical image; the light source model characterizes the light source shape and the light source position in the optical image;
    所述根据所述光源模型对所述目标图像的光源区域进行处理,包括:The processing of the light source area of the target image according to the light source model includes:
    obtaining light source information to be added according to the light source model and a preset mapping curve, the light source information to be added representing the pixel addition value of each pixel point among all the pixel points in the light source area of the optical image, the light source area being an image area determined by the light source shape and the light source position;
    根据所述待添加光源信息更新所述光源区域中的每个像素点的像素值,得 到目标处理图像。The pixel value of each pixel in the light source area is updated according to the light source information to be added to obtain a target processing image.
  10. 根据权利要求9所述的方法,其中,所述获取光学图像的光源模型,包括:The method according to claim 9, wherein the acquiring the light source model of the optical image comprises:
    将所述光学图像进行网格划分得到网格图像;performing grid division on the optical image to obtain a grid image;
    获取所述网格图像中的具有光源的至少一个网格;acquiring at least one grid with light sources in the grid image;
    将所述至少一个网格对应的方框坐标位置作为所述光源位置;Taking the coordinate position of the box corresponding to the at least one grid as the light source position;
    检测所述至少一个网格得到所述光源形状;Detecting the at least one grid to obtain the light source shape;
    利用所述光源形状和所述光源位置,构建所述光源模型。Using the light source shape and the light source position, the light source model is constructed.
  11. 根据权利要求9所述的方法,其中,所述根据所述光源模型和预设映射曲线,得到待添加光源信息,包括:The method according to claim 9, wherein the obtaining information of the light source to be added according to the light source model and the preset mapping curve comprises:
    matching transparency information with the preset mapping curve to obtain brightness information to be added for the light source area, the transparency information characterizing the brightness information of the light source area, and the brightness information to be added characterizing the brightness addition value of each pixel point in the light source area;
    响应于光源改变需求,得到所述光源区域的待添加色彩信息;所述待添加色彩信息表征光源区域中的每个像素点的色彩添加值;Obtaining color information to be added in the light source area in response to a change in the light source; the color information to be added represents the color addition value of each pixel in the light source area;
    根据所述待添加亮度信息和所述待添加色彩信息,得到所述光源区域中每个像素点的像素添加值。According to the to-be-added brightness information and the to-be-added color information, a pixel added value of each pixel in the light source area is obtained.
  12. 根据权利要求11所述的方法,其中,所述透亮程度信息通过以下方式获取:The method according to claim 11, wherein the transparency information is obtained by:
    获取所述光源区域内的每个像素点与所述光源区域的中心点之间的第一线段的长度;obtaining the length of the first line segment between each pixel in the light source area and the center point of the light source area;
    获取所述第一线段的延长线上的所述光源区域的边界点与所述中心点之间的第二线段的长度,其中,所述光源区域的边界点与所述每个像素点位于所述中心的同一侧;Obtain the length of the second line segment between the boundary point of the light source region and the center point on the extension of the first line segment, wherein the boundary point of the light source region and each pixel point are located at the same side of the center;
    将所述第一线段的长度除以所述第二线段的长度,得到所述每个像素点的透亮程度信息。Divide the length of the first line segment by the length of the second line segment to obtain the transparency information of each pixel point.
  13. 根据权利要求11所述的方法,其中,所述根据所述待添加光源信息更新所述光源区域中的每个像素点的像素值,得到目标处理图像,包括:The method according to claim 11, wherein the updating the pixel value of each pixel in the light source area according to the light source information to be added to obtain the target processing image comprises:
    获得所述光源区域中的每个像素点的像素值;obtaining the pixel value of each pixel in the light source area;
    determining the pixel addition value of each pixel point according to the light source information to be added, the pixel addition value of each pixel point comprising the brightness addition value of each pixel point and the color addition value of each pixel point;
    将所述每个像素点的像素值与所述每个像素点的色彩添加值相加,得到中间像素值;adding the pixel value of each pixel point and the color addition value of each pixel point to obtain an intermediate pixel value;
    in a case where the intermediate pixel value is less than or equal to a preset threshold, obtaining the target pixel value of each pixel point by using the brightness addition value of each pixel point, the intermediate pixel value, and the pixel value of each pixel point;
    以所述每个像素点的目标像素值更新所述每个像素点的像素值,以得到所述目标处理图像。The pixel value of each pixel is updated with the target pixel value of each pixel to obtain the target processed image.
  14. The method according to claim 13, wherein obtaining the target pixel value of each pixel point by using the brightness added value of the pixel point, the intermediate pixel value, and the pixel value of the pixel point comprises:
    obtaining the target pixel value of each pixel point according to the following formula:
    I = U′ * H + (1 − U′) * V
    wherein I is the target pixel value of the pixel point, U′ is the brightness added value of the pixel point, H is the intermediate pixel value, and V is the pixel value of the pixel point.
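Claims 13 and 14 can be sketched together as follows; the preset threshold of 255 and the choice to leave unchanged any pixel whose intermediate value exceeds it are assumptions, since the claims do not state how that branch is handled:

```python
import numpy as np

def update_light_source_area(pixels, brightness_add, color_add, threshold=255.0):
    """Sketch of claims 13-14: H = V + color_add, and where H <= threshold the
    target value is I = U' * H + (1 - U') * V. Pixels whose intermediate value
    exceeds the threshold are kept as the original V in this sketch."""
    V = pixels.astype(np.float32)                     # original pixel values, H x W x 3
    U = brightness_add[..., None].astype(np.float32)  # brightness added value, H x W x 1
    H = V + color_add                                 # intermediate pixel value
    I = U * H + (1.0 - U) * V                         # blend from claim 14
    out = np.where(H <= threshold, I, V)
    return np.clip(out, 0, 255).astype(pixels.dtype)
```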
  15. An image processing apparatus, comprising:
    a light source model acquisition module, configured to acquire a light source model of a target image; and
    a light source area processing module, configured to process a light source area of the target image according to the light source model.
  16. The apparatus according to claim 15, wherein the light source model acquisition module comprises:
    a light source area determination unit, configured to acquire the target image and determine the light source area in the target image; and
    a light source model construction unit, configured to construct the light source model within the light source area;
    and the light source area processing module comprises:
    an opacity parameter determination unit, configured to determine an opacity parameter of the light source area according to the light source model; and
    a light source area adjustment unit, configured to obtain a display texture within the light source area according to an original image texture within the light source area, a preselected light source texture within the light source area, and the opacity parameter of the light source area.
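How the opacity parameter combines the preselected light source texture with the original image texture is not spelled out in the claim; a conventional alpha blend is one plausible reading, sketched below as an assumption:

```python
import numpy as np

def blend_display_texture(original, light_texture, opacity):
    """One possible reading of the light source area adjustment unit: a
    per-pixel alpha blend of the preselected light source texture over the
    original image texture, weighted by the opacity parameter in [0, 1]."""
    alpha = np.clip(opacity, 0.0, 1.0)[..., None]  # H x W x 1
    out = alpha * light_texture.astype(np.float32) + (1.0 - alpha) * original
    return out.astype(original.dtype)
```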
  17. The apparatus according to claim 15, wherein the light source model acquisition module is configured to acquire a light source model of an optical image, the light source model representing a light source shape and a light source position in the optical image;
    and the light source area processing module comprises:
    an information processing unit, configured to obtain light source information to be added according to the light source model and a preset mapping curve, wherein the light source information to be added represents a pixel added value of each pixel point among all pixel points in the light source area, and the light source area is an image area determined by the light source shape and the light source position; and
    an image update unit, configured to update the pixel value of each pixel point in the light source area according to the light source information to be added, to obtain a target processed image.
  18. The apparatus according to claim 17, wherein the information processing unit is configured to:
    match transparency degree information with the preset mapping curve to obtain brightness information to be added for the light source area, wherein the transparency degree information represents brightness information of the light source area, and the brightness information to be added represents a brightness added value of each pixel point in the light source area;
    obtain, in response to a light source change requirement, color information to be added for the light source area, wherein the color information to be added represents a color added value of each pixel point in the light source area; and
    obtain a pixel added value of each pixel point in the light source area according to the brightness information to be added and the color information to be added.
  19. A computer device, comprising:
    at least one processor; and
    a memory, configured to store at least one program,
    wherein when the at least one program is executed by the at least one processor, the at least one processor is caused to implement the image processing method according to any one of claims 1-14.
  20. A computer-readable storage medium storing a computer program, wherein the program, when executed by a processor, implements the image processing method according to any one of claims 1-14.
PCT/CN2021/104709 2020-07-07 2021-07-06 Image processing method and apparatus, device and medium WO2022007787A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202010647193.5A CN113920299A (en) 2020-07-07 2020-07-07 Image processing method, apparatus, device and medium
CN202010647193.5 2020-07-07
CN202011043175.2A CN112153303B (en) 2020-09-28 2020-09-28 Visual data processing method and device, image processing equipment and storage medium
CN202011043175.2 2020-09-28

Publications (1)

Publication Number Publication Date
WO2022007787A1 true WO2022007787A1 (en) 2022-01-13

Family

ID=79553617

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/104709 WO2022007787A1 (en) 2020-07-07 2021-07-06 Image processing method and apparatus, device and medium

Country Status (1)

Country Link
WO (1) WO2022007787A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106464816A (en) * 2014-06-18 2017-02-22 佳能株式会社 Image processing apparatus and image processing method thereof
CN107197171A (en) * 2017-06-22 2017-09-22 西南大学 A kind of digital photographing processing method for adding intelligence software light source
CN109345602A (en) * 2018-09-28 2019-02-15 Oppo广东移动通信有限公司 Image processing method and device, storage medium, electronic equipment
US20200204775A1 (en) * 2018-12-19 2020-06-25 Canon Kabushiki Kaisha Image processing apparatus, image capturing apparatus, image processing method, and storage medium
CN112153303A (en) * 2020-09-28 2020-12-29 广州虎牙科技有限公司 Visual data processing method and device, image processing equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119853A (en) * 2022-01-26 2022-03-01 腾讯科技(深圳)有限公司 Image rendering method, device, equipment and medium
WO2023142607A1 (en) * 2022-01-26 2023-08-03 腾讯科技(深圳)有限公司 Image rendering method and apparatus, and device and medium

Similar Documents

Publication Publication Date Title
CN102254340B (en) Method and system for drawing ambient occlusion images based on GPU (graphic processing unit) acceleration
CN108305271B (en) Video frame image processing method and device
CN111145135B (en) Image descrambling processing method, device, equipment and storage medium
US20200302579A1 (en) Environment map generation and hole filling
US10719920B2 (en) Environment map generation and hole filling
CN112153303B (en) Visual data processing method and device, image processing equipment and storage medium
CN104157005A (en) Image-based HDR (high-dynamic range) illumination rendering method
US20230074060A1 (en) Artificial-intelligence-based image processing method and apparatus, electronic device, computer-readable storage medium, and computer program product
WO2022007787A1 (en) Image processing method and apparatus, device and medium
CN110177287A (en) A kind of image procossing and live broadcasting method, device, equipment and storage medium
US20240087219A1 (en) Method and apparatus for generating lighting image, device, and medium
CN114638950A (en) Method and equipment for drawing virtual object shadow
CN109427089B (en) Mixed reality object presentation based on ambient lighting conditions
TWI808321B (en) Object transparency changing method for image display and document camera
US10424236B2 (en) Method, apparatus and system for displaying an image having a curved surface display effect on a flat display panel
TWI678927B (en) Method for dynamically adjusting clarity of image and image processing device using the same
CN113920299A (en) Image processing method, apparatus, device and medium
WO2022132153A1 (en) Gating of contextual attention and convolutional features
CN106777725B (en) Verification method and device for microcirculation image algorithm
US20230316640A1 (en) Image processing apparatus, image processing method, and storage medium
US20230410406A1 (en) Computer-readable non-transitory storage medium having image processing program stored therein, image processing apparatus, image processing system, and image processing method
US20230317023A1 (en) Local dimming for artificial reality systems
Zhi-Hao et al. Study on vehicle safety image system optimization
CN115484504A (en) Image display method, image display device, electronic device, and storage medium
CN116934940A (en) Method for generating model map by using panorama based on ray tracing technology

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21838902

Country of ref document: EP

Kind code of ref document: A1