CN113628096A - Image processing method, device and equipment - Google Patents

Image processing method, device and equipment Download PDF

Info

Publication number
CN113628096A
CN113628096A (application CN202010379300.0A)
Authority
CN
China
Prior art keywords
target
dimension
components
coordinate
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010379300.0A
Other languages
Chinese (zh)
Inventor
朱宝华 (Zhu Baohua)
王杰 (Wang Jie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202010379300.0A priority Critical patent/CN113628096A/en
Publication of CN113628096A publication Critical patent/CN113628096A/en
Pending legal-status Critical Current

Classifications

    • G06T3/04
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture

Abstract

The application provides an image processing method, apparatus, and device, wherein the method includes: determining a target rectangular region from a target image based on a plurality of vertex coordinates of a region to be masked; determining a target luminance value according to the luminance values of all luminance components in the target rectangular region, and filling all luminance components in the target rectangular region according to the target luminance value; and determining a target chrominance value according to the chrominance values of all chrominance components in the target rectangular region, and filling all chrominance components in the target rectangular region according to the target chrominance value. With this technical solution, both the luminance and chrominance components of the target rectangular region can be filled, so that the target rectangular region remains close to the actual content while its image details can no longer be recognized.

Description

Image processing method, device and equipment
Technical Field
The present application relates to the field of video technologies, and in particular, to an image processing method, apparatus, and device.
Background
Mosaic processing is an image processing technique for masking certain regions of an image (hereinafter referred to as regions to be masked), so that the content of a region to be masked, which is typically a privacy-sensitive region, cannot be viewed, thereby protecting user privacy.
If the region to be masked is rectangular, the coordinates of its top-left corner point and its width and height are acquired. The region to be masked is then located in the image based on the top-left coordinates, width, and height, and the whole region is filled with a single luminance value to realize mosaic processing of the region.
As privacy masking is applied more widely, the shapes of regions to be masked become more varied, and a region to be masked may be an irregular polygon. For an irregular polygonal region to be masked, there is currently no effective way to implement mosaic processing, and the user experience is therefore poor.
Disclosure of Invention
The application provides an image processing method, which comprises the following steps:
determining a target rectangular region from a target image based on a plurality of vertex coordinates of a region to be masked;
determining a target luminance value according to the luminance values of all luminance components in the target rectangular region, and filling all luminance components in the target rectangular region according to the target luminance value;
and determining a target chrominance value according to the chrominance values of all chrominance components in the target rectangular region, and filling all chrominance components in the target rectangular region according to the target chrominance value.
Filling all luminance components in the target rectangular region according to the target luminance value includes: filling the first row of luminance components in the target rectangular region according to the target luminance value; and copying the luminance values of the first row of luminance components to the Nth row of luminance components in the target rectangular region;
filling all chrominance components in the target rectangular region according to the target chrominance value includes: filling the first row of chrominance components in the target rectangular region according to the target chrominance value; and copying the chrominance values of the first row of chrominance components to the Nth row of chrominance components in the target rectangular region;
where N takes each value from 2 to the maximum number of rows of the target rectangular region.
Each row in the target rectangular region includes P luminance components and P chrominance components;
copying the luminance values of the first row of luminance components to the Nth row of luminance components in the target rectangular region includes: copying, based on a parallel processing device, the luminance values of the P luminance components of the first row in parallel to the P luminance components of the Nth row in the target rectangular region;
copying the chrominance values of the first row of chrominance components to the Nth row of chrominance components in the target rectangular region includes: copying, based on the parallel processing device, the chrominance values of the P chrominance components of the first row in parallel to the P chrominance components of the Nth row in the target rectangular region.
Determining the target rectangular region from the target image based on the plurality of vertex coordinates of the region to be masked includes: determining a minimum coordinate and a maximum coordinate of a first dimension according to the plurality of vertex coordinates of the region to be masked;
determining at least two second-dimension scan lines of a second dimension based on a preset first step within the range of the minimum coordinate and the maximum coordinate, wherein the second dimension is perpendicular to the first dimension;
and, for each second-dimension scan line, determining the intersection coordinates of the second-dimension scan line with the boundary lines of the region to be masked, and determining the target rectangular region from the target image according to the intersection coordinates.
Determining the target rectangular region from the target image according to the intersection coordinates includes:
determining a minimum coordinate and a maximum coordinate of the second dimension based on a plurality of intersection coordinates of two adjacent second-dimension scan lines with the boundary lines of the region to be masked;
determining at least two first-dimension scan lines of the first dimension based on a preset second step within the range of the minimum coordinate and the maximum coordinate of the second dimension;
and determining the target rectangular region from the target image based on the regions enclosed by two adjacent first-dimension scan lines and two adjacent second-dimension scan lines.
After determining the minimum coordinate and the maximum coordinate of the second dimension based on the intersection coordinates of two adjacent second-dimension scan lines with the boundary lines of the region to be masked, the method further includes:
if the difference between the maximum coordinate and the minimum coordinate of the second dimension is smaller than the preset second step, adjusting the maximum coordinate of the second dimension so that the difference between the adjusted maximum coordinate and the minimum coordinate is not smaller than the preset second step; or, alternatively,
adjusting the minimum coordinate of the second dimension so that the difference between the maximum coordinate and the adjusted minimum coordinate is not smaller than the preset second step.
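The band adjustment described above can be sketched as follows; `adjust_band` is a hypothetical helper name, and this sketch implements only the first alternative (extending the maximum coordinate):

```python
def adjust_band(min_c, max_c, step):
    # If the band spanned by the second-dimension coordinates is narrower
    # than the preset second step, extend the maximum coordinate (the first
    # of the two alternatives in the text; the second would lower min_c
    # instead) so that the band spans at least one step.
    if max_c - min_c < step:
        max_c = min_c + step
    return min_c, max_c
```

For example, with a preset second step of 8, a band from 10 to 15 would be widened to run from 10 to 18.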
For example, after the target rectangular regions are determined from the target image according to the intersection coordinates, the method further includes:
if a vertex coordinate of the region to be masked is not located in any of the target rectangular regions, determining the target rectangular region that matches the vertex coordinate from all the target rectangular regions, and adjusting that target rectangular region so that the adjusted target rectangular region includes the vertex coordinate.
Before determining the target rectangular region from the target image based on the plurality of vertex coordinates of the region to be masked, the method further includes: acquiring a first polygonal region for mosaic processing of the target image; if all vertex coordinates of the first polygonal region are located within the target image, determining the first polygonal region as the region to be masked; and if only some vertex coordinates of the first polygonal region are located within the target image, determining a second polygonal region according to the first polygonal region and the target image, all vertex coordinates of the second polygonal region being located within the target image, and determining the second polygonal region as the region to be masked.
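One way to derive a second polygon whose vertices all lie inside the target image can be sketched as follows. `clip_polygon_to_image` is a hypothetical name, and simple per-vertex clamping is an assumption: the text only requires that the second polygon's vertices lie within the image, without fixing the construction.

```python
def clip_polygon_to_image(vertices, width, height):
    # Clamp every vertex into [0, width-1] x [0, height-1]; the result is
    # a polygon whose vertices all lie inside the target image.  Per-vertex
    # clamping is a simplification of the unspecified construction.
    return [(min(max(x, 0), width - 1), min(max(y, 0), height - 1))
            for x, y in vertices]
```

A vertex at (-5, 3) in an 8x8 image, for instance, would be moved to (0, 3).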
The present application provides an image processing apparatus, including: a determining module, configured to determine a target rectangular region from a target image based on a plurality of vertex coordinates of a region to be masked;
a luminance processing module, configured to determine a target luminance value according to the luminance values of all luminance components in the target rectangular region, and to fill all luminance components in the target rectangular region according to the target luminance value;
and a chrominance processing module, configured to determine a target chrominance value according to the chrominance values of all chrominance components in the target rectangular region, and to fill all chrominance components in the target rectangular region according to the target chrominance value.
The application provides an image processing device, including: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
determining a target rectangular region from a target image based on a plurality of vertex coordinates of a region to be masked;
determining a target luminance value according to the luminance values of all luminance components in the target rectangular region, and filling all luminance components in the target rectangular region according to the target luminance value;
and determining a target chrominance value according to the chrominance values of all chrominance components in the target rectangular region, and filling all chrominance components in the target rectangular region according to the target chrominance value.
According to the above technical solutions, when the region to be masked is an irregular polygon, target rectangular regions can be determined from the target image based on the vertex coordinates of the region to be masked, and mosaic processing can be performed on those target rectangular regions, thereby realizing mosaic processing of the region to be masked. This yields a good user experience: the mosaic processing is fast, has low complexity and strong real-time performance, and keeps private data well concealed. During mosaic processing, both the luminance and chrominance components of each target rectangular region can be filled, so that the region remains close to the actual content while its image details can no longer be recognized. The data can also be processed in parallel, saving processing resources and improving processing performance.
Drawings
To more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them.
FIG. 1 is a schematic flow chart diagram of an image processing method in one embodiment of the present application;
FIGS. 2A-2D are schematic diagrams of the filling of the luminance component in one embodiment of the present application;
FIGS. 3A-3F are schematic diagrams illustrating the determination of a target rectangular region in one embodiment of the present application;
FIGS. 4A-4C are schematic diagrams illustrating the determination of a target rectangular region in another embodiment of the present application;
FIGS. 5A-5C are schematic diagrams of the positions of the polygon area and the target image according to one embodiment;
FIG. 6 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 7 is a hardware configuration diagram of an image processing device in an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein is meant to encompass any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of the present application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. In addition, depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
Mosaic processing is an image processing technique that masks a region to be masked in a target image so that the region, typically a privacy-sensitive region, cannot be viewed, thereby protecting user privacy. As privacy masking is applied more widely, the shapes of regions to be masked become more varied. When a region to be masked is an irregular polygon (i.e., non-rectangular), a target rectangular region can be determined from the target image based on a plurality of vertex coordinates of the region to be masked, and mosaic processing can be performed on the target rectangular region, such as filling its luminance and chrominance components, so that the target rectangular region remains close to the actual content while its image details can no longer be recognized.
The technical solutions of the embodiments of the present application are described below with reference to specific embodiments.
Example 1: referring to fig. 1, a flow chart of an image processing method may include:
step 101, determining a target rectangular region from a target image based on a plurality of vertex coordinates of a region to be shielded, wherein the region to be shielded is an irregular polygon needing mosaic processing in the target image.
Illustratively, the target image is an image that needs mosaic processing; it may be a single frame or multiple frames. When there are multiple frames, each frame is processed in the same manner, so for convenience of description a single frame is taken as an example below.
Illustratively, the region to be masked may be an irregular polygon (e.g., non-rectangular), and the polygon may have any number of sides. For example, the region to be masked may be an irregular triangle, quadrangle, pentagon, hexagon, and so on; the shape of the region to be masked is not limited.
For example, there may be one or multiple target rectangular regions. When there are multiple, mosaic processing may be performed on each target rectangular region; the mosaic processing itself is described in steps 102 and 103 below and is not repeated here.
For example, based on a plurality of vertex coordinates of the region to be masked, a target rectangular region may be determined from the target image, and the determination process is not limited, and the determination process of the target rectangular region is described in the following embodiments.
Step 102, determining a target luminance value according to the luminance values of all luminance components in the target rectangular region, and filling all luminance components in the target rectangular region according to the target luminance value.
Illustratively, the target rectangular region is the minimum rectangular block of mosaic processing; that is, mosaic processing is performed separately for each target rectangular region. For each target rectangular region, the luminance values of all luminance components within the region may be acquired, and their average may be determined as the target luminance value. Of course, the target luminance value may also be determined in other manners, not limited to the average of the luminance values; the determination manner is not limited.
Then, all luminance components in the target rectangular region are filled according to the target luminance value; for example, all luminance components in the region are set to the target luminance value. As a result, the luminance values of all luminance components in the target rectangular region equal the target luminance value.
Illustratively, if the target image is in YUV format, it includes a luminance component and a chrominance component. If the target image is not in YUV format, for example if it is in RGB format, it may first be converted into YUV format (the conversion process is not limited), so that the target image includes a luminance component and a chrominance component. Since the target image includes luminance and chrominance components, a target rectangular region in the target image also includes luminance and chrominance components.
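As a reference point for the format conversion mentioned above, one common RGB-to-YUV mapping (full-range BT.601 coefficients) can be sketched as follows. This is only an illustrative assumption; the text does not fix a particular conversion matrix, and `rgb_to_yuv` is a hypothetical name.

```python
def rgb_to_yuv(r, g, b):
    # Full-range BT.601 RGB -> YUV (one common convention; other
    # standards, e.g. BT.709, use different coefficients).
    y = 0.299 * r + 0.587 * g + 0.114 * b          # luminance
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128   # blue-difference chroma
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128    # red-difference chroma
    return y, u, v
```

Under this convention, pure white (255, 255, 255) maps to maximum luminance with neutral chrominance (128, 128).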
In one possible embodiment, to fill all the luminance components in the target rectangular region, the following approach may be adopted: filling a first row of brightness components in the target rectangular region according to the target brightness value; then copying the brightness value of the first row of brightness components to the Nth row of brightness components in the target rectangular area; where N is each value between 2 and the maximum number of rows of the target rectangular area.
For example, assuming that each line within the target rectangular region includes P luminance components, the luminance values of the P luminance components of the first line may be copied in parallel to the P luminance components of the nth line within the target rectangular region based on the parallel processing device, that is, the luminance values of the P luminance components are copied in parallel.
For example, the "rows" here may literally be rows, i.e., the first row of luminance components is the first horizontal row and the Nth row is the Nth horizontal row; alternatively, they may be columns, i.e., the "first row" of luminance components is the first column and the "Nth row" is the Nth column. For convenience of description, the following takes "row" in the literal sense as an example.
Referring to fig. 2A, the 1st luminance component of the first row may be filled according to the target luminance value, then the 2nd luminance component of the first row, and so on, until the last (i.e., the Pth) luminance component of the first row is filled according to the target luminance value.
Then, referring to fig. 2B, the luminance values of the P luminance components of the first row are copied in parallel to the P luminance components of the second row. For example, the luminance value of the 1st luminance component of the first row is copied to the 1st luminance component of the second row, the luminance value of the 2nd luminance component of the first row is copied to the 2nd luminance component of the second row, and so on, until the luminance value of the Pth luminance component of the first row is copied to the Pth luminance component of the second row. These copy operations are performed in parallel; that is, only one copy operation is required to copy the luminance values of the P luminance components of the first row to the P luminance components of the second row.
Then, referring to fig. 2C, the luminance values of the P luminance components of the first row are copied in parallel to the P luminance components of the third row, and so on, until they are copied in parallel to the P luminance components of the last row of the target rectangular region, as shown in fig. 2D.
In the above manner, since the P luminance components of the other rows are quickly copied from the luminance values of the P luminance components of the first row, the number of fill operations with the target luminance value is reduced (the target luminance value need not be written into every luminance component individually), thereby saving processing resources.
For example, the parallel copy of the luminance values of the P luminance components in the first row to the P luminance components in the Nth row may be implemented by a parallel processing device, which may include, but is not limited to, an ARM (Advanced RISC Machines) processor or a DMA (Direct Memory Access) controller. Of course, the ARM processor and the DMA controller are only two examples; the parallel processing device is not limited, as long as it has parallel processing capability.
For example, taking an ARM processor as an example, the ARM processor (e.g., the NEON module of the ARM processor) has a function of processing multiple data in parallel, and can load 128 bits of data at a time to perform operations. Based on this, if the value of P is less than or equal to 128, the ARM processor can load the luminance values of P luminance components in the first row, that is, the luminance values of P luminance components are loaded simultaneously, and then the luminance values of P luminance components are copied to P luminance components in the nth row in parallel, and the entire row copy can be completed only by one copy operation.
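The fill-then-copy procedure of step 102 can be sketched in scalar form as follows. `mosaic_fill_luma` is a hypothetical name, the block is a list of luminance rows for one target rectangular region, and the whole-row copy on the last line stands in for the single NEON or DMA copy operation described above:

```python
def mosaic_fill_luma(block):
    # block: rows of luminance values for one target rectangular region
    # (modified in place).  The target luminance is taken as the integer
    # mean of all components, one of the options mentioned in the text.
    rows, cols = len(block), len(block[0])
    target = sum(sum(row) for row in block) // (rows * cols)
    block[0] = [target] * cols        # fill the first row component by component
    for n in range(1, rows):
        block[n] = block[0][:]        # one whole-row copy per remaining row
    return block
```

The chrominance fill of step 103 below follows the same pattern with chrominance values in place of luminance values.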
Step 103, determining a target chrominance value according to the chrominance values of all chrominance components in the target rectangular region, and filling all chrominance components in the target rectangular region according to the target chrominance value.
For example, for each target rectangular region, the chrominance values of all chrominance components within the region may be acquired, and their average may be determined as the target chrominance value. Of course, the target chrominance value may also be determined in other manners; the determination manner is not limited.
Then, all chrominance components in the target rectangular region are filled according to the target chrominance value; for example, all chrominance components in the region are set to the target chrominance value. As a result, the chrominance values of all chrominance components in the target rectangular region equal the target chrominance value.
In one possible implementation, all chrominance components within the target rectangular region may be filled as follows: fill the first row of chrominance components in the target rectangular region according to the target chrominance value; then copy the chrominance values of the first row of chrominance components to the Nth row of chrominance components in the target rectangular region, where N takes each value from 2 to the maximum number of rows of the target rectangular region.
For example, assuming that each row in the target rectangular region includes P chrominance components, the chrominance values of the P chrominance components in the first row may be copied in parallel to the P chrominance components in the Nth row based on the parallel processing device; that is, the chrominance values of the P chrominance components are copied in parallel.
For example, as with the luminance components, the "first row" of chrominance components may be the first row and the "Nth row" the Nth row; alternatively, the "first row" may be the first column and the "Nth row" the Nth column. For convenience of description, the following takes "row" as an example.
For example, the 1st chrominance component of the first row may be filled according to the target chrominance value, then the 2nd, and so on, until the last chrominance component of the first row is filled according to the target chrominance value. Then, the chrominance values of the P chrominance components of the first row are copied in parallel to the P chrominance components of the second row, and so on, until they are copied in parallel to the P chrominance components of the last row of the target rectangular region.
When the chrominance values of the P chrominance components in the first row are copied in parallel to the P chrominance components in the Nth row, this may likewise be implemented by a parallel processing device, such as an ARM processor or a DMA controller.
According to the above technical solutions, when the region to be masked is an irregular polygon, target rectangular regions can be determined from the target image based on the vertex coordinates of the region to be masked, and mosaic processing can be performed on those target rectangular regions, thereby realizing mosaic processing of the region to be masked. This yields a good user experience: the mosaic processing is fast, has low complexity and strong real-time performance, and keeps private data well concealed. During mosaic processing, both the luminance and chrominance components of each target rectangular region can be filled, so that the region remains close to the actual content while its image details can no longer be recognized. The data can also be processed in parallel, saving processing resources and improving processing performance.
Example 2: in step 101, a target rectangular region needs to be determined from the target image based on a plurality of vertex coordinates of the region to be masked, and in order to determine the target rectangular region, the following steps may be adopted:
step 10111, determining a minimum coordinate (a minimum coordinate of the vertex coordinates) and a maximum coordinate (a maximum coordinate of the vertex coordinates) of the first dimension according to the vertex coordinates of the region to be masked.
For example, the direction of the two-dimensional plane may be divided into a first dimension and a second dimension, and the second dimension is perpendicular to the first dimension. For example, when the first dimension is the lateral direction (i.e., horizontal direction), the second dimension is the longitudinal direction (i.e., vertical direction); when the first dimension is the longitudinal direction, the second dimension is the lateral direction. For convenience of description, in the following embodiments, the first dimension is taken as the longitudinal direction, the second dimension is taken as the transverse direction, and when the first dimension is the transverse direction and the second dimension is the longitudinal direction, the implementation manner is similar, and the subsequent processes are not repeated.
Referring to fig. 3A, assuming that the region to be masked is an irregular pentagon, it has vertex coordinates a1, a2, a3, a4, and a5. Since the first dimension is the longitudinal direction, the minimum coordinate of the first dimension is the vertex coordinate with the smallest longitudinal value, here a1, and the maximum coordinate of the first dimension is the vertex coordinate with the largest longitudinal value, here a3.
Based on the vertex coordinates of the region to be masked, the vertex coordinate with the smallest longitudinal value may be taken as the minimum coordinate of the first dimension, and the vertex coordinate with the largest longitudinal value as the maximum coordinate of the first dimension.
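The determination in step 10111 can be sketched as follows, assuming vertices are (x, y) pairs with y as the first (longitudinal) dimension; `first_dim_extent` is a hypothetical helper name:

```python
def first_dim_extent(vertices):
    # With the first dimension taken as the longitudinal (vertical)
    # direction, return the minimum and maximum y value over all
    # vertex coordinates of the region to be masked.
    ys = [y for _, y in vertices]
    return min(ys), max(ys)
```

For the pentagon above, this would return the longitudinal values of vertices a1 and a3.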
Step 10112, determining at least two second-dimension scanning lines of a second dimension based on a preset first step length in the range of the minimum coordinate of the first dimension and the maximum coordinate of the first dimension.
For example, within the range between the minimum and maximum coordinates of the first dimension, a scan line of the second dimension may be set at every preset first step; for ease of reference, such a line is called a second-dimension scan line. The preset first step may be configured arbitrarily according to experience and is not limited; for example, it may be 10 or 20, where a value of 10 means that a second-dimension scan line is set every 10 pixels.
Referring to fig. 3B, when the first dimension is the longitudinal direction and the second dimension is the lateral direction, the second-dimension scan lines are lateral scan lines. Since vertex coordinate a1 is the minimum coordinate of the first dimension and vertex coordinate a3 is the maximum coordinate of the first dimension, a plurality of second-dimension scan lines may be set within the range of vertex coordinate a1 and vertex coordinate a3, with the distance between two adjacent second-dimension scan lines being the preset first step length.
For example, a second-dimension scan line b1 passing through vertex coordinate a1 is set, and a second-dimension scan line b6 passing through vertex coordinate a3 is set. Then, the other second-dimension scan lines are set based on the preset first step length: the distance between second-dimension scan line b2 and second-dimension scan line b1 is the preset first step length, the distance between second-dimension scan line b3 and second-dimension scan line b2 is the preset first step length, and so on.
After the second-dimension scan line b5 is set, if the length between the second-dimension scan line b5 and the second-dimension scan line b6 is less than or equal to the preset first step length, no second-dimension scan line is set between the second-dimension scan line b5 and the second-dimension scan line b6, and thus a plurality of second-dimension scan lines are obtained.
In summary, a plurality of second-dimension scan lines can be obtained within the range of the minimum coordinate of the first dimension (e.g., vertex coordinate a1) and the maximum coordinate of the first dimension (e.g., vertex coordinate a3).
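The placement rule of steps 10111-10112 can be sketched as follows (a minimal illustration in Python; the function name and integer-ordinate representation are assumptions, not from the patent):

```python
def second_dim_scan_lines(y_min, y_max, step):
    """Ordinates of the second-dimension scan lines between the minimum
    and maximum coordinates of the first dimension (e.g. a1 and a3).

    A line is placed at y_min, then every `step` pixels; once the gap to
    y_max is <= step, no further intermediate line is set (the b5/b6 rule
    above) and the final line is placed at y_max itself."""
    lines = [y_min]
    while y_max - lines[-1] > step:
        lines.append(lines[-1] + step)
    lines.append(y_max)
    return lines

# e.g. a1 at ordinate 0, a3 at ordinate 47, preset first step length 10:
# the last intermediate line is 40, and since 47 - 40 = 7 <= 10, the next
# line is the final one through a3.
print(second_dim_scan_lines(0, 47, 10))  # → [0, 10, 20, 30, 40, 47]
```

The last gap may be shorter than the step, exactly as described for scan lines b5 and b6.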
Step 10113, for each second-dimension scan line, determining the coordinates of the intersection points of the second-dimension scan line and the boundary line of the region to be masked. Since a second-dimension scan line may intersect the boundary line of the region to be masked, the coordinates of those intersection points can be determined.
For example, referring to fig. 3C, the intersection of second-dimension scan line b1 with the boundary line of the region to be masked is intersection coordinate c1 (i.e., vertex coordinate a1), the intersections of second-dimension scan line b2 with the boundary line are intersection coordinates c2 and c3, the intersections of second-dimension scan line b3 with the boundary line are intersection coordinates c4 and c5, and so on.
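Step 10113 amounts to intersecting a horizontal line with each polygon edge. A minimal sketch (the vertex representation and the tie-handling at shared vertices are simplifying assumptions):

```python
def scan_line_intersections(polygon, y):
    """Abscissas where the horizontal second-dimension scan line at
    ordinate `y` crosses the boundary of the region to be masked.
    `polygon` is a list of (x, y) vertices in order; edges parallel to
    the scan line are skipped, and a scan line passing exactly through a
    vertex may report that vertex once per incident edge."""
    xs = []
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if y1 == y2:
            continue  # edge lies along a scan line; no single crossing point
        if min(y1, y2) <= y <= max(y1, y2):
            t = (y - y1) / (y2 - y1)        # linear interpolation along the edge
            xs.append(x1 + t * (x2 - x1))
    return sorted(xs)
```

For scan line b2 in fig. 3C, the two returned abscissas correspond to intersection coordinates c2 and c3.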
Step 10114, determining a target rectangular area from the target image according to the intersection point coordinates.
For example, the minimum coordinate of the second dimension (e.g., the smallest of the intersection coordinates) and the maximum coordinate of the second dimension (e.g., the largest of the intersection coordinates) may be determined from the intersection coordinates of two adjacent second-dimension scan lines with the boundary line of the region to be masked, and the target rectangular region may be determined from the target image based on the minimum coordinate and the maximum coordinate of the second dimension.
The following describes a process of determining a target rectangular area in conjunction with a specific embodiment.
Mode 1, in one possible implementation, the target rectangular area is determined by the following steps:
step S11, determining a minimum coordinate of the second dimension and a maximum coordinate of the second dimension based on coordinates of a plurality of intersection points of two adjacent second-dimension scan lines and a boundary line of the region to be shielded.
Referring to fig. 3C, for the adjacent second-dimension scan lines b2 and b3, the plurality of intersection coordinates includes intersection coordinates c2, c3, c4, and c5. Since the second dimension is the lateral direction, the minimum coordinate of the second dimension is the intersection coordinate with the smallest abscissa value, such as intersection coordinate c4, and the maximum coordinate of the second dimension is the intersection coordinate with the largest abscissa value, such as intersection coordinate c5.
In summary, based on the plurality of intersection point coordinates, the intersection point coordinate with the smallest abscissa value may be set as the smallest coordinate of the second dimension, and the intersection point coordinate with the largest abscissa value may be set as the largest coordinate of the second dimension.
Step S12, a first dimension scan line passing through the minimum coordinate of the second dimension is set, a first dimension scan line passing through the maximum coordinate of the second dimension is set, and a region formed by the two first dimension scan lines and the two adjacent second dimension scan lines is determined as a target rectangular region.
For example, referring to fig. 3D, for the adjacent second-dimension scan lines b2 and b3, the minimum coordinate of the second dimension is intersection coordinate c4 and the maximum coordinate is intersection coordinate c5; a first-dimension scan line d1 passing through intersection coordinate c4 is set, and a first-dimension scan line d2 passing through intersection coordinate c5 is set. Based on this, the rectangular region enclosed by first-dimension scan line d1, first-dimension scan line d2, second-dimension scan line b2, and second-dimension scan line b3 is determined as the target rectangular region.
In summary, a target rectangular area can be obtained for the adjacent second-dimension scan lines b2 and b3. Similarly, a target rectangular region can be obtained for any other two adjacent second-dimension scan lines (e.g., b1 and b2, b3 and b4, b4 and b5, b5 and b6), and details are not repeated here.
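Under mode 1, steps S11-S12 reduce to taking the extreme abscissas among the strip's intersection coordinates. A sketch (the (x_min, y_top, x_max, y_bottom) tuple layout is an assumption):

```python
def mode1_rectangle(xs_line_a, xs_line_b, y_top, y_bottom):
    """Mode 1: the single target rectangle for one pair of adjacent
    second-dimension scan lines (e.g. b2 at y_top, b3 at y_bottom).
    xs_line_a / xs_line_b hold the intersection abscissas on the two
    lines (e.g. [c2, c3] and [c4, c5]); the first-dimension scan lines
    d1 and d2 pass through the smallest and the largest abscissa."""
    xs = list(xs_line_a) + list(xs_line_b)
    return (min(xs), y_top, max(xs), y_bottom)
```

Because the rectangle spans from the overall minimum to the overall maximum abscissa, it always encloses the slice of the polygon between the two scan lines.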
Mode 2, in another possible implementation, the target rectangular area is determined by the following steps:
step S21, determining a minimum coordinate of the second dimension and a maximum coordinate of the second dimension based on coordinates of a plurality of intersection points of two adjacent second-dimension scan lines and a boundary line of the region to be shielded.
Step S22, determining at least two first-dimension scan lines based on a preset second step length within the range of the minimum coordinate and the maximum coordinate of the second dimension.
For example, within the range between the minimum coordinate and the maximum coordinate of the second dimension, a scan line of the first dimension may be set at every preset second step length; for convenience of distinction, such a scan line is referred to as a first-dimension scan line. The preset second step length may be configured arbitrarily according to experience and is not limited here; for example, it may be 10 or 20. A preset second step length of 10 means that a first-dimension scan line is set every 10 pixel points.
Referring to fig. 3E, for the adjacent second-dimension scan lines b2 and b3, the minimum coordinate of the second dimension is intersection coordinate c4 and the maximum coordinate is intersection coordinate c5. Within the range of intersection coordinate c4 and intersection coordinate c5, a plurality of first-dimension scan lines are set; the first-dimension scan lines are longitudinal scan lines, and the distance between two adjacent first-dimension scan lines is the preset second step length.
For example, a first-dimension scan line d1 passing through intersection coordinate c4 and a first-dimension scan line d6 passing through intersection coordinate c5 are set. Then, the other first-dimension scan lines are set based on the preset second step length: for example, the distance between first-dimension scan line d2 and first-dimension scan line d1 is the preset second step length, and so on.
After first-dimension scan line d5 is set, if the distance between first-dimension scan line d5 and first-dimension scan line d6 is less than or equal to the preset second step length, no first-dimension scan line is set between them, and a plurality of first-dimension scan lines are thus obtained.
For another example, as shown in fig. 3F, on the basis of fig. 3E, if the distance between first-dimension scan line d5 and first-dimension scan line d6 is smaller than the preset second step length, a first-dimension scan line d5' may instead be set between first-dimension scan line d4 and first-dimension scan line d6 such that the distance between first-dimension scan line d5' and first-dimension scan line d6 is the preset second step length, and a plurality of first-dimension scan lines are thus obtained.
In summary, a plurality of first-dimension scan lines can be obtained within the range of the minimum coordinate of the second dimension (e.g., intersection coordinate c4) and the maximum coordinate of the second dimension (e.g., intersection coordinate c5).
In step S23, target rectangular regions are determined from the target image based on the regions enclosed by two adjacent first-dimension scan lines and two adjacent second-dimension scan lines. For example, each region enclosed by two adjacent first-dimension scan lines and the two adjacent second-dimension scan lines may be determined as a target rectangular region.
For example, referring to fig. 3E, a rectangular region composed of the first-dimension scan line d1, the first-dimension scan line d2, the second-dimension scan line b2, and the second-dimension scan line b3 may be determined as the target rectangular region. By analogy, a rectangular region composed of the first-dimension scan line d5, the first-dimension scan line d6, the second-dimension scan line b2, and the second-dimension scan line b3 may be determined as a target rectangular region.
For another example, referring to fig. 3F, unlike fig. 3E, the rectangular region enclosed by first-dimension scan line d5', first-dimension scan line d6, second-dimension scan line b2, and second-dimension scan line b3 is determined as a target rectangular region, instead of the region enclosed by first-dimension scan line d5, first-dimension scan line d6, second-dimension scan line b2, and second-dimension scan line b3.
In summary, a plurality of target rectangular areas can be obtained for the adjacent second-dimension scan lines b2 and b3. Similarly, target rectangular regions can be obtained for any other two adjacent second-dimension scan lines (e.g., b1 and b2, b3 and b4, b4 and b5, b5 and b6), and details are not repeated here.
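Mode 2 (steps S21-S23) can be sketched as splitting the same strip at every preset second step length; the fig. 3E behaviour, where the last rectangle may be narrower than the step, is assumed:

```python
def mode2_rectangles(x_min, x_max, y_top, y_bottom, step):
    """Mode 2: target rectangles between two adjacent second-dimension
    scan lines (at ordinates y_top and y_bottom). First-dimension scan
    lines are placed from x_min (c4) every `step` pixels toward x_max
    (c5); once the remaining gap is <= step, the final line is x_max
    itself, so the last rectangle may be narrower than `step`."""
    xs = [x_min]
    while x_max - xs[-1] > step:
        xs.append(xs[-1] + step)
    xs.append(x_max)
    return [(xs[i], y_top, xs[i + 1], y_bottom) for i in range(len(xs) - 1)]
```

The fig. 3F variant would instead shift the penultimate line (d5 to d5') so that the final rectangle is exactly one step wide.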
Example 3: on the basis of embodiment 2, after step S21 and before step S22, the following processing may be performed. If the difference between the maximum coordinate and the minimum coordinate of the second dimension is smaller than the preset second step length, the maximum coordinate of the second dimension may be adjusted so that the difference between the adjusted maximum coordinate and the minimum coordinate is not smaller than the preset second step length. Alternatively, the minimum coordinate of the second dimension may be adjusted so that the difference between the maximum coordinate and the adjusted minimum coordinate is not smaller than the preset second step length. Alternatively, both the maximum coordinate and the minimum coordinate of the second dimension may be adjusted so that the difference between the adjusted maximum coordinate and the adjusted minimum coordinate is not smaller than the preset second step length.
After the adjustment of the maximum coordinate of the second dimension and/or the minimum coordinate of the second dimension, step S22 may be performed based on the adjusted maximum coordinate of the second dimension and the adjusted minimum coordinate of the second dimension.
For example, if the second dimension is the horizontal direction, the difference between the maximum coordinate of the second dimension and the minimum coordinate of the second dimension is the difference between the abscissa value of the maximum coordinate and the abscissa value of the minimum coordinate. If the second dimension is the longitudinal direction, the difference between the maximum coordinate of the second dimension and the minimum coordinate of the second dimension is the difference between the ordinate value of the maximum coordinate and the ordinate value of the minimum coordinate. In the present embodiment, "transverse" is taken as an example.
Referring to fig. 3E, assume that the minimum coordinate of the second dimension is intersection coordinate c4, the maximum coordinate is intersection coordinate c5, and the difference between the abscissa value of intersection coordinate c5 and that of intersection coordinate c4 is smaller than the preset second step length. In this case, the abscissa value of intersection coordinate c5 may be increased to obtain intersection coordinate c5', such that the difference between the abscissa value of intersection coordinate c5' and that of intersection coordinate c4 is not smaller than the preset second step length. Alternatively, the abscissa value of intersection coordinate c4 may be decreased to obtain intersection coordinate c4', such that the difference between the abscissa value of intersection coordinate c5 and that of intersection coordinate c4' is not smaller than the preset second step length. Alternatively, both adjustments may be made: the abscissa value of intersection coordinate c4 is decreased to obtain intersection coordinate c4' and the abscissa value of intersection coordinate c5 is increased to obtain intersection coordinate c5', such that the difference between the abscissa value of intersection coordinate c5' and that of intersection coordinate c4' is not smaller than the preset second step length.
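Example 3's widening rule might be implemented as follows (clamping to the image border is an added assumption; the patent only requires that the final width reach the step):

```python
def widen_to_step(x_min, x_max, step, image_width):
    """Example 3: if the span between the minimum and maximum second-
    dimension coordinates is narrower than the preset second step length,
    grow x_max first (clamped to the image width), then shrink x_min if
    growing x_max alone was not enough, so that x_max - x_min >= step."""
    if x_max - x_min < step:
        x_max = min(x_min + step, image_width)  # adjust the maximum (c5 -> c5')
        if x_max - x_min < step:
            x_min = max(x_max - step, 0)        # also adjust the minimum (c4 -> c4')
    return x_min, x_max
```

A span already at least one step wide is returned unchanged.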
In the foregoing embodiments, the preset second step length may be the P of embodiment 1, indicating that each row of the target rectangular region contains P pixel points, each pixel point corresponding to one luminance component and one chrominance component. The preset first step length may be the N of embodiment 1, indicating that the target rectangular region has N rows.
Example 4: in step 101, a target rectangular region needs to be determined from the target image based on a plurality of vertex coordinates of the region to be masked, and in order to determine the target rectangular region, the following steps may be adopted:
step 10121, determining a minimum coordinate of the first dimension, a maximum coordinate of the first dimension, a minimum coordinate of the second dimension, and a maximum coordinate of the second dimension according to the vertex coordinates of the region to be masked.
For example, this embodiment is described with the first dimension as the longitudinal direction and the second dimension as the lateral direction. Then: the minimum coordinate of the first dimension is the vertex coordinate with the smallest ordinate value; the maximum coordinate of the first dimension is the vertex coordinate with the largest ordinate value; the minimum coordinate of the second dimension is the vertex coordinate with the smallest abscissa value; and the maximum coordinate of the second dimension is the vertex coordinate with the largest abscissa value.
Referring to fig. 4A, assume that the region to be masked is an irregular pentagon with vertex coordinates a1, a2, a3, a4, and a5.
The minimum coordinate of the first dimension is a vertex coordinate a1 whose ordinate value is minimum, and the maximum coordinate of the first dimension is a vertex coordinate a3 whose ordinate value is maximum. The minimum coordinate of the second dimension is the vertex coordinate a2 whose abscissa value is minimum, and the maximum coordinate of the second dimension is the vertex coordinate a4 whose abscissa value is maximum.
Step 10122, determining a circumscribed rectangle of the region to be shaded based on the minimum coordinate of the first dimension, the maximum coordinate of the first dimension, the minimum coordinate of the second dimension and the maximum coordinate of the second dimension.
For example, a scan line passing through the minimum coordinate of the first dimension is set, a scan line passing through the maximum coordinate of the first dimension is set, a scan line passing through the minimum coordinate of the second dimension is set, and a scan line passing through the maximum coordinate of the second dimension is set.
On the basis of fig. 4A, the resulting circumscribed rectangle of the region to be masked is shown in fig. 4B.
Step 10123, traversing the circumscribed rectangle of the region to be shielded by using the preset rectangular blocks to obtain a plurality of target rectangular regions. The preset rectangular block may be determined based on a preset first step size and a preset second step size, for example, the preset first step size may be a height of the preset rectangular block, and the preset second step size may be a width of the preset rectangular block.
Referring to fig. 4C, a plurality of target rectangular regions are obtained when the preset rectangular block traverses the circumscribed rectangle; the traversal order is not limited. During the traversal of the circumscribed rectangle, each target rectangular region must lie within the circumscribed rectangle, and each pixel point of the circumscribed rectangle must lie within some target rectangular region. Different target rectangular regions may therefore partially overlap.
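The example 4 traversal can be sketched as tiling the circumscribed rectangle with fixed-size blocks. Anchoring the last block in each direction to the far edge, so that blocks may partially overlap as the text allows, is one possible traversal order (an assumption, since the order is not limited):

```python
def tile_rectangles(x0, y0, x1, y1, block_w, block_h):
    """Example 4: cover the circumscribed rectangle (x0, y0)-(x1, y1)
    with preset blocks of width block_w (second step length) and height
    block_h (first step length). The last block in each direction is
    anchored to the far edge, so adjacent blocks may overlap; every
    pixel of the circumscribed rectangle lies in some block. Assumes
    the rectangle is at least one block wide and tall."""
    def starts(lo, hi, size):
        s = list(range(lo, hi - size + 1, size))
        if not s:
            s = [lo]
        if s[-1] + size < hi:
            s.append(hi - size)  # overlap the previous block to reach the edge
        return s
    return [(x, y, x + block_w, y + block_h)
            for y in starts(y0, y1, block_h)
            for x in starts(x0, x1, block_w)]
```

For a 25x20 rectangle and 10x10 blocks this yields six blocks, the rightmost column overlapping its neighbour by five pixels.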
Example 5: in step 101, a target rectangular region needs to be determined from the target image based on a plurality of vertex coordinates of the region to be masked, and in order to determine the target rectangular region, the following steps may be adopted:
step 10131, determining a minimum coordinate of the first dimension, a maximum coordinate of the first dimension, a minimum coordinate of the second dimension, and a maximum coordinate of the second dimension according to the vertex coordinates of the region to be masked.
Step 10132, determining a circumscribed rectangle of the region to be shaded based on the minimum coordinate of the first dimension, the maximum coordinate of the first dimension, the minimum coordinate of the second dimension and the maximum coordinate of the second dimension.
Step 10133, determining the circumscribed rectangle of the region to be shielded as the target rectangular region.
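Example 5 is the degenerate case: the single target rectangle is simply the polygon's circumscribed rectangle, built from the extreme vertex coordinates of steps 10131-10132. A sketch:

```python
def bounding_rectangle(vertices):
    """Example 5: the circumscribed rectangle of the region to be masked,
    formed from the minimum/maximum abscissa and ordinate values of its
    vertex coordinates, used directly as the target rectangular region."""
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    return (min(xs), min(ys), max(xs), max(ys))
```

This trades precision for simplicity: pixels outside the polygon but inside the rectangle are also filled.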
Example 6: on the basis of embodiment 2, in step 10114, a plurality of target rectangular regions may be determined according to the intersection coordinates, and after determining the plurality of target rectangular regions, the following steps may be further performed:
For each vertex coordinate of the region to be masked: if the vertex coordinate is located within any target rectangular area, no processing is needed for that vertex coordinate. If the vertex coordinate is not located within any target rectangular area, the target rectangular area matching the vertex coordinate is determined from all target rectangular areas and adjusted so that the adjusted target rectangular area includes the vertex coordinate.
Referring to fig. 3E, the vertex coordinates of the region to be masked are vertex coordinates a1-a5; each vertex coordinate is processed in the same way, so vertex coordinate a5 is taken as an example. Based on the ordinate value of vertex coordinate a5, the two second-dimension scan lines between which that ordinate value lies are determined, such as second-dimension scan lines b2 and b3.
The minimum coordinate of the second dimension (e.g., intersection coordinate c4) and the maximum coordinate of the second dimension (e.g., intersection coordinate c5) are determined based on intersection coordinates c2 and c3 of second-dimension scan line b2 with the boundary line of the region to be masked, and intersection coordinates c4 and c5 of second-dimension scan line b3 with the boundary line of the region to be masked.
If the abscissa value of vertex coordinate a5 is smaller than the abscissa value of intersection coordinate c4, it is determined that vertex coordinate a5 is not located within any target rectangular region, and the target rectangular region matching vertex coordinate a5 is the first target rectangular region between second-dimension scan lines b2 and b3; this target rectangular region is therefore adjusted so that the adjusted region includes vertex coordinate a5.
If the abscissa value of vertex coordinate a5 is greater than the abscissa value of intersection coordinate c5, it is determined that vertex coordinate a5 is not located within any target rectangular region, and the target rectangular region matching vertex coordinate a5 is the last target rectangular region between second-dimension scan lines b2 and b3; this target rectangular region is therefore adjusted so that the adjusted region includes vertex coordinate a5.
If the abscissa value of vertex coordinate a5 is not smaller than the abscissa value of intersection coordinate c4 and not greater than the abscissa value of intersection coordinate c5, vertex coordinate a5 is located within some target rectangular region, and no target rectangular region needs to be adjusted for vertex coordinate a5.
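Example 6's per-vertex check and adjustment within one strip might look like this (the rectangle tuple layout and the left-to-right ordering of the strip's rectangles are assumptions):

```python
def cover_vertex(rects, vertex):
    """Example 6: `rects` are the (x0, y0, x1, y1) target rectangles of
    one strip, ordered left to right. If the vertex already falls inside
    some rectangle's abscissa range it is skipped; otherwise the first
    (or last) rectangle of the strip is stretched to include its
    abscissa, matching the c4/c5 comparisons described above."""
    vx, _ = vertex
    if any(x0 <= vx <= x1 for x0, _, x1, _ in rects):
        return rects                       # vertex already covered; skip
    rects = list(rects)
    if vx < rects[0][0]:                   # left of c4: stretch the first rectangle
        x0, y0, x1, y1 = rects[0]
        rects[0] = (vx, y0, x1, y1)
    else:                                  # right of c5: stretch the last rectangle
        x0, y0, x1, y1 = rects[-1]
        rects[-1] = (x0, y0, vx, y1)
    return rects
```

Running this for every vertex coordinate guarantees that the sharp corners of the polygon (such as a5 in fig. 3E) are not left outside the filled area.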
Example 7: in embodiments 1 and 2, a target rectangular region is determined from the target image based on a plurality of vertex coordinates of the region to be masked; that is, the region to be masked must be determined first. The region to be masked is an irregular polygon in the target image that needs mosaic processing, and it may be determined as follows. A first polygon area for mosaic processing of the target image is acquired. If the vertex coordinates of the first polygon area are all located within the target image, the first polygon area is determined as the region to be masked. If the vertex coordinates of the first polygon area are only partially located within the target image, a second polygon area whose vertex coordinates are all located within the target image is determined from the first polygon area and the target image, and the second polygon area is determined as the region to be masked. If the vertex coordinates of the first polygon area are all located outside the target image, the mosaic processing of the target image is skipped; that is, no region to be masked can be obtained and no mosaic processing is performed on the target image.
For example, a plurality of vertex coordinates may be preset, such as vertex coordinates p1(x1, y1), vertex coordinates p2(x2, y2), vertex coordinates p3(x3, y3), vertex coordinates p4(x4, y4), and vertex coordinates p5(x5, y5), and the setting manner of these vertex coordinates is not limited, and may be set by a user, or may be set by other manners.
In one possible implementation, the plurality of vertex coordinates may be set for the target image; that is, when the camera captures the target image of a picture scene, the vertex coordinates remain located within the target image regardless of whether the picture scene changes. For example, an image coordinate system is established with the upper left corner point of the target image as the origin, and (x1, y1), (x2, y2), (x3, y3), (x4, y4), and (x5, y5) are coordinate points in this image coordinate system.
Based on the above, the plurality of vertex coordinates may be combined into a first polygon area; since the vertex coordinates of the first polygon area are all located within the target image, the first polygon area is determined as the region to be masked.
In another possible implementation, the plurality of vertex coordinates may be set for the picture scene; that is, when the camera captures the target image of the picture scene, as the picture scene changes (for example, a change of the camera's focal length may change the picture scene), the plurality of vertex coordinates may be located entirely within the target image, partially within the target image, or entirely outside the target image.
For example, an image coordinate system is established with the upper left corner point of the target image as the origin, and (x1, y1), (x2, y2), (x3, y3), (x4, y4), and (x5, y5) need to be converted into coordinate points in this image coordinate system; the conversion process is not limited. The converted vertex coordinates may be: vertex coordinate p1'(x1', y1'), vertex coordinate p2'(x2', y2'), vertex coordinate p3'(x3', y3'), vertex coordinate p4'(x4', y4'), and vertex coordinate p5'(x5', y5').
Based on this, vertex coordinates p1', p2', p3', p4', and p5' may constitute the first polygon area. Referring to fig. 5A, the vertex coordinates of the first polygon area are all located within the target image, and the first polygon area is therefore determined as the region to be masked.
Referring to fig. 5B, the vertex coordinates of the first polygon area are only partially located within the target image; therefore, a second polygon area may be determined based on the first polygon area and the target image. For example, the coordinates of the intersections of the first polygon area with the boundary lines of the target image (i.e., its four edges) are determined first, along with the vertex coordinates of the first polygon area that are located within the target image. The region formed by these intersection coordinates and the vertex coordinates located within the target image may then be taken as the second polygon area. Since the vertex coordinates of the second polygon area are all located within the target image, the second polygon area may be determined as the region to be masked.
Referring to fig. 5C, the vertex coordinates of the first polygon area are all located outside the target image, and in this case, the area to be masked of the target image cannot be acquired, and the mosaic processing cannot be performed on the target image.
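The intersection of the first polygon area with the image rectangle in example 7 can be sketched with Sutherland-Hodgman polygon clipping (the patent does not name an algorithm; this is one standard way to obtain the second polygon area, and an empty result corresponds to the fig. 5C case where mosaic processing is skipped):

```python
def clip_polygon_to_image(polygon, width, height):
    """Example 7 sketch: clip the first polygon area against the image
    rectangle [0, width] x [0, height]. Returns the second polygon area,
    or [] when the polygon lies wholly outside the image."""
    def clip(points, inside, cross):
        out = []
        for i, p in enumerate(points):
            q = points[(i + 1) % len(points)]
            if inside(p):
                out.append(p)
                if not inside(q):
                    out.append(cross(p, q))  # edge leaves the half-plane
            elif inside(q):
                out.append(cross(p, q))      # edge enters the half-plane
        return out

    def x_cross(p, q, x):  # intersection with the vertical line at `x`
        t = (x - p[0]) / (q[0] - p[0])
        return (x, p[1] + t * (q[1] - p[1]))

    def y_cross(p, q, y):  # intersection with the horizontal line at `y`
        t = (y - p[1]) / (q[1] - p[1])
        return (p[0] + t * (q[0] - p[0]), y)

    pts = list(polygon)
    for inside, cross in (
        (lambda p: p[0] >= 0,      lambda p, q: x_cross(p, q, 0)),
        (lambda p: p[0] <= width,  lambda p, q: x_cross(p, q, width)),
        (lambda p: p[1] >= 0,      lambda p, q: y_cross(p, q, 0)),
        (lambda p: p[1] <= height, lambda p, q: y_cross(p, q, height)),
    ):
        if not pts:
            return []
        pts = clip(pts, inside, cross)
    return pts
```

The new vertices introduced on the image border correspond to the intersection coordinates described for fig. 5B.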
Based on the same application concept as the method, an image processing apparatus is proposed in an embodiment of the present application; fig. 6 is a schematic structural diagram of the apparatus. The apparatus includes: a determining module 61, configured to determine a target rectangular region from the target image based on a plurality of vertex coordinates of the region to be masked; a luminance processing module 62, configured to determine a target luminance value according to the luminance values of all luminance components in the target rectangular region, and fill all luminance components in the target rectangular region according to the target luminance value; and a chrominance processing module 63, configured to determine a target chrominance value according to the chrominance values of all chrominance components in the target rectangular region, and fill all chrominance components in the target rectangular region according to the target chrominance value.
For example, when the luminance processing module 62 fills all luminance components in the target rectangular region according to the target luminance value, it is specifically configured to: filling a first row of brightness components in the target rectangular region according to the target brightness value; copying the brightness value of the first row of brightness components to the Nth row of brightness components in the target rectangular area; the chroma processing module 63 is specifically configured to, when filling all chroma components in the target rectangular region according to the target chroma value: filling a first row of chrominance components in the target rectangular region according to the target chrominance value; copying the chromatic values of the first row of chromatic components to the Nth row of chromatic components in the target rectangular area;
where N is each value between 2 and the maximum number of rows of the target rectangular area.
Illustratively, each row within the target rectangular region includes P luma components and P chroma components; the brightness processing module 62 is specifically configured to copy the brightness value of the first row of brightness components to the nth row of brightness components in the target rectangular region: copying the brightness values of the P brightness components of the first row to the P brightness components of the Nth row in the target rectangular area in parallel based on a parallel processing device;
when the chrominance processing module 63 copies the chrominance values of the first row of chrominance components to the nth row of chrominance components in the target rectangular area, the chrominance processing module is specifically configured to: based on the parallel processing device, the chroma values of the P chroma components in the first row are copied to the P chroma components in the Nth row in the target rectangular area in parallel.
The determining module 61 is specifically configured to, when determining the target rectangular region from the target image based on the multiple vertex coordinates of the region to be masked: determining a minimum coordinate and a maximum coordinate of a first dimension according to a plurality of vertex coordinates of an area to be shielded; determining at least two second-dimension scanning lines of a second dimension based on a preset first step length in the range of the minimum coordinate and the maximum coordinate, wherein the second dimension is perpendicular to the first dimension; and aiming at each second dimension scanning line, determining the intersection point coordinate of the second dimension scanning line and the boundary line of the region to be shielded, and determining a target rectangular region from the target image according to the intersection point coordinate.
When determining the target rectangular region from the target image according to the intersection coordinates, the determining module 61 is specifically configured to: determine a minimum coordinate and a maximum coordinate of the second dimension based on the intersection coordinates of two adjacent second-dimension scanning lines with the boundary lines of the region to be shielded; determine at least two first-dimension scanning lines based on a preset second step length within the range between the minimum coordinate and the maximum coordinate of the second dimension; and determine the target rectangular region from the target image based on the regions enclosed by two adjacent first-dimension scanning lines and two adjacent second-dimension scanning lines.
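Continuing the sketch, the strip between two adjacent second-dimension scanning lines can be cut into target rectangles by first-dimension scanning lines placed at the preset second step. The min/max ordinates are assumed to come from precomputed intersection coordinates; names are illustrative.

```python
def strip_rectangles(x_left, x_right, intersection_ys, second_step):
    """From the intersection ordinates of two adjacent second-dimension
    scanning lines, take the second-dimension min/max, place
    first-dimension scanning lines every second_step, and return the
    rectangles (left, top, right, bottom) bounded by adjacent pairs."""
    y_min, y_max = int(min(intersection_ys)), int(max(intersection_ys))
    y_lines = list(range(y_min, y_max, second_step))
    y_lines.append(y_max)  # keep the maximum coordinate as the last line
    return [(x_left, ya, x_right, yb)
            for ya, yb in zip(y_lines, y_lines[1:])]
```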
The determining module 61 is further configured to: if the difference value between the maximum coordinate of the second dimension and the minimum coordinate of the second dimension is smaller than a preset second step length, adjusting the maximum coordinate of the second dimension so that the difference value between the adjusted maximum coordinate of the second dimension and the minimum coordinate of the second dimension is not smaller than the preset second step length; or adjusting the minimum coordinate of the second dimension to enable the difference value between the maximum coordinate of the second dimension and the adjusted minimum coordinate of the second dimension to be not less than a preset second step length.
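The extent adjustment above can be sketched as follows. The text leaves open whether to move the maximum or the minimum coordinate, so both options are shown; the parameter names are illustrative.

```python
def adjust_second_dim_extent(y_min, y_max, second_step, move_max=True):
    """If the second-dimension extent is smaller than the preset second
    step length, widen it so the difference is not less than that step,
    either by raising the maximum coordinate or lowering the minimum."""
    if y_max - y_min < second_step:
        if move_max:
            y_max = y_min + second_step
        else:
            y_min = y_max - second_step
    return y_min, y_max
```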
There may be a plurality of target rectangular areas, and the determining module 61 is further configured to: if a vertex coordinate of the area to be shielded is not located in any of the target rectangular areas, determine the target rectangular area matched with that vertex coordinate from all the target rectangular areas, and adjust the matched target rectangular area so that the adjusted target rectangular area includes the vertex coordinate.
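A minimal sketch of the vertex check: if a vertex of the area to be shielded lies outside every target rectangle, grow the matched rectangle until it contains the vertex. The source does not define "matched", so nearest-center is used here purely as an illustrative assumption.

```python
def cover_vertex(rects, vertex):
    """rects: list of (left, top, right, bottom). If vertex is not inside
    any rectangle, expand the one whose center is nearest to the vertex
    (an assumed matching criterion) so that it contains the vertex."""
    vx, vy = vertex
    if any(l <= vx <= r and t <= vy <= b for l, t, r, b in rects):
        return rects  # vertex already covered; nothing to adjust

    def center_dist_sq(rect):
        l, t, r, b = rect
        return (vx - (l + r) / 2) ** 2 + (vy - (t + b) / 2) ** 2

    i = min(range(len(rects)), key=lambda k: center_dist_sq(rects[k]))
    l, t, r, b = rects[i]
    rects[i] = (min(l, vx), min(t, vy), max(r, vx), max(b, vy))
    return rects

rects = [(2, 2, 6, 6), (8, 2, 12, 6)]
cover_vertex(rects, (0, 8))  # outside both; the first rectangle is nearest
```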
The determining module 61 is further configured to: acquiring a first polygonal area for mosaic processing of a target image; if the vertex coordinates of the first polygon area are all located in the target image, determining the first polygon area as an area to be shielded; and if the vertex coordinates of the first polygon area are partially located in the target image, determining a second polygon area according to the first polygon area and the target image, wherein the vertex coordinates of the second polygon area are all located in the target image, and determining the second polygon area as an area to be shielded.
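Deriving the second polygon area when the first polygon area extends outside the target image is essentially polygon clipping against the image rectangle. The source does not name an algorithm; Sutherland-Hodgman clipping is one standard way to do it, sketched here.

```python
def clip_polygon_to_image(vertices, width, height):
    """Sutherland-Hodgman clipping of the first polygon area against the
    image rectangle [0, width] x [0, height], yielding a second polygon
    area whose vertex coordinates all lie within the target image."""
    def clip(poly, inside, crossing):
        out = []
        n = len(poly)
        for i in range(n):
            cur, nxt = poly[i], poly[(i + 1) % n]
            if inside(cur):
                out.append(cur)
                if not inside(nxt):
                    out.append(crossing(cur, nxt))  # leaving the half-plane
            elif inside(nxt):
                out.append(crossing(cur, nxt))      # entering the half-plane
        return out

    def x_cross(a, b, x):  # intersection with a vertical boundary
        t = (x - a[0]) / (b[0] - a[0])
        return (x, a[1] + t * (b[1] - a[1]))

    def y_cross(a, b, y):  # intersection with a horizontal boundary
        t = (y - a[1]) / (b[1] - a[1])
        return (a[0] + t * (b[0] - a[0]), y)

    poly = list(vertices)
    poly = clip(poly, lambda p: p[0] >= 0,      lambda a, b: x_cross(a, b, 0))
    poly = clip(poly, lambda p: p[0] <= width,  lambda a, b: x_cross(a, b, width))
    poly = clip(poly, lambda p: p[1] >= 0,      lambda a, b: y_cross(a, b, 0))
    poly = clip(poly, lambda p: p[1] <= height, lambda a, b: y_cross(a, b, height))
    # Drop consecutive duplicate vertices that degenerate clipping can produce.
    return [p for i, p in enumerate(poly) if p != poly[i - 1]]
```

A polygon already entirely inside the image is returned unchanged, matching the first branch of the text.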
Based on the same application concept as the method described above, an image processing apparatus is proposed in the embodiment of the present application, and as shown in fig. 7, the image processing apparatus may include: a processor 71 and a machine-readable storage medium 72, the machine-readable storage medium 72 storing machine-executable instructions executable by the processor 71; the processor 71 is configured to execute machine executable instructions to perform the following steps:
determining a target rectangular area from the target image based on a plurality of vertex coordinates of the area to be shielded;
determining a target brightness value according to the brightness values of all the brightness components in the target rectangular region, and filling all the brightness components in the target rectangular region according to the target brightness value;
and determining a target chromaticity value according to the chromaticity values of all the chromaticity components in the target rectangular region, and filling all the chromaticity components in the target rectangular region according to the target chromaticity values.
Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium, where several computer instructions are stored, and when the computer instructions are executed by a processor, the image processing method disclosed in the above example of the present application can be implemented.
The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk or a DVD), or a similar storage medium, or a combination thereof.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. An image processing method, characterized in that the method comprises:
determining a target rectangular area from the target image based on a plurality of vertex coordinates of the area to be shielded;
determining a target brightness value according to the brightness values of all the brightness components in the target rectangular region, and filling all the brightness components in the target rectangular region according to the target brightness value;
and determining a target chromaticity value according to the chromaticity values of all the chromaticity components in the target rectangular region, and filling all the chromaticity components in the target rectangular region according to the target chromaticity values.
2. The method of claim 1,
filling all brightness components in the target rectangular region according to the target brightness value, including: filling a first row of brightness components in the target rectangular region according to the target brightness value; copying the brightness value of the first row of brightness components to the Nth row of brightness components in the target rectangular area;
filling all chroma components in the target rectangular area according to the target chroma value, wherein the filling comprises the following steps: filling a first row of chrominance components in the target rectangular region according to the target chrominance value; copying the chromatic values of the first row of chromatic components to the Nth row of chromatic components in the target rectangular area;
where N is each value between 2 and the maximum number of rows of the target rectangular area.
3. The method of claim 2,
each row in the target rectangular region comprises P luminance components and P chrominance components;
the copying the luminance values of the first row of luminance components to the nth row of luminance components in the target rectangular region includes: copying the brightness values of the P brightness components of the first row to the P brightness components of the Nth row in the target rectangular area in parallel based on a parallel processing device;
the copying the chrominance values of the first row of chrominance components to the nth row of chrominance components in the target rectangular area includes: based on the parallel processing device, the chroma values of the P chroma components in the first row are copied to the P chroma components in the Nth row in the target rectangular area in parallel.
4. The method of claim 1, wherein determining a target rectangular region from the target image based on the coordinates of the plurality of vertices of the region to be occluded comprises:
determining a minimum coordinate and a maximum coordinate of a first dimension according to a plurality of vertex coordinates of an area to be shielded;
determining at least two second-dimension scanning lines of a second dimension based on a preset first step length in the range of the minimum coordinate and the maximum coordinate, wherein the second dimension is perpendicular to the first dimension;
and for each second-dimension scanning line, determining the intersection point coordinates of the second-dimension scanning line and the boundary line of the area to be shielded, and determining a target rectangular area from the target image according to the intersection point coordinates.
5. The method of claim 4,
the determining a target rectangular area from the target image according to the intersection point coordinates comprises the following steps:
determining a minimum coordinate of a second dimension and a maximum coordinate of the second dimension based on a plurality of intersection point coordinates of two adjacent second-dimension scanning lines and a boundary line of the area to be shielded;
determining at least two first-dimension scanning lines of a first dimension based on a preset second step length in the range of the minimum coordinate of the second dimension and the maximum coordinate of the second dimension;
and determining a target rectangular region from the target image based on the regions formed by two adjacent first-dimension scanning lines and two adjacent second-dimension scanning lines.
6. The method of claim 5,
after determining the minimum coordinate of the second dimension and the maximum coordinate of the second dimension based on the coordinates of a plurality of intersection points of two adjacent second-dimension scanning lines and the boundary line of the region to be shielded, the method further comprises:
if the difference value between the maximum coordinate of the second dimension and the minimum coordinate of the second dimension is smaller than a preset second step length, adjusting the maximum coordinate of the second dimension so that the difference value between the adjusted maximum coordinate of the second dimension and the minimum coordinate of the second dimension is not smaller than the preset second step length; or,
adjusting the minimum coordinate of the second dimension so that the difference value between the maximum coordinate of the second dimension and the adjusted minimum coordinate of the second dimension is not smaller than the preset second step length.
7. The method according to claim 4, wherein the number of the target rectangular areas is plural, and after the target rectangular area is determined from the target image according to the intersection coordinates, the method further comprises:
if a vertex coordinate of the area to be shielded is not located in any of the target rectangular areas, determining the target rectangular area matched with the vertex coordinate from all the target rectangular areas, and adjusting the matched target rectangular area so that the adjusted target rectangular area comprises the vertex coordinate.
8. The method according to claim 1 or 4, wherein before determining the target rectangular region from the target image based on the coordinates of the plurality of vertices of the region to be masked, the method further comprises:
acquiring a first polygonal area for mosaic processing of a target image;
if the vertex coordinates of the first polygon area are all located in the target image, determining the first polygon area as an area to be shielded;
and if the vertex coordinates of the first polygon area are partially located in the target image, determining a second polygon area according to the first polygon area and the target image, wherein the vertex coordinates of the second polygon area are all located in the target image, and determining the second polygon area as an area to be shielded.
9. An image processing apparatus, characterized in that the apparatus comprises:
the determining module is used for determining a target rectangular area from the target image based on a plurality of vertex coordinates of the area to be shielded;
the brightness processing module is used for determining a target brightness value according to the brightness values of all the brightness components in the target rectangular region and filling all the brightness components in the target rectangular region according to the target brightness value;
and the chrominance processing module is used for determining a target chrominance value according to the chrominance values of all chrominance components in the target rectangular area and filling all chrominance components in the target rectangular area according to the target chrominance value.
10. An image processing apparatus characterized by comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
determining a target rectangular area from the target image based on a plurality of vertex coordinates of the area to be shielded;
determining a target brightness value according to the brightness values of all the brightness components in the target rectangular region, and filling all the brightness components in the target rectangular region according to the target brightness value;
and determining a target chromaticity value according to the chromaticity values of all the chromaticity components in the target rectangular region, and filling all the chromaticity components in the target rectangular region according to the target chromaticity values.
CN202010379300.0A 2020-05-07 2020-05-07 Image processing method, device and equipment Pending CN113628096A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010379300.0A CN113628096A (en) 2020-05-07 2020-05-07 Image processing method, device and equipment


Publications (1)

Publication Number Publication Date
CN113628096A 2021-11-09

Family

ID=78376948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010379300.0A Pending CN113628096A (en) 2020-05-07 2020-05-07 Image processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN113628096A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6075905A (en) * 1996-07-17 2000-06-13 Sarnoff Corporation Method and apparatus for mosaic image construction
US20060192853A1 (en) * 2005-02-26 2006-08-31 Samsung Electronics Co., Ltd. Observation system to display mask area capable of masking privacy zone and method to display mask area
CN102307275A (en) * 2011-09-09 2012-01-04 杭州海康威视数字技术股份有限公司 Method and device for performing irregular polygon mosaic processing on monitored image
CN107977946A (en) * 2017-12-20 2018-05-01 百度在线网络技术(北京)有限公司 Method and apparatus for handling image
CN111046215A (en) * 2019-12-25 2020-04-21 惠州Tcl移动通信有限公司 Image processing method and device, storage medium and mobile terminal


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhou Xiangping, Xie Lijuan: "Photoshop CS6 Image Processing Project Task Tutorial", Zhejiang Science and Technology Press, page 345 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination