CN111368587B - Scene detection method, device, terminal equipment and computer readable storage medium - Google Patents

Scene detection method, device, terminal equipment and computer readable storage medium

Info

Publication number
CN111368587B
CN111368587B
Authority
CN
China
Prior art keywords
component
scene
image
ratio
preset threshold
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811591988.8A
Other languages
Chinese (zh)
Other versions
CN111368587A (en)
Inventor
曾鸣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL Technology Group Co Ltd
Original Assignee
TCL Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TCL Technology Group Co Ltd filed Critical TCL Technology Group Co Ltd
Priority to CN201811591988.8A
Publication of CN111368587A
Application granted
Publication of CN111368587B
Current legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507 Summing image-intensity values; Histogram projection analysis

Abstract

The embodiment of the application is applicable to the technical field of image processing, and discloses a scene detection method, a device, terminal equipment and a computer readable storage medium. The method comprises the following steps: acquiring an image to be detected; dividing the image to be detected into a preset number of areas; determining a target area from the areas, wherein the target area is a white area and/or a black area; extracting a gray level histogram of the image to be detected based on the areas except the target area; and determining the scene of the image to be detected according to the gray level histogram. The embodiment of the application can improve the accuracy of scene detection.

Description

Scene detection method, device, terminal equipment and computer readable storage medium
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a scene detection method, a scene detection device, terminal equipment and a computer readable storage medium.
Background
With the continuous development of image processing technology, picture shooting quality and shooting effects keep improving.
To obtain better shooting quality and effects, shooting parameters often need to be adjusted according to different shooting scenes (such as backlight, forward light, non-backlight or non-forward light). As the intelligence of shooting devices increases, existing shooting devices can automatically identify different scenes and add corresponding functions for them, so that the shooting effect is better. For example, when a user takes photos with a mobile phone in daily life, backlight or forward light scenes often appear, and the HDR (High Dynamic Range Imaging) function can be enabled during photographing to obtain a better result.
Current scene detection methods generally determine whether backlight or forward light is present by collecting statistics on the brightness information of an image frame, thereby detecting backlight or forward light. However, in some specific scenes, for example when a large white or black background appears or a point light source shines into the frame, existing scene detection methods often misjudge, so the accuracy of scene detection is low.
Disclosure of Invention
In view of this, embodiments of the present application provide a scene detection method, apparatus, terminal device, and computer readable storage medium, so as to solve the problem in the prior art that the scene detection accuracy is low.
A first aspect of an embodiment of the present application provides a scene detection method, including:
acquiring an image to be detected;
dividing the image to be detected into a preset number of areas;
determining a target area from the areas, wherein the target area is a white area and/or a black area;
extracting a gray level histogram of the image to be detected based on the region except the target region;
and determining the scene of the image to be detected according to the gray level histogram.
With reference to the first aspect, in a possible implementation manner, the determining a target area from the areas includes:
calculating an R component, a G component and a B component of each region;
the target region is determined from the regions according to the relationship among the R component, the G component, and the B component.
With reference to the first aspect, in a possible implementation manner, the determining the target area from the area according to the relation among the R component, the G component, and the B component includes:
calculating a first ratio between an R component and a G component, a second ratio between a B component and a G component and a third ratio between an R component and a B component of each region;
calculating the sum of the R component, the G component and the B component of each region;
when a preset judging condition is met, judging the area as a white area, wherein the preset judging condition is that the first ratio is smaller than a first preset threshold or larger than a fifth preset threshold, the second ratio is smaller than the first preset threshold or larger than the fifth preset threshold, the third ratio is smaller than the first preset threshold or larger than the fifth preset threshold, and the sum is larger than a second preset threshold and smaller than a third preset threshold;
and when the first ratio, the second ratio and the third ratio are all smaller than the first preset threshold value and the sum is smaller than a fourth preset threshold value, judging the area as a black area.
With reference to the first aspect, in a possible implementation manner, after the determining the target area from the areas, the method further includes:
and marking the target area by a preset mark.
With reference to the first aspect, in a possible implementation manner, the determining a scene of the image to be detected according to the gray level histogram includes:
calculating the mean value of the gray level histogram;
calculating the variance of the gray level histogram according to the mean value;
judging whether the variance is larger than a fifth preset threshold value or not;
when the variance is larger than the fifth preset threshold, determining that the scene of the image to be detected is a backlight scene or a forward light scene;
and when the variance is smaller than or equal to the fifth preset threshold, determining that the scene of the image to be detected is a non-backlight scene or a non-forward light scene.
With reference to the first aspect, in a possible implementation manner, after the determining that the scene of the image to be detected is a backlight scene or a forward light scene, the method further includes:
calculating an average brightness value of each region;
counting the brightness value distribution rule of the image to be detected according to the average brightness value;
when the brightness value distribution rule accords with a first preset distribution rule, the scene of the image to be detected is a backlight scene;
and when the brightness value distribution rule accords with a second preset distribution rule, the scene of the image to be detected is a forward light scene.
With reference to the first aspect, in a possible implementation manner, after the determining a scene of the image to be detected according to the gray level histogram, the method further includes:
and executing corresponding image processing operation according to the scene of the image to be detected.
A second aspect of an embodiment of the present application provides a scene detection device, including:
the acquisition module is used for acquiring the image to be detected;
the brightness value calculation module is used for dividing the image to be detected into a preset number of areas and calculating the average brightness value of each area;
a target area determining module, configured to determine a target area from the areas, where the target area is a white area and/or a black area;
the gray information calculation module is used for extracting a gray histogram of the image to be detected based on the area except the target area;
and the scene determining module is used for determining the scene of the image to be detected according to the gray level histogram.
With reference to the second aspect, in a possible implementation manner, the target area determining module includes:
a component calculation unit configured to calculate an R component, a G component, and a B component for each of the regions;
and a determination unit configured to determine the target region from the regions according to a relationship among the R component, the G component, and the B component.
With reference to the second aspect, in a possible implementation manner, the determining unit includes:
a ratio calculating subunit, configured to calculate a first ratio between the R component and the G component, a second ratio between the B component and the G component, and a third ratio between the R component and the B component for each of the regions;
an addition and calculation subunit for calculating an addition sum of the R component, the G component, and the B component of each of the regions;
a first judging subunit, configured to judge the area as a white area when a preset judging condition is met, where the preset judging condition is that the first ratio is smaller than a first preset threshold or larger than a fifth preset threshold, the second ratio is smaller than the first preset threshold or larger than the fifth preset threshold, the third ratio is smaller than the first preset threshold or larger than the fifth preset threshold, and the sum is larger than a second preset threshold and smaller than a third preset threshold;
and the second judging subunit is used for judging the area as a black area when the first ratio, the second ratio and the third ratio are smaller than the first preset threshold value and the sum is smaller than a fourth preset threshold value.
With reference to the second aspect, in a possible implementation manner, the device further includes:
and the marking module is used for marking the target area by a preset mark.
With reference to the second aspect, in a possible implementation manner, the scene determining module includes:
the average value calculation unit is used for calculating the average value of the gray level histogram;
a variance calculating unit for calculating a variance of the gray histogram based on the mean value;
the judging unit is used for judging whether the variance is larger than a fifth preset threshold value or not;
the first scene determining unit is used for determining that the scene of the image to be detected is a backlight scene or a forward light scene when the variance is larger than the fifth preset threshold value;
and the second scene determining unit is used for determining that the scene of the image to be detected is a non-backlight scene or a non-forward light scene when the variance is smaller than or equal to the fifth preset threshold value.
With reference to the second aspect, in a possible implementation manner, the scene determining module further includes:
an average luminance value calculation unit configured to calculate an average luminance value for each of the regions;
the distribution rule statistics unit is used for counting the distribution rule of the brightness value of the image to be detected according to the average brightness value;
the backlight scene determining unit is used for determining that the scene of the image to be detected is a backlight scene when the brightness value distribution rule accords with a first preset distribution rule;
and the forward light scene determining unit is used for determining that the scene of the image to be detected is a forward light scene when the brightness value distribution rule accords with a second preset distribution rule.
With reference to the second aspect, in a possible implementation manner, the device further includes:
and the execution module is used for executing corresponding image processing operation according to the scene of the image to be detected.
A third aspect of the embodiments of the present application provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of the first aspects described above when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to any one of the first aspects above.
Compared with the prior art, the embodiment of the application has the beneficial effects that:
according to the method and the device for detecting the scene, the white area and/or the black area in the image to be detected are calculated, the gray level histogram is counted based on the white area and the area except the white area, namely, the gray level information is counted after the black area and the white area in the image are removed, then scene detection is carried out based on the gray level information, so that the black object or the white object with a large area in the image is removed, gray level information in the scene accords with actual brightness, misjudgment caused by overlarge brightness information difference is avoided, and therefore the accuracy rate of scene detection based on the gray level information is high.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow block diagram of a scene detection method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a target area determining process according to an embodiment of the present application;
fig. 3 is a schematic block flow diagram of step S105 provided in the embodiment of the present application;
FIG. 4 is a schematic block flow diagram of a specific scenario determination process provided in an embodiment of the present application;
fig. 5 is a schematic block diagram of a scene detection device according to an embodiment of the present application;
fig. 6 is a schematic diagram of a terminal device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to illustrate the technical solutions described in the present application, the following description is made by specific examples.
Example 1
The scene detection method provided by the embodiment of the application can be applied to intelligent mobile terminal equipment with a photographing function, such as a mobile phone or a tablet computer.
Referring to fig. 1, a schematic flow diagram of a scene detection method according to an embodiment of the present application is provided, where the scene detection method may include the following steps:
step S101, obtaining an image to be detected.
Step S102, dividing the image to be detected into a preset number of areas.
It should be noted that the value of the preset number may be set according to actual application needs. Specifically, the number of divided areas may be set according to the platform used. For example, when a Qualcomm platform is used, the preset number is 16×16. It should be understood that the areas of the regions are generally the same, but the purpose of the embodiments of the present application can also be achieved when the areas of the regions differ.
Compared with scene detection based on single pixel points, region-based scene detection greatly reduces the amount of calculation, improves operation efficiency, and reduces the influence of noise in the image.
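As an illustration only (not part of the original disclosure), the region division of step S102 might look like the following Python sketch; the function name, the NumPy array layout and the handling of image sizes that are not divisible by the grid are assumptions of this sketch.

    import numpy as np

    def divide_into_regions(image: np.ndarray, grid: int = 16):
        """Split an H x W x 3 image into grid x grid blocks (step S102)."""
        h, w = image.shape[:2]
        # Boundary indices; np.linspace absorbs non-divisible sizes, so edge
        # blocks may differ slightly in area, which the text above allows.
        ys = np.linspace(0, h, grid + 1, dtype=int)
        xs = np.linspace(0, w, grid + 1, dtype=int)
        return [
            image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            for i in range(grid)
            for j in range(grid)
        ]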
Step S103, determining a target area from the areas, wherein the target area is a white area and/or a black area.
After dividing the image into a plurality of areas, a target area needs to be found among them. The target area may be a black area, a white area, or both a black area and a white area; in other words, in some scenes only a white area or only a black area exists in the image, while in others black and white areas exist at the same time.
In a specific application, a target region in an image can be found through the ratio relation among the R component, the G component and the B component of each region. Optionally, in an embodiment, the specific process of determining the target area from the areas may include: calculating an R component, a G component and a B component of each region; the target region is determined from the regions according to the relationship among the R component, the G component, and the B component.
The relationship among the R component, the G component and the B component may include a ratio relationship, a sum relationship, and the like. By calculating the ratios and the sum and comparing them with the corresponding preset thresholds, a region can be judged to be a white area or a black area when certain conditions are met.
Still further, referring to fig. 2, the specific process of determining the target region from the region according to the relationship among the R component, the G component, and the B component may include:
step S201, a first ratio between R component and G component, a second ratio between B component and G component, and a third ratio between R component and B component of each region are calculated.
Step S202, the addition sum of the R component, the G component and the B component of each region is calculated.
And S203, judging the area as a white area when a preset judging condition is met, wherein the preset judging condition is that the first ratio is smaller than a first preset threshold or larger than a fifth preset threshold, the second ratio is smaller than the first preset threshold or larger than the fifth preset threshold, the third ratio is smaller than the first preset threshold or larger than the fifth preset threshold, and the sum is larger than a second preset threshold and smaller than a third preset threshold.
That is, the region is judged as a white region when (R/G < threshold1 || R/G > threshold5) && (B/G < threshold1 || B/G > threshold5) && (R/B < threshold1 || R/B > threshold5) && threshold2 < (R+G+B) < threshold3, where threshold1, threshold2, threshold3 and threshold5 denote the first, second, third and fifth preset thresholds, respectively.
When a white region appears, overexposure needs to be taken into consideration, so the sum of R, G and B needs to be smaller than the third preset threshold.
And step S204, judging the region as a black region when the first ratio, the second ratio and the third ratio are smaller than a first preset threshold value and the sum is smaller than a fourth preset threshold value.
Since the degree of attenuation of each color differs depending on the type of lens filter film, the thresholds may be set according to the sensor used and are not limited here. Generally, the first preset threshold is about 0.95, the second preset threshold is about 540, the third preset threshold is about 720, the fourth preset threshold is about 150, and the fifth preset threshold is about 1.05.
The above describes a process of finding black and white regions in an image based on the ratios of the R, G and B components of each region, their sum, and their relationships with the preset thresholds. Other processes for determining the target area from the regions may also be used, as long as the target area can be determined.
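For concreteness, here is a minimal Python sketch of the white/black judgment above, using the example threshold values from the preceding paragraph. The per-region averaging of the R, G and B components and the epsilon guard against division by zero are assumptions of this sketch; the conditions themselves are transcribed literally from the text.

    import numpy as np

    T1, T2, T3, T4, T5 = 0.95, 540.0, 720.0, 150.0, 1.05  # example values from the text

    def classify_region(region: np.ndarray) -> str:
        """Label one RGB block as 'white', 'black' or 'other' (steps S201-S204)."""
        eps = 1e-6  # assumption: avoid division by zero in very dark blocks
        r, g, b = (float(region[..., c].mean()) for c in range(3))
        rg, bg, rb = r / (g + eps), b / (g + eps), r / (b + eps)
        s = r + g + b

        def outside(x):  # ratio smaller than T1 or larger than T5
            return x < T1 or x > T5

        if outside(rg) and outside(bg) and outside(rb) and T2 < s < T3:
            return "white"
        if rg < T1 and bg < T1 and rb < T1 and s < T4:
            return "black"
        return "other"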
After the target area is determined, the corresponding area may be marked so that it can be better identified in subsequent steps. Optionally, in an embodiment, after determining the target area from the areas, the method may further include: marking the target area with a preset identifier. The preset identifier may be chosen as needed, as long as the target area can be distinguished from the other, non-target areas. In a specific application, the black area and the white area may be marked with one identifier, or marked with two identifiers respectively, for example, the black area with a first identifier and the white area with a second identifier.
Step S104, extracting a gray level histogram of the image to be detected based on the area except the target area.
After the target area is determined, gray information of the image to be detected may be calculated based on the areas other than the target area. The gray information may be embodied as a gray histogram. The areas other than the target area refer to all of the divided areas except the target area.
Counting the gray histogram of the image to be detected based on the areas other than the target area can be regarded as removing the black or white areas in the image and then counting the gray histogram of the remaining areas. Of course, in actual operation, the removal operation may be omitted and the gray information may be counted directly over the areas other than the target area.
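A possible reading of step S104 in Python is sketched below, reusing the regions and the classify_region labels from the previous sketches; the Rec.601 gray conversion is an assumption, since the text does not specify how gray levels are obtained.

    import numpy as np

    def masked_gray_histogram(regions, labels):
        """Normalized gray histogram over all blocks not marked white/black."""
        kept = [reg for reg, lab in zip(regions, labels) if lab == "other"]
        if not kept:
            raise ValueError("every region was classified as a target area")
        # Assumption: Rec.601 luma as the gray level of each pixel.
        gray = np.concatenate([
            (0.299 * reg[..., 0] + 0.587 * reg[..., 1] + 0.114 * reg[..., 2]).ravel()
            for reg in kept
        ])
        hist, _ = np.histogram(gray, bins=256, range=(0, 256))
        return hist / hist.sum()  # the probability vector x_1, ..., x_256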
When a black or white area with a large area exists in the image, the brightness information of the image differs considerably from the actual brightness. In this case, existing scene detection methods are prone to misjudgment, large errors and low accuracy. In this embodiment, for such scenes that are difficult to identify and easy to misjudge with existing methods, the white or black areas in the image are determined first, and gray statistics are collected over the areas other than the white or black areas, so the accuracy of the subsequent scene detection is higher.
Step S105, determining the scene of the image to be detected according to the gray level histogram.
It will be appreciated that images have different light and dark distributions in different scenes, so the scene of the image to be detected can be determined from the gray information of the image. For backlight or forward light scenes, the variance of the gray level distribution of the histogram is larger, while for non-backlight or non-forward-light scenes the variance is smaller. In a specific application, the scene of the image to be detected can be determined by counting the variance of the gray histogram of the image.
Alternatively, in an embodiment, referring to the schematic flow diagram of step S105 shown in fig. 3, the above process of determining a scene of an image to be detected according to a gray histogram may include:
step S301, calculating the mean value of the gray level histogram.
Step S302, calculating the variance of the gray level histogram according to the mean value.
Specifically, after the gray histogram of the image is counted, the overall mean and variance of the histogram can be calculated over the vector x = (x_1, x_2, …, x_256)^T, where x_i represents the probability of the i-th gray level appearing in the image. Based on the resulting standard deviation σ, the degree of backlight can be defined as B = σ. In general, the larger the B value, the higher the backlight probability.
Step S303, judging whether the variance is larger than a fifth preset threshold, when the variance is larger than the fifth preset threshold, proceeding to step S304, and when the variance is smaller than or equal to the fifth preset threshold, proceeding to step S305.
Step S304, determining that the scene of the image to be detected is a backlight scene or a forward light scene.
Step S305, determining that the scene of the image to be detected is a non-backlight scene or a non-forward light scene.
It should be noted that the fifth preset threshold may be set according to the actual application scenario. In general, the value of the fifth preset threshold is about 1.05.
In general, the backlight degree B is used to determine the scene of an image. For example, when the fifth preset threshold is 1.05, the scene of the image to be detected is regarded as a backlight or forward light scene if B > 1.05, and as a non-backlight or non-forward-light scene if B ≤ 1.05.
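A hedged sketch of steps S301 to S303 follows. The original formula image is missing, so the component-wise mean and standard deviation of the 256 bin probabilities used below are only one plausible reconstruction, and the 1.05 threshold would then have to be recalibrated to whatever scaling the patent actually applies to σ.

    import numpy as np

    def backlight_degree(hist_probs: np.ndarray) -> float:
        """B = sigma, one plausible reading of the missing formula."""
        mu = hist_probs.mean()  # step S301: mean of x_1, ..., x_256
        sigma = np.sqrt(((hist_probs - mu) ** 2).mean())  # step S302: variance -> std
        return sigma

    def is_back_or_front_lit(hist_probs: np.ndarray, threshold: float = 1.05) -> bool:
        # Step S303: compare B with the fifth preset threshold.
        return backlight_degree(hist_probs) > threshold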
It can be understood that intelligent devices such as mobile phones apply the same image processing operation to backlight and forward light scenes, so no special distinction is needed in ordinary photographing; that is, in usual applications it is only necessary to identify whether backlight or forward light occurs in the image, not which of the two it specifically is. Of course, in some application scenarios, after determining that the image is backlit or forward lit, it may be further determined whether the scene is specifically a backlight scene or a forward light scene.
Optionally, referring to the flowchart block diagram of the specific scene determination procedure shown in fig. 4, in an embodiment, after determining that the scene of the image to be detected is a backlight scene or a forward light scene, the method may further include:
step S401, calculating an average luminance value of each region.
And step S402, counting the distribution rule of the brightness values of the image to be detected according to the average brightness value.
Step S403, when the distribution rule of the brightness values accords with a first preset distribution rule, the scene of the image to be detected is a backlight scene.
Step S404, when the brightness value distribution rule accords with the second preset distribution rule, the scene of the image to be detected is a forward light scene.
It should be noted that, the first preset distribution rule is that the brightness value of the middle area of the image is larger than the brightness value of the surrounding area, and the second preset distribution rule is that the brightness value of the middle area of the image is smaller than the brightness value of the surrounding area. That is, based on the luminance value of each region, when the luminance value of the middle region of the image is larger than the luminance values of the surrounding regions, the scene of the image to be detected is determined to be a backlight scene, whereas when the luminance value of the middle region of the image is smaller than the luminance values of the surrounding regions, the scene is determined to be a forward light scene.
It will be appreciated that the division into the middle area and the surrounding area may be made according to actual needs. For example, the four regions at the center of the image may be taken as the middle area, and the regions other than the middle area as the surrounding area.
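The decision of steps S401 to S404 might be sketched as follows; taking the central 2×2 blocks of a 16×16 grid as the middle area follows the example in the preceding paragraph, and the mapping of "middle brighter than surround" to backlight transcribes the first preset distribution rule above.

    import numpy as np

    def backlit_or_front_lit(regions, grid: int = 16) -> str:
        """Distinguish backlight from forward light (steps S401-S404)."""
        # Assumption: the mean of all RGB values as a block's average brightness.
        means = np.array([reg.mean() for reg in regions]).reshape(grid, grid)
        c = grid // 2
        middle = means[c - 1:c + 1, c - 1:c + 1].mean()  # four central blocks
        mask = np.ones((grid, grid), dtype=bool)
        mask[c - 1:c + 1, c - 1:c + 1] = False
        surround = means[mask].mean()
        return "backlight" if middle > surround else "forward light"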
After determining the scene of the image to be detected according to the image gray information, corresponding image processing operations can be performed according to the corresponding scene, so as to improve photographing quality and photographing effect.
Optionally, in an embodiment, after determining the scene of the image to be detected according to the gray histogram, the method may further include: and executing corresponding image processing operation according to the scene of the image to be detected.
In a specific application, the same operation is adopted for a backlight scene and a forward light scene, for example, the HDR function is automatically turned on; for non-backlight or non-forward-light scenes, the corresponding image processing operations are performed.
The corresponding image processing operations in different scenes are corresponding operations in the prior art, and are not described herein.
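Tying the sketches together, a hypothetical end-to-end flow (assuming the functions defined in the sketches above, and a random image standing in for a captured frame) could look like this:

    import numpy as np

    image = np.random.randint(0, 256, size=(480, 640, 3)).astype(np.float64)
    regions = divide_into_regions(image, grid=16)
    labels = [classify_region(reg) for reg in regions]
    hist = masked_gray_histogram(regions, labels)
    if is_back_or_front_lit(hist, threshold=1.05):
        # e.g. automatically enable the HDR function for this shot
        print("scene:", backlit_or_front_lit(regions, grid=16))
    else:
        print("scene: non-backlight / non-forward-light")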
In this embodiment, the white area and/or the black area in the image to be detected is determined, and the gray information is counted based on the areas other than the white and black areas; that is, the gray information is counted after the black and white areas in the image are removed, and scene detection is then performed based on that gray information. Since large black or white objects in the image are excluded, the gray information of the scene accords with the actual brightness, misjudgment caused by an excessive difference in brightness information is avoided, and the accuracy of scene detection based on the gray information is therefore higher.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Example two
Referring to fig. 5, a schematic block diagram of a scene detection device according to an embodiment of the present application is provided, where the device may include:
an acquisition module 51, configured to acquire an image to be detected;
the brightness value calculating module 52 is configured to divide the image to be detected into a preset number of areas, and calculate an average brightness value of each area;
a target area determining module 53, configured to determine a target area from the areas, where the target area is a white area and/or a black area;
a gray information calculation module 54, configured to extract a gray histogram of the image to be detected based on an area other than the target area;
the scene determination module 55 is configured to determine a scene of the image to be detected according to the gray level histogram.
In a possible implementation manner, the target area determining module may include:
a component calculation unit for calculating an R component, a G component, and a B component of each region;
and a determination unit configured to determine a target region from the regions based on a relationship among the R component, the G component, and the B component.
In a possible implementation, the determining unit includes:
a ratio calculating subunit, configured to calculate a first ratio between the R component and the G component, a second ratio between the B component and the G component, and a third ratio between the R component and the B component of each region;
an addition and calculation subunit for calculating an addition sum of the R component, the G component, and the B component of each region;
a first judging subunit, configured to judge the area as a white area when a preset judging condition is met, where the preset judging condition is that the first ratio is smaller than a first preset threshold or larger than a fifth preset threshold, the second ratio is smaller than the first preset threshold or larger than the fifth preset threshold, the third ratio is smaller than the first preset threshold or larger than the fifth preset threshold, and the sum is larger than a second preset threshold and smaller than a third preset threshold;
and the second judging subunit is used for judging the area as a black area when the first ratio, the second ratio and the third ratio are all smaller than a first preset threshold value and the sum is smaller than a fourth preset threshold value.
In a possible implementation manner, the apparatus may further include:
and the marking module is used for marking the target area by a preset mark.
In one possible implementation, the scene determining module may include:
the mean value calculation unit is used for calculating the mean value of the gray level histogram;
a variance calculating unit for calculating a variance of the gray histogram based on the mean value;
the judging unit is used for judging whether the variance is larger than a fifth preset threshold value or not;
the first scene determining unit is used for determining that the scene of the image to be detected is a backlight scene or a forward light scene when the variance is larger than a fifth preset threshold value;
and the second scene determining unit is used for determining that the scene of the image to be detected is a non-backlight scene or a non-forward light scene when the variance is smaller than or equal to a fifth preset threshold value.
In a possible implementation manner, the scene determination module may further include:
an average luminance value calculation unit for calculating an average luminance value of each region;
the distribution rule statistics unit is used for counting the distribution rule of the brightness value of the image to be detected according to the average brightness value;
the backlight scene determining unit is used for determining that the scene of the image to be detected is a backlight scene when the distribution rule of the brightness value accords with a first preset distribution rule;
and the forward light scene determining unit is used for determining that the scene of the image to be detected is a forward light scene when the distribution rule of the brightness value accords with a second preset distribution rule.
In a possible implementation manner, the apparatus may further include:
and the execution module is used for executing corresponding image processing operation according to the scene of the image to be detected.
It should be noted that the scene detection device in this embodiment corresponds one-to-one with the scene detection method in the above embodiment; for related descriptions, refer to the corresponding content above, which is not repeated here.
In this embodiment, the white area and/or the black area in the image to be detected is determined, and the gray information is counted based on the areas other than the white and black areas; that is, the gray information is counted after the black and white areas in the image are removed, and scene detection is then performed based on that gray information. Since large black or white objects in the image are excluded, the gray information of the scene accords with the actual brightness, misjudgment caused by an excessive difference in brightness information is avoided, and the accuracy of scene detection based on the gray information is therefore higher.
Example III
Fig. 6 is a schematic diagram of a terminal device according to an embodiment of the present application. As shown in fig. 6, the terminal device 6 of this embodiment includes: a processor 60, a memory 61 and a computer program 62 stored in said memory 61 and executable on said processor 60. The processor 60, when executing the computer program 62, implements the steps of the various scene detection method embodiments described above, such as steps S101 to S105 shown in fig. 1. Alternatively, the processor 60, when executing the computer program 62, performs the functions of the modules or units of the apparatus embodiments described above, such as the functions of the modules 51 to 55 shown in fig. 5.
By way of example, the computer program 62 may be divided into one or more modules or units, which are stored in the memory 61 and executed by the processor 60 to complete the present application. The one or more modules or units may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program 62 in the terminal device 6. For example, the computer program 62 may be divided into an acquisition module, a luminance value calculation module, a target area determination module, a gradation information calculation module, and a scene determination module, each of which specifically functions as follows:
the acquisition module is used for acquiring the image to be detected;
the brightness value calculation module is used for dividing the image to be detected into a preset number of areas and calculating the average brightness value of each area;
the target area determining module is used for determining a target area from the areas, wherein the target area is a white area and/or a black area;
the gray information calculation module is used for extracting a gray histogram of the image to be detected based on the area except the target area;
and the scene determining module is used for determining the scene of the image to be detected according to the gray level histogram.
The terminal device 6 may be a computing device such as a desktop computer, a notebook computer, a palm computer, a cloud server, etc. The terminal device may include, but is not limited to, a processor 60, a memory 61. It will be appreciated by those skilled in the art that fig. 6 is merely an example of the terminal device 6 and does not constitute a limitation of the terminal device 6, and may include more or less components than illustrated, or may combine certain components, or different components, e.g., the terminal device may further include an input-output device, a network access device, a bus, etc.
The processor 60 may be a central processing unit (Central Processing Unit, CPU), another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the terminal device 6. Further, the memory 61 may include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used to store the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis. For parts that are not described or detailed in one embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus, terminal device and method may be implemented in other manners. For example, the apparatus, terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules or units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of each method embodiment described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. A scene detection method, comprising:
acquiring an image to be detected;
dividing the image to be detected into a preset number of areas;
determining a target area from the areas, wherein the target area is a white area and/or a black area;
extracting a gray level histogram of the image to be detected based on the region except the target region;
determining a scene of the image to be detected according to the gray level histogram;
wherein the determining a target area from the areas comprises:
calculating an R component, a G component and a B component of each region;
the target region is determined from the regions according to the relationship among the R component, the G component, and the B component.
2. The scene detection method according to claim 1, wherein the determining the target region from the region according to a relation among the R component, G component, and B component includes:
calculating a first ratio between an R component and a G component, a second ratio between a B component and a G component and a third ratio between an R component and a B component of each region;
calculating the sum of the R component, the G component and the B component of each region;
when a preset judging condition is met, judging the area as a white area, wherein the preset judging condition is that the first ratio is smaller than a first preset threshold or larger than a fifth preset threshold, the second ratio is smaller than the first preset threshold or larger than the fifth preset threshold, the third ratio is smaller than the first preset threshold or larger than the fifth preset threshold, and the sum is larger than a second preset threshold and smaller than a third preset threshold;
and when the first ratio, the second ratio and the third ratio are all smaller than the first preset threshold value and the sum is smaller than a fourth preset threshold value, judging the area as a black area.
3. The scene detection method according to claim 1, further comprising, after said determining the target area from the areas:
and marking the target area by a preset mark.
4. A scene detection method as claimed in any one of claims 1 to 3, wherein said determining a scene of said image to be detected from said gray level histogram comprises:
calculating the mean value of the gray level histogram;
calculating the variance of the gray level histogram according to the mean value;
judging whether the variance is larger than a fifth preset threshold value or not;
when the variance is larger than the fifth preset threshold, determining that the scene of the image to be detected is a backlight scene or a forward light scene;
and when the variance is smaller than or equal to the fifth preset threshold, determining that the scene of the image to be detected is a non-backlight scene or a non-forward light scene.
5. The scene detection method according to claim 4, further comprising, after said determining that the scene of the image to be detected is a backlight scene or a forward light scene:
calculating an average brightness value of each region;
counting the brightness value distribution rule of the image to be detected according to the average brightness value;
when the brightness value distribution rule accords with a first preset distribution rule, the scene of the image to be detected is a backlight scene;
and when the brightness value distribution rule accords with a second preset distribution rule, the scene of the image to be detected is a forward light scene.
6. The scene detection method according to claim 4, further comprising, after said determining a scene of the image to be detected from the gray level histogram:
and executing corresponding image processing operation according to the scene of the image to be detected.
7. A scene detection device, comprising:
the acquisition module is used for acquiring the image to be detected;
the brightness value calculation module is used for dividing the image to be detected into a preset number of areas and calculating the average brightness value of each area;
a target area determining module, configured to determine a target area from the areas, where the target area is a white area and/or a black area;
the gray information calculation module is used for extracting a gray histogram of the image to be detected based on the area except the target area;
the scene determining module is used for determining the scene of the image to be detected according to the gray level histogram;
wherein the target area determining module includes:
a component calculation unit configured to calculate an R component, a G component, and a B component for each of the regions;
and a determination unit configured to determine the target region from the regions according to a relationship among the R component, the G component, and the B component.
8. The scene detection device according to claim 7, wherein the determination unit includes:
a ratio calculating subunit, configured to calculate a first ratio between the R component and the G component, a second ratio between the B component and the G component, and a third ratio between the R component and the B component for each of the regions;
an addition and calculation subunit for calculating an addition sum of the R component, the G component, and the B component of each of the regions;
a first judging subunit, configured to judge the area as a white area when a preset judging condition is met, where the preset judging condition is that the first ratio is smaller than a first preset threshold or larger than a fifth preset threshold, the second ratio is smaller than the first preset threshold or larger than the fifth preset threshold, the third ratio is smaller than the first preset threshold or larger than the fifth preset threshold, and the sum is larger than a second preset threshold and smaller than a third preset threshold;
and the second judging subunit is used for judging the area as a black area when the first ratio, the second ratio and the third ratio are smaller than the first preset threshold value and the sum is smaller than a fourth preset threshold value.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, characterized in that it stores a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
CN201811591988.8A 2018-12-25 2018-12-25 Scene detection method, device, terminal equipment and computer readable storage medium Active CN111368587B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811591988.8A CN111368587B (en) 2018-12-25 2018-12-25 Scene detection method, device, terminal equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811591988.8A CN111368587B (en) 2018-12-25 2018-12-25 Scene detection method, device, terminal equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111368587A CN111368587A (en) 2020-07-03
CN111368587B true CN111368587B (en) 2024-04-16

Family

ID=71208076

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811591988.8A Active CN111368587B (en) 2018-12-25 2018-12-25 Scene detection method, device, terminal equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111368587B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111770285B (en) * 2020-07-13 2022-02-18 浙江大华技术股份有限公司 Exposure brightness control method and device, electronic equipment and storage medium
CN112291548B (en) * 2020-10-28 2023-01-31 Oppo广东移动通信有限公司 White balance statistical method, device, mobile terminal and storage medium
CN113052836A (en) * 2021-04-21 2021-06-29 深圳壹账通智能科技有限公司 Electronic identity photo detection method and device, electronic equipment and storage medium
CN113340817B (en) * 2021-05-26 2023-05-05 奥比中光科技集团股份有限公司 Light source spectrum and multispectral reflectivity image acquisition method and device and electronic equipment
CN115526788A (en) * 2022-03-18 2022-12-27 荣耀终端有限公司 Image processing method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105809146A (en) * 2016-03-28 2016-07-27 北京奇艺世纪科技有限公司 Image scene recognition method and device
CN105959585A (en) * 2016-05-12 2016-09-21 深圳众思科技有限公司 Multi-grade backlight detection method and device
CN108337433A (en) * 2018-03-19 2018-07-27 广东欧珀移动通信有限公司 A kind of photographic method, mobile terminal and computer readable storage medium
CN108337448A (en) * 2018-04-12 2018-07-27 Oppo广东移动通信有限公司 High-dynamic-range image acquisition method, device, terminal device and storage medium
CN108805103A (en) * 2018-06-29 2018-11-13 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN108848363A (en) * 2018-05-31 2018-11-20 江苏乙生态农业科技有限公司 A kind of auto white balance method suitable for large scene

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006319714A (en) * 2005-05-13 2006-11-24 Konica Minolta Photo Imaging Inc Method, apparatus, and program for processing image

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105809146A (en) * 2016-03-28 2016-07-27 北京奇艺世纪科技有限公司 Image scene recognition method and device
CN105959585A (en) * 2016-05-12 2016-09-21 深圳众思科技有限公司 Multi-grade backlight detection method and device
CN108337433A (en) * 2018-03-19 2018-07-27 广东欧珀移动通信有限公司 A kind of photographic method, mobile terminal and computer readable storage medium
CN108337448A (en) * 2018-04-12 2018-07-27 Oppo广东移动通信有限公司 High-dynamic-range image acquisition method, device, terminal device and storage medium
CN108848363A (en) * 2018-05-31 2018-11-20 江苏乙生态农业科技有限公司 A kind of auto white balance method suitable for large scene
CN108805103A (en) * 2018-06-29 2018-11-13 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium

Also Published As

Publication number Publication date
CN111368587A (en) 2020-07-03

Similar Documents

Publication Publication Date Title
CN111368587B (en) Scene detection method, device, terminal equipment and computer readable storage medium
CN109886997B (en) Identification frame determining method and device based on target detection and terminal equipment
CN111028189B (en) Image processing method, device, storage medium and electronic equipment
EP3496383A1 (en) Image processing method, apparatus and device
CN107403421B (en) Image defogging method, storage medium and terminal equipment
CN111311482B (en) Background blurring method and device, terminal equipment and storage medium
EP3798975A1 (en) Method and apparatus for detecting subject, electronic device, and computer readable storage medium
CN110490204B (en) Image processing method, image processing device and terminal
CN110049250B (en) Camera shooting state switching method and device
CN107909569B (en) Screen-patterned detection method, screen-patterned detection device and electronic equipment
CN107908998B (en) Two-dimensional code decoding method and device, terminal equipment and computer readable storage medium
CN106651797B (en) Method and device for determining effective area of signal lamp
CN111654637B (en) Focusing method, focusing device and terminal equipment
CN111311481A (en) Background blurring method and device, terminal equipment and storage medium
CN113808135B (en) Image brightness abnormality detection method, electronic device, and storage medium
CN115082350A (en) Stroboscopic image processing method and device, electronic device and readable storage medium
CN112070682A (en) Method and device for compensating image brightness
CN111340722B (en) Image processing method, processing device, terminal equipment and readable storage medium
CN111539975B (en) Method, device, equipment and storage medium for detecting moving object
CN111222446B (en) Face recognition method, face recognition device and mobile terminal
CN115690747B (en) Vehicle blind area detection model test method and device, electronic equipment and storage medium
CN115760653B (en) Image correction method, device, equipment and readable storage medium
CN108810407B (en) Image processing method, mobile terminal and computer readable storage medium
CN113391779B (en) Parameter adjusting method, device and equipment for paper-like screen
CN112629828B (en) Optical information detection method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 516006 TCL science and technology building, No. 17, Huifeng Third Road, Zhongkai high tech Zone, Huizhou City, Guangdong Province

Applicant after: TCL Technology Group Co.,Ltd.

Address before: 516006 Guangdong province Huizhou Zhongkai hi tech Development Zone No. nineteen District

Applicant before: TCL Corp.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant