CN111343385B - Method, device, equipment and storage medium for determining environment brightness - Google Patents


Info

Publication number
CN111343385B
Authority
CN
China
Prior art keywords
environment
determining
light
image
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010207240.4A
Other languages
Chinese (zh)
Other versions
CN111343385A (en)
Inventor
苏英菲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Reach Automotive Technology Shenyang Co Ltd
Original Assignee
Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Reach Automotive Technology Shenyang Co Ltd filed Critical Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority to CN202010207240.4A priority Critical patent/CN111343385B/en
Publication of CN111343385A publication Critical patent/CN111343385A/en
Application granted granted Critical
Publication of CN111343385B publication Critical patent/CN111343385B/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a method, an apparatus, a device, and a storage medium for determining ambient brightness. The method includes: acquiring a first image, where a preset region of the first image comprises a sky image; and determining a brightness result of the environment in which an object is located according to the pixel values, in HSV space, of the pixel points in the preset region. In practice, when the object is in an environment with strong light, the pixel values of the pixel points in the captured sky image differ greatly from those captured when the object is in an environment with weak light. Whether the object is currently in an environment with strong light or with weak light can therefore be determined from the pixel values of the pixel points in the sky image, without being affected by factors such as weather, humidity, and seasonal variation, which improves the accuracy of identifying the lighting conditions of the environment in which the object is located.

Description

Method, device, equipment and storage medium for determining environment brightness
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to a method, an apparatus, a device, and a storage medium for determining ambient brightness.
Background
In many practical scenarios, it is necessary to identify the brightness of the environment in which an object is located so that corresponding processing can be performed according to the identification result. For example, in the field of automatic driving, when a vehicle is in a low-light environment (such as in the evening or in dark, overcast weather), the low beam and/or high beam of the vehicle generally needs to be turned on to facilitate safe driving.
A simple identification scheme determines the darkness of the environment in which the object is located based on time. For example, from 7:00 to 19:00 Beijing time the object is generally considered to be in a bright environment (corresponding to daytime), and from 0:00 to 7:00 or from 19:00 to 24:00 it is generally considered to be in a dark environment (corresponding to nighttime). However, the brightness of the environment is influenced by multiple factors, such as weather, humidity, and seasonal changes in day length, so the accuracy of determining the brightness of the environment from time alone is generally low.
Disclosure of Invention
The embodiments of the present application provide a method, an apparatus, a device, and a storage medium for determining ambient brightness, so as to improve the accuracy of identifying the brightness of the environment in which an object is located.
In a first aspect, an embodiment of the present application provides a method for determining ambient brightness, including:
acquiring a first image, wherein a preset region on the first image comprises a sky image;
and determining a brightness result of the environment in which an object is located according to the pixel value, in hue-saturation-value (HSV) space, of each pixel point in the preset region.
In some possible embodiments, the determining a brightness result of the environment in which the object is located according to the pixel value of each pixel point in the preset region in HSV space includes:
calculating an average value of the pixel values of all pixel points in the preset region in the value (V) channel of the HSV space;
when the average value is greater than a first preset threshold, determining that the object is in an environment with strong light;
and when the average value is not greater than the first preset threshold, determining that the object is in an environment with weak light.
In some possible embodiments, the acquiring a first image includes:
acquiring a plurality of target images, wherein the plurality of target images comprise the first image;
and the determining a brightness result of the environment in which the object is located according to the pixel value of each pixel point in the preset region in the HSV space includes:
determining a brightness result of the environment in which the object is located according to the pixel value, in the HSV space, of each pixel point in the preset region on each target image.
In some possible embodiments, the determining a brightness result of the environment in which the object is located according to the pixel value of each pixel point in the preset region on each target image in the HSV space includes:
calculating, for each target image, an average value of the pixel values of all pixel points in the preset region in the V channel of the HSV space;
determining a median from the average values respectively corresponding to the plurality of target images;
and determining a brightness result of the environment in which the object is located according to the median.
In some possible embodiments, the determining a brightness result of the environment in which the object is located according to the median includes:
when the median is greater than a second preset threshold, determining that the object is in an environment with strong light;
and when the median is not greater than the second preset threshold, determining that the object is in an environment with weak light.
In some possible embodiments, the acquiring a plurality of target images includes:
continuously photographing at equal time intervals with a photographing device to obtain the plurality of target images.
In some possible embodiments, the preset region on the first image is a region formed by the pixel points in the 1st row to the Nth row of the first image, where N is a positive integer.
In a second aspect, an embodiment of the present application further provides an apparatus for determining ambient brightness, where the apparatus includes:
an acquisition module, configured to acquire a first image, wherein a preset region on the first image comprises a sky image;
and a determining module, configured to determine a brightness result of the environment in which an object is located according to the pixel value, in hue-saturation-value (HSV) space, of each pixel point in the preset region.
In some possible embodiments, the determining module includes:
a first calculation unit, configured to calculate an average value of the pixel values of all pixel points in the preset region in the value (V) channel of the HSV space;
a first determining unit, configured to determine that the object is in an environment with strong light when the average value is greater than a first preset threshold;
and a second determining unit, configured to determine that the object is in an environment with weak light when the average value is not greater than the first preset threshold.
In some possible embodiments, the acquisition module is specifically configured to acquire a plurality of target images, where the plurality of target images include the first image;
and the determining module is specifically configured to determine a brightness result of the environment in which the object is located according to the pixel value, in the HSV space, of each pixel point in the preset region on each target image.
In some possible embodiments, the determining module includes:
a second calculation unit, configured to calculate, for each target image, an average value of the pixel values of all pixel points in the preset region in the V channel of the HSV space;
a third determining unit, configured to determine a median from the average values respectively corresponding to the plurality of target images;
and a fourth determining unit, configured to determine a brightness result of the environment in which the object is located according to the median.
In some possible embodiments, the fourth determining unit includes:
a first determining subunit, configured to determine that the object is in an environment with strong light when the median is greater than a second preset threshold;
and a second determining subunit, configured to determine that the object is in an environment with weak light when the median is not greater than the second preset threshold.
In some possible embodiments, the acquisition module is specifically configured to continuously photograph at equal time intervals with a photographing device to obtain the plurality of target images.
In some possible embodiments, the preset region on the first image is a region formed by the pixel points in the 1st row to the Nth row of the first image, where N is a positive integer.
In a third aspect, an embodiment of the present application further provides an apparatus, where the apparatus includes a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform, according to instructions in the program code, the method for determining ambient brightness according to any implementation of the first aspect above.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium for storing a computer program, where the computer program is used to execute the method for determining ambient brightness according to any implementation of the first aspect above.
In the implementations of the embodiments of the present application, a first image may be acquired, where a preset region on the first image comprises a sky image, and a brightness result of the environment in which an object is located may then be determined according to the pixel value of each pixel point in the preset region in HSV space. It can be understood that, in practice, when the object is in an environment with strong light, the pixel values of the pixel points in the captured sky image differ greatly from those captured when the object is in an environment with weak light. Whether the object is currently in an environment with strong light or with weak light can therefore be determined from the pixel values of the pixel points in the sky image, without being affected by factors such as weather, humidity, and seasonal variation, which improves the accuracy of identifying the lighting conditions of the environment in which the object is located.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. It is apparent that the following drawings show only some embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an exemplary application scenario in an embodiment of the present application;
FIG. 2 is a schematic flow chart of a method for determining ambient brightness according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an apparatus for determining ambient brightness according to an embodiment of the present application;
FIG. 4 is a schematic diagram of the hardware structure of an apparatus in an embodiment of the present application.
Detailed Description
In application scenarios such as automatic driving and street lamp switching control, it is usually necessary to identify the brightness of the environment in which an object (such as a vehicle or a street lamp) is located, so that corresponding control processing can be performed. For example, in an automatic driving scenario, when a vehicle is in a dark environment, the lights on the vehicle need to be turned on to illuminate the road ahead; in an automatic street-lamp switching scenario, the street lamp is usually turned off when the light is strong and turned on when the light is weak.
In general, the brightness of the environment in which an object currently is can be inferred from the current time: if the current time falls within the range corresponding to "day", the object is determined to be in an environment with strong light, i.e., a bright environment; if it falls within the range corresponding to "night", the object is determined to be in an environment with weak light, i.e., a dark environment. In practical applications, however, factors such as weather (e.g., dark, overcast weather), humidity, and seasonal changes in day length (e.g., longer nights in winter and shorter nights in summer) affect this time-based judgment, so the identification accuracy is low. Alternatively, a light sensor may be configured on the object to detect light, but the detectable area of a light sensor is generally small and covers only part of the environment in which the object is located; the light intensity within that small area does not necessarily represent the light intensity of the whole environment, so this approach also fails to meet the requirements of practical applications.
Based on this, the embodiments of the present application provide a method for determining ambient brightness, aiming to improve the accuracy of identifying the brightness of the environment in which an object is located. Specifically, a first image may be acquired, where a preset region on the first image comprises a sky image, and a brightness result of the environment in which the object is located may then be determined according to the pixel value of each pixel point in the preset region in HSV space. It can be understood that, in practice, when the object is in an environment with strong light, the pixel values of the pixel points in the captured sky image differ greatly from those captured when the object is in an environment with weak light. Whether the object is currently in an environment with strong light or with weak light can therefore be determined from the pixel values of the pixel points in the sky image, without being affected by factors such as weather, humidity, and seasonal variation, which improves the accuracy of identifying the lighting conditions of the environment in which the object is located.
As an example, the embodiments of the present application may be applied to the exemplary application scenario shown in FIG. 1. In this scenario, a camera 101 may be disposed on a vehicle 100 to capture the surrounding environment, and the captured image (i.e., the first image) includes an image of part of the sky (i.e., the sky image). A controller 102 on the vehicle 100 may then acquire the first image from the camera 101 and determine a brightness result of the environment in which the vehicle is located according to the pixel value, in HSV space, of each pixel point in the preset region on the first image, so as to turn the vehicle lights on or off according to the result.
It is to be understood that the above scenario is only one example provided in the embodiments of the present application, and the embodiments are not limited to this scenario. For example, the processing for determining the brightness of the environment in which the object is located may also be performed by a server: the vehicle 100 may send the first image to the server, and the server processes the first image to obtain a corresponding recognition result and feeds it back to the vehicle 100; or the determination may be completed by the vehicle 100 and the server in cooperation. Moreover, the embodiments of the present application may also be applied to scenarios such as controlling the switching of street lamps. In summary, the embodiments of the present application may be applied in any suitable scenario and are not limited to the examples described above.
In order to make the above objects, features, and advantages of the present application more comprehensible, various non-limiting embodiments of the present application are described below with reference to the accompanying drawings. It is to be understood that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments derived by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Referring to FIG. 2, FIG. 2 is a schematic flow chart of a method for determining ambient brightness in an embodiment of the present application. The method may be applied to the vehicle 100 shown in FIG. 1, may be applied to a server, may be implemented by the vehicle 100 and the server in cooperation, or may be executed by other devices. The method specifically includes the following steps:
S201: a first image is acquired, where a preset region on the first image comprises a sky image.
In this embodiment, a captured image may be used to determine whether the object is in a bright environment or a dark environment. The object may be the vehicle 100 in the scenario example of FIG. 1, or another device in practical applications. For example, when the object is a vehicle, an image of the vehicle's surroundings (hereinafter referred to as the first image) may be captured by a camera disposed on the vehicle, so that the brightness of the environment in which the vehicle is located can be determined from the first image.
It should be noted that the captured first image may include a sky image, i.e., the part of the first image showing the sky within the camera's field of view. In general, if the sky is dark (e.g., the night sky), the sky region of the image has low brightness, for example appearing black or near black, and the light of the environment in which an object under that sky is located is correspondingly weak; if the sky is bright (e.g., the daytime sky), the sky region of the image has high brightness, for example appearing blue or white, and the light of the environment in which the object is located is correspondingly strong. Therefore, in this embodiment, whether the current sky is dark or bright may be determined from the sky image included in the first image.
The sky image may be located in a preset region of the first image. For example, in the scenario in which the object is the vehicle 100 in FIG. 1, the upper part of the image captured by the camera 101 is usually the sky, while the lower part may contain roads, vehicles, pedestrians, facilities, buildings, and the like. Therefore, in an embodiment, the preset region in step S201 may be the region formed by the pixel points in the 1st row to the Nth row of the first image, that is, the image region of the first N rows of pixel points at the top of the first image, where N is a positive integer greater than 0. Of course, in other possible embodiments, the preset region may be set in other ways; for example, its position on the image may be set according to the shooting angle and orientation of the camera, which is not limited in this embodiment.
S202: a brightness result of the environment in which the object is located is determined according to the pixel value of each pixel point in the preset region in HSV space.
In this embodiment, the brightness of the environment in which the object is located may be determined according to pixel values, specifically the pixel value, in HSV space, of each pixel point of the preset region on the first image. The HSV (hue, saturation, value) space is a color space created according to the intuitive characteristics of color. In an exemplary embodiment, the average of the value-channel (V-channel) pixel values of all pixel points in the preset region may be calculated. The average may then be compared with a first preset threshold: when the average is greater than the first preset threshold, the sky region of the first image is relatively bright, i.e., the photographed sky has strong light, so the object can be determined to be in an environment with strong light; when the average is not greater than the first preset threshold, the sky region of the first image is relatively dark, i.e., the photographed sky has weak light, so the object can be determined to be in an environment with weak light.
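For illustration only, this V-channel averaging step could be sketched in Python with OpenCV and NumPy roughly as follows; the function name, the choice of N = 100 rows for the preset region, and the threshold of 100 are assumptions made for the example and are not values prescribed by this application.

```python
import cv2
import numpy as np

def is_bright_environment(image_bgr, n_rows=100, v_threshold=100.0):
    """Sketch of step S202: average the V channel over the preset region
    (the first n_rows rows) and compare it with a first preset threshold.
    n_rows and v_threshold are illustrative values only."""
    # Preset region: the first N rows of the image.
    preset_region = image_bgr[:n_rows, :, :]
    # Convert the region from BGR to HSV space.
    hsv = cv2.cvtColor(preset_region, cv2.COLOR_BGR2HSV)
    # In OpenCV's HSV representation, the V channel is the third channel (0..255).
    v_mean = float(np.mean(hsv[:, :, 2]))
    # Greater than the threshold -> strong light; otherwise weak light.
    return v_mean > v_threshold
```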
Of course, the brightness result of the environment in which the object is located may be determined from the pixel values of the pixel points of the preset region in HSV space in other ways. For example, the number of first pixel points and the number of second pixel points in the preset region may be counted, where a first pixel point is a pixel point whose V-channel value in the HSV space is greater than a threshold, and a second pixel point is one whose V-channel value is not greater than the threshold. When the number of first pixel points is greater than the number of second pixel points, the object can be determined to be in an environment with strong light; otherwise, the object can be determined to be in an environment with weak light.
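Under the same assumptions (hypothetical function name, example region size and pixel-level threshold), the pixel-counting alternative might look like this:

```python
import cv2
import numpy as np

def is_bright_by_pixel_count(image_bgr, n_rows=100, pixel_threshold=100):
    """Sketch of the alternative: count pixel points whose V value exceeds
    a threshold (first pixel points) versus the rest (second pixel points)."""
    hsv = cv2.cvtColor(image_bgr[:n_rows], cv2.COLOR_BGR2HSV)
    v_channel = hsv[:, :, 2]
    first_count = int(np.count_nonzero(v_channel > pixel_threshold))
    second_count = v_channel.size - first_count
    # More bright pixel points than dark ones -> strong-light environment.
    return first_count > second_count
```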
Further, in practical applications, to improve the reliability of the determined brightness result of the environment in which the object is located, this embodiment may also determine the brightness of that environment based on multiple images.
As an example, a plurality of target images may be acquired, the plurality of target images including the first image, so that the brightness result of the environment in which the object is located can be determined according to the pixel value, in HSV space, of each pixel point in the preset region on each target image. The preset region may be the same on each target image, for example the image region formed by the first N rows of pixel points.
In a specific implementation, for each target image, the average of the V-channel pixel values of all pixel points in the preset region on that image may be calculated, yielding one average per target image. A median, i.e., the value in the middle, can then be determined from the averages corresponding to the plurality of target images, and the brightness result of the environment in which the object is located can be determined from the magnitude of the median. As an example, the median may be compared with a second preset threshold: when the median is greater than the second preset threshold, the object may be determined to be in an environment with strong light; when the median is less than or equal to the second preset threshold, the object may be determined to be in an environment with weak light.
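A minimal sketch of this median-based variant, again with an assumed region size and an assumed second threshold, could be:

```python
import cv2
import numpy as np

def is_bright_from_images(images_bgr, n_rows=100, median_threshold=100.0):
    """Sketch of the multi-image variant: compute each image's V-channel
    average over the preset region, then compare the median of those
    averages with a second preset threshold (illustrative value)."""
    means = []
    for image in images_bgr:
        hsv = cv2.cvtColor(image[:n_rows], cv2.COLOR_BGR2HSV)
        means.append(float(np.mean(hsv[:, :, 2])))
    # With an odd number of images, np.median returns the middle value.
    return float(np.median(means)) > median_threshold
```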
In this way, even if the sky in the camera's field of view is occasionally blocked by obstacles, so that some images contain no sky image, or a few images lack a sky image for other incidental reasons, the influence of those few images on recognition accuracy can be reduced on the basis of the remaining images, improving the reliability of the determined brightness result of the environment in which the object is located.
Of course, in practical applications, the brightness result of the environment in which the object is located may be finally determined from the plurality of target images in other ways. For example, a brightness result may be obtained for each target image individually, in a manner similar to the single-image determination described above for the first image, yielding one brightness result per target image. The number of results indicating a strong-light environment (hereinafter referred to as the first number) and the number of results indicating a weak-light environment (hereinafter referred to as the second number) can then be counted. When the first number is greater than the second number, the object is finally determined to be in an environment with strong light; when the first number is not greater than the second number, the object is finally determined to be in an environment with weak light. In a further embodiment, the number of target images may be an odd number.
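The per-image voting variant could be sketched as follows; the function names and example values are assumptions, and the per-image result here uses the pixel-counting rule described earlier:

```python
import cv2
import numpy as np

def _is_bright_single(image_bgr, n_rows=100, pixel_threshold=100):
    """Per-image result: more bright V-channel pixel points than dark ones."""
    hsv = cv2.cvtColor(image_bgr[:n_rows], cv2.COLOR_BGR2HSV)
    bright = int(np.count_nonzero(hsv[:, :, 2] > pixel_threshold))
    return bright > hsv[:, :, 2].size - bright

def is_bright_by_vote(images_bgr, n_rows=100, pixel_threshold=100):
    """Sketch of the voting variant: count how many target images yield a
    strong-light result versus a weak-light result and take the majority;
    an odd number of target images avoids ties."""
    first_number = sum(
        1 for image in images_bgr
        if _is_bright_single(image, n_rows, pixel_threshold)
    )
    second_number = len(images_bgr) - first_number
    return first_number > second_number
```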
In an exemplary way of acquiring the plurality of target images, a photographing device may photograph continuously at equal time intervals, so that a sequence of target images is obtained. For example, while the vehicle is running, a camera disposed on the vehicle may take a shot every 0.5 seconds, so that a plurality of target images are obtained over a period of time. In this way, even if the sky within the camera's field of view is blocked at some moments by obstacles such as trees and buildings, so that some of the captured images contain no sky image, the determined brightness result can be corrected on the basis of target images containing a sky image captured at other moments, which improves the reliability of the determined brightness result of the environment in which the object is located.
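A rough sketch of acquiring target images at equal time intervals with OpenCV is given below; the camera index and image count are assumptions, while the 0.5-second interval follows the example in the description above.

```python
import time
import cv2

def capture_target_images(camera_index=0, count=5, interval_s=0.5):
    """Sketch of acquiring target images by shooting at equal time
    intervals; camera_index and count are example values only."""
    cap = cv2.VideoCapture(camera_index)
    images = []
    try:
        while len(images) < count:
            ok, frame = cap.read()
            if ok:
                images.append(frame)
            time.sleep(interval_s)  # wait before the next shot
    finally:
        cap.release()
    return images
```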
In this embodiment, a first image may be acquired, where a preset region on the first image comprises a sky image, and a brightness result of the environment in which an object is located may then be determined according to the pixel value of each pixel point in the preset region in HSV space. It can be understood that, in practice, when the object is in an environment with strong light, the pixel values of the pixel points in the captured sky image differ greatly from those captured when the object is in an environment with weak light. Whether the object is currently in an environment with strong light or with weak light can therefore be determined from the pixel values of the pixel points in the sky image, without being affected by factors such as weather, humidity, and seasonal variation, which improves the accuracy of identifying the lighting conditions of the environment in which the object is located.
In addition, an embodiment of the present application further provides an apparatus for determining ambient brightness. Referring to FIG. 3, FIG. 3 is a schematic structural diagram of an apparatus for determining ambient brightness according to an embodiment of the present application; the apparatus 300 may include:
an acquisition module 301, configured to acquire a first image, where a preset region on the first image comprises a sky image;
and a determining module 302, configured to determine a brightness result of the environment in which an object is located according to the pixel value, in hue-saturation-value (HSV) space, of each pixel point in the preset region.
In some possible embodiments, the determining module 302 includes:
a first calculation unit, configured to calculate an average value of the pixel values of all pixel points in the preset region in the value (V) channel of the HSV space;
a first determining unit, configured to determine that the object is in an environment with strong light when the average value is greater than a first preset threshold;
and a second determining unit, configured to determine that the object is in an environment with weak light when the average value is not greater than the first preset threshold.
In some possible embodiments, the acquisition module 301 is specifically configured to acquire a plurality of target images, where the plurality of target images include the first image;
and the determining module 302 is specifically configured to determine a brightness result of the environment in which the object is located according to the pixel value, in the HSV space, of each pixel point in the preset region on each target image.
In some possible embodiments, the determining module 302 includes:
a second calculation unit, configured to calculate, for each target image, an average value of the pixel values of all pixel points in the preset region in the V channel of the HSV space;
a third determining unit, configured to determine a median from the average values respectively corresponding to the plurality of target images;
and a fourth determining unit, configured to determine a brightness result of the environment in which the object is located according to the median.
In some possible embodiments, the fourth determining unit includes:
a first determining subunit, configured to determine that the object is in an environment with strong light when the median is greater than a second preset threshold;
and a second determining subunit, configured to determine that the object is in an environment with weak light when the median is not greater than the second preset threshold.
In some possible embodiments, the acquisition module 301 is specifically configured to continuously photograph at equal time intervals with a photographing device to obtain the plurality of target images.
In some possible embodiments, the preset region on the first image is a region formed by the pixel points in the 1st row to the Nth row of the first image, where N is a positive integer.
It should be noted that, because the information exchanged between and the processes executed by the modules and units of the apparatus are based on the same concept as the method embodiments of the present application, they bring about the same technical effects as those method embodiments; for details, reference may be made to the description of the foregoing method embodiments, which is not repeated here.
In this embodiment, in practice, when the object is in an environment with strong light, the pixel values of the pixel points in the captured sky image differ greatly from those captured when the object is in an environment with weak light. Whether the object is currently in an environment with strong light or with weak light can therefore be determined from the pixel values of the pixel points in the sky image, without being affected by factors such as weather, humidity, and seasonal variation, which improves the accuracy of identifying the lighting conditions of the environment in which the object is located.
In addition, an embodiment of the present application further provides an apparatus. Referring to FIG. 4, FIG. 4 is a schematic diagram of the hardware structure of an apparatus in an embodiment of the present application; the apparatus 400 may include a processor 401 and a memory 402.
Wherein the memory 402 is used for storing a computer program;
the processor 401 is configured to execute the method for determining the ambient brightness according to the computer program in the above method embodiments.
In addition, an embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium is used to store a computer program, where the computer program is used to execute the method for determining ambient brightness described in the foregoing method embodiment.
The terms "first", "second", "third", "fourth", and the like in names such as "first determining module", "first determining unit", and "first preset threshold" in the embodiments of the present application are used only for identification and do not imply an order.
From the above description of the embodiments, those skilled in the art can clearly understand that all or part of the steps of the methods in the above embodiments can be implemented by software plus a general-purpose hardware platform. Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a storage medium such as a read-only memory (ROM)/RAM, a magnetic disk, or an optical disk, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network communication device such as a router) to execute the methods described in the embodiments, or in some parts of the embodiments, of the present application.
The embodiments in this specification are described in a progressive manner; the same or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus embodiment is described relatively briefly because it is substantially similar to the method embodiment, and reference may be made to the description of the method embodiment for relevant details. The apparatus embodiments described above are merely illustrative: the modules described as separate parts may or may not be physically separate, and the parts shown as modules may or may not be physical modules, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, and those of ordinary skill in the art can understand and implement it without creative effort.
The above description is only an exemplary embodiment of the present application, and is not intended to limit the scope of the present application.

Claims (9)

1. A method of determining ambient brightness, the method comprising:
acquiring a first image, wherein a preset region on the first image comprises a sky image; and
determining a brightness result of the environment in which an object is located according to the pixel value, in hue-saturation-value (HSV) space, of each pixel point in the preset region;
wherein the acquiring a first image comprises:
acquiring a plurality of target images, wherein the plurality of target images comprise the first image, and the number of the plurality of target images is an odd number;
and wherein the determining a brightness result of the environment in which the object is located according to the pixel value of each pixel point in the preset region in the HSV space comprises:
determining a brightness result of the environment in which the object is located according to the pixel value, in the HSV space, of each pixel point in the preset region on each target image, specifically: counting the number of first pixel points and the number of second pixel points in the preset region on each target image, wherein a first pixel point is a pixel point whose pixel value in the V channel of the HSV space is greater than a threshold, and a second pixel point is a pixel point whose pixel value in the V channel of the HSV space is not greater than the threshold;
when the number of first pixel points is greater than the number of second pixel points, determining that the object is, in that target image, in an environment with strong light;
when the number of first pixel points is not greater than the number of second pixel points, determining that the object is, in that target image, in an environment with weak light;
counting, respectively, the number of per-image brightness results indicating an environment with strong light and the number of per-image brightness results indicating an environment with weak light;
when the number of results indicating an environment with strong light is greater than the number of results indicating an environment with weak light, determining that the object is in an environment with strong light; and
when the number of results indicating an environment with strong light is smaller than the number of results indicating an environment with weak light, determining that the object is in an environment with weak light.
2. The method according to claim 1, wherein the determining a brightness result of the environment in which the object is located according to the pixel value of each pixel point in the preset region in hue-saturation-value (HSV) space further comprises:
calculating an average value of the pixel values of all pixel points in the preset region in the value (V) channel of the HSV space;
when the average value is greater than a first preset threshold, determining that the object is in an environment with strong light; and
when the average value is not greater than the first preset threshold, determining that the object is in an environment with weak light.
3. The method according to claim 1, wherein the determining a brightness result of the environment in which the object is located according to the pixel value, in the HSV space, of each pixel point in the preset region on each target image comprises:
calculating, for each target image, an average value of the pixel values of all pixel points in the preset region in the value (V) channel of the HSV space;
determining a median from the average values respectively corresponding to the plurality of target images; and
determining a brightness result of the environment in which the object is located according to the median.
4. The method of claim 3, wherein the determining a brightness result of the environment in which the object is located according to the median comprises:
when the median is greater than a second preset threshold, determining that the object is in an environment with strong light; and
when the median is not greater than the second preset threshold, determining that the object is in an environment with weak light.
5. The method of claim 1, wherein said acquiring a plurality of target images comprises:
and continuously shooting at equal time intervals by utilizing a shooting device to obtain the plurality of target images.
6. The method according to any one of claims 1 to 5, wherein the preset region on the first image is a region formed by the pixel points in the 1st row to the Nth row of the first image, where N is a positive integer.
7. An apparatus for determining ambient brightness, the apparatus comprising:
an acquisition module, configured to acquire a first image, wherein a preset region on the first image comprises a sky image; and
a determining module, configured to determine a brightness result of the environment in which an object is located according to the pixel value, in hue-saturation-value (HSV) space, of each pixel point in the preset region;
wherein the acquiring a first image comprises:
acquiring a plurality of target images, wherein the plurality of target images comprise the first image, and the number of the plurality of target images is an odd number;
and wherein the determining a brightness result of the environment in which the object is located according to the pixel value of each pixel point in the preset region in the HSV space comprises:
determining a brightness result of the environment in which the object is located according to the pixel value, in the HSV space, of each pixel point in the preset region on each target image, specifically: counting the number of first pixel points and the number of second pixel points in the preset region on each target image, wherein a first pixel point is a pixel point whose pixel value in the V channel of the HSV space is greater than a threshold, and a second pixel point is a pixel point whose pixel value in the V channel of the HSV space is not greater than the threshold;
when the number of first pixel points is greater than the number of second pixel points, determining that the object is, in that target image, in an environment with strong light;
when the number of first pixel points is not greater than the number of second pixel points, determining that the object is, in that target image, in an environment with weak light;
counting, respectively, the number of per-image brightness results indicating an environment with strong light and the number of per-image brightness results indicating an environment with weak light;
when the number of results indicating an environment with strong light is greater than the number of results indicating an environment with weak light, determining that the object is in an environment with strong light; and
when the number of results indicating an environment with strong light is smaller than the number of results indicating an environment with weak light, determining that the object is in an environment with weak light.
8. An apparatus, comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the method of determining ambient brightness of any one of claims 1 to 6 according to instructions in the program code.
9. A computer-readable storage medium, wherein the computer-readable storage medium is used to store a computer program for performing the method of determining ambient brightness of any one of claims 1 to 6.
CN202010207240.4A 2020-03-23 2020-03-23 Method, device, equipment and storage medium for determining environment brightness Active CN111343385B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010207240.4A CN111343385B (en) 2020-03-23 2020-03-23 Method, device, equipment and storage medium for determining environment brightness

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010207240.4A CN111343385B (en) 2020-03-23 2020-03-23 Method, device, equipment and storage medium for determining environment brightness

Publications (2)

Publication Number Publication Date
CN111343385A CN111343385A (en) 2020-06-26
CN111343385B (en) 2022-02-11

Family

ID=71188021

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010207240.4A Active CN111343385B (en) 2020-03-23 2020-03-23 Method, device, equipment and storage medium for determining environment brightness

Country Status (1)

Country Link
CN (1) CN111343385B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201012648Y (en) * 2007-03-08 2008-01-30 川铭精密科技股份有限公司 Automatic light opener for vehicles
CN101211407A (en) * 2006-12-29 2008-07-02 沈阳东软软件股份有限公司 Diurnal image recognition method and device
CN108551553A (en) * 2018-03-16 2018-09-18 浙江大华技术股份有限公司 A kind of light compensating lamp control method and device
JP2019026173A (en) * 2017-08-02 2019-02-21 株式会社デンソー Auto light device, lighting control method, and program

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7683964B2 (en) * 2005-09-05 2010-03-23 Sony Corporation Image capturing apparatus and image capturing method
CN108001339A (en) * 2016-10-31 2018-05-08 法乐第(北京)网络科技有限公司 Safe travelling method and device, automobile data recorder
CN107995421A (en) * 2017-11-30 2018-05-04 潍坊歌尔电子有限公司 A kind of panorama camera and its image generating method, system, equipment, storage medium
CN110111281A (en) * 2019-05-08 2019-08-09 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101211407A (en) * 2006-12-29 2008-07-02 沈阳东软软件股份有限公司 Diurnal image recognition method and device
CN201012648Y (en) * 2007-03-08 2008-01-30 川铭精密科技股份有限公司 Automatic light opener for vehicles
JP2019026173A (en) * 2017-08-02 2019-02-21 株式会社デンソー Auto light device, lighting control method, and program
CN108551553A (en) * 2018-03-16 2018-09-18 浙江大华技术股份有限公司 A kind of light compensating lamp control method and device

Also Published As

Publication number Publication date
CN111343385A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
US9639764B2 (en) Image recognition system for vehicle for traffic sign board recognition
CN109194873B (en) Image processing method and device
US20060215882A1 (en) Image processing apparatus and method, recording medium, and program
CN105812674A (en) Signal lamp color correction method, monitoring method, and device thereof
CN109074408B (en) Map loading method and device, electronic equipment and readable storage medium
CN106572310B (en) light supplement intensity control method and camera
KR20210006276A (en) Image processing method for flicker mitigation
US10313601B2 (en) Image capturing device and brightness adjusting method
CN111031254B (en) Camera mode switching method and device, computer device and readable storage medium
CN114143940B (en) Tunnel illumination control method, device, equipment and storage medium
CN106161984B (en) Video image highlight suppression, contour and detail enhancement processing method and system
CN112995510B (en) Method and system for detecting environment light of security monitoring camera
EP3220637B1 (en) Vehicle-mounted camera system
CN111246119A (en) Camera and light supplement control method and device
CN110399672B (en) Street view simulation method and device for unmanned vehicle and electronic equipment
CN113747008B (en) Camera and light supplementing method
CN111402610B (en) Method, device, equipment and storage medium for identifying lighting state of traffic light
CN111343385B (en) Method, device, equipment and storage medium for determining environment brightness
CN105719488A (en) License plate recognition method and apparatus, and camera and system for license plate recognition
CN111491103B (en) Image brightness adjusting method, monitoring equipment and storage medium
CN113808117B (en) Lamp detection method, device, equipment and storage medium
CN114885096B (en) Shooting mode switching method, electronic equipment and storage medium
KR20140147211A (en) Method for Detecting Fog for Vehicle and Apparatus therefor
JPH10289321A (en) Image monitoring device
CN105745670B (en) System and method for forming nighttime images for a motor vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant