WO2021164463A1 - Detection method, apparatus and storage medium - Google Patents

Detection method, apparatus and storage medium

Info

Publication number
WO2021164463A1
Authority
WO
WIPO (PCT)
Prior art keywords
terminal
target object
environment
image
frame
Application number
PCT/CN2021/071199
Other languages
English (en)
French (fr)
Inventor
周伟
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2021164463A1


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88Sonar systems specially adapted for specific applications
    • G01S15/89Sonar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/8943D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to a detection method, device and storage medium.
  • assisted driving and autonomous driving need to perceive the surrounding driving environment.
  • the requirements for sensor perception are also different.
  • the camera plays a very important role and can be used for obstacle detection, lane line detection, road boundary detection, etc.
  • a common phenomenon during vehicle operation is windshield fogging: especially in winter, the large temperature difference between the inside and outside of the vehicle causes the inner surface of the windshield to fog, which looks similar to a scene in which the outside environment is foggy.
  • in both of the above scenes, the environment images collected by the camera are unclear.
  • existing technical solutions cannot distinguish the above two scenes, so the accuracy of vehicle environment detection is low.
  • the present application provides a detection method, apparatus, and storage medium, which can distinguish glass fogging from foggy ambient weather and improve the accuracy of environment detection.
  • an embodiment of the present application provides a detection method, the method including: acquiring at least one frame of environment image from at least one image acquisition device, the environment image being used to present information about the environment where the first terminal is located; and determining state information of the first terminal according to the environment image, the state information including at least one of the following: whether the glass of the first terminal is fogged, or the weather state of the environment where the first terminal is located.
  • by acquiring a single frame or multiple frames of environment images collected by at least one image acquisition device, it is determined, based on the single frame or multiple frames of environment images, whether the glass of the first terminal is fogged or what the weather state of the environment where the first terminal is located is (including a foggy environment).
  • in this way, glass fogging is distinguished from foggy ambient weather, and the accuracy of environment detection is improved.
  • the weather state includes any one of dense fog, mist or normal.
  • the fog can be described by its envelope range, particle size, density, visibility, etc., to determine whether the weather is dense fog, mist, or normal.
  • the lower the visibility value, the worse the visibility and the higher the fog concentration.
  • the state information includes whether the glass of the first terminal is fogged, and the environment presented by the environmental image includes at least two target objects and the sky.
  • at least two target objects in the environment image can be understood as at least two detection points in the environment image, and each detection point corresponds to one or more pixels in the image.
  • determining the state information of the first terminal according to the at least one frame of environment image includes: determining the state information according to the brightness information of at least two target objects in a first environment image in the at least one frame of environment image, the brightness information of the sky, and the depth information of the at least two target objects.
  • determining the state information according to the brightness information of the at least two target objects, the brightness information of the sky, and the depth information of the at least two target objects in the first environment image includes: determining that the glass of the first terminal is fogged, where, in the first environment image, at least one group of target objects exists among the at least two target objects, and the difference between the extinction coefficients of any two target objects in each group of target objects is greater than a first threshold. The extinction coefficient of a target object is determined from the brightness information of the target object, the brightness information of the sky, and the depth information of the target object, and is used to indicate the degree of brightness loss of the target object in the atmosphere.
  • the state information of the first terminal is determined by comparing the extinction coefficients corresponding to at least two target objects in a single frame of environmental image of a certain image acquisition device.
  • if there is a group of target objects whose extinction coefficients differ by more than the first threshold, that is, the difference between the extinction coefficients of two detection points in the image is large, it is determined that the glass of the first terminal is fogged.
  • the above process can effectively distinguish glass fogging from foggy ambient weather and avoids misjudging a fogged-glass scene as an ambient fog scene. An illustrative sketch of this comparison is given below.
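  • As an illustration of this single-frame comparison, the following is a minimal sketch (not taken from the application itself): given the extinction coefficients already computed for the detection points in one environment image, it reports glass fogging as soon as one pair differs by more than the first threshold. The function name and the threshold value are hypothetical.

```python
from itertools import combinations

def glass_fog_from_extinction(extinction_coeffs, first_threshold=0.05):
    """Return True if any pair of target objects in one frame has an
    extinction-coefficient difference greater than the first threshold,
    which this scheme interprets as glass fogging."""
    for k_i, k_j in combinations(extinction_coeffs, 2):
        if abs(k_i - k_j) > first_threshold:
            return True   # inconsistent attenuation: fogged glass
    return False          # consistent attenuation: check ambient weather instead

# Example: extinction coefficients of three detection points in one image
print(glass_fog_from_extinction([0.12, 0.11, 0.35]))  # True
```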
  • the state information includes whether the glass of the first terminal is fogged, the environment presented by the environment image includes at least one near-end target object, and the near-end target object includes an object outside the first terminal whose distance from the first terminal is less than a preset distance.
  • the near-end target object may be the front and rear hoods of the vehicle, any object fixed on the hood, the rearview mirrors on the left and right sides of the vehicle, and so on.
  • determining the state information of the first terminal according to the at least one frame of environment image includes: determining the state information according to the sharpness value of at least one near-end target object in a first environment image in the at least one frame of environment image, where the sharpness value of the at least one near-end target object is determined from the gray value of the image block corresponding to the at least one near-end target object.
  • determining the state information according to the sharpness value of the at least one near-end target object in the first environment image includes: determining that the glass of the first terminal is fogged when, in the first environment image, there is at least one near-end target object whose sharpness value is less than or equal to a preset sharpness threshold.
  • the state information of the first terminal is determined by analyzing the sharpness value corresponding to at least one near-end target object in a single frame of environmental image of a certain image acquisition device.
  • for any near-end target object, if the sharpness value corresponding to the near-end target object is less than or equal to the preset sharpness threshold (that is, the near-end target object in the image is blurred), it is determined that the glass of the first terminal is fogged.
  • glass fogging blurs near-end objects, whereas ambient fog has little effect on the clarity of near-end objects.
  • the above process can effectively distinguish glass fogging from foggy ambient weather and avoids misjudging a fogged-glass scene as an ambient fog scene. A sketch of one possible sharpness check follows.
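  • The application does not fix a particular sharpness metric, so the sketch below uses one common choice, the variance of a Laplacian response computed directly from the gray values of the image block; the metric, function names, and threshold are assumptions for illustration only.

```python
import numpy as np

def sharpness_value(gray_block: np.ndarray) -> float:
    """Sharpness of an image block computed from its gray values,
    here measured as the variance of a simple Laplacian response."""
    g = gray_block.astype(np.float64)
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

def near_end_glass_fog(gray_blocks, sharpness_threshold=50.0):
    """Glass is judged fogged if any near-end target block is blurred,
    i.e. its sharpness value is <= the preset sharpness threshold."""
    return any(sharpness_value(b) <= sharpness_threshold for b in gray_blocks)
```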
  • acquiring at least one frame of environment images from at least one image acquisition device includes: acquiring multiple frames of environment images from at least one image acquisition device.
  • the determining of the state information of the first terminal according to the environmental image includes: determining the state information of the first terminal according to the multiple frames of environmental images.
  • the status information includes whether the glass of the first terminal is fogged, and the environment presented by the environmental image includes at least one target object.
  • determining the state information of the first terminal according to the multiple frames of environmental images includes: determining the state information according to the extinction coefficient or the sharpness value of the at least one target object in the multiple frames of environmental images.
  • the extinction coefficient of the at least one target object in each frame of environment image is determined from the brightness information of the at least one target object, the brightness information of the sky, and the depth information of the at least one target object; the sharpness value of the at least one target object in each frame of environment image is determined from the gray value of the image block corresponding to the at least one target object.
  • determining the state information according to the extinction coefficient of the at least one target object in the multiple frames of environment images includes: determining that the glass of the first terminal is fogged, where the differences between the extinction coefficients of the same target object in any two frames of the multiple frames of environment images are all less than or equal to a fourth threshold.
  • whether the glass of the first terminal is fogged is determined by comparing the extinction coefficients corresponding to the same target object in multiple frames of environment images from a certain image acquisition device. If the difference between the extinction coefficients of the same target object in any two frames of the multi-frame environment images is less than or equal to the fourth threshold, it indicates that the extinction coefficient of the same target object is basically unchanged across the multiple frames, and this feature corresponds to a glass-fogging scene.
  • the extinction coefficient is related to the distance value: the farther the same target object is from the first terminal, the greater the extinction coefficient, and the closer it is, the smaller the extinction coefficient. Therefore, in a foggy environment, where the distance between the same target object and the moving first terminal keeps changing, the extinction coefficient of the same target object changes greatly across multiple frames of environment images. For a scene where the glass is fogged, however, the movement of the first terminal has little effect on the extinction coefficient of the same target object in the multiple frames of environment images.
  • determining the state information according to the sharpness value of the at least one target object in the multiple frames of environment images includes: determining that the glass of the first terminal is fogged, where the differences between the sharpness values of the same target object in any two frames of the multiple frames of environment images are all less than or equal to a fifth threshold.
  • whether the glass of the first terminal is fogged is determined by comparing the sharpness values corresponding to the same target object in multiple frames of environment images from a certain image acquisition device. If the difference between the sharpness values of the same target object in any two frames of the multi-frame environment images is less than or equal to the fifth threshold, it indicates that the sharpness value of the same target object is basically unchanged across the multiple frames, and this feature corresponds to a glass-fogging scene.
  • if the first terminal is in a foggy environment, as the first terminal moves, the distance between the same target object and the first terminal keeps changing, and the sharpness value is related to the distance value: the farther the same target object is from the first terminal, the lower the sharpness value, and the closer it is, the higher the sharpness value. Therefore, in a foggy environment, the sharpness value of the same target object changes greatly across multiple frames of environment images. For a scene where the glass is fogged, however, the movement of the first terminal has little effect on the sharpness value of the same target object in the multiple frames of environment images. A sketch of this multi-frame stability check follows.
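  • Both multi-frame criteria reduce to the same stability check: the per-frame value (extinction coefficient or sharpness) of one target object stays basically unchanged when the glass is fogged. A minimal sketch, with a hypothetical threshold standing in for the fourth/fifth threshold:

```python
def fogged_glass_multiframe(values_per_frame, stability_threshold=0.02):
    """values_per_frame: extinction coefficients (or sharpness values) of the
    same target object in consecutive frames from one image acquisition device.
    If the difference between any two frames is <= the threshold, the value is
    basically unchanged, which matches a glass-fogging scene."""
    return all(abs(a - b) <= stability_threshold
               for i, a in enumerate(values_per_frame)
               for b in values_per_frame[i + 1:])

print(fogged_glass_multiframe([0.31, 0.30, 0.31]))  # True: glass fogging
print(fogged_glass_multiframe([0.10, 0.22, 0.35]))  # False: ambient fog
```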
  • the same target object is the same near-end target object, and the near-end target object includes an object outside the first terminal whose distance from the first terminal is less than a preset distance.
  • when it is determined that the glass of the first terminal is fogged, the method further includes: controlling the defogging device in the vehicle to start, or controlling the window lifter to start, or sending alarm information.
  • the method further includes: acquiring the saturation and lightness of any frame of environment image of the at least one image acquisition device; and determining the weather state of the environment where the first terminal is located according to the ratio of the lightness to the saturation.
  • this solution is used to further determine the current weather of the environment where the first terminal is located, namely whether it is normal weather or ambient fog (dense fog or mist).
  • determining the weather state of the environment where the first terminal is located according to the ratio of the lightness to the saturation includes: when the weather state is dense fog, the ratio is greater than or equal to a second threshold; when the weather state is mist, the ratio is greater than a third threshold and less than the second threshold; and/or, when the weather state is normal, the ratio is less than or equal to the third threshold.
  • the above solution calculates the saturation and lightness of the environment image and determines, based on the ratio of the lightness to the saturation, the fog concentration level of the ambient weather, so that the first terminal is able to detect a foggy environment and execute the corresponding control operation according to the fog concentration level. In the case of low fog density, there is no need to switch the driving state of the first terminal; the environment image can be defogged through an image processing algorithm, thereby avoiding wasting control system resources. A sketch of this ratio test follows.
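  • A minimal sketch of this ratio test, assuming the per-pixel HSV value (lightness) and saturation are averaged over the whole image before the ratio is taken; the averaging step and the concrete threshold numbers are assumptions, not values from the application.

```python
import numpy as np

def weather_from_ratio(bgr_image: np.ndarray,
                       second_threshold: float = 3.0,
                       third_threshold: float = 1.5) -> str:
    """Classify the weather state from the ratio of lightness to saturation."""
    img = bgr_image.astype(np.float64) / 255.0
    v = img.max(axis=2)                                                     # HSV value (lightness)
    s = np.where(v > 0, (v - img.min(axis=2)) / np.maximum(v, 1e-6), 0.0)   # HSV saturation
    ratio = v.mean() / max(s.mean(), 1e-6)
    if ratio >= second_threshold:
        return "dense fog"
    if ratio > third_threshold:
        return "mist"
    return "normal"
```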
  • an embodiment of the present application provides a detection device, including: an acquisition module and a processing module.
  • the acquisition module is used to acquire at least one frame of environmental image from at least one image acquisition device, and the environmental image is used to present information about the environment where the first terminal is located;
  • the processing module is configured to determine the state information of the first terminal according to the at least one frame of environment image, the state information including at least one of the following: whether the glass of the first terminal is fogged, or the weather state of the environment where the first terminal is located.
  • the weather state includes any one of dense fog, mist or normal.
  • the status information includes whether the glass of the first terminal is fogged, and the environment presented by the environmental image includes at least two target objects and the sky.
  • the processing module is specifically configured to determine the state information according to the brightness information of at least two target objects in a first environment image in the at least one frame of environment image, the brightness information of the sky, and the depth information of the at least two target objects.
  • the processing module determines that the glass of the first terminal is fogged, where, in the first environment image, at least one group of target objects exists among the at least two target objects, and the difference between the extinction coefficients corresponding to any two target objects in each group of target objects is greater than the first threshold.
  • the extinction coefficient is determined from the brightness information of the target object, the brightness information of the sky, and the depth information of the target object, and is used to indicate the degree of brightness loss of the target object in the atmosphere.
  • the state information includes whether the glass of the first terminal is fogged, the environment presented by the environment image includes at least one near-end target object, and the near-end target object includes an object outside the first terminal whose distance from the first terminal is less than a preset distance.
  • the processing module is specifically configured to determine the state information according to the sharpness value of at least one near-end target object in the first environment image in the at least one frame of environment image.
  • the sharpness value of the at least one near-end target object is determined by the gray value of the image block corresponding to the at least one near-end target object.
  • the processing module determines that the glass of the first terminal is fogged, and in the first environment image, at least one near-end target object has a sharpness value less than or equal to a preset sharpness threshold.
  • the acquisition module is specifically configured to acquire multiple frames of environmental images from at least one image acquisition device.
  • the processing module is specifically configured to determine the state information of the first terminal according to multiple frames of environmental images.
  • the status information includes whether the glass of the first terminal is fogged, and the environment presented by the environmental image includes at least one target object.
  • the processing module is specifically configured to determine the state information according to the extinction coefficient or the sharpness value of the at least one target object in the multi-frame environment image.
  • the extinction coefficient of the at least one target object in each frame of environment image is determined from the brightness information of the at least one target object, the brightness information of the sky, and the depth information of the at least one target object; the sharpness value of the at least one target object in each frame of environment image is determined from the gray value of the image block corresponding to the at least one target object.
  • the processing module determines that the glass of the first terminal is fogged, where the difference between the extinction coefficients of the same target object in any two of the multiple frames of environment images is less than or equal to the fourth threshold, or the difference between the sharpness values of the same target object in any two of the multiple frames of environment images is less than or equal to the fifth threshold.
  • the same target object is the same near-end target object, and the near-end target object includes an object outside the first terminal whose distance from the first terminal is less than a preset distance.
  • when it is determined that the glass of the first terminal is fogged, the processing module is further configured to: control the defogging device in the vehicle to start, or control the window lifter to start, or send alarm information.
  • the acquisition module is further configured to acquire the saturation and brightness of any frame of the environmental image of the at least one image acquisition device.
  • the processing module is also used to determine the weather state of the environment where the first terminal is located according to the ratio of brightness to saturation.
  • when the weather state is dense fog, the ratio is greater than or equal to the second threshold; when the weather state is mist, the ratio is greater than the third threshold and less than the second threshold; and/or, when the weather state is normal, the ratio is less than or equal to the third threshold.
  • the processing module is further configured to: when it is determined that the weather state is dense fog, control the driving state of the first terminal or output control information to the on-board controller; or, when it is determined that the weather state is mist, perform defogging processing on the environment image; or, when it is determined that the weather state is normal, perform road detection based on the environment image. A sketch of this dispatch follows.
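  • The dispatch described above could look like the following sketch; the method names on the vehicle object are hypothetical placeholders, not an API defined by the application.

```python
def handle_weather_state(state: str, vehicle) -> None:
    """Apply the control operation matching the detected weather state."""
    if state == "dense fog":
        vehicle.adjust_driving_state()     # or output control info to the on-board controller
    elif state == "mist":
        vehicle.defog_environment_image()  # image defogging suffices at low fog density
    else:                                  # "normal"
        vehicle.run_road_detection()       # road detection based on the environment image
```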
  • an embodiment of the present application provides a detection apparatus, including at least one processor and at least one memory; the at least one memory is used to store computer-executable instructions, and when the detection apparatus runs, the at least one processor executes the computer-executable instructions stored in the at least one memory, so that the detection apparatus performs the detection method according to any one of the first aspect.
  • an embodiment of the present application provides a computer storage medium for storing a computer program, and when the computer program is executed on a computer, the computer executes the detection method as in any one of the first aspect.
  • the embodiments of the present application provide a detection method, device, and storage medium.
  • the method includes: acquiring a single frame or multiple frames of environment images from at least one image acquisition device, and comprehensively analyzing the brightness change law or the image quality parameters in the environment images to determine the state information of the first terminal.
  • the state information of the first terminal includes whether the glass of the first terminal is foggy, or the weather state of the environment where the first terminal is located.
  • Figure 1 is a functional block diagram of a vehicle provided by an embodiment of the application.
  • FIG. 2 is a flowchart of a detection method provided by an embodiment of the application
  • FIG. 3 is a schematic diagram of the arrangement of the image acquisition device in the space inside the vehicle according to an embodiment of the application;
  • FIG. 4 is a flowchart of determining state information of the first terminal according to an embodiment of the application.
  • FIG. 5 is a flowchart of determining state information of the first terminal according to an embodiment of the application.
  • FIG. 6 is a flowchart of determining state information of the first terminal according to an embodiment of the application.
  • FIG. 7 is a flowchart of determining the weather state of the first terminal according to an embodiment of the application.
  • FIG. 8 is a schematic structural diagram of a detection device provided by an embodiment of the application.
  • FIG. 9 is a schematic diagram of the hardware structure of a detection device provided by an embodiment of the application.
  • the detection method provided by the embodiments of this application can be applied to any terminal with an automatic driving function.
  • the terminal has a closed space, and the components forming the closed space include at least glass.
  • the terminal can be a vehicle, a ship, an airplane, a spacecraft, etc.
  • the embodiments of this application do not impose any restrictions.
  • the detection method provided in the embodiment of the present application may be applied to a vehicle with an automatic driving function or other equipment (such as a cloud server) that has a function of controlling automatic driving.
  • the vehicle can implement the detection method provided by the embodiments of the present application through its components (including hardware and/or software), determine the current state information of the vehicle (such as speed, location, road conditions, weather conditions, etc.), and generate a control instruction for controlling the vehicle.
  • other equipment (such as a cloud server) can also implement the detection method provided by the embodiments of the present application, determine the current state information of the vehicle, generate a control instruction for controlling the vehicle, and send the control instruction to the vehicle.
  • FIG. 1 is a functional block diagram of a vehicle 100 provided by an embodiment of the application.
  • the vehicle 100 may be configured in a fully or partially autonomous driving mode.
  • the vehicle 100 can control itself while in the automatic driving mode, can determine, through human operation, the state information of the vehicle and its surrounding environment, determine whether the glass of the vehicle is fogged or the weather state of the environment in which the vehicle is located, and control the vehicle 100 based on the determined state information.
  • the vehicle 100 can also be set to operate without human interaction.
  • the vehicle 100 may include various subsystems, such as a travel system 102, a sensor system 104, a control system 106, one or more peripheral devices 108, and at least one of a power supply 110, a computer system 112, and a user interface 116.
  • the vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple elements.
  • each of the subsystems and elements of the vehicle 100 may be wired or wirelessly interconnected.
  • the traveling system 102 may include components that provide power movement for the vehicle 100.
  • the travel system 102 may include an engine 118, an energy source 119, a transmission 120, and wheels/tires 121.
  • the engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of other types of engines, such as a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine.
  • the engine 118 converts the energy source 119 into mechanical energy. Examples of energy sources 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity.
  • the energy source 119 may also provide energy for other systems of the vehicle 100.
  • the transmission device 120 can transmit mechanical power from the engine 118 to the wheels 121.
  • the transmission device 120 may include a gearbox, a differential, and a drive shaft.
  • the transmission device 120 may also include other devices, such as a clutch.
  • the drive shaft may include one or more shafts that can be coupled to one or more wheels 121.
  • the sensor system 104 may include several sensors that sense information about the environment around the vehicle 100.
  • the sensor system 104 may include a positioning system 122 (the positioning system may be a GPS system, a Beidou system or other positioning systems), an inertial measurement unit (IMU) 124, a radar 126, a laser rangefinder 128, and At least one of the cameras 130.
  • the sensor system 104 may also include sensors of the internal system of the monitored vehicle 100 (for example, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, direction, speed, etc.). Such detection and recognition are key functions for the safe operation of autonomous vehicle 100.
  • the positioning system 122 described above can be used to estimate the geographic location of the vehicle 100.
  • the IMU 124 is used to sense changes in the position and orientation of the vehicle 100 based on inertial acceleration.
  • the IMU 124 may be a combination of an accelerometer and a gyroscope.
  • the radar 126, which may include a millimeter-wave radar, a lidar, and the like, may use radio signals to sense objects in the surrounding environment of the vehicle 100. In some embodiments, in addition to sensing objects, the radar 126 may also be used to sense the speed and/or direction of the objects.
  • the laser rangefinder 128 may use laser light to sense the distance between the vehicle 100 and objects in the surrounding environment.
  • the laser rangefinder 128 may include one or more laser sources, laser scanners, and one or more detectors, as well as other system components.
  • the camera 130 may be used to capture multiple images of the surrounding environment of the vehicle 100.
  • the camera 130 may be a still camera or a video camera.
  • the control system 106 may control the operation of the vehicle 100 and its components.
  • the control system 106 may include various elements, such as a steering system 132, a throttle 134, a braking unit 136, a computer vision system 140, a route control system 142, an obstacle avoidance system 144, and so on.
  • the steering system 132 is operable to adjust the forward direction of the vehicle 100, such as a steering wheel system.
  • the throttle 134 is used to control the operating speed of the engine 118 and thereby control the speed of the vehicle 100.
  • the braking unit 136 is used to control the vehicle 100 to decelerate.
  • the braking unit 136 may use friction to slow down the wheels 121.
  • the braking unit 136 may convert the kinetic energy of the wheels 121 into electric current.
  • the braking unit 136 may also take other forms to slow down the rotation speed of the wheels 121 to control the speed of the vehicle 100.
  • the computer vision system 140 may be operable to process and analyze the images captured by the camera 130 in order to identify objects and/or features in the surrounding environment of the vehicle 100.
  • the objects and/or features may include traffic signals, road boundaries and obstacles.
  • the computer vision system 140 may use object recognition algorithms, Structure from Motion (SFM) algorithms, video tracking, and other computer vision technologies.
  • the computer vision system 140 may be used to map the environment, track objects, estimate the speed of objects, and so on.
  • the route control system 142 is used to determine the travel route of the vehicle 100.
  • the route control system 142 may combine data from sensors, the positioning system 122, and one or more predetermined maps to determine a travel route for the vehicle 100.
  • the obstacle avoidance system 144 is used to identify, evaluate and avoid or otherwise surpass potential obstacles in the environment of the vehicle 100.
  • the control system 106 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
  • the vehicle 100 can interact with external sensors, other vehicles, other computer systems, or users through the peripheral device 108.
  • the peripheral device 108 may include a wireless communication system 146, an onboard computer 148, a microphone 150, and/or a speaker 152.
  • the wireless communication system 146 may wirelessly communicate with one or more devices directly or via a communication network.
  • the wireless communication system 146 may use 3G cellular communication, such as CDMA, EVDO, GSM/GPRS, or 4G cellular communication, such as LTE, or 5G cellular communication.
  • the wireless communication system 146 may use WiFi to communicate with a wireless local area network (WLAN).
  • WLAN wireless local area network
  • the wireless communication system 146 may directly communicate with the device using an infrared link, Bluetooth, or ZigBee.
  • other wireless protocols, such as various vehicle communication systems, may also be used; for example, the wireless communication system 146 may include one or more dedicated short range communications (DSRC) devices.
  • DSRC dedicated short range communications
  • the peripheral device 108 provides a means for the user of the vehicle 100 to interact with the user interface 116.
  • the onboard computer 148 may provide information to the user of the vehicle 100.
  • the user interface 116 can also operate the on-board computer 148 to receive user input.
  • the on-board computer 148 can be operated through a touch screen.
  • the peripheral device 108 may provide a means for the vehicle 100 to communicate with other devices located in the vehicle.
  • the microphone 150 may receive audio (eg, voice commands or other audio input) from a user of the vehicle 100.
  • the speaker 152 may output audio to the user of the vehicle 100.
  • the power supply 110 may provide power to various components of the vehicle 100.
  • the power source 110 may be a rechargeable lithium ion or lead acid battery.
  • One or more battery packs of such batteries may be configured as a power source to provide power to various components of the vehicle 100.
  • the power source 110 and the energy source 119 may be implemented together, such as in some all-electric vehicles.
  • the computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer readable medium such as a data storage device 114.
  • the computer system 112 may also be multiple computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner.
  • the processor 113 may be any conventional processor, such as a commercially available Central Processing Unit (CPU).
  • the processor may be a dedicated device such as an Application Specific Integrated Circuit (ASIC) or other hardware-based processor.
  • although FIG. 1 functionally illustrates the processor, the memory, and other elements in the same physical enclosure, those of ordinary skill in the art should understand that the processor, computer system, or memory may actually include multiple processors, computer systems, or memories that may or may not be located in the same physical housing.
  • the memory may be a hard drive, or other storage medium located in a different physical enclosure.
  • a reference to a processor or computer system will be understood to include a reference to a collection of processors or computer systems or memories that may operate in parallel, or a reference to a collection of processors or computer systems or memories that may not operate in parallel.
  • some components, such as steering components and deceleration components, may each have their own processor that only performs calculations related to component-specific functions.
  • the processor may be located away from the vehicle and wirelessly communicate with the vehicle.
  • some of the processes described herein are executed on a processor disposed in the vehicle and others are executed by a remote processor, including taking the necessary steps to perform a single manipulation.
  • the data storage device 114 may include instructions 115 (eg, program logic), which may be executed by the processor 113 to perform various functions of the vehicle 100, including those described above.
  • the data storage device 114 may also contain additional instructions, including sending data to, receiving data from, interacting with, and/or performing data on one or more of the traveling system 102, the sensor system 104, the control system 106, and the peripheral device 108. Control instructions.
  • the data storage device 114 may also store data, such as road maps, route information, the location, direction, and speed of the vehicle, and other such vehicle data, as well as other information (such as weather conditions, etc.).
  • This information may be used by the vehicle 100 and the computer system 112 during operation of the vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
  • the data storage device 114 can obtain environmental information from the sensor system 104 or other components of the vehicle 100.
  • the environmental information can be, for example, whether there are green belts, traffic lights, pedestrians, etc. near the environment where the vehicle is currently located.
  • the vehicle can calculate, through a machine learning algorithm, whether there are green belts, traffic lights, pedestrians, etc. near the environment where it is currently located.
  • the data storage device 114 may also store state information of the vehicle itself and state information of other vehicles that interact with the vehicle.
  • the status information includes, but is not limited to, the speed, acceleration, and heading angle of the vehicle.
  • the vehicle also obtains the distance to other vehicles, the speed of other vehicles, and so on.
  • the processor 113 can obtain the aforementioned environmental information or state information from the data storage device 114, and, based on the environmental information of the environment in which the vehicle is located, the state information of the vehicle itself, the state information of other vehicles, and a traditional rule-based driving strategy, obtain a final driving strategy to control the vehicle for automatic driving (such as accelerating, decelerating, stopping, etc.).
  • the user interface 116 is used to provide information to or receive information from a user of the vehicle 100.
  • the user interface 116 may include one or more input/output devices in the set of peripheral devices 108, such as one or more of the wireless communication system 146, the onboard computer 148, the microphone 150, and the speaker 152.
  • the computer system 112 may control the functions of the vehicle 100 based on inputs received from various subsystems (for example, the travel system 102, the sensor system 104, and the control system 106) and from the user interface 116. For example, the computer system 112 may use input from the control system 106 to control the steering system 132 to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144. In some embodiments, the computer system 112 is operable to provide control of many aspects of the vehicle 100 and its subsystems.
  • one or more of these components described above may be installed or associated with the vehicle 100 separately.
  • the data storage device 114 may exist partially or completely separately from the vehicle 100.
  • the above-mentioned components may be communicatively coupled together in a wired and/or wireless manner. It should be noted that the above-mentioned components are only an example. In actual applications, the components in the above-mentioned modules may be added or deleted according to actual needs.
  • FIG. 1 should not be construed as a limitation to the embodiments of the present application.
  • the vehicle may further include a hardware structure and/or software module, and the above-mentioned functions are implemented in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether a certain function among the above-mentioned functions is executed by a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraint conditions of the technical solution.
  • the above-mentioned vehicle 100 may be a car, an off-road vehicle, a sports car, a truck, a bus, a recreational vehicle, amusement park vehicle, construction equipment, a tram, a golf cart, a train, etc., which is not limited in the embodiment of the present application.
  • the vehicle 100 in the embodiment of the present application has intelligent driving or automatic driving functions, and the automatic driving level of the vehicle is used to indicate the degree of intelligence and automation of the automatic driving vehicle.
  • the automatic driving level of vehicles is divided into 6 levels: no automation (L0), driver assistance (L1), partial automation (L2), conditional automation (L3), high automation (L4), and full automation (L5).
  • autonomous vehicles/assisted driving vehicles can obtain images of the environment where the vehicle is located through the sensor system, and can determine the current weather state of the vehicle based on image recognition algorithms. If the current weather state of the vehicle is not good, such as rainy, snowy, or foggy weather, the vehicle can be controlled according to the corresponding weather state; for example, when the weather state is not good, the automatic driving level of the vehicle is lowered, or an alarm message is issued.
  • when the vehicle is driven in winter, due to the large temperature difference between the inside and outside of the vehicle, the inner side of the windshield is prone to fogging.
  • an embodiment of the present application provides a detection method: the autonomous vehicle 100, or a computing device associated with the autonomous vehicle 100 (such as the computer system 112, the computer vision system 140, or the data storage device 114 in FIG. 1), can determine the current state information of the vehicle based on the image characteristics of the collected environment images and perform corresponding control operations.
  • based on the image characteristics of a target object in the environment image, the vehicle 100 can determine the weather state of the environment in which the vehicle 100 is currently located (e.g., sunny, cloudy, rainy, foggy, snowy, etc.). Alternatively, the vehicle 100 may determine whether the windshield of the vehicle 100 is fogged based on the image characteristics of the above-mentioned target object in the environment image.
  • after the vehicle 100 determines the current state information of the vehicle, it performs corresponding control operations based on the determined state information, and different state information can correspond to different control operations. As an example, when the vehicle 100 determines that the current environment is foggy, it performs defogging processing on the environment image, or lowers the level of automatic driving, or reduces the speed of the vehicle. As another example, when the vehicle 100 determines that the windshield of the vehicle is fogged, it controls other devices on the vehicle to start or shut down, for example, controls the defogging device or the window lifter in the vehicle to start.
  • Fig. 2 is a flowchart of a detection method provided by an embodiment of the application. As shown in Figure 2, the detection method provided in this embodiment includes the following steps:
  • Step 201 Acquire at least one frame of environment image from at least one image acquisition device, where the environment image is used to present information about the environment where the first terminal is located.
  • the first terminal in this embodiment may be a vehicle, a ship, an airplane, a spacecraft, etc. with an automatic driving function, and the first terminal is currently in a fully or partially automatic driving mode.
  • the first terminal can realize automatic control based on the environment image when it is in the fully automatic driving mode.
  • the first terminal may obtain the user's control instruction through human-computer interaction after making a preliminary judgment based on the environment image when it is in the partial automatic driving mode, so as to realize the semi-automatic control.
  • the first terminal of this embodiment has a closed space
  • the components of the closed space at least include glass
  • the user observes the surrounding environment of the first terminal through the glass in the closed space.
  • the user can observe the road conditions and weather conditions around the vehicle through the front windshield, the left and right side windshields, or the rear windshield of the vehicle.
  • At least one image acquisition device may be arranged in the closed space inside the first terminal.
  • the image acquisition device may include a monocular camera (Monocular), a binocular camera (Stereo), a depth camera (RGB-D), and various combinations thereof.
  • the monocular camera has a simple structure and low cost, but a single image cannot determine the depth information;
  • a binocular camera consists of two monocular cameras; the distance between the two cameras is known, and the distance of an object from either camera can be determined based on the relative positions of the object in the two cameras. A depth camera can measure the distance of an object from the camera by actively emitting light to the object and receiving the returned light, using infrared structured light or Time-of-Flight (ToF).
  • the image acquisition device may also include a camera.
  • the environment image collected by the image acquisition device of this embodiment is a visible light image, which is used to present information about the environment where the first terminal is located, for example, road conditions (obstacles such as other vehicles and pedestrians, and road information such as lane lines, stop lines, lane edges, crosswalks, traffic lights, traffic signs, and green belts) and weather conditions (sunny, cloudy, rainy, snowy, foggy, etc.).
  • FIG. 3 shows a schematic diagram of the space of the image acquisition device inside the vehicle.
  • the image acquisition device can be set in the front driving area 301 of the vehicle 100, such as above the driver's seat or above the front passenger's seat, for collecting environment images in front of the vehicle 100; it can also be set in the rear seating area 302 of the vehicle 100, for example at the top on the left or right side of the rear row, for collecting environment images on the left or right side of the vehicle 100, or at the top in the middle of the rear row facing backwards, for collecting environment images behind the vehicle 100.
  • Step 202 Determine the status information of the first terminal according to at least one frame of environment image.
  • the status information includes at least one of the following:
  • the glass of the first terminal is foggy, or the weather condition of the environment where the first terminal is located.
  • the weather state of the environment where the first terminal is located includes any one of dense fog, mist, or normal.
  • the fog in the environment affects the imaging and detection of the camera.
  • the envelope range, particle size, density, and visibility of the fog can be described to determine whether it is dense fog, mist or normal.
  • the envelope range is the concept of space volume, and there is no fixed standard;
  • the particle size of the fog is usually 1 μm to 15 μm;
  • the density includes two indicators, water content and number density, whose value ranges are 0.1 g/m³ to 0.5 g/m³ and 50/cm³ to 2500/cm³, respectively;
  • the range of visibility depends on the physical quantity of the microstructure of the fog, and its influencing factors include the number density of droplets, particle size, and water content.
  • the visibility is mainly determined by two factors: one is the brightness difference between the target and the background (such as the sky) that sets off the target; the other is the transparency of the atmosphere between the observer (or image acquisition device) and the target, since the atmosphere can reduce the aforementioned brightness difference.
  • Weather phenomena such as fog, smoke, sand, heavy snow, and drizzle can make the atmosphere muddy and make the transparency worse.
  • Table 1 shows the qualitative description relationship between visibility and weather. From Table 1, it can be seen that the lower the value of visibility, the worse the visibility and the higher the fog concentration. It should be noted that Table 1 is only an exemplary description to characterize the correspondence relationship, and this application does not limit the specific correspondence situation. The above-mentioned corresponding relationship can also be characterized by other forms, not limited to tables.
  • weather with visibility above 10km can be defined as normal weather, and weather with visibility below 10km can be defined as foggy weather.
  • different classifications can be set for foggy weather, such as mist and dense fog, or mist, fog, heavy fog, dense fog, and strong fog.
  • the embodiment does not impose any limitation.
  • for example, weather with visibility between 1 km and 10 km may be defined as misty weather, and weather with visibility below 1 km may be defined as dense fog weather. A simple sketch of this division follows.
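  • A minimal sketch of this example division of visibility into weather states (the 10 km and 1 km cut-offs follow the example in the text; a finer classification would simply add more branches):

```python
def weather_from_visibility(visibility_km: float) -> str:
    """Map a visibility value to the weather state used in this example."""
    if visibility_km >= 10.0:
        return "normal"
    if visibility_km >= 1.0:
        return "mist"
    return "dense fog"
```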
  • the first terminal determines the visibility from the environment image through an image recognition algorithm, and determines the weather state of the environment where the first terminal is located according to the visibility. For example, the first terminal determines the visibility according to the clarity of the image. For another example, the first terminal determines the visibility according to the brightness difference between a target object in the image and the sky background. For another example, the first terminal determines the visibility according to the grayness of the image: because fog is always grayish-white, objects that should be dark become grayish-white in a foggy image, and the greater the fog density, the more grayish-white the image.
  • the above-mentioned normal weather conditions include sunny weather and low fog density (for example, visibility above 10 km), and abnormal weather includes dense fog or mist.
  • an image recognition algorithm can be used to distinguish the above-mentioned normal weather state and abnormal weather state.
  • abnormal weather conditions may also include rainy weather, snowy weather, and the like.
  • the first terminal can make a more detailed determination of the weather state of the environment where the first terminal is located by updating the image recognition algorithm, so as to distinguish rainy, snowy, foggy, and so on.
  • This step is mainly used to distinguish two scenarios where the glass of the first terminal is fogged and the environment where the first terminal is located is foggy.
  • this embodiment provides the following three implementation manners:
  • the first implementation manner determines the current state information of the first terminal by analyzing the brightness change law of at least two target objects in a single frame of environment image (for example, the first environment image in the at least one frame of environment images). Taking two target objects (two detection points) as an example, if the difference between the brightness attenuations of the two target objects is greater than or equal to a preset threshold, it can be considered that the brightness changes of the two target objects are inconsistent, and it can be determined that the windshield of the first terminal is fogged; if the difference between the brightness attenuations of the two target objects is less than the preset threshold, it can be considered that the brightness changes of the two target objects are consistent, and the weather state of the environment where the first terminal is located (mist or dense fog) can be further confirmed.
  • any target object has an inherent brightness.
  • the surface brightness of the target object collected by the image capture device of the first terminal will be less than the inherent brightness, that is, there is a certain brightness attenuation.
  • the brightness attenuation is mainly affected by the atmospheric environment: the worse the environmental quality, the greater the brightness attenuation; for example, in a foggy environment, the greater the fog density, the greater the brightness attenuation. If the brightness changes of the two target objects are inconsistent, that is, the two target objects are not affected by the atmospheric environment in the same way, it can be considered that the two target objects are not in the same environment. This situation is most likely caused by the in-vehicle environment (such as fogging of the interior glass of the vehicle), so it can be determined that the windshield of the first terminal is fogged. If the brightness changes of the two target objects are consistent, that is, the two target objects are affected by the atmospheric environment in the same way, it can be considered that the two target objects are in the same environment, and the weather state of the environment where the first terminal is located can be further determined.
  • the second implementation manner determines the current state information of the first terminal by analyzing the image quality of a near-end target object in a single frame of environment image (for example, the first environment image in the at least one frame of environment images). If the image quality parameter of the near-end target object meets a preset condition, the weather state of the environment where the first terminal is located (mist or dense fog) can be further confirmed; if the image quality parameter of the near-end target object does not meet the preset condition, the windshield of the first terminal is considered to be fogged.
  • Clarity refers to the clarity of the lines and boundaries of each detail in the image. There are many ways to evaluate it. For details, please refer to the following.
  • the embodiment of the present application shows that, by analyzing the clarity of the near-end target object in the environment image, the weather state of the environment or whether the windshield is fogged is determined.
  • the weather state of the environment or whether the windshield is fogged can also be determined based on other image quality parameters mentioned above, which is not limited in this application.
  • the third implementation manner determines the current state information of the first terminal by analyzing the brightness change law, or the above-mentioned image quality, of the same target object in multiple frames of environment images collected by the same image acquisition device. Taking two frames of environment images as an example, if the brightness changes of the same target object in the two frames are consistent, or the difference in the image quality of the same target object in the two frames falls within a certain value range, it can be considered that the image quality of the same target object in the two frames of environment images is consistent or the same, and the windshield of the first terminal is considered to be fogged; if the brightness change law of the same target object in the two frames is inconsistent, or the difference in the image quality of the same target object in the two frames falls outside the certain value range, it can be considered that the image quality of the same target object in the two frames of environment images is inconsistent or different, and the weather state of the environment where the first terminal is located (mist or dense fog) can be further confirmed.
  • the environment presented by the environment image includes at least two target objects and the sky.
  • the at least two target objects may be moving objects such as vehicles and pedestrians, or fixed objects such as traffic lights, signs, green belts, lane lines, etc. This embodiment does not impose any limitation on this.
  • FIG. 4 is a flowchart of determining the status information of the first terminal according to an embodiment of the application. As shown in FIG. 4, the above step 202 specifically includes:
  • Step 301 Acquire brightness information of at least two target objects, brightness information of the sky, and depth information of at least two target objects in a single frame of environment image.
  • A single frame of environment image may be a certain frame among the at least one frame of environment images collected by the at least one image acquisition device, for example, the first frame of environment image, where "first" does not denote a timing relationship but simply any frame of environment image collected by a certain image acquisition device.
  • the first frame of environment image may also be a frame of environment image that meets a preset condition or rule among the at least one frame of environment image, and the preset condition or rule is not specifically limited here.
  • The brightness of a target object in the atmosphere satisfies Koschmieder's law:
  • L = L0·e^(-kd) + Lf·(1 - e^(-kd))
  • where L represents the surface brightness of the target object, L0 represents the inherent brightness of the target object, Lf represents the sky brightness, d represents the distance between the target object and the first terminal, and k represents the extinction coefficient (equal to the sum of the absorption coefficient and the diffusion (scattering) coefficient).
  • the brightness information of the target object includes the surface brightness and intrinsic brightness of the target object.
  • the first terminal obtains the surface brightness of the target object by extracting the brightness value of the image block corresponding to the target object in the environment image, that is, L in the above formula. Different types of target objects correspond to different intrinsic brightness, and the first terminal may prestore the intrinsic brightness values of different types of target objects. Similarly, the first terminal obtains the brightness information of the sky by extracting the brightness value of the image block corresponding to the sky in the environment image, that is, L f in the above formula.
  • The first terminal can obtain the depth information of any target object in the environment image based on a monocular or binocular vision ranging method; the depth information indicates the distance of the target object from the first terminal (or from the image acquisition device of the first terminal), that is, d in the above formula.
  • A monocular ranging method can use the target's ground contact point: the projection of the ground contact point on the image plane and the camera's optical axis form similar triangles, and according to the principle of similar triangles the distance between the camera and the ground contact point can be obtained.
  • Binocular ranging directly measures the distance to the target by calculating the disparity between the two images obtained by the binocular camera.
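  • The two ranging relations just described can be written down as a short Python sketch. This is purely illustrative: the pinhole-camera parameterisation, the flat-road and horizontal-optical-axis assumptions, and all function and parameter names are assumptions introduced here, not details specified by this application.

```python
def monocular_ground_distance(v_contact, v_principal, focal_px, cam_height_m):
    # Similar triangles between the camera centre, the target's ground contact
    # point and its projection on the image plane (optical axis assumed parallel
    # to a flat road): distance d = f * H / dy.
    dy = v_contact - v_principal          # pixel rows below the optical axis
    if dy <= 0:
        raise ValueError("ground contact point must project below the optical axis")
    return focal_px * cam_height_m / dy


def binocular_distance(focal_px, baseline_m, disparity_px):
    # Stereo ranging from the disparity between the two images of a rectified
    # binocular pair: distance d = f * B / disparity.
    return focal_px * baseline_m / disparity_px
```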
  • It should be noted that the calculation of the brightness information and the extinction coefficient of the target object is not limited to the Koschmieder law mentioned above; Allard's law of atmospheric light transmission, Mie scattering theory and the like can also serve as the basis for calculating the extinction coefficient.
  • This embodiment of the application does not impose any limitation on this.
  • Step 302 Determine the state information of the first terminal according to the brightness information of the at least two target objects, the brightness information of the sky, and the depth information of the at least two target objects in the single frame of the environment image.
  • the brightness information of the target object, the brightness information of the sky, and the depth information of the target object together indicate the degree of brightness loss of the target object in the atmosphere.
  • When the brightness information of the target object, the brightness information of the sky, and the depth information of the target object are known, the extinction coefficient corresponding to the target object can be calculated based on the aforementioned Koschmieder law. The extinction coefficient is used to indicate the degree of brightness loss of the target object in the atmosphere.
  • the extinction coefficients corresponding to at least two target objects can be determined based on the aforementioned Koschmieder's law.
  • Taking two target objects as an example, the first terminal may determine the state information of the first terminal according to the extinction coefficients corresponding to the two target objects. Specifically, it is judged whether the difference between the extinction coefficients corresponding to the two target objects is less than a first threshold.
  • If the difference is less than or equal to the first threshold, the extinction coefficients corresponding to the two target objects are considered consistent (or the same), and it can be further determined whether the weather state of the environment where the first terminal is located is foggy, and whether it is dense fog or mist; if the difference is greater than the first threshold, the extinction coefficients corresponding to the two target objects are considered inconsistent (or different), and it is determined that the glass of the first terminal is fogged.
  • the above-mentioned first threshold can be set based on experience, and the threshold can also be fine-tuned based on actual detection results. There is no limitation on the setting method in this embodiment of the application.
  • the first terminal may determine the state information of the first terminal according to the extinction coefficients corresponding to the multiple target objects. Specifically, it is judged whether the extinction coefficients corresponding to multiple target objects are consistent. If the difference between any two of the extinction coefficients corresponding to the multiple target objects is less than the first threshold, it is considered that the extinction coefficients corresponding to the multiple target objects are consistent.
  • the weather state of the environment where the first terminal is located can be further determined; if there are at least one group of target objects among multiple target objects, and the extinction coefficients of any two target objects in each group of target objects are greater than or equal to the first threshold, it is considered The extinction coefficients corresponding to multiple target objects are inconsistent, and it is determined that the glass of the first terminal is fogged.
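  • As one possible illustration of this judgment, the following Python sketch solves Koschmieder's law for the extinction coefficient of each target object and checks whether any pair of coefficients differs by more than the first threshold. The function names, the per-object input tuples and the default threshold value are hypothetical choices of this sketch, not part of this application.

```python
import math


def extinction_coefficient(L, L0, Lf, d):
    # Koschmieder's law: L = L0*exp(-k*d) + Lf*(1 - exp(-k*d)); solved for k.
    # L:  measured surface brightness of the object's image block
    # L0: pre-stored inherent brightness for this object type
    # Lf: sky brightness extracted from the sky image block
    # d:  distance between the object and the first terminal (depth information)
    ratio = (L - Lf) / (L0 - Lf)          # equals exp(-k*d)
    return -math.log(ratio) / d


def glass_fogging(objects, first_threshold=0.05):
    # objects: iterable of (L, L0, Lf, d) tuples, one per detected target object.
    # Returns True when at least one pair of extinction coefficients differs by
    # more than the first threshold (inconsistent -> glass fogging); otherwise
    # the coefficients are consistent and the ambient weather is checked further.
    ks = [extinction_coefficient(*obj) for obj in objects]
    return any(abs(ks[i] - ks[j]) > first_threshold
               for i in range(len(ks)) for j in range(i + 1, len(ks)))
```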
  • It should be understood that if only a local area of the glass is fogged, a target object imaged through this area will not conform to the expected law of brightness attenuation, and the extinction coefficient calculated for it will differ considerably from those calculated for target objects imaged through other areas (that is, the difference between the extinction coefficient corresponding to the target object in this area and the extinction coefficient corresponding to a target object in another area is greater than the first threshold). Through the above-mentioned judgment process of this embodiment, the first terminal has the ability to detect whether its glass is fogged, and the intelligence of the first terminal is improved.
  • In the detection method provided by this embodiment, a single frame of environment image is acquired from at least one image acquisition device, and it is determined whether the extinction coefficients corresponding to two or more target objects in the environment image are consistent. If the extinction coefficients corresponding to the two or more target objects are inconsistent, it can be determined that the glass of the first terminal is fogged.
  • The above judgment process enables the first terminal to detect whether its glass is fogged, realizes the distinction between glass fogging and fog in the ambient weather, and improves the degree of intelligence of the first terminal.
  • the environment presented by the environment image includes at least one near-end target object, where the near-end target object includes an object outside the first terminal and whose distance from the first terminal is less than a preset distance.
  • the near-end target objects can be the front and rear hoods of the vehicle, any objects fixed on the hood, and the rearview mirrors on the left and right sides of the vehicle.
  • a calibration object can be set on the near-end target object, for example, a red dot or a cross on the front hood of the vehicle.
  • The first terminal recognizes the calibration object in the environment image collected by the image acquisition device (such as a camera) and thereby determines the near-end target object.
  • FIG. 5 is a flowchart for determining the status information of the first terminal according to an embodiment of the application. As shown in FIG. 5, the above step 202 specifically includes:
  • Step 401 Obtain a sharpness value of at least one near-end target object in a single frame of environmental image.
  • The single frame of environment image in this embodiment may likewise be a certain frame among the at least one frame of environment images collected by the at least one image acquisition device, for example, the first frame of environment image, where "first frame" does not denote a timing relationship and can be any frame of environment image collected by an image acquisition device.
  • Image sharpness is an important indicator for measuring image quality and corresponds well to people's subjective perception.
  • A lack of sharpness shows up as blur in the image.
  • the first terminal can obtain the sharpness value of at least one near-end target object in the environment image based on any sharpness algorithm.
  • For example, any of the following sharpness algorithms can be used: the Brenner gradient function, the Tenengrad gradient function, the Laplacian gradient function, the SMD (grey-level variance) function, the variance function, the energy gradient function, and so on. This application does not impose specific limitations, provided a sharpness value can be obtained.
  • the first terminal may obtain the sharpness value of at least one near-end target object in the environment image through the Brenner gradient function.
  • This function calculates the square of the grey-level difference between two neighbouring pixels (in its commonly used form, pixels two columns apart), and can be expressed as:
  • D(f) = Σ_y Σ_x |f(x+2, y) - f(x, y)|²
  • where x and y represent pixel coordinates, f(x, y) represents the grey value at point (x, y), and D represents the image sharpness value.
  • The first terminal may determine the sharpness value of the at least one near-end target object by acquiring the grey values of the image block corresponding to the at least one near-end target object and applying the above-mentioned Brenner gradient function.
  • Step 402 Determine the state information of the first terminal according to the sharpness value of the at least one near-end target object in the single-frame environment image.
  • The first terminal determines the state information of the first terminal by comparing the sharpness value of the near-end target object with a preset sharpness threshold. If the sharpness value of the near-end target object is less than or equal to the preset sharpness threshold, it is determined that the glass of the first terminal is fogged; if the sharpness value of the near-end target object is greater than the preset sharpness threshold, the weather state of the environment where the first terminal is located can be further determined. It should be pointed out that the above-mentioned sharpness threshold can be set based on experience, and the threshold can also be fine-tuned based on actual detection results; this embodiment of the application does not limit the setting method.
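  • A minimal sketch of steps 401 and 402, assuming a NumPy pipeline, is given below. Normalising the Brenner value by the block area (so the threshold is independent of block size) and the function names are choices of this illustration, not requirements of this application.

```python
import numpy as np


def brenner_sharpness(gray_block):
    # Brenner gradient: sum of squared grey-level differences between pixels
    # two columns apart, computed over the image block of the near-end object.
    g = gray_block.astype(np.float64)
    diff = g[:, 2:] - g[:, :-2]               # f(x+2, y) - f(x, y)
    return float(np.sum(diff ** 2)) / g.size  # normalised by block area


def near_end_glass_fogged(gray_block, sharpness_threshold):
    # Step 402: the glass is considered fogged when the near-end target object's
    # sharpness value is less than or equal to the preset sharpness threshold.
    return brenner_sharpness(gray_block) <= sharpness_threshold
```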
  • In the detection method provided by this embodiment, a single frame of environment image is acquired from at least one image acquisition device, the image sharpness of the near-end target object in the environment image is determined, and whether the glass of the first terminal is fogged is determined according to that sharpness.
  • The above judgment process enables the first terminal to detect whether its glass is fogged, realizes the distinction between glass fogging and fog in the ambient weather, and improves the degree of intelligence of the first terminal.
  • When the first terminal determines that the glass of the first terminal is fogged, it can control the defogging device inside the first terminal (such as an on-board fresh air system or air conditioner) to start, or control the window lifter to start, or send out alarm information (the alarm can be issued by means of screen display, voice broadcast or vibration).
  • the first two methods can directly eliminate the fog on the glass of the first terminal, realize the balance of the temperature difference between the inside and outside of the terminal, and ensure the driving safety of the first terminal. In the latter way, the user can perform manual intervention based on the alarm information to ensure the driving safety of the first terminal.
  • the foregoing several embodiments are based on a single frame of environmental images collected by the image acquisition device to detect the environment of the terminal.
  • the following embodiment shows the detection of the terminal environment based on multiple frames of environmental images collected by the image acquisition device.
  • That is, in step 202, through image analysis of multiple frames of environment images, it is determined whether the glass of the first terminal is fogged, or processing is performed according to the ambient weather detection method.
  • Fig. 6 is a flow chart for determining status information of the first terminal according to an embodiment of the application. As shown in Figure 6, the detection method provided in this embodiment includes the following steps:
  • Step 501 Acquire multiple frames of environment images from at least one image acquisition device, where the environment images are used to present information about the environment where the first terminal is located.
  • The implementation process of this step is the same as that of step 201 in the foregoing embodiment; the only difference is that multiple frames of environment images are acquired.
  • For details, please refer to the foregoing embodiment, which will not be repeated here.
  • Step 502 Determine the status information of the first terminal according to the multiple frames of environmental images.
  • the multiple frames of environmental images are all from the same image acquisition device, and the state information of the first terminal is determined according to the multiple frames of environmental images of the same image acquisition device.
  • the specific number of multiple frames can be set according to different requirements, for example, 5 consecutive frames of environmental images can be acquired according to a preset sampling interval.
  • the status information includes at least one of the following:
  • whether the glass of the first terminal is fogged, or the weather state of the environment where the first terminal is located.
  • the environment presented by the environment image includes at least one target object.
  • the first terminal determines the state information of the first terminal according to the multi-frame environment image, including the following two implementation manners:
  • The first implementation manner determines the current state information of the first terminal by analyzing the brightness change law of at least one target object in the multiple frames of environment images. If the difference in the brightness attenuation of the same target object across the multiple frames of environment images is less than a preset threshold, the brightness change of the target object in the multiple frames can be considered consistent, and it can be determined that the windshield of the first terminal is fogged; if the difference in the brightness attenuation of the same target object across the multiple frames is greater than or equal to the preset threshold, the brightness change law of the target object in the multiple frames can be considered inconsistent, and the weather state of the environment where the first terminal is located (mist or dense fog) can be further confirmed.
  • The second implementation manner determines the current state information of the first terminal by analyzing the image quality of at least one target object in the multiple frames of environment images. If the difference between the image quality parameters of the same target object across the multiple frames of environment images is less than a preset threshold, the image quality of the target object in the multiple frames can be considered consistent, and it can be determined that the windshield of the first terminal is fogged; if the difference between the image quality parameters of the same target object across the multiple frames is greater than or equal to the preset threshold, the image quality of the target object in the multiple frames can be considered inconsistent, and the weather state of the environment where the first terminal is located (mist or dense fog) can be further confirmed.
  • The aforementioned image quality parameters include, but are not limited to, sharpness, signal-to-noise ratio, color reproduction, white balance, distortion, motion effects, and so on.
  • step 502 specifically includes:
  • the state information is determined according to the extinction coefficient of the at least one target object in the multiple frames of environmental images.
  • the extinction coefficient of the target object in each frame of the environmental image is determined by the brightness information of the target object, the brightness information of the sky, and the depth information of the target object.
  • the calculation process of the extinction coefficient is the same as that of step 301 in the foregoing embodiment.
  • The first terminal determines the extinction coefficient of the same target object in the multiple frames of environment images, and then determines the state information of the first terminal according to the difference between the extinction coefficients of that target object in any two of the frames. If the difference between the extinction coefficients of the same target object in any two frames of the multiple frames of environment images is less than or equal to the fourth threshold, the extinction coefficient of the target object in the multiple frames can be considered consistent or the same, and it can be determined that the glass of the first terminal is fogged.
  • the fourth threshold can be set based on experience, or the threshold can be fine-tuned based on actual detection results, and there is no limitation on the setting method in the embodiment of the present application.
  • The first terminal may also examine the change in the extinction coefficient of multiple target objects across the multiple frames of environment images; if the extinction coefficient of each target object is consistent or the same across the multiple frames, the glass of the first terminal is considered fogged. The accuracy of this approach is higher than that of judging a single target object in the multiple frames of environment images.
  • step 502 specifically includes:
  • the state information is determined according to the sharpness value of the at least one target object in the multiple frames of environmental images.
  • The sharpness value of the target object in each frame of the environment image is determined from the grey values of the image block corresponding to the target object.
  • the calculation process of the sharpness value is the same as step 401 of the foregoing embodiment. For details, please refer to the foregoing embodiment, which will not be repeated here.
  • The first terminal determines the sharpness value of the same target object in the multiple frames of environment images, and then determines the state information of the first terminal according to the difference between the sharpness values of that target object across the frames. Specifically, if the difference between the sharpness values of the same target object in any two frames of the multiple frames of environment images is less than or equal to the fifth threshold, the sharpness value of the target object in the multiple frames can be considered consistent or the same, and it can be determined that the glass of the first terminal is fogged.
  • If the difference between the sharpness values of the same target object in two of the multiple frames of environment images is greater than the fifth threshold, the sharpness values of the target object in the multiple frames can be considered inconsistent or different, and it can be further determined whether the weather state of the environment where the first terminal is located is foggy, and whether it is dense fog or mist; see the embodiment in FIG. 7 for details.
  • the above-mentioned fifth threshold can be set based on experience, or the threshold can be fine-tuned based on actual detection results, and there is no restriction on the setting method in the embodiment of the present application.
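  • A minimal sketch of the multi-frame consistency test is given below; treating the extinction coefficient and the sharpness value through the same pairwise comparison, and the function name, are assumptions of this illustration.

```python
def consistent_across_frames(values, threshold):
    # values: the extinction coefficient (fourth threshold) or sharpness value
    # (fifth threshold) of the same target object in each acquired frame.
    # The values are consistent when every pair of frames differs by at most
    # the threshold; consistency indicates glass fogging, while inconsistency
    # points to ambient fog, whose density is then classified (see FIG. 7).
    return all(abs(a - b) <= threshold
               for i, a in enumerate(values)
               for b in values[i + 1:])
```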
  • the target object selected in the multi-frame environment image may be a near-end target object, and the near-end target object includes an object outside the first terminal and whose distance from the first terminal is less than a preset distance.
  • the number of acquiring multiple frames of environmental images and the time interval for acquiring multiple frames of environmental images can be preset according to actual needs. For example, setting an interval of 0.1s to acquire one frame of environment image, the first terminal can perform environment detection based on the continuous 5 frames of environment image.
  • When the multiple frames of environment images come from different image acquisition devices, step 502 may include: determining first state information of the first terminal according to the multiple frames of environment images from the first image acquisition device; determining second state information of the first terminal according to the multiple frames of environment images from the second image acquisition device; and determining the state information of the first terminal according to the first state information and the second state information.
  • If the first state information and the second state information are the same (for example, both indicate glass fogging), it is determined that the state information of the first terminal is glass fogging; if the first state information and the second state information are different (for example, the first state information is glass fogging and the second state information is ambient fog), the state information of the first terminal can be determined according to the weights of the image acquisition devices (for example, if the weight of the first image acquisition device is greater than the weight of the second image acquisition device, the state information of the first terminal is determined to be the first state information, i.e. glass fogging).
  • the weight of the image acquisition device is related to the hardware performance of the image acquisition device. The stronger the hardware performance, the larger the weight value.
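  • A trivial sketch of this weighted fusion is shown below; the function name and the tie-breaking rule (preferring the first device on equal weights) are assumptions of this illustration.

```python
def fuse_device_states(state_a, weight_a, state_b, weight_b):
    # Identical per-device results are returned directly; otherwise the result
    # from the image acquisition device with the larger weight (e.g. the one
    # with stronger hardware performance) is kept.
    if state_a == state_b:
        return state_a
    return state_a if weight_a >= weight_b else state_b
```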
  • FIG. 7 is a flowchart of determining the weather state of the first terminal according to an embodiment of the application. As shown in Figure 7, the detection method provided by this embodiment includes:
  • Step 601 Acquire the saturation and lightness of any frame of environmental images of at least one image acquisition device.
  • the first terminal obtains the overall saturation S and lightness V of the environment image through the HSV color model (Hue, Saturation, Value).
  • the saturation S represents the degree to which the color of the environmental image is close to the spectral color. For a certain color, it can be regarded as the result of mixing a certain spectral color with white. The greater the proportion of the spectral color, the closer the color is to the spectral color, and the higher the saturation of the color.
  • the lightness V represents the brightness of the color of the environment image.
  • the value ranges of S and V are both 0% to 100%.
  • Step 602 Determine the weather state of the environment where the first terminal is located according to the ratio of lightness to saturation.
  • In a foggy environment, the ratio V/S of the lightness to the saturation of the environment image is relatively large, so the ratio can be used to determine whether the weather state is foggy, and whether the fog is mist or dense fog. Specifically, if the ratio is greater than or equal to the second threshold, it is determined that the weather state of the environment where the first terminal is located is dense fog; if the ratio is greater than the third threshold and less than the second threshold, it is determined that the weather state of the environment where the first terminal is located is mist; if the ratio is less than or equal to the third threshold, it is determined that the weather state of the environment where the first terminal is located is normal (little or no fog). It should be pointed out that the above-mentioned second threshold and third threshold can be set based on experience, and the thresholds can also be fine-tuned based on actual detection results; this embodiment of the application does not limit the setting method.
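  • A possible sketch of steps 601 and 602 is shown below, assuming OpenCV's HSV conversion (where S and V are returned on a 0-255 scale rather than as percentages) and using per-image means as the overall saturation and lightness; the default threshold values are purely illustrative.

```python
import cv2
import numpy as np


def weather_from_vs_ratio(bgr_image, second_threshold=2.0, third_threshold=1.2):
    # Step 601: overall saturation S and lightness (value) V of the environment image.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    s_mean = float(np.mean(hsv[:, :, 1]))
    v_mean = float(np.mean(hsv[:, :, 2]))
    ratio = v_mean / max(s_mean, 1e-6)    # guard against division by zero
    # Step 602: classify the weather state from the V/S ratio.
    if ratio >= second_threshold:
        return "dense fog"
    if ratio > third_threshold:
        return "mist"
    return "normal"
```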
  • the first terminal may perform a corresponding control operation according to the determined weather state.
  • foggy environments with different concentration levels can correspond to different control operations.
  • When the first terminal determines that the weather state is dense fog, the first terminal controls its own driving state (for example, switches the driving state from a fully automatic driving state to a semi-automatic driving state, that is, reduces the automatic driving level of the first terminal), or the first terminal outputs information to the vehicle-mounted controller, and the vehicle-mounted controller sends a control instruction to the related device of the first terminal, for example, an instruction to turn on the fog lamps of the first terminal.
  • When the first terminal determines that the weather state is mist, the first terminal performs defogging processing on the environment image, then sends the defogged image data to the detection module for road detection, and executes the corresponding driving strategy (such as acceleration, deceleration or parking) according to the road detection results. Of course, the first terminal may also perform no operation (for example, maintain the current driving state), or handle the situation in the manner described above for dense fog (for example, lower the automatic driving level of the first terminal).
  • When the first terminal determines that the weather state is normal, the first terminal can directly perform road detection according to the environment image. For details, please refer to the above, which will not be repeated here.
  • In some embodiments, the first terminal can make a more detailed determination of the weather state of the environment where it is located by updating the image recognition algorithm, for example, adding recognition of other abnormal weather and distinguishing between foggy, rainy and snowy weather, and perform different control operations for different abnormal weather. For example, when the first terminal determines that the weather state is rainy, the first terminal can output information to the on-board controller, and the on-board controller sends an opening instruction to the wipers of the first terminal; the frequency of the wipers can also be intelligently adjusted according to the amount of rain.
  • In the detection method provided by this embodiment, the saturation and lightness of the environment image are calculated, and the density level of the foggy weather is determined based on the ratio of lightness to saturation.
  • Without this distinction, the first terminal might run image defogging algorithms or reduce the automatic driving level in response to what is actually glass fogging, wasting control system resources, or treat fog in the ambient weather as glass fogging, so that the defogging device is turned on to no effect.
  • Through the above judgment process, the first terminal can quickly identify its state information and perform the corresponding control operation according to the different state information, thereby improving the degree of intelligence of the first terminal.
  • The execution body of each of the foregoing method embodiments may be the first terminal (for example, an autonomous vehicle) or a component on the first terminal (for example, a detection device, a chip, a controller, or a control unit), or it may be a cloud device communicatively connected to the first terminal; this embodiment of the present application imposes no restriction on this.
  • For example, the foregoing detection device may be an image acquisition device (such as a camera device), the foregoing controller may be a multi-domain controller (Multi Domain Control, MDC), and the foregoing control unit may be an electronic control unit (Electronic Control Unit, ECU), also known as a trip computer.
  • the embodiment of the present application may divide the detection device into functional modules according to the foregoing method embodiments.
  • Each functional module may be divided corresponding to each function, or two or more functions may be integrated in one processing module.
  • The above-mentioned integrated module can be implemented either in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present application is illustrative and is only a logical function division; there may be other division methods in actual implementation. The following takes the division of functional modules corresponding to each function as an example.
  • the detection device described below can also be replaced by a possible execution body such as a chip, a controller, or a control unit.
  • FIG. 8 is a schematic structural diagram of a detection device provided by an embodiment of the application. As shown in FIG. 8, the detection device 700 provided by the embodiment of the present application includes:
  • the acquiring module 701 is configured to acquire at least one frame of environment image from at least one image acquisition device, where the environment image is used to present information about the environment where the first terminal is located;
  • the processing module 702 is configured to determine the state information of the first terminal according to the at least one frame of environmental image, where the state information includes at least one of the following:
  • whether the glass of the first terminal is fogged, or the weather state of the environment where the first terminal is located.
  • the weather state includes any one of dense fog, mist or normal.
  • the state information includes whether the glass of the first terminal is foggy, and the environment presented by the environmental image includes at least two target objects and the sky;
  • The processing module 702 is specifically configured to determine the state information according to the brightness information of at least two target objects in the first environment image in the at least one frame of environment image, the brightness information of the sky, and the depth information of the at least two target objects.
  • The processing module 702 determines that the glass of the first terminal is fogged, wherein, in the first environment image, at least one group of target objects exists among the at least two target objects, and the difference between the extinction coefficients of any two target objects in each group is greater than the first threshold; the extinction coefficient is determined by the brightness information of the target object, the brightness information of the sky, and the depth information of the target object, and is used to indicate the degree of brightness loss of the target object in the atmosphere.
  • The state information includes whether the glass of the first terminal is fogged, the environment presented by the environment image includes at least one near-end target object, and the near-end target object includes an object outside the first terminal whose distance from the first terminal is less than a preset distance.
  • the processing module 702 is specifically configured to determine the state information according to the sharpness value of at least one near-end target object in the first environment image in the at least one frame of environment image;
  • the sharpness value of the at least one near-end target object is determined by the gray value of the image block corresponding to the at least one near-end target object.
  • the processing module 702 determines that the glass of the first terminal is fogged, and in the first environment image, at least one near-end target object has a sharpness value less than or equal to a preset sharpness threshold.
  • the acquisition module 701 is specifically configured to acquire multiple frames of environmental images from at least one image acquisition device;
  • the processing module 702 is specifically configured to determine the state information of the first terminal according to the multi-frame environment image.
  • the status information includes whether the glass of the first terminal is fogged, and the environment presented by the environment image includes at least one target object;
  • the processing module 702 is specifically configured to determine the state information according to the extinction coefficient or the sharpness value of the at least one target object in the multi-frame environment image;
  • the extinction coefficient of the at least one target object in each frame of the environment image is determined by the brightness information of the at least one target object, the brightness information of the sky, and the depth information of the at least one target object;
  • the definition value of the at least one target object in each frame of the environmental image is determined by the gray value of the image block corresponding to the at least one target object.
  • the processing module 702 determines that the glass of the first terminal is fogged, wherein the difference of the extinction coefficient of the same target object in any two frames of the environment images in the multiple frames of environment images is less than or equal to The fourth threshold.
  • the processing module 702 determines that the glass of the first terminal is fogged, wherein the difference between the sharpness values of the same target object in any two frames of the environmental images in the multiple frames of environmental images is less than Or equal to the fifth threshold.
  • the same target object is the same near-end target object, and the near-end target object includes an object outside the first terminal and whose distance from the first terminal is less than a preset distance.
  • Optionally, when determining that the glass of the first terminal is fogged, the processing module 702 is further configured to: control the defogging device in the vehicle to start, or control the window lifting device to start, or issue alarm information.
  • When the processing module 702 determines that the glass of the first terminal is not fogged, the acquiring module 701 is further configured to acquire the saturation and lightness of any frame of environment image of the at least one image acquisition device;
  • the processing module 702 is further configured to determine the weather state of the environment where the first terminal is located according to the ratio of the lightness to the saturation.
  • when the weather state is dense fog, the ratio is greater than or equal to a second threshold; when the weather state is mist, the ratio is greater than a third threshold and less than the second threshold; and/or, when the weather state is normal, the ratio is less than or equal to the third threshold.
  • Optionally, the processing module 702 is further configured to: when determining that the weather state is dense fog, control the driving state of the first terminal or output control information to the vehicle-mounted controller; or, when determining that the weather state is mist, perform defogging processing on the environment image; or, when determining that the weather state is normal, perform road detection according to the environment image.
  • The detection device provided in the embodiment of the present application may further include a communication module, where the communication module is used to send a control instruction to the defogging device or the window lifting device on the first terminal, and the control instruction is used to control the defogging device inside the first terminal vehicle to start, or to control the window lifter of the first terminal to start.
  • the communication module is used to send alarm information to the display device, voice device, or vibration device of the first terminal, and the alarm can be issued by means of screen display, voice broadcast, or vibration.
  • the detection device provided in the embodiment of the present application is used to execute the detection solution of any of the foregoing method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
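  • For orientation only, the module division described above might be sketched as the following hypothetical skeleton; the class and method names are not taken from this application, and the processing logic is left to routines such as those sketched earlier.

```python
class DetectionDevice:
    """Skeleton of detection device 700: acquiring module 701 + processing module 702."""

    def __init__(self, image_sources, communication=None):
        self.image_sources = image_sources   # at least one image acquisition device
        self.communication = communication   # optional communication module

    def acquire(self, frames=1):
        # Acquiring module 701: fetch one or more environment images per source.
        return [src.capture(frames) for src in self.image_sources]

    def process(self, images):
        # Processing module 702: decide between glass fogging and ambient weather,
        # e.g. via the extinction-coefficient or sharpness tests sketched above.
        raise NotImplementedError

    def act(self, state):
        # Optional communication module: trigger the defogger / window lifter or
        # raise an alarm when the state indicates glass fogging.
        if self.communication and state == "glass fogging":
            self.communication.send("start defogging device")
```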
  • FIG. 9 is a schematic diagram of the hardware structure of a detection device provided by an embodiment of the application.
  • the detection device 800 provided by the embodiment of the present application includes:
  • At least one processor 801 (only one processor is shown in FIG. 9) and at least one memory 802 (only one memory is shown in FIG. 9);
  • the at least one memory 802 is used to store computer execution instructions.
  • the at least one processor 801 executes the computer execution instructions stored in the at least one memory 802, so that the detection device 800 executes the detection scheme of any of the foregoing method embodiments.
  • the detection device 800 provided in the embodiment of the present application may be set on the first terminal or the cloud device, and the embodiment of the present application does not impose any limitation on this.
  • the embodiment of the present application also provides a computer storage medium for storing a computer program, and when the computer program runs on a computer, the computer is caused to execute the detection method in any of the foregoing method embodiments.
  • the embodiments of the present application also provide a computer program product, which when the computer program product runs on a computer, causes the computer to execute the detection method in any of the foregoing method embodiments.
  • An embodiment of the present application also provides a chip, including at least one processor and an interface, configured to call from at least one memory and run a computer program stored in the at least one memory, and to execute the detection method in any of the foregoing method embodiments.
  • An embodiment of the present application also provides an automatic driving system, which includes one or more of the aforementioned first terminals and one or more cloud devices, wherein the first terminal is provided with the aforementioned detection device, or the cloud device is provided with the aforementioned detection device, so that the automatic driving system can distinguish between glass fogging and ambient weather, improving the accuracy of the system's environment detection.
  • An embodiment of the present application also provides a vehicle, which includes the above-mentioned detection device.
  • The vehicle has the function of distinguishing between glass fogging and ambient weather, so as to control other devices on the vehicle (such as the defogging device, window lifter, display device, vibration device, voice device, etc.) to start or shut down.
  • the vehicle further includes at least one camera device and/or at least one radar device.
  • the radar device includes at least one of millimeter wave radar, laser radar, or ultrasonic radar.
  • The vehicle may be a car, an off-road vehicle, a sports car, a truck, a bus, a recreational vehicle, an amusement park vehicle, construction equipment, a tram, a golf cart, a train, etc.; this embodiment of the application does not impose any restriction on this.
  • The processor mentioned in the embodiments of this application may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • The general-purpose processor may be a microprocessor, or the processor may also be any conventional processor or the like.
  • the memory mentioned in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory can be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), and electrically available Erase programmable read-only memory (Electrically EPROM, EEPROM) or flash memory.
  • the volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache.
  • By way of example rather than limitation, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchronous link dynamic random access memory (Synchlink DRAM, SLDRAM), and direct rambus random access memory (Direct Rambus RAM, DR RAM).
  • It should be noted that when the processor is a general-purpose processor, DSP, ASIC, FPGA or other programmable logic device, discrete gate or transistor logic device, or discrete hardware component, the memory (storage module) may be integrated in the processor.
  • It should be understood that the sequence numbers of the above processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.

Abstract

This application provides a detection method, a detection device and a storage medium, which can be applied to the field of intelligent driving or automatic driving. The method includes: acquiring a single frame or multiple frames of environment images from at least one image acquisition device, for example a vehicle-mounted camera, and comprehensively analysing the brightness change law or the image quality parameters of at least one target object in the single frame or multiple frames of environment images, so as to determine the state information of a first terminal. The state information of the first terminal includes whether the glass of the first terminal is fogged, or the weather state of the environment where the first terminal is located. The above detection process distinguishes between glass fogging and fog in the ambient weather, improving the accuracy of environment detection.

Description

Detection method, device and storage medium
This application claims priority to the Chinese patent application No. 202010096935.X, entitled "Detection method, device and storage medium", filed with the Chinese Patent Office on February 17, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of artificial intelligence technologies, and in particular, to a detection method, a detection device and a storage medium.
Background
With the rapid development of artificial intelligence, assisted driving and automatic driving need to perceive the surrounding driving environment. To perceive the driving environment accurately, various information on the driving path, such as pedestrians, vehicles and lane lines, needs to be known, so as to guarantee driving along a given path and avoid collisions with other vehicles and pedestrians. Different scenarios, different road conditions and different functions place different requirements on sensor perception. As one of the most important sensors for perception, the camera undertakes very important functions and can be used for obstacle detection, lane line detection, road boundary detection and the like.
A common phenomenon during vehicle operation is windshield fogging: especially in winter, the large temperature difference between the inside and the outside of the vehicle causes the inner side of the windshield to fog up, which is similar to the scenario in which the external environment is foggy. The environment images captured by the camera in both scenarios are unclear, and current technical solutions cannot distinguish between the two scenarios, so the accuracy of vehicle environment detection is low.
Brief Description of the Drawings
FIG. 1 is a functional block diagram of a vehicle according to an embodiment of this application;
FIG. 2 is a flowchart of a detection method according to an embodiment of this application;
FIG. 3 is a schematic diagram of the spatial arrangement of image acquisition devices inside a vehicle according to an embodiment of this application;
FIG. 4 is a flowchart of determining state information of a first terminal according to an embodiment of this application;
FIG. 5 is a flowchart of determining state information of a first terminal according to an embodiment of this application;
FIG. 6 is a flowchart of determining state information of a first terminal according to an embodiment of this application;
FIG. 7 is a flowchart of determining the weather state of a first terminal according to an embodiment of this application;
FIG. 8 is a schematic structural diagram of a detection device according to an embodiment of this application;
FIG. 9 is a schematic diagram of the hardware structure of a detection device according to an embodiment of this application.
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。
本申请实施例提供的检测方法可应用于具有自动驾驶功能的任意终端,该终端具有封闭空间,形成封闭空间的组件至少包括玻璃,该终端可以为车辆、船、飞机、航天器等,对此本申请实施例不作任何限制。
为了方便描述,下述实施例以车辆为例进行说明。
作为一种示例,本申请实施例提供的检测方法可应用于具有自动驾驶功能的车辆或者应用于具有控制自动驾驶功能的其他设备(比如云端服务器)中。车辆可通过其包含的组件(包括硬件和/或软件)实施本申请实施例提供的检测方法,确定车辆当前的状态信息(比如速度、位置、路面条件、天气条件等),生成控制车辆的控制指令。或者,其他设备(比如服务器)用于实施本申请实施例的检测方法,确定车辆当前的状态信息,生成控制车辆的控制指令,并向车辆发送该控制指令。
图1为本申请实施例提供的车辆100的功能框图。在一些实施例中,可以将车辆100配置为完全或部分地自动驾驶模式。例如,车辆100可以在处于自动驾驶模式的同时控制自身,并且可通过人为操作来确定车辆及其周边环境的状态信息,确定车辆的玻璃是否起雾,或者,车辆所在的环境的天气状态,基于所确定的状态信息来控制车辆100。在车辆100处于自动驾驶模式时,可以将车辆100置为在没有和人交互的情况下操作。
车辆100可包括各种子系统,例如行进系统102、传感器系统104、控制系统106、一个或多个外围设备108以及电源110、计算机系统112和用户接口116中的至少一个。可选地,车辆100可包括更多或更少的子系统,并且每个子系统可包括多个元件。另外,车辆100的每个子系统和元件可以通过有线或者无线互连。
其中,行进系统102可包括为车辆100提供动力运动的组件。在一些实施例中,行进系统102可包括引擎118、能量源119、传动装置120和车轮/轮胎121。其中,引擎118可以是内燃引擎、电动机、空气压缩引擎或其他类型的引擎组合,例如气油发动机和电动机组成的混动引擎,内燃引擎和空气压缩引擎组成的混动引擎。引擎118将能量源119转换成机械能量。能量源119的示例包括汽油、柴油、其他基于石油的燃料、丙烷、其他基于压缩气体的燃料、乙醇、太阳能电池板、电池和其他电力来源。能量源119也可以为车辆100的其他系统提供能量。传动装置120可以将来自引擎118的机械动力传送到车轮121。传动装置120可包括变速箱、差速器和驱动轴。在一些实施例中,传动装置120还可以包括其他器件,比如离合器。其中,驱动轴可包括可耦合到一个或多个车轮121的一个或多个轴。
传感器系统104可包括感测关于车辆100周边的环境的信息的若干个传感器。例如,传感器系统104可包括定位系统122(定位系统可以是GPS系统,也可以是北斗系统或者其他定位系统)、惯性测量单元(inertial measurement unit,IMU)124、雷达126、激光测距仪128以及相机130中的至少一个。传感器系统104还可包括被监视车辆100的内部系统的传感器(例如,车内空气质量监测器、燃油量表、机油温度表等)。来自这些传感器中的一个或多个的传感器数据可用于检测对象及其相应特性(位置、形状、方向、速度等)。这种检测和识别 是自主车辆100的安全操作的关键功能。
上述定位系统122可用于估计车辆100的地理位置。IMU 124用于基于惯性加速度来感测车辆100的位置和朝向变化。在一些实施例中,IMU 124可以是加速度计和陀螺仪的组合。雷达126可利用无线电信号来感测车辆100的周边环境内的物体,包括毫米波雷达、激光雷达等。在一些实施例中,除了感测物体以外,雷达126还可用于感测物体的速度和/或前进方向。激光测距仪128可利用激光来感测车辆100与周围环境中的物体的距离。在一些实施例中,激光测距仪128可包括一个或多个激光源、激光扫描器以及一个或多个检测器,以及其他系统组件。相机130可用于捕捉车辆100的周边环境的多个图像。相机130可以是静态相机或视频相机。
控制系统106可控制车辆100及其组件的操作。控制系统106可包括各种元件,例如转向系统132、油门134、制动单元136、计算机视觉系统140、路线控制系统142以及障碍规避系统144等。其中,转向系统132可操作来调整车辆100的前进方向,例如方向盘系统。油门134用于控制引擎118的操作速度并进而控制车辆100的速度。制动单元136用于控制车辆100减速。制动单元136可使用摩擦力来减慢车轮121。在其他实施例中,制动单元136可将车轮121的动能转换为电流。制动单元136也可采取其他形式来减慢车轮121转速从而控制车辆100的速度。计算机视觉系统140可以操作来处理和分析由相机130捕捉的图像以便识别车辆100周边环境中的物体和/或特征。所述物体和/或特征可包括交通信号、道路边界和障碍物。计算机视觉系统140可使用物体识别算法、运动中恢复结构(Structure from Motion,SFM)算法、视频跟踪和其他计算机视觉技术。在一些实施例中,计算机视觉系统140可以用于为环境绘制地图、跟踪物体、估计物体的速度等等。路线控制系统142用于确定车辆100的行驶路线。在一些实施例中,路线控制系统142可结合来自传感器、定位系统122和一个或多个预定地图的数据以为车辆100确定行驶路线。障碍规避系统144用于识别、评估和避免或者以其他方式越过车辆100的环境中的潜在障碍物。当然,在一些实施例中,控制系统106可以增加或替换地包括除了所示出和描述的那些以外的组件。或者也可以减少一部分上述示出的组件。
车辆100可通过外围设备108与外部传感器、其他车辆、其他计算机系统或用户之间进行交互。外围设备108可包括无线通信系统146、车载电脑148、麦克风150和/或扬声器152。其中,无线通信系统146可以直接地或者经由通信网络来与一个或多个设备无线通信。例如,无线通信系统146可使用3G蜂窝通信,例如CDMA、EVD0、GSM/GPRS,或者4G蜂窝通信,例如LTE,或者5G蜂窝通信。无线通信系统146可利用WiFi与无线局域网(wireless localarea network,WLAN)通信。在一些实施例中,无线通信系统146可利用红外链路、蓝牙或ZigBee与设备直接通信。其他无线协议,例如各种车辆通信系统,例如,无线通信系统146可包括一个或多个专用短程通信(dedicated short range communications,DSRC)设备。
在一些实施例中,外围设备108提供车辆100的用户与用户接口116交互的手段。例如,车载电脑148可向车辆100的用户提供信息。用户接口116还可操作车载电脑148来接收用户的输入。车载电脑148可以通过触摸屏进行操作。在其他情况中,外围设备108可提供用于车辆100与位于车内的其它设备通信的手段。例如,麦克风150可从车辆100的用户接收音频(例如,语音命令或其他音频输入)。类似地,扬声器152可向车辆100的用户输出音频。电源110可向车辆100的各种组件提供电力。在一个实施例中,电源110可以为可再充电锂 离子或铅酸电池。这种电池的一个或多个电池组可被配置为电源,从而为车辆100的各种组件提供电力。在一些实施例中,电源110和能量源119可一起实现,例如一些全电动车中那样。
车辆100的部分或所有功能受计算机系统112控制。计算机系统112可包括至少一个处理器113,处理器113执行存储在例如数据存储装置114这样的非暂态计算机可读介质中的指令115。计算机系统112还可以是采用分布式方式控制车辆100的个体组件或子系统的多个计算设备。
处理器113可以是任何常规的处理器,诸如商业可获得的中央处理单元(CentralProcessing Unit,CPU)。可选地,该处理器可以是诸如专用集成电路(ApplicationSpecific Integrated Circuit,ASIC)或其它基于硬件的处理器的专用设备。尽管图1功能性地图示了处理器、存储器、和在相同物理外壳中的其它元件,但是本领域的普通技术人员应该理解该处理器、计算机系统、或存储器实际上可以包括可以存储在相同的物理外壳内的多个处理器、计算机系统、或存储器,或者包括可以不存储在相同的物理外壳内的多个处理器、计算机系统、或存储器。例如,存储器可以是硬盘驱动器,或位于不同于物理外壳内的其它存储介质。因此,对处理器或计算机系统的引用将被理解为包括对可以并行操作的处理器或计算机系统或存储器的集合的引用,或者可以不并行操作的处理器或计算机系统或存储器的集合的引用。不同于使用单一的处理器来执行此处所描述的步骤,诸如转向组件和减速组件的一些组件每个都可以具有其自己的处理器,所述处理器只执行与特定于组件的功能相关的计算。
在此处所描述的各个方面中,处理器可以位于远离该车辆并且与该车辆进行无线通信。在其它方面中,此处所描述的过程中的一些在布置于车辆内的处理器上执行而其它则由远程处理器执行,包括采取执行单一操纵的必要步骤。
在一些实施例中,数据存储装置114可包含指令115(例如,程序逻辑),指令115可被处理器113执行来执行车辆100的各种功能,包括以上描述的那些功能。数据存储装置114也可包含额外的指令,包括向行进系统102、传感器系统104、控制系统106和外围设备108中的一个或多个发送数据、从其接收数据、与其交互和/或对其进行控制的指令。除了指令115以外,数据存储装置114还可存储数据,例如道路地图、路线信息,车辆的位置、方向、速度以及其它这样的车辆数据,以及其他信息(比如天气状态等)。这些信息可在车辆100自主、半自主和/或手动模式的操作期间被车辆100和计算机系统112使用。作为一种示例,数据存储装置114可以从传感器系统104或车辆100的其他组件获取环境信息,环境信息例如可以为车辆当前所处环境附近是否有绿化带、交通信号灯、行人等,车辆可以通过机器学习算法计算当前所处环境附近是否存在绿化带、交通信号灯、行人等。数据存储装置114还可以存储该车辆自身的状态信息,以及与该车辆有交互的其他车辆的状态信息。状态信息包括但不限于车辆的速度、加速度、航向角等。比如,车辆基于雷达126的测速、测距功能,得到其他车辆与自身之间的距离、其他车辆的速度等。如此,处理器113可从数据存储装置114获取上述环境信息或者状态信息,并基于车辆所处环境的环境信息、车辆自身的状态信息、其他车辆的状态信息,以及传统的基于规则的驾驶策略,得到最终的驾驶策略,以控制车辆进行自动驾驶(比如加速、减速、停止等)。
用户接口116,用于向车辆100的用户提供信息或从其接收信息。可选地,用户接口116 可包括在外围设备108的集合内的一个或多个输入/输出设备,例如无线通信系统146、车载电脑148、麦克风150和扬声器152中的一个或多个。
计算机系统112可基于从各种子系统(例如,行进系统102、传感器系统104和控制系统106)以及从用户接口116接收的输入来控制车辆100的功能。例如,计算机系统112可利用来自控制系统106的输入,以便控制转向系统132,从而规避由传感器系统104和障碍规避系统144检测到的障碍物。在一些实施例中,计算机系统112可操作来对车辆100及其子系统的许多方面提供控制。
可选地,上述这些组件中的一个或多个可与车辆100分开安装或关联。例如,数据存储装置114可以部分或完全地与车辆100分开存在。上述组件可以按有线和/或无线方式来通信地耦合在一起。需要说明的是,上述组件只是一个示例,实际应用中,上述各个模块中的组件有可能根据实际需要增添或者删除,图1不应理解为对本申请实施例的限制。
在本申请的一些实施例中,车辆还可以包括硬件结构和/或软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能以硬件结构、软件模块、还是硬件结构加软件模块的方式来执行,取决于技术方案的特定应用和设计约束条件。
上述车辆100可以为轿车、越野车、跑车、载货汽车、公共汽车、娱乐车、游乐场车辆、施工设备、电车、高尔夫球车、火车等,对此本申请实施例不作任何限制。
本申请实施例的车辆100具有智能驾驶或者自动驾驶功能,车辆的自动驾驶等级用于指示自动驾驶车辆的智能化程度、自动化程度。目前按照美国汽车工程师协会SAE的标准,将车辆的自动驾驶等级分为6个等级:无自动化(L0)、驾驶支援(L1)、部分自动化(L2)、有条件自动化(L3)、高度自动化(L4)和完全自动化(L5)。
基于对上述车辆的功能介绍,自动驾驶车辆/辅助驾驶车辆可通过传感器系统获取车辆所在的环境图像,可基于图像识别算法确定车辆当前的天气状态,若车辆当前的天气状态不佳,例如雨雪天气、雾天天气,可对车辆进行相应天气状态的控制,例如天气状态不佳时,降低车辆的自动驾驶等级,或者,发出告警信息。然而,存在一种特殊情况,车辆在冬天行驶过程中,由于室内外温差较大,车辆内侧的挡风玻璃容易起雾,该场景与天气状态为雾天的场景类似,为了确保行车安全,车辆会按照天气状态不佳的情况作相应的控制操作,例如降低自动驾驶等级。本领域技术人员可以理解,如果是车辆的挡风玻璃起雾,可以通过启动除雾装置消除,例如开启车内空调,而无需降低车辆的自动驾驶等级。由此可见,针对雾天天气和车辆挡风玻璃起雾这两种场景,如果不加以区分,可能造成对自动驾驶车辆的系统资源的浪费。
为了解决上述技术问题,本申请实施例提供一种检测方法,自动驾驶车辆100或者与自动驾驶车辆100相关联的计算设备(如图1的计算机系统112、计算机视觉系统140、数据存储装置114)可以基于采集的环境图像的图像特征,确定车辆当前的状态信息,执行相应的控制操作。车辆100可基于环境图像中的目标物体(例如固定在车辆外部的物体、道路上的其他车辆、车道线、交通信号灯、天空等)的图像特征,确定车辆100当前所处环境的天气状态(例如晴天、阴天、雨天、雾天、雪天等)。或者,车辆100可基于环境图像中的上述目标物体的图像特征,确定车辆100的挡风玻璃是否起雾。车辆100在确定车辆当前的状态信息后,基于确定的状态信息作相应的控制操作,不同状态信息可对应不同的 控制操作。作为一种示例,车辆100在确定当前所处环境为雾天时,对环境图像进行去雾处理,或者降低自动驾驶等级,或者降低车速等。作为另一种示例,车辆100在确定车辆挡风玻璃起雾时,控制车辆上的其他装置启动或关闭,例如控制车内的除雾装置或者车窗升降装置启动。
下面采用具体的实施例对本申请的检测方法进行详细说明,需要说明的是,下面几个具体实施例可以相互结合,对于相同或相似的内容,在不同的实施例中不再进行重复说明。
图2为本申请实施例提供的一种检测方法的流程图。如图2所示,本实施例提供的检测方法,包括如下步骤:
步骤201、获取来自至少一个图像采集装置的至少一帧环境图像,环境图像用于呈现第一终端所在的环境的信息。
本实施例的第一终端可以为具有自动驾驶功能的车辆、船、飞机、航天器等,该第一终端当前处于完全或部分自动驾驶模式。第一终端可以在处于完全自动驾驶模式时,基于环境图像实现自动控制。或者第一终端可以在处于部分自动驾驶模式时,基于环境图像做初步判断后,通过人机交互获取用户的控制指令,实现半自动控制。
需要指出的是,本实施例的第一终端具有一封闭空间,该封闭空间的组件至少包括玻璃,用户在该封闭空间通过玻璃观察第一终端的周围环境。例如,用户可通过车辆的前挡风玻璃、左右两侧的挡风玻璃或者后挡风玻璃,观察车辆周围的道路状况、天气状况等。
为了实现第一终端的智能控制功能,可在第一终端内部的封闭空间设置至少一个图像采集装置。图像采集装置可以包括单目相机(Monocular)、双目相机(Stereo)、深度相机(RGB-D)以及它们的各种组合。单目相机的结构简单、成本低,但单张图像无法确定深度信息;双目相机由两个单目相机组成,这两个相机之间的距离是已知的,可基于两个相机的相对位置确定物体距离任一相机的距离;深度相机可以通过红外结构光或飞行时间测距法(Time-of-Flight,ToF),通过主动向物体发射光并接收返回的光,测量物体距离相机的距离。可选地,图像采集装置还可以包括摄像机。
本实施例的图像采集装置采集的环境图像为可见光图像,用于呈现第一终端所在环境的信息,例如包括道路状况(其他车辆、行人等障碍物,车道线、停止线、车道边线、人行横道、交通信号灯、交通标识、绿化带等道路信息)、天气状况(晴天、阴天、雨天、雪天、雾天等)。
以第一终端为车辆进行举例,图3示出了图像采集装置在车辆内部的空间示意图。如图3所示,图像采集装置可以设置在车辆100的前排驾驶区域301,例如主驾驶的顶部或者副驾驶的顶部,用于采集车辆100前方的环境图像;还可以设置在车辆100的后排乘坐区域302,例如后排左侧或者右侧的顶部,用于采集车辆100左侧或者右侧的环境图像,又例如后排中间靠后的顶部,用于采集车辆100后方的环境图像。
步骤202、根据至少一帧环境图像,确定第一终端的状态信息。
其中,状态信息包括以下中的至少一个:
第一终端的玻璃是否起雾,或者,第一终端所在的环境的天气状态。
第一终端所在的环境的天气状态包括浓雾、薄雾或者正常中的任意一个。
环境中的雾气对相机成像和检测产生影响,可通过对雾的包络范围、粒径、密度、能见度等进行描述,确定是浓雾、薄雾还是正常。其中,包络范围是空间体积的概念,无固 定标准;雾的粒径通常为1um~15um;密度包含含水量和数密度两个指标,取值范围分别为0.1g/m 3~0.5g/m 3和50个/cm 3~2500个/cm 3;能见度的范围取决于雾的微观结构物理量,其影响因素包括雾滴数密度、质点大小、含水量。
需要指出的是,能见度的大小主要由两个因素决定:一是目标物与衬托目标物的背景(例如天空)之间的亮度差异,差异越大(小),能见度越大(小);二是大气透明度,观测者(或图像采集装置)与目标物之间的气层能减弱前述的亮度差异,大气透明度越差(好),能见度越小(大)。雾天、烟、沙尘、大雪、毛毛雨等天气现象可使大气浑浊,透明度变差。
表1示出了能见度与天气的定性描述关系表,由表1可知,能见度的数值越低,能见度越差,雾气浓度越高。需要说明的是,表1仅仅是一种示例性的说明,以表征对应关系,本申请不对具体的对应情况进行限定。上述对应关系也可以通过其他形式表征,不仅仅局限于表格。
表1
能见度 | 能见度的定性评价 | 天气的定性描述
20km-30km | 能见度极好，视野清晰 | 正常
15km-20km | 能见度好，视野较清晰 | 正常
10km-15km | 能见度一般 | 正常
1km-10km | 能见度较差，视野不清晰 | 薄雾
500m-1km | 能见度差，视野不清晰 | 雾
200m-500m | 能见度很差 | 大雾
50m-200m | 能见度极差 | 浓雾
<50m | 能见度几乎为零 | 强浓雾
为了简单起见,可将能见度在10km以上的天气定义为正常天气,能见度在10km以下的天气定义为雾天天气。根据图像识别算法效果能力的差异,可以对雾天天气设置不同的分类,例如分为薄雾和浓雾,又或者分为薄雾、雾、大雾、浓雾、强浓雾,对此本实施例不作任何限制。示例性的,可将能见度在10km-1km的天气定义为薄雾天气,能见度在1km以下的天气都定义为浓雾天气。
第一终端通过图像识别算法确定环境图像的能见度,根据能见度确定第一终端所在环境的天气状态。例如,第一终端根据图像清晰度确定能见度,又例如,第一终端根据图像中目标物体与天空背景的亮度差异确定能见度,再例如,第一终端根据图像的灰白程度确定能见度,这是由于雾总是灰白色的,因此在有雾的图像中,本应该很暗的物体就会变得灰白,雾气浓度越大,灰白程度越高。
作为一种示例,上述正常的天气状态包括晴天、雾气浓度很低(例如能见度在10km以上)的天气,非正常天气包括浓雾天气或薄雾天气。本实施例通过图像识别算法可区分上述正常天气状态和非正常天气状态。应理解,非正常天气状态还可以包括雨天、雪天等。在一些实施例中,第一终端可通过更新图像识别算法,对第一终端所在的环境的天气状态 进行更详细的判定,用于区分雨天、雪天、雾天等。
本步骤主要用于区分第一终端的玻璃起雾与第一终端所在的环境为雾天这两种场景。对此,本实施例提供如下的三种实现方式:
第一种实现方式通过分析单帧环境图像(例如至少一帧环境图像中的第一环境图像)中至少两个目标物体的亮度变化规律,确定第一终端当前的状态信息。以两个目标物体(两个检测点)为例,如果两个目标物体的亮度衰减的差值大于或等于预设阈值,可认为两个目标物体的亮度变化规律不一致,可确定第一终端的挡风玻璃起雾;如果两个目标物体的亮度衰减的差值小于预设阈值,可认为两个目标物体的亮度变化规律一致,可以进一步确认第一终端所在环境的天气状态(薄雾还是浓雾)。
应理解,任意目标物体具有一内在亮度,在大气环境的影响下,第一终端的图像采集装置采集到的该目标物体的表面亮度将小于该内在亮度,即存在一定的亮度衰减。亮度衰减主要受大气环境的影响,环境质量越差亮度衰减越大,例如雾天环境,雾气浓度越大亮度衰减越大。如果两个目标物体的亮度变化规律不一致,即两个目标物体受大气环境的影响不一致,可认为这两个目标物体不在同一环境中,这种情况极有可能是车内环境(例如车内玻璃起雾)造成的,可确定第一终端的挡风玻璃起雾。如果两个目标物体的亮度变化规律一致,即两个目标物体受大气环境的影响一致,可认为两个目标物体处于同一环境中,可进一步确定第一终端所在环境的天气状态。
第二种实现方式通过分析单帧环境图像(例如至少一帧环境图像中的第一环境图像)中近端目标物体的图像质量,确定第一终端当前的状态信息。如果近端目标物体的图像质量参数满足预设条件,则可以进一步确认第一终端所在环境的天气状态(薄雾还是浓雾);如果近端目标物体的图像质量参数不满足预设条件,则认为第一终端的挡风玻璃起雾。
需要说明的是,可以通过多种方法来判断环境图像的图像质量,包括但不限于清晰度、信噪比、色彩情况、白平衡、畸变情况、运动影响等。清晰度是指图像上各个细节部分纹路和其边界的清晰程度,可以有多种方法来进行评价,具体可参见下文。本申请实施例示出了通过分析环境图像中近端目标物体的清晰度,确定所在环境的天气状态或是否挡风玻璃起雾。当然,还可以基于上述其他图像质量确定所在环境的天气状态或是否挡风玻璃起雾,本申请对此不作任何限制。
第三种实现方式通过分析同一图像采集装置采集的多帧环境图像中的同一目标物体的亮度变化规律或者上述图像质量,确定第一终端当前的状态信息。以两帧环境图像为例,如果两帧环境图像中同一目标物体的亮度变化规律一致,或者,两帧环境图像中的同一目标物体的图像质量差值落在一定数值范围内,可认为两帧环境图像中同一目标物体的图像质量一致或相同,则认为第一终端的挡风玻璃起雾;如果两帧环境图像中同一目标物体的亮度变化规律不一致,或者,两帧环境图像中的同一目标物体的图像质量差值落在一定数值范围之外,可认为两帧环境图像中同一目标物体的图像质量不一致或不相同,则可以进一步确认第一终端所在环境的天气状态(薄雾还是浓雾)。
下面结合附图4对步骤202的第一种实现方式进行详细说明。该实现方式中,环境图像呈现的环境包括至少两个目标物体以及天空。其中,至少两个目标物体,可以是车辆、行人等移动物体,也可以是交通信号灯、指示牌、绿化带、车道线等固定物体,对此本实施例不作任何限制。
图4为本申请实施例提供的一种确定第一终端状态信息的流程图,如图4所示,上述步骤202具体包括:
步骤301、获取单帧环境图像中至少两个目标物体的亮度信息、天空的亮度信息以及至少两个目标物体的深度信息。
在本申请实施例中，单帧环境图像可以是至少一个图像采集装置采集的至少一帧环境图像中的某一帧环境图像，例如第一帧环境图像，这里的“第一”并不代表时序关系，而是指某一图像采集装置采集的任意一帧环境图像。可选的，所述第一帧环境图像也可以为所述至少一帧环境图像中满足预设条件或者规则的某一帧环境图像，这里不对预设条件或者规则进行具体限定。
目标物体在大气中的亮度满足Koschmieder定律:
L = L_0 e^{-kd} + L_f (1 - e^{-kd})
式中，L表示目标物体的表面亮度，L_0表示目标物体的内在亮度，L_f表示天空亮度，d表示目标物体与第一终端之间的距离，k表示消光系数（等于吸收系数和扩散系数之和）。
目标物体的亮度信息包括目标物体的表面亮度和内在亮度。第一终端通过提取环境图像中目标物体对应的图像块的亮度值，获取该目标物体的表面亮度，即上式中的L。不同类型的目标物体对应的内在亮度不同，第一终端可预存不同类型目标物体的内在亮度值。同样的，第一终端通过提取环境图像中天空对应的图像块的亮度值，获取天空的亮度信息，即上式中的L_f。第一终端可基于单目或双目视觉测距方法获取环境图像中任意目标物体的深度信息，深度信息指示了任意目标物体距离第一终端（或者说第一终端的图像采集装置）的距离，即上式中的d。一种单目测距的方法可以利用目标触地点，目标触地点在摄像头上的投影与光轴形成一个相似三角形，根据相似三角形原理可以得到摄像头距离目标触地点的距离。双目测距则是通过对双目得到的两幅图像视差的计算，直接对目标进行距离测量。
这里需要说明的是,目标物体的亮度信息以及消光系数等参数也不仅仅限定上述Koschmieder定律,还可以包括Allard大气灯光照度传输定律、Mie散射理论等,这些都是可以作为消光系数计算的基础,对此本申请实施例不作任何限制。
步骤302、根据单帧环境图像中的至少两个目标物体的亮度信息、天空的亮度信息以及至少两个目标物体的深度信息,确定第一终端的状态信息。
对于同一目标物体,目标物体的亮度信息、天空的亮度信息以及目标物体的深度信息共同指示了目标物体在大气中的亮度损失程度。具体的,当已知目标物体的亮度信息、天空的亮度信息以及目标物体的深度信息,可基于上述Koschmieder定律计算得到目标物体对应的消光系数,消光系数用于指示目标物体在大气中的亮度损失程度。
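由上式可推出 e^{-kd} = (L - L_f)/(L_0 - L_f)，进而 k = -ln[(L - L_f)/(L_0 - L_f)]/d。下面给出据此反解消光系数的一个示意性Python代码片段，其中的亮度值和距离均为假设的输入，仅用于说明计算过程：

```python
import math

def extinction_coefficient(L, L0, Lf, d):
    """根据Koschmieder定律 L = L0*e^(-k*d) + Lf*(1 - e^(-k*d)) 反解消光系数k。

    L：环境图像中提取的目标物体表面亮度；L0：预存的该类目标物体的内在亮度；
    Lf：天空亮度；d：目标物体与图像采集装置之间的距离。
    """
    ratio = (L - Lf) / (L0 - Lf)          # 对应 e^(-k*d)
    if d <= 0 or ratio <= 0:
        raise ValueError("输入不满足Koschmieder模型的前提")
    return -math.log(ratio) / d


# 假设表面亮度120、内在亮度200、天空亮度90、距离50米
print(extinction_coefficient(120.0, 200.0, 90.0, 50.0))
```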
在本步骤中,可基于上述Koschmieder定律确定至少两个目标物体对应的消光系数。以两个目标物体为例,第一终端可根据这两个目标物体对应的消光系数,确定第一终端的状态信息。具体的,判断两个目标物体对应的消光系数的差值是否小于第一阈值,如果差值小于或者等于第一阈值,则认为两个目标物体对应的消光系数是一致的(或者说是相同的),可进一步确定第一终端所在环境的天气状态是否为雾天,是浓雾还是薄雾;如果差值大于第一阈值,则认为两个目标物体对应的消光系数是不一致的(或者说是不同的),确定第一终端的玻璃起雾。需要指出的是,上述第一阈值可根据经验进行设置,也可根据实际检 测效果对该阈值进行微调,对于设置方式本申请实施例不作任何限制。
对于多个目标物体,第一终端可根据多个目标物体对应的消光系数,确定第一终端的状态信息。具体的,判断多个目标物体对应的消光系数是否一致,如果多个目标物体对应的消光系数中的任意两个的差值均小于第一阈值,则认为多个目标物体对应的消光系数是一致的,可进一步确定第一终端所在环境的天气状态;如果多个目标物体中存在至少一组目标物体,每组目标物体中任意两个目标物体对应的消光系数大于或等于第一阈值,则认为多个目标物体对应的消光系数是不一致的,确定第一终端的玻璃起雾。
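下面的示意性Python代码片段给出上述一致性判断逻辑的一种可能实现：对至少两个目标物体的消光系数做两两比较，任意一对差值超过第一阈值即判定玻璃起雾，否则进入天气状态的进一步判断。其中第一阈值的取值为假设值，实际应根据经验或检测效果设置：

```python
from itertools import combinations

def check_by_extinction(coefficients, first_threshold=0.01):
    """coefficients：单帧环境图像中各目标物体对应的消光系数列表；
    first_threshold：假设的第一阈值。"""
    for k1, k2 in combinations(coefficients, 2):
        if abs(k1 - k2) > first_threshold:
            # 消光系数不一致，说明各目标物体受到的大气影响不同
            return "玻璃起雾"
    return "进一步判断天气状态"


print(check_by_extinction([0.026, 0.027, 0.120]))  # 存在明显差异 -> 玻璃起雾
```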
应理解,如果第一终端的某一区域的玻璃起雾,该区域内目标物体将不符合亮度变化规律,不论该区域目标物体的远近,该区域目标物体计算出来的消光系数与除该区域之外的其他目标物体计算出来的消光系数的差异较大(即该区域目标物体对应的消光系数与其他区域目标物体对应的消光系数的差值大于第一阈值),通过本实施例的上述判断过程使得第一终端具备检测第一终端玻璃是否起雾的能力,提升第一终端的智能化程度。
本实施例提供的检测方法,通过获取来自至少一个图像采集装置的单帧环境图像,判断该环境图像中两个或多个目标物体对应的消光系数是否一致,如果两个或多个目标物体对应的消光系数不一致,可确定第一终端的玻璃起雾。上述判断过程使得第一终端具备检测第一终端玻璃是否起雾的能力,实现对环境天气下的雾天和玻璃起雾的区分,提升了第一终端的智能化程度。
下面结合附图5对步骤202的第二种实现方式进行详细说明。该实现方式中，环境图像呈现的环境包括至少一个近端目标物体，其中近端目标物体包括第一终端外部的、与第一终端距离小于预设距离的物体。以第一终端为车辆为例，近端目标物体可以是车辆的前、后引擎盖，固定在引擎盖上的任意物体，车辆左右两侧的后视镜等近距离目标物体。具体的，可在近端目标物体上设置标定物，例如在车辆前引擎盖上设置红点或打叉等，第一终端基于图像采集装置（例如相机）采集的环境图像，通过识别环境图像中的标定物确定近端目标物体。
图5为本申请实施例提供的一种确定第一终端状态信息的流程图,如图5所示,上述步骤202具体包括:
步骤401、获取单帧环境图像中至少一个近端目标物体的清晰度值。
与图4所示实施例类似,本申请实施例的单帧环境图像同样可以是至少一个图像采集装置采集的至少一帧环境图像中的某一帧环境图像,例如第一帧环境图像,这里的“第一帧”并不代表时序关系,可以是某一图像采集装置采集的任意一帧环境图像。
在无参考图像的质量评价中,图像的清晰度是衡量图像优劣的重要指标,它能够较好的与人的主观感受相对应,图像的清晰度不高表现出图像的模糊。第一终端可基于任意一种清晰度算法,获取环境图像中至少一个近端目标物体的清晰度值,可采用如下几种较为常用的、具有代表性的清晰度算法:Brenner梯度函数、Tenengrad梯度函数、Laplacian梯度函数、SMD(灰度方差)函数、方差函数、能量梯度函数等。本申请不做具体限定,以能够获取清晰度值为准。
作为一种示例，第一终端可以通过Brenner梯度函数获取环境图像中至少一个近端目标物体的清晰度值，该函数用于计算相隔两个像素的灰度差的平方和，可表示为：
D = \sum_y \sum_x |f(x+2,y) - f(x,y)|^2
式中,x,y表示像素坐标,f(x,y)表示对应点(x,y)的灰度值,D表示图像清晰度值。
第一终端可基于上述Brenner梯度函数,通过获取至少一个近端目标物体对应的图像块的灰度值确定至少一个近端目标物体的清晰度值。
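下面给出Brenner梯度函数的一个示意性Python实现（基于numpy，输入为近端目标物体对应图像块的灰度值矩阵），仅作为一种可能的实现方式：

```python
import numpy as np

def brenner_sharpness(gray_patch):
    """计算图像块的Brenner清晰度值：相隔两个像素的灰度差的平方和。

    gray_patch：二维numpy数组，表示近端目标物体对应图像块的灰度值。
    """
    gray = gray_patch.astype(np.float64)
    diff = gray[:, 2:] - gray[:, :-2]     # 对应 f(x+2, y) - f(x, y)
    return float(np.sum(diff ** 2))


# 示例：对一个随机生成的64x64灰度图像块计算清晰度值
patch = np.random.randint(0, 256, size=(64, 64))
print(brenner_sharpness(patch))
```

清晰度值越大，图像块越清晰；将该清晰度值与预设清晰度阈值比较，即可得到下文步骤402中的判断结果。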
步骤402、根据单帧环境图像中的至少一个近端目标物体的清晰度值,确定第一终端的状态信息。
以一个近端目标物体为例,第一终端通过比较近端目标物体的清晰度值与预设清晰度阈值的大小关系,确定第一终端的状态信息。如果近端目标物体的清晰度值小于或者等于预设清晰度阈值,则确定第一终端的玻璃起雾;如果近端目标物体的清晰度值大于预设清晰度阈值,可进一步确定第一终端所在环境的天气状态。需要指出的是,上述清晰度阈值可根据经验进行设置,也可根据实际检测效果对该阈值进行微调,对于设置方式本申请实施例不作任何限制。
本实施例提供的检测方法,通过获取来自至少一个图像采集装置的单帧环境图像,判断该环境图像中近端目标物体的图像清晰度,根据图像清晰度确定第一终端的玻璃是否起雾。上述判断过程使得第一终端具备检测第一终端玻璃是否起雾的能力,实现对环境天气下的雾天和玻璃起雾的区分,提升了第一终端的智能化程度。
可选的,在上述各实施例的基础上,第一种终端在确定第一终端的玻璃起雾时,可控制第一终端内部的除雾装置(例如车载新风系统、空调)启动,或者,控制车窗升降装置启动,或者,发出告警信息(可通过屏幕显示、语音播报或者震动等方式发出告警)。前两种方式可直接消除第一终端玻璃上的雾气,实现终端内外温差的平衡,保证第一终端的行驶安全。后一种方式中,用户可以根据告警信息进行人工干预,保证第一终端的行驶安全。
上述几个实施例都是基于图像采集装置采集的单帧的环境图像,进行终端环境的检测。下面一个实施例示出了基于图像采集装置采集的多帧的环境图像,进行终端环境的检测。
下面结合附图6对步骤202的第三种实现方式进行详细说明。通过对多帧的环境图像的图像分析,确定第一终端的玻璃是否起雾,或者按照环境天气的检测方法进行处理。
图6为本申请实施例提供的一种确定第一终端状态信息的流程图。如图6所示,本实施例提供的检测方法,包括如下步骤:
步骤501、获取来自至少一个图像采集装置的多帧环境图像,环境图像用于呈现第一终端所在的环境的信息。
本步骤的实现过程同上述实施例的步骤201,区别仅在于获取的环境图像为多帧,具体可参见上述实施例,此处不再赘述。
步骤502、根据多帧环境图像,确定第一终端的状态信息。
在本步骤中,多帧环境图像均来自于同一图像采集装置,根据同一图像采集装置的多帧环境图像,确定第一终端的状态信息。在实际应用中,可以根据不同需求设定多帧的具体数量,例如根据预设采样间隔获取连续的5帧环境图像。
其中,状态信息包含以下中的至少一个:
第一终端的玻璃是否起雾,或者,第一终端所在的环境的天气状态。
在本实施例中,环境图像呈现的环境包括至少一个目标物体。第一终端根据多帧环境图像,确定第一终端的状态信息,包括如下的两种实现方式:
第一种实现方式通过分析多帧环境图像中的至少一个目标物体的亮度变化规律,确定第一终端当前的状态信息。如果多帧环境图像中的同一目标物体的亮度衰减的差值小于预设阈值,可认为该目标物体在多帧环境图像的亮度变化规律一致,可确定第一终端的挡风玻璃起雾;如果多帧环境图像中的同一目标物体的亮度衰减的差值大于或等于预设阈值,可认为该目标物体在多帧环境图像的亮度变化规律不一致,可以进一步确认第一终端所在环境的天气状态(薄雾还是浓雾)。
第二种实现方式通过分析多帧环境图像中的至少一个目标物体的图像质量，确定第一终端当前的状态信息。如果多帧环境图像中的同一目标物体的图像质量参数的差值小于预设阈值，可认为该目标物体在多帧环境图像的图像质量表现一致，可确定第一终端的挡风玻璃起雾；如果多帧环境图像中的同一目标物体的图像质量参数的差值大于或等于预设阈值，可认为该目标物体在多帧环境图像的图像质量表现不一致，则可以进一步确认第一终端所在环境的天气状态（薄雾还是浓雾）。上述的图像质量参数包括但不限于清晰度、信噪比、色彩情况、白平衡、畸变情况、运动影响等。
具体的,在本实施例的第一种实现方式中,步骤502具体包括:
根据至少一个目标物体在多帧环境图像中的消光系数,确定状态信息。
针对多帧环境图像中的同一目标物体,该目标物体在每一帧的环境图像中的消光系数是通过该目标物体的亮度信息、天空的亮度信息以及该目标物体的深度信息确定的。消光系数的计算过程同上述实施例的步骤301,具体可参见上述实施例,此处不再赘述。
以环境图像呈现的环境包括一个目标物体为例,第一终端在确定多帧环境图像中的同一目标物体的消光系数之后,根据多帧环境图像中的任意两帧的环境图像的同一目标物体的消光系数的差值,确定第一终端的状态信息。如果多帧环境图像中的任意两帧的环境图像的同一目标物体的消光系数的差值均小于或者等于第四阈值,可认为该目标物体在多帧环境图像中的消光系数一致或相同,可确定第一终端的玻璃起雾。如果多帧环境图像中存在两帧环境图像的同一目标物体的消光系数的差值大于第四阈值,可认为该目标物体在多帧环境图像中的消光系数不一致或不相同,可进一步确定第一终端所在环境的天气状态是否为雾天,是浓雾还是薄雾,具体可参见图7实施例。需要指出的是,上述第四阈值可根据经验进行设置,也可根据实际检测效果对该阈值进行微调,对于设置方式本申请实施例不作任何限制。
在一些实施例中,第一终端可判断多帧环境图像中的多个目标物体的消光系数的变化情况,如果多个目标物体中的每一个目标物体在多帧环境图像中的消光系数均一致或相同,则认为第一终端的玻璃起雾,相较于判断多帧环境图像中的一个目标物体,该检测方法的准确性更高。
具体的,在本实施例的第二种实现方式中,步骤502具体包括:
根据至少一个目标物体在多帧环境图像中的清晰度值,确定状态信息。
针对多帧环境图像中的同一目标物体,该目标物体在每一帧的环境图像中的清晰度值是通过该目标物体对应的图像块的灰度值确定的。清晰度值的计算过程同上述实施例的步骤401,具体可参见上述实施例,此处不再赘述。
以环境图像呈现的环境包括一个目标物体为例,第一终端在确定多帧环境图像中的同一目标物体的清晰度值之后,根据多帧环境图像中的同一目标物体的清晰度值的 差值,确定第一终端的状态信息。具体的,如果多帧环境图像中任意两帧环境图像的同一目标物体的清晰度值的差值均小于或者等于第五阈值,可认为该目标物体在多帧环境图像中的清晰度值一致或相同,可确定第一终端的玻璃起雾。如果多帧环境图像中存在两帧环境图像的同一目标物体的清晰度值的差值大于第五阈值,可认为该目标物体在多帧环境图像中的清晰度值不一致或不相同,可进一步确定第一终端所在环境的天气状态是否为雾天,是浓雾还是薄雾,具体可参见图7实施例。需要指出的是,上述第五阈值可根据经验进行设置,也可根据实际检测效果对该阈值进行微调,对于设置方式本申请实施例不作任何限制。
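需要注意的是，与单帧图像中比较不同目标物体的情形相反，多帧图像中比较的是同一目标物体：若其消光系数或清晰度值在各帧间保持一致，反而说明观测结果不随外界环境变化，更可能是玻璃起雾。下面给出该判断逻辑的一个示意性Python代码片段，其中的阈值（对应上文的第四阈值或第五阈值）为假设值：

```python
from itertools import combinations

def check_by_frames(metric_per_frame, threshold):
    """metric_per_frame：同一目标物体在多帧环境图像中的某一度量值列表
    （例如消光系数或清晰度值）；threshold：对应的第四阈值或第五阈值（假设值）。"""
    for v1, v2 in combinations(metric_per_frame, 2):
        if abs(v1 - v2) > threshold:
            return "进一步判断天气状态"
    return "玻璃起雾"


# 连续5帧的消光系数几乎不变 -> 判定玻璃起雾
print(check_by_frames([0.031, 0.030, 0.032, 0.031, 0.030], threshold=0.005))
```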
可选的,多帧环境图像中选取的目标物体可以是近端目标物体,近端目标物体包括第一终端外部的、与第一终端距离小于预设距离的物体。
可选的,可根据实际需求预先设置获取多帧环境图像的数量和获取多帧环境图像的时间间隔。例如,设置间隔0.1s获取一帧环境图像,第一终端可根据连续的5帧环境图像进行环境检测。
需要说明的是，不论第一终端执行图4、图5还是图6所示的判断过程，均存在如下情况：需要进一步确定第一终端所在环境的天气状态，此时可按照环境天气的检测方法进行处理。下面结合附图7对雾天环境的判断过程进行详细说明。
应理解,不同图像采集装置的拍摄视角不同,因此在确定第一终端的状态信息时,作为一种可选的方案,可以根据不同视角的图像采集装置采集的多帧环境图像,进行综合判断,确定第一终端的状态信息。以两个图像采集装置为例,分别为第一图像采集装置和第二图像采集装置,步骤502,可以包括:根据第一图像采集装置的多帧环境图像,确定第一终端的第一状态信息;根据第二图像采集装置的多帧环境图像,确定第一终端的第二状态信息;根据第一状态信息和第二状态信息,确定第一终端的状态信息。
作为一种示例，若第一状态信息和第二状态信息相同（例如均为玻璃起雾），则第一终端的状态信息为玻璃起雾；若第一状态信息和第二状态信息不同（例如第一状态信息为玻璃起雾，第二状态信息为环境雾天），可根据图像采集装置的权重确定第一终端的状态信息，例如第一图像采集装置的权重大于第二图像采集装置的权重，则确定第一终端的状态信息为第一状态信息（玻璃起雾）。其中，图像采集装置的权重与图像采集装置的硬件性能相关，硬件性能越强，权重值越大。需要说明的是，上述方案仅仅是一种示例性的描述，本申请实施例不对上述判断规则进行限定。
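下面给出按照图像采集装置权重融合两个判断结果的一个示意性Python代码片段，其中的权重数值与融合规则均为假设的示例，并非对本申请判断规则的限定：

```python
def fuse_states(state_a, weight_a, state_b, weight_b):
    """state_a、state_b：两个图像采集装置分别得到的状态信息；
    weight_a、weight_b：与各图像采集装置硬件性能相关的权重。"""
    if state_a == state_b:
        return state_a
    # 两个结果不一致时，以权重较大的图像采集装置的结果为准
    return state_a if weight_a >= weight_b else state_b


print(fuse_states("玻璃起雾", 0.7, "环境雾天", 0.3))  # 输出：玻璃起雾
```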
图7为本申请实施例提供的一种确定第一终端天气状态的流程图。如图7所示,本实施例提供的检测方法包括:
步骤601、获取至少一个图像采集装置的任意一帧环境图像的饱和度和明度。
在本实施例中,第一终端通过HSV颜色模型(Hue,Saturation,Value)获取环境图像整体的饱和度S和明度V。其中,饱和度S表示环境图像的颜色接近光谱色的程度。对于某一种颜色,可以看成是某种光谱色与白色混合的结果,光谱色所占的比例越大,颜色接近光谱色的程度就越高,颜色的饱和度就越高。明度V表示环境图像颜色的明亮程度。S和V的取值范围均为0%~100%。
步骤602、根据明度和饱和度的比值,确定第一终端所在的环境的天气状态。
相较于正常天气状态，在雾天环境下，环境图像的明度和饱和度的比值V/S较大。因此可以根据该比值确定天气状态是否为雾天，是薄雾天气还是浓雾天气。具体的，如果该比值大于或者等于第二阈值，则确定第一终端所在的环境的天气状态为浓雾天气；如果该比值大于第三阈值且小于第二阈值，则确定第一终端所在的环境的天气状态为薄雾天气；如果该比值小于或者等于第三阈值，则确定第一终端所在的环境的天气状态为正常天气（雾很少或者无雾）。需要指出的是，上述的第二阈值和第三阈值可根据经验进行设置，也可根据实际检测效果对该阈值进行微调，对于设置方式本申请实施例不作任何限制。
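下面给出基于HSV颜色模型计算整幅环境图像的明度/饱和度比值并进行分级的一个示意性Python代码片段。其中使用OpenCV做颜色空间转换只是一种可能的实现手段，第二阈值、第三阈值的取值均为假设值，需按实际检测效果调整：

```python
import cv2
import numpy as np

def weather_by_value_saturation(bgr_image, second_threshold=6.0, third_threshold=3.0):
    """bgr_image：图像采集装置采集的一帧环境图像（BGR格式）。
    返回根据明度均值与饱和度均值之比粗分的天气状态。"""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    s_mean = float(np.mean(hsv[:, :, 1])) + 1e-6   # 饱和度S均值，加小量防止除零
    v_mean = float(np.mean(hsv[:, :, 2]))          # 明度V均值
    ratio = v_mean / s_mean
    if ratio >= second_threshold:
        return "浓雾"
    if ratio > third_threshold:
        return "薄雾"
    return "正常"


# 用法示意（图像路径为假设）：
# image = cv2.imread("frame.jpg")
# print(weather_by_value_saturation(image))
```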
可选的,第一终端可根据确定的天气状态执行相应的控制操作。在本实施例中,不同浓度等级的雾天环境可对应不同的控制操作。
当第一终端确定天气状态为浓雾天气时,第一终端控制自身的驾驶状态(例如将驾驶状态由全自动驾驶状态切换为半自动驾驶状态,即降低第一终端的自动驾驶等级),或者,第一终端输出信息到车载控制器,由车载控制器向第一终端的相关装置发送控制指令,例如车载控制器向第一终端的雾灯发送开启指令。
当第一终端确定天气状态为薄雾天气时，第一终端对环境图像进行去雾处理，然后将经过去雾处理后的图像数据送入检测模块进行道路检测，根据道路检测结果执行相应的驾驶策略（例如加速、减速、停车），当然也可以不执行任何操作（例如维持当前的驾驶状态），或者按照上述浓雾天气的方式来处理（例如降低第一终端的自动驾驶等级）。
当第一终端确定天气状态为正常天气时,第一终端可直接根据环境图像进行道路检测,具体可参见上文,此处不再赘述。
可选的,第一终端可通过更新图像识别算法,对第一终端所在的环境的天气状态进行更详细的判定,例如增加对非正常天气的识别,区分雾天、雨天、雪天等。针对不同的非正常天气执行不同的控制操作。例如,当第一终端确定天气状态为雨天天气时,第一终端可输出信息到车载控制器,由车载控制器向第一终端的雨刷器发送开启指令,还可以根据雨量的大小,智能调节雨刷器的频率。
本实施例提供的检测方法,通过计算环境图像的饱和度和明度,基于明度与饱和度的比值确定环境天气为雾天的浓度等级。上述判断过程使得第一终端具备检测雾天环境的能力,并根据雾天的浓度等级执行相应的控制操作。在雾天浓度不高的情况下,无需切换第一终端的驾驶状态,可通过图像处理算法对环境图像进行去雾处理,从而避免对控制系统资源的浪费。
综上可知,如果无法有效的区分环境天气下的雾天和玻璃起雾,第一终端很有可能对玻璃起雾的情况进行去雾算法或者降低自动驾驶等级等处理,造成控制系统资源的浪费,或者将环境天气下的雾天当作玻璃起雾,导致开启除雾装置而没有效果。基于上述实施例提供的检测方法,第一终端能够快速识别其状态信息,并根据不同的状态信息执行相应的控制操作,提升第一终端的智能化程度。
需要说明的是,上述各个方法实施例的执行主体可以是第一终端(例如自动驾驶车辆)或者第一终端上的部件(例如检测装置、芯片、控制器或者控制单元),还可以是与第一终端通信连接的云端设备,对此本申请实施例不做任何限制。作为一种示例,上述检测装置可以是图像采集装置(例如摄像头设备),上述控制器可以是多域控制器(Multi Domain Control,MDC),上述控制单元可以是电子控制单元(Electronic Control Unit,ECU),也称为行车电脑。
以第一终端上的检测装置为例,本申请实施例可以根据上述方法实施例对检测装置进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以使用硬件的形式实现,也可以使用软件功能模块的形式实现。需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。下面以使用对应各个功能划分各个功能模块为例进行说明。下述检测装置也可以替代为芯片、控制器或者控制单元等可能的执行主体。
图8为本申请实施例提供的一种检测装置的结构示意图。如图8所示,本申请实施例提供的检测装置700,包括:
获取模块701,用于获取来自至少一个图像采集装置的至少一帧环境图像,所述环境图像用于呈现第一终端所在的环境的信息;
处理模块702,用于根据所述至少一帧环境图像,确定所述第一终端的状态信息,所述状态信息包含以下中的至少一个:
所述第一终端的玻璃是否起雾,或者
所述第一终端所在的所述环境的天气状态。
可选的,所述天气状态包括浓雾、薄雾或者正常中的任意一个。
可选的,所述状态信息包含所述第一终端的玻璃是否起雾,所述环境图像呈现的所述环境包含至少两个目标物体以及天空;
所述处理模块702,具体用于根据所述至少一帧环境图像中的第一环境图像中的至少两个目标物体的亮度信息、天空的亮度信息以及所述至少两个目标物体的深度信息,确定所述状态信息。
可选的,所述处理模块702确定所述第一终端的玻璃起雾,其中,在所述第一环境图像中,所述至少两个目标物体中存在至少一组目标物体,所述每组目标物体中任意两个目标物体对应的消光系数的差值大于第一阈值,所述消光系数是通过所述目标物体的亮度信息、所述天空的亮度信息以及所述目标物体的深度信息确定的,所述消光系数用于指示所述目标物体在大气中的亮度损失程度。
可选的,所述状态信息包括所述第一终端的玻璃是否起雾,所述环境图像呈现的所述环境包括至少一个近端目标物体,所述近端目标物体包括所述第一终端外部的、与所述第一终端距离小于预设距离的物体;
所述处理模块702,具体用于根据所述至少一帧环境图像中的第一环境图像中的至少一个近端目标物体的清晰度值,确定所述状态信息;
其中,所述至少一个近端目标物体的清晰度值是通过所述至少一个近端目标物体对应的图像块的灰度值确定的。
可选的,所述处理模块702确定所述第一终端的玻璃起雾,在所述第一环境图像中,至少存在一个近端目标物体的清晰度值小于或者等于预设清晰度阈值。
可选的,所述获取模块701,具体用于获取来自至少一个图像采集装置的多帧环境图像;
所述处理模块702,具体用于根据所述多帧环境图像,确定所述第一终端的状态信息。
可选的,所述状态信息包括所述第一终端的玻璃是否起雾,所述环境图像呈现的所述环境包括至少一个目标物体;
所述处理模块702,具体用于根据所述至少一个目标物体在所述多帧环境图像中的消光系数或者清晰度值,确定所述状态信息;
其中,所述至少一个目标物体在每一帧的环境图像中的消光系数是通过所述至少一个目标物体的亮度信息、所述天空的亮度信息以及所述至少一个目标物体的深度信息确定的;所述至少一个目标物体在每一帧的环境图像中的清晰度值是通过所述至少一个目标物体对应的图像块的灰度值确定的。
可选的,所述处理模块702确定所述第一终端的玻璃起雾,其中,所述多帧环境图像中的任意两帧的环境图像的同一目标物体的消光系数的差值均小于或者等于第四阈值。
可选的,所述处理模块702确定所述第一终端的玻璃起雾,其中,所述多帧环境图像中的任意两帧的环境图像中的同一目标物体的清晰度值的差值均小于或者等于第五阈值。
可选的,所述同一目标物体为同一近端目标物体,所述近端目标物体包括所述第一终端外部的、与所述第一终端距离小于预设距离的物体。
可选的,所述处理模块702在确定所述第一终端的玻璃起雾时,还用于:
控制车内的除雾装置启动,或者,控制车窗升降装置启动,或者,发出告警信息。
可选的,在所述处理模块702确定所述第一终端的玻璃没有起雾时,所述获取模块701,还用于:
获取至少一个图像采集装置的任意一帧环境图像的饱和度和明度;
所述处理模块702,还用于根据所述明度与所述饱和度的比值,确定所述第一终端所在的所述环境的天气状态。
可选的,所述天气状态为浓雾时,所述比值大于或者等于第二阈值;所述天气状态为薄雾时,所述比值大于第三阈值且小于所述第二阈值;和/或
所述天气状态为正常时,所述比值小于或者等于所述第三阈值。
可选的,所述处理模块702,还用于:
确定所述天气状态为浓雾时,控制所述第一终端的驾驶状态或者输出控制信息到车载控制器;或者
确定所述天气状态为薄雾时,对所述环境图像进行去雾处理;或者
确定所述天气状态为正常时,根据所述环境图像进行道路检测。
可选的,本申请实施例提供的检测装置还可以包括通信模块,通信模块用于向第一终端上的除雾装置或车窗升降装置发送控制指令,该控制指令用于控制第一终端车内的除雾装置启动,或者,控制第一终端车窗升降装置启动。或者,通信模块用于向第一终端的显示装置、语音装置或震动装置发出告警信息,可通过屏幕显示、语音播报或者震动等方式发出告警。
本申请实施例提供的检测装置,用于执行前述任一方法实施例的检测方案,其实现原理和技术效果类似,在此不再赘述。
图9为本申请实施例提供的一种检测装置的硬件结构示意图。如图9所示，本申请实施例提供的检测装置800，包括：
至少一个处理器801(图9中仅示出一个处理器)和至少一个存储器802(图9中仅示出一个存储器);
所述至少一个存储器802用于存储计算机执行指令,当所述检测装置800运行时,所述至少一个处理器801执行所述至少一个存储器802存储的所述计算机执行指令,以使所述检测装置800执行前述任一方法实施例的检测方案。
需要说明的是,本申请实施例提供的检测装置800可以设置在第一终端上,也可以设置在云端设备上,对此本申请实施例不作任何限制。
本申请实施例还提供一种计算机存储介质,用于存储计算机程序,当该计算机程序在计算机上运行时,使得该计算机执行前述任一方法实施例中的检测方法。
本申请实施例还提供一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得该计算机执行前述任一方法实施例中的检测方法。
本申请实施例还提供一种芯片,包括:至少一个处理器和接口,用于从至少一个存储器中调用并运行所述至少一个存储器中存储的计算机程序,执行前述任一方法实施例中的检测方法。
本申请实施例还提供一种自动驾驶系统,其包括前述的一个或多个第一终端,以及一个或多个云端设备,其中第一终端上设置有上述检测装置,或者,云端设备上设置有上述检测装置,以使自动驾驶系统能够区分玻璃起雾和环境天气,提高系统对环境检测的准确性。
本申请实施例还提供一种车辆,该车辆上包含上述检测装置。通过所述检测装置,以使该车辆具备区分玻璃起雾和环境天气的功能,从而控制该车辆上的其他装置(例如除雾装置、车窗升降装置、显示装置、震动装置、语音装置等)启动或关闭。进一步,该车辆还包含至少一个摄像装置和/或至少一个雷达装置。所述雷达装置包含毫米波雷达、激光雷达或超声波雷达中的至少一个。
可选的，该车辆可以为轿车、越野车、跑车、载货汽车、公共汽车、娱乐车、游乐场车辆、施工设备、电车、高尔夫球车、火车等，对此本申请实施例不作任何限制。
应理解，本申请实施例中提及的处理器可以是中央处理单元（Central Processing Unit，CPU），还可以是其他通用处理器、数字信号处理器（Digital Signal Processor，DSP）、专用集成电路（Application Specific Integrated Circuit，ASIC）、现成可编程门阵列（Field Programmable Gate Array，FPGA）或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
还应理解,本申请实施例中提及的存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机 存取存储器(Double Data Rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DR RAM)。
需要说明的是,当处理器为通用处理器、DSP、ASIC、FPGA或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件时,存储器(存储模块)集成在处理器中。
应注意,本文描述的存储器旨在包括但不限于这些和任意其它适合类型的存储器。
还应理解,本文中涉及的第一、第二以及各种数字编号仅为描述方便进行的区分,并不用来限制本申请的范围。
应理解,本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。
应理解,在本申请的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
以上所述，仅为本申请的具体实施方式，但本申请的保护范围并不局限于此，任何熟悉本技术领域的技术人员在本申请揭露的技术范围内，可轻易想到变化或替换，都应涵盖在本申请的保护范围之内。因此，本申请的保护范围应以所述权利要求的保护范围为准。

Claims (31)

  1. 一种检测方法,其特征在于,所述方法包括:
    获取来自至少一个图像采集装置的至少一帧环境图像,所述环境图像用于呈现第一终端所在的环境的信息;
    根据所述至少一帧环境图像,确定所述第一终端的状态信息,所述状态信息包含以下中的至少一个:
    所述第一终端的玻璃是否起雾,或者
    所述第一终端所在的所述环境的天气状态。
  2. 根据权利要求1所述的方法,其特征在于,所述天气状态包括浓雾、薄雾或者正常中的任意一个。
  3. 根据权利要求1或2所述的方法,其特征在于,所述状态信息包含所述第一终端的玻璃是否起雾,所述环境图像呈现的所述环境包含至少两个目标物体以及天空;
    根据所述至少一帧环境图像,确定所述第一终端的状态信息,包括:
    根据所述至少一帧环境图像中的第一环境图像中的至少两个目标物体的亮度信息、天空的亮度信息以及所述至少两个目标物体的深度信息,确定所述状态信息。
  4. 根据权利要求3所述的方法,其特征在于,所述根据所述第一环境图像中的至少两个目标物体的亮度信息、天空的亮度信息以及所述至少两个目标物体的深度信息,确定所述状态信息,包括:
    确定所述第一终端的玻璃起雾,其中,在所述第一环境图像中,所述至少两个目标物体中存在至少一组目标物体,每组目标物体中任意两个目标物体对应的消光系数的差值大于第一阈值,所述消光系数是通过所述目标物体的亮度信息、所述天空的亮度信息以及所述目标物体的深度信息确定的,所述消光系数用于指示所述目标物体在大气中的亮度损失程度。
  5. 根据权利要求1或2所述的方法,其特征在于,所述状态信息包括所述第一终端的玻璃是否起雾,所述环境图像呈现的所述环境包括至少一个近端目标物体,所述近端目标物体包括所述第一终端外部的、与所述第一终端距离小于预设距离的物体;
    根据所述至少一帧环境图像,确定所述第一终端的状态信息,包括:
    根据所述至少一帧环境图像中的第一环境图像中的至少一个近端目标物体的清晰度值,确定所述状态信息;
    其中,所述至少一个近端目标物体的清晰度值是通过所述至少一个近端目标物体对应的图像块的灰度值确定的。
  6. 根据权利要求5所述的方法,其特征在于,所述根据所述第一环境图像中的至少一个近端目标物体的清晰度值,确定所述状态信息,包括:
    确定所述第一终端的玻璃起雾,在所述第一环境图像中,至少存在一个近端目标物体的清晰度值小于或者等于预设清晰度阈值。
  7. 根据权利要求1-6中任一项所述的方法,其特征在于,所述获取来自至少一个图像采集装置的至少一帧环境图像,包括:
    获取来自至少一个图像采集装置的多帧环境图像;
    所述根据所述环境图像,确定所述第一终端的状态信息,包括:
    根据所述多帧环境图像,确定所述第一终端的状态信息。
  8. 根据权利要求7所述的方法,其特征在于,所述状态信息包括所述第一终端的玻璃是否起雾,所述环境图像呈现的所述环境包括至少一个目标物体;
    所述根据所述多帧环境图像,确定所述第一终端的状态信息,包括:
    根据所述至少一个目标物体在所述多帧环境图像中的消光系数或者清晰度值,确定所述状态信息;
    其中,所述至少一个目标物体在每一帧的环境图像中的消光系数是通过所述至少一个目标物体的亮度信息、天空的亮度信息以及所述至少一个目标物体的深度信息确定的;所述至少一个目标物体在每一帧的环境图像中的清晰度值是通过所述至少一个目标物体对应的图像块的灰度值确定的。
  9. 根据权利要求8所述的方法,其特征在于,所述根据所述至少一个目标物体在所述多帧环境图像中的消光系数,确定所述状态信息,包括:
    确定所述第一终端的玻璃起雾,其中,所述多帧环境图像中的任意两帧的环境图像的同一目标物体的消光系数的差值均小于或者等于第四阈值。
  10. 根据权利要求8所述的方法,其特征在于,所述根据所述至少一个目标物体在所述多帧环境图像中的清晰度值,确定所述状态信息,包括:
    确定所述第一终端的玻璃起雾,其中,所述多帧环境图像中的任意两帧的环境图像中的同一目标物体的清晰度值的差值均小于或者等于第五阈值。
  11. 根据权利要求9或10所述的方法,其特征在于,所述同一目标物体为同一近端目标物体,所述近端目标物体包括所述第一终端外部的、与所述第一终端距离小于预设距离的物体。
  12. 根据权利要求1-11中任一项所述的方法,其特征在于,在确定所述第一终端的玻璃起雾时,所述方法还包括:
    控制车内的除雾装置启动,或者,控制车窗升降装置启动,或者,发出告警信息。
  13. 根据权利要求1-11中任一项所述的方法,其特征在于,在确定所述第一终端的玻璃没有起雾时,所述方法还包括:
    获取至少一个图像采集装置的任意一帧环境图像的饱和度和明度;
    根据所述明度与所述饱和度的比值,确定所述第一终端所在的所述环境的天气状态。
  14. 根据权利要求13所述的方法,其特征在于,所述根据所述明度与所述饱和度的比值,确定所述第一终端所在的所述环境的天气状态,包括:
    所述天气状态为浓雾时,所述比值大于或者等于第二阈值;
    所述天气状态为薄雾时,所述比值大于第三阈值且小于所述第二阈值;和/或
    所述天气状态为正常时,所述比值小于或者等于所述第三阈值。
  15. 根据权利要求1-14中任一项所述的方法,其特征在于,所述方法还包括:
    确定所述天气状态为浓雾时,控制所述第一终端的驾驶状态或者输出控制信息到车载控制器;或者
    确定所述天气状态为薄雾时,对所述环境图像进行去雾处理;或者
    确定所述天气状态为正常时,根据所述环境图像进行道路检测。
  16. 一种检测装置,其特征在于,包括:
    获取模块,用于获取来自至少一个图像采集装置的至少一帧环境图像,所述环境图像用于呈现第一终端所在的环境的信息;
    处理模块,用于根据所述至少一帧环境图像,确定所述第一终端的状态信息,所述状态信息包含以下中的至少一个:
    所述第一终端的玻璃是否起雾,或者
    所述第一终端所在的所述环境的天气状态。
  17. 根据权利要求16所述的装置,其特征在于,所述天气状态包括浓雾、薄雾或者正常中的任意一个。
  18. 根据权利要求16或17所述的装置,其特征在于,所述状态信息包含所述第一终端的玻璃是否起雾,所述环境图像呈现的所述环境包含至少两个目标物体以及天空;
    所述处理模块,具体用于根据所述至少一帧环境图像中的第一环境图像中的至少两个目标物体的亮度信息、天空的亮度信息以及所述至少两个目标物体的深度信息,确定所述状态信息。
  19. 根据权利要求18所述的装置,其特征在于,所述处理模块确定所述第一终端的玻璃起雾,其中,在所述第一环境图像中,所述至少两个目标物体中存在至少一组目标物体,每组目标物体中任意两个目标物体对应的消光系数的差值大于第一阈值,所述消光系数是通过所述目标物体的亮度信息、所述天空的亮度信息以及所述目标物体的深度信息确定的,所述消光系数用于指示所述目标物体在大气中的亮度损失程度。
  20. 根据权利要求16或17所述的装置,其特征在于,所述状态信息包括所述第一终端的玻璃是否起雾,所述环境图像呈现的所述环境包括至少一个近端目标物体,所述近端目标物体包括所述第一终端外部的、与所述第一终端距离小于预设距离的物体;
    所述处理模块,具体用于根据所述至少一帧环境图像中的第一环境图像中的至少一个近端目标物体的清晰度值,确定所述状态信息;
    其中,所述至少一个近端目标物体的清晰度值是通过所述至少一个近端目标物体对应的图像块的灰度值确定的。
  21. 根据权利要求20所述的装置,其特征在于,所述处理模块确定所述第一终端的玻璃起雾,在所述第一环境图像中,至少存在一个近端目标物体的清晰度值小于或者等于预设清晰度阈值。
  22. 根据权利要求16-21中任一项所述的装置,其特征在于,所述获取模块,具体用于获取来自至少一个图像采集装置的多帧环境图像;
    所述处理模块,具体用于根据所述多帧环境图像,确定所述第一终端的状态信息。
  23. 根据权利要求22所述的装置,其特征在于,所述状态信息包括所述第一终端的玻璃是否起雾,所述环境图像呈现的所述环境包括至少一个目标物体;
    所述处理模块,具体用于根据所述至少一个目标物体在所述多帧环境图像中的消光系数或者清晰度值,确定所述状态信息;
    其中,所述至少一个目标物体在每一帧的环境图像中的消光系数是通过所述至少一个目标物体的亮度信息、天空的亮度信息以及所述至少一个目标物体的深度信息确定的;所述至少一个目标物体在每一帧的环境图像中的清晰度值是通过所述至少一个 目标物体对应的图像块的灰度值确定的。
  24. 根据权利要求23所述的装置,其特征在于,所述处理模块确定所述第一终端的玻璃起雾,其中,所述多帧环境图像中的任意两帧的环境图像的同一目标物体的消光系数的差值均小于或者等于第四阈值,或者,所述多帧环境图像中的任意两帧的环境图像中的同一目标物体的清晰度值的差值均小于或者等于第五阈值;
    所述同一目标物体为同一近端目标物体,所述近端目标物体包括所述第一终端外部的、与所述第一终端距离小于预设距离的物体。
  25. 根据权利要求16-24中任一项所述的装置,其特征在于,所述处理模块在确定所述第一终端的玻璃起雾时,还用于:
    控制车内的除雾装置启动,或者,控制车窗升降装置启动,或者,发出告警信息。
  26. 根据权利要求16-24中任一项所述的装置,其特征在于,在所述处理模块确定所述第一终端的玻璃没有起雾时,所述获取模块,还用于:
    获取至少一个图像采集装置的任意一帧环境图像的饱和度和明度;
    所述处理模块,还用于根据所述明度与所述饱和度的比值,确定所述第一终端所在的所述环境的天气状态。
  27. 根据权利要求26所述的装置,其特征在于,
    所述天气状态为浓雾时,所述比值大于或者等于第二阈值;
    所述天气状态为薄雾时,所述比值大于第三阈值且小于所述第二阈值;和/或
    所述天气状态为正常时,所述比值小于或者等于所述第三阈值。
  28. 根据权利要求16-27中任一项所述的装置,其特征在于,
    所述处理模块,还用于:
    确定所述天气状态为浓雾时,控制所述第一终端的驾驶状态或者输出控制信息到车载控制器;或者
    确定所述天气状态为薄雾时,对所述环境图像进行去雾处理;或者
    确定所述天气状态为正常时,根据所述环境图像进行道路检测。
  29. 一种检测装置,其特征在于,包括至少一个处理器和至少一个存储器;
    所述至少一个存储器用于存储计算机执行指令，当所述检测装置运行时，所述至少一个处理器执行所述至少一个存储器存储的所述计算机执行指令，以使所述检测装置执行如权利要求1-15中任一项所述的检测方法。
  30. 一种计算机存储介质,其特征在于,用于存储计算机程序,当所述计算机程序在计算机上执行时,使得所述计算机执行权利要求1-15中任一项所述的检测方法。
  31. 一种车辆,其特征在于,所述车辆包含上述权利要求16-29任一项所述的检测装置。


