WO2019085930A1 - Control method and apparatus for a dual camera device in a vehicle (车辆中双摄像装置的控制方法和装置)

Info

Publication number
WO2019085930A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
vehicle
dual camera
camera device
information
Prior art date
Application number
PCT/CN2018/112904
Other languages
English (en)
French (fr)
Inventor
何敏政
Original Assignee
比亚迪股份有限公司
Priority date
Filing date
Publication date
Application filed by 比亚迪股份有限公司 (BYD Company Limited)
Publication of WO2019085930A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present disclosure relates to the field of vehicle control technologies, and in particular, to a method and apparatus for controlling a dual camera device in a vehicle.
  • the vehicle's Advanced Driver Assistance System (ADAS) adopts a visual mode to collect environmental data outside the vehicle, and then recognizes the collected data. Specifically, an image of the outside of the vehicle is acquired by using a visible light camera, and object recognition is then performed on the acquired image.
  • the present disclosure provides a method and a device for controlling a dual camera device in a vehicle. Acquiring images with two cameras overcomes the problem that the images captured by the existing ADAS system are blurred when the light is weak. Further, the on state of the dual camera device is controlled according to the illumination intensity of the environment in which the vehicle is currently located, that is, according to the actual light conditions when the vehicle is running, so that the working mode of the dual camera device is decided by the light: when the light is strong, only one camera is turned on, which saves energy.
  • when the light is weak, both cameras of the dual camera device are turned on, which effectively ensures the quality of the images captured by the dual camera device, so that when the objects in the images are recognized, the accuracy of object recognition can be improved, thereby ensuring the safety of driving the vehicle.
  • this solves the technical problem in the prior art that a visible light camera can capture high-quality images only in scenes with sufficient light, while in scenes with weak light the captured images are relatively blurred and noisy, so that in the subsequent object recognition of the images the false recognition rate and the missed recognition rate of objects are large, which directly affects the driving safety of the vehicle.
  • the first aspect of the present disclosure provides a method for controlling a dual camera device in a vehicle, including: obtaining the illumination intensity of the environment in which the vehicle is currently located; and
  • controlling the on state of the dual camera device on the vehicle according to the illumination intensity.
  • in the control method of the dual camera device in the vehicle of the embodiment of the present disclosure, the illumination intensity of the environment in which the vehicle is currently located is obtained, and the on state of the dual camera device on the vehicle is controlled according to the illumination intensity.
  • acquiring images with two cameras overcomes the problem that the existing ADAS system, which uses a single visible light camera, captures relatively blurred images when the light is weak. Further, according to the illumination intensity of the environment in which the vehicle is currently located,
  • the opening of the dual camera device is controlled, that is, the opening of the dual camera device is controlled according to the actual light conditions when the vehicle is running, so that the working mode of the dual camera device is decided by the light: a single camera is turned on when the light is strong, thereby saving energy; when the light is weak,
  • both cameras are turned on, which effectively ensures the quality of the images captured by the dual camera device, so that when the objects in the images are recognized, the accuracy of object recognition can be improved, thereby ensuring the safety of the vehicle.
  • the second aspect of the present disclosure provides a control device for a dual camera device in a vehicle, including:
  • an acquisition module configured to obtain the illumination intensity of the environment in which the vehicle is currently located; and
  • a control module configured to control the on state of the dual camera device on the vehicle according to the illumination intensity.
  • in the control device of the dual camera device in the vehicle of the embodiment of the present disclosure, the illumination intensity of the environment in which the vehicle is currently located is obtained, and the on state of the dual camera device on the vehicle is controlled according to the illumination intensity.
  • acquiring images with two cameras overcomes the problem that the existing ADAS system, which uses a single visible light camera, captures relatively blurred images when the light is weak. Further, according to the illumination intensity of the environment in which the vehicle is currently located,
  • the opening of the dual camera device is controlled, that is, the opening of the dual camera device is controlled according to the actual light conditions when the vehicle is running, so that the working mode of the dual camera device is decided by the light: a single camera is turned on when the light is strong, thereby saving energy; when the light is weak,
  • both cameras are turned on, which effectively ensures the quality of the images captured by the dual camera device, so that when the objects in the images are recognized, the accuracy of object recognition can be improved, thereby ensuring the safety of the vehicle.
  • a third aspect of the present disclosure provides a computer device, including: a processor and a memory;
  • wherein the processor, by reading the executable program code stored in the memory, runs a program corresponding to the executable program code, for implementing the control method of the dual camera device in a vehicle
  • as described in the first aspect of the present disclosure.
  • FIG. 1 is a schematic flowchart of a first method for controlling a dual camera device in a vehicle according to an embodiment of the present disclosure;
  • FIG. 2 is a schematic flowchart of a second method for controlling a dual camera device in a vehicle according to an embodiment of the present disclosure;
  • FIG. 3 is a schematic flowchart of a third method for controlling a dual camera device in a vehicle according to an embodiment of the present disclosure;
  • FIG. 4 is a schematic flowchart of a fourth method for controlling a dual camera device in a vehicle according to an embodiment of the present disclosure;
  • FIG. 5 is a schematic structural diagram of a control system of a dual camera device according to an embodiment of the present disclosure;
  • FIG. 6 is a schematic flowchart of a fifth method for controlling a dual camera device in a vehicle according to an embodiment of the present disclosure;
  • FIG. 7a is a schematic diagram of a sunrise and sunset schedule of the reference time zone in different months according to an embodiment of the present disclosure;
  • FIG. 7b is a schematic diagram of sunrise and sunset schedules of different time zones in different months according to an embodiment of the present disclosure;
  • FIG. 8 is a schematic flowchart of a sixth method for controlling a dual camera device in a vehicle according to an embodiment of the present disclosure;
  • FIG. 9 is a schematic flowchart of a seventh method for controlling a dual camera device in a vehicle according to an embodiment of the present disclosure;
  • FIG. 10 is a schematic flowchart of an eighth method for controlling a dual camera device in a vehicle according to an embodiment of the present disclosure;
  • FIG. 11 is a schematic diagram of a calibration template of a dual camera device according to an embodiment of the present disclosure;
  • FIG. 12 is a schematic structural diagram of a control device for a dual camera device in a vehicle according to an embodiment of the present disclosure;
  • FIG. 13 is a schematic structural diagram of another control device for a dual camera device in a vehicle according to an embodiment of the present disclosure.
  • YUV is a color coding method in which "Y" represents luminance (Luminance or Luma), that is, the grayscale value, and "U" and "V" represent chrominance (Chrominance or Chroma), which describe the image color and saturation and are used to specify the color of a pixel.
  • the YUV color space is characterized by its luminance signal Y and chrominance signals U, V being separated. If there is only a Y signal component and no U, V components, the image is a black and white grayscale image.
  • multi-scale decomposition (MSD) refers to scaling an input image to multiple scales to generate reduced images of multiple resolutions, and then analyzing and processing the scaled image at each scale.
  • MSD can separate the high- and low-frequency details contained in the image into the scaled images of the different scales, so that the information in different frequency bands of the image can be analyzed and processed.
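  • as an illustrative aside (not part of the original disclosure), the following Python/OpenCV sketch shows one common form of multi-scale decomposition, a Gaussian/Laplacian pyramid; the function name build_pyramids and the level count are assumptions for the example.

```python
import cv2
import numpy as np

def build_pyramids(gray, levels=4):
    """Decompose a grayscale image into a Gaussian pyramid (reduced-resolution
    copies) and a Laplacian pyramid (per-level frequency bands)."""
    gaussian = [gray.astype(np.float32)]
    for _ in range(levels - 1):
        gaussian.append(cv2.pyrDown(gaussian[-1]))       # halve the resolution
    laplacian = []
    for i in range(levels - 1):
        up = cv2.pyrUp(gaussian[i + 1], dstsize=gaussian[i].shape[1::-1])
        laplacian.append(gaussian[i] - up)               # high-frequency band
    laplacian.append(gaussian[-1])                       # coarsest low-frequency band
    return gaussian, laplacian
```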
  • FIG. 1 is a schematic flow chart of a method for controlling a dual camera device in a first type of vehicle according to an embodiment of the present disclosure.
  • the dual camera device includes a first camera and a second camera.
  • the first camera and the second camera may be mounted side by side on the vehicle, and the resolutions of the first camera and the second camera may be different.
  • the first camera and the second camera capture the same field of view, so that the images they capture can be processed jointly in subsequent steps.
  • the first camera may be a visible light camera and the second camera may be an infrared camera.
  • the resolution of the infrared camera is usually lower than that of the visible light camera, so its description of the details in the scene is relatively insufficient. Therefore, a high-definition camera can be selected as the visible light camera, which ensures that the image captured by the visible light camera describes the scene details clearly when the light is sufficient,
  • while in the case of weak light, the image captured by the infrared camera still describes the details of the scene clearly.
  • control method of the dual camera device in the vehicle includes the following steps:
  • Step 101 Obtain the light intensity of the environment in which the vehicle is currently located.
  • the illumination intensity includes a first intensity in the daytime and a second intensity at night.
  • in an embodiment, an ambient light sensor may be pre-installed on the vehicle and used to collect the light intensity signal of the environment in which the vehicle is located; after the light intensity signal is obtained from the ambient light sensor, the illumination intensity can be determined according to the light intensity signal.
  • alternatively, the illumination intensity can be determined based on the state of the vehicle lights.
  • alternatively, the current time information of the vehicle may be obtained from the navigation information of the vehicle, and the illumination intensity is then determined according to the current time information; the present disclosure is not limited thereto.
  • Step 102: Control the on state of the dual camera device on the vehicle according to the illumination intensity.
  • when the illumination intensity is the first intensity in the daytime,
  • the visible light camera can capture a clearer and less noisy image.
  • when the light is weak, the quality of the images captured by the visible light camera is low: the picture is blurred, the noise is large, and the whole picture may even be black. Therefore, in the embodiment of the present disclosure, since the quality of the images captured by the cameras differs with the illumination intensity, the on state of the different cameras in the dual camera device can be controlled according to the illumination intensity.
  • when the illumination intensity is the first intensity in the daytime,
  • only one camera in the dual camera device may be turned on,
  • for example, a camera with a higher resolution such as the visible light camera, thereby saving energy.
  • when the illumination intensity is the second intensity at night, the first camera and the second camera in the dual camera device can be turned on simultaneously, for example, the visible light camera and the infrared camera, thereby ensuring that a high quality image can be obtained.
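  • a minimal sketch of this decision logic follows, assuming hypothetical turn_on/turn_off controls on each camera object (not an API from the disclosure):

```python
DAY = "first_intensity_daytime"
NIGHT = "second_intensity_night"

def control_dual_camera(illumination, visible_cam, infrared_cam):
    """Decide the working mode of the dual camera device by the light."""
    if illumination == DAY:
        visible_cam.turn_on()    # one higher-resolution camera suffices by day
        infrared_cam.turn_off()  # keeping the second camera off saves energy
    else:
        visible_cam.turn_on()    # both cameras together ensure image quality
        infrared_cam.turn_on()
```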
  • in the control method of the dual camera device in the vehicle of the embodiment of the present disclosure, the illumination intensity of the environment in which the vehicle is currently located is acquired, and the on state of the dual camera device on the vehicle is controlled according to the illumination intensity.
  • acquiring images with two cameras overcomes the problem that the existing ADAS system, which uses a single visible light camera, captures relatively blurred images when the light is weak. Further, according to the illumination intensity of the environment in which the vehicle is currently located,
  • the opening of the dual camera device is controlled, that is, the opening of the dual camera device is controlled according to the actual light conditions when the vehicle is running, so that the working mode of the dual camera device is decided by the light: a single camera is turned on when the light is strong, thereby saving energy; when the light is weak,
  • both cameras are turned on, which effectively ensures the quality of the images captured by the dual camera device, so that when the objects in the images are recognized, the accuracy of object recognition can be improved, thereby ensuring the safety of the vehicle.
  • step 101 specifically includes the following sub-steps:
  • Step 201 Acquire navigation information of the vehicle.
  • in an embodiment, the image processing chip in the ADAS system of the vehicle can control the CAN controller on the vehicle to collect the CAN messages of the vehicle body in interrupt mode; the collected CAN messages can then be parsed and processed to obtain the CAN messages related to the navigation information of the vehicle.
  • a plurality of central processing units (CPUs) may be integrated in the image processing chip, and the main frequencies of the integrated CPUs may be divided into low, medium, and high levels according to frequency.
  • the low-level main frequency can be about 200 MHz,
  • the medium-level main frequency can be 500-700 MHz,
  • and the high-level main frequency can be more than 1 GHz.
  • some image processing chips integrate CPUs with only two of these main frequency levels.
  • for example, one image processing chip may integrate only CPUs with low and medium main frequencies,
  • while another may integrate only CPUs with medium and high main frequencies.
  • therefore, a CPU with a relatively low main frequency on the image processing chip can be selected to control the CAN controller and collect the CAN messages of the vehicle body in interrupt mode; after preprocessing the received CAN messages, this CPU
  • sends them to other CPUs on the image processing chip, which parse the received CAN messages to obtain the CAN messages related to the navigation information of the vehicle.
  • Step 202 Extract current time information of the vehicle from the navigation information.
  • the navigation information includes current time information, location information, and the like.
  • the current time information includes year information, month information, date information, and clock information
  • the location information includes longitude information and latitude information.
  • the CAN messages related to the navigation information may be parsed to obtain the current time information and the location information in the navigation information.
  • Step 203 Determine the light intensity according to the current time information.
  • in an embodiment, the sunrise time and the sunset time of the area indicated by the location information in the navigation information may be determined; the illumination intensity is then determined by checking whether the clock information in the current time information falls within the time period formed by the sunrise time and the sunset time.
  • when the clock information in the current time information is within the time period formed by the sunrise time and the sunset time, the illumination intensity is the first intensity in the daytime; when the clock information is not within that time period, the current illumination intensity is the second intensity at night.
  • for example, suppose the location information in the navigation information is in the East 8 zone (UTC+8), where the sunrise time is 06:00:00 and the sunset time is 18:00:00.
  • when the clock information is in the range of 06:00:00 to 18:00:00, the current illumination intensity is the first intensity in the daytime,
  • and when the clock information is in the range of 18:00:01 to 05:59:59,
  • the current illumination intensity is the second intensity at night.
  • for example, when the clock information extracted from the current time information is 15:32:45, it can be determined that the current illumination intensity is the first intensity in the daytime.
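  • a small sketch of this day/night check (the function name is an assumption), including the wrap-around case where the time period crosses midnight:

```python
from datetime import time

def is_daytime(clock, sunrise=time(6, 0, 0), sunset=time(18, 0, 0)):
    """Return True when `clock` lies in the sunrise-to-sunset period;
    handles periods that wrap past midnight (e.g. adjusted time zones)."""
    if sunrise <= sunset:
        return sunrise <= clock <= sunset
    return clock >= sunrise or clock <= sunset  # wrap-around period

print(is_daytime(time(15, 32, 45)))  # True: 15:32:45 is the first intensity
```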
  • the control method of the dual camera device in the vehicle of the embodiment of the present disclosure extracts the current time information of the vehicle from the navigation information by acquiring the navigation information of the vehicle, and determines the light intensity according to the current time information.
  • the illumination intensity is determined according to the current time information, and the accuracy of the determination of the illumination intensity can be ensured.
  • the illumination intensity may also be determined according to the state of the vehicle lamp, and the above process will be described in detail below with reference to FIG. 3.
  • FIG. 3 is a schematic flow chart of a method for controlling a dual camera device in a third type of vehicle according to an embodiment of the present disclosure.
  • step 101 specifically includes the following sub-steps:
  • Step 301 detecting a state of the vehicle light.
  • the vehicle lights may be in an on state or in an off state. Under normal circumstances, the vehicle lights need to be turned on for illumination at night, while during the daytime they are not needed and can be turned off. Therefore, the intensity of the ambient light in which the vehicle is currently traveling can be inferred by detecting the state of the vehicle lights.
  • the state of the vehicle lamp may include a state of the low beam and a state of the high beam.
  • the collected CAN message can be parsed to obtain a CAN message related to the state of the lamp. After obtaining the CAN message related to the state of the lamp, the CAN message related to the state of the lamp can be parsed, so that the state of the low beam or the high beam in the state of the lamp can be analyzed.
  • the vehicle light on the vehicle may be manually triggered by the driver, or may be controlled by the ambient light sensor on the vehicle to control the opening of the vehicle light when the light is weakened.
  • Step 302 Determine the light intensity according to the detected state of the light.
  • when the vehicle light state indicates that the low beam or the high beam of the vehicle is in the off state, it indicates that the light in the environment where the vehicle is located is sufficient; at this time, the illumination intensity may be determined to be the first intensity in the daytime. When the vehicle light state indicates that the low beam or the high beam of the vehicle is in the activated state, it indicates that the light in the environment where the vehicle is located is weak; at this time, it can be determined that the illumination intensity is the second intensity at night.
  • the control method of the dual camera device in the vehicle of the embodiment of the present disclosure detects the state of the vehicle lights and determines the illumination intensity based on the detected state. Thereby, the illumination intensity can still be determined from the state of the vehicle lights when the ADAS system cannot receive navigation information, which improves the flexibility of the control method of the dual camera device.
  • the illumination intensity signal may also be acquired according to an associated sensor on the vehicle, thereby determining the illumination intensity according to the illumination intensity signal.
  • FIG. 4 is a schematic flow chart of a method for controlling a dual camera device in a fourth vehicle according to an embodiment of the present disclosure.
  • step 101 specifically includes the following sub-steps:
  • Step 401 Acquire an illumination intensity signal from an ambient light sensor on the vehicle.
  • the ambient light sensor on the vehicle can collect the light intensity signal of the environment in which the vehicle is currently located; after the ambient light sensor collects the light intensity signal, the control unit in the vehicle can obtain the light intensity signal from the ambient light sensor.
  • Step 402 Determine the light intensity according to the light intensity signal.
  • in an embodiment, a threshold distinguishing the first intensity in the daytime from the second intensity at night may be set, recorded as a preset threshold in the embodiment of the present disclosure. When the light intensity signal exceeds the preset threshold, the light in the environment where the vehicle is located is sufficient, and the illumination intensity can be determined to be the first intensity in the daytime; when the light intensity signal does not exceed the preset threshold, the light in the environment where the vehicle is located is weak, and it can be determined that the illumination intensity is the second intensity at night.
  • the control method of the dual camera device in the vehicle obtains the light intensity signal according to the ambient light sensor on the vehicle, thereby determining the light intensity according to the light intensity signal, which is easy to implement and simple to operate.
  • the accuracy of the determined illumination intensity can be ensured, thereby ensuring the accuracy of the dual camera control.
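  • a one-line version of this threshold rule; the numeric threshold here is an assumed placeholder, not a value from the disclosure:

```python
def intensity_from_sensor(signal, threshold=500.0):
    """Classify the ambient light sensor reading against the preset
    threshold; 500.0 is an assumed placeholder value."""
    return "day" if signal > threshold else "night"
```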
  • FIG. 5 is a schematic structural diagram of a control system of a dual camera apparatus according to an embodiment of the present disclosure.
  • the control system shown in FIG. 5 includes a camera 2011, a camera 2012, an image processing chip 202, and an actuator 203.
  • the image processing chip 202 includes an image acquisition unit 2021, an image processing and recognition unit 2022, a system decision unit 2023, and a system control unit 2024.
  • the camera 2011 and the camera 2012 are both connected to the image processing chip 202.
  • the main difference between the camera 2011 and the camera 2012 is that the system control unit 2024 running in the image processing chip 202 can control the opening and closing of the camera 2012; the system control unit 2024 judges when to turn the camera 2012 on and when to turn it off.
  • the image acquisition unit 2021 can also receive the control of the system control unit 2024.
  • when the light is strong, only the camera 2011 is turned on to save energy; when the light is weak, the camera 2011 and the camera 2012 are turned on simultaneously to effectively ensure the quality of the images captured by the dual camera device.
  • the image processing and recognition unit 2022 can also accept the control of the system control unit 2024.
  • under the control of the system control unit 2024, the images captured by the camera 2011 and the camera 2012 can be fused, and the fused image is then analyzed and processed to identify objects such as vehicles and pedestrians.
  • after the image processing and recognition unit 2022 recognizes the objects, the system decision unit 2023 generates a safe driving strategy based on the recognition result, and then controls the actuator 203 according to the safe driving strategy.
  • the actuator 203 can issue alarm alerts in the form of sound, light, etc., and perform operations such as controlling steering wheel vibration or automatic braking.
  • step 203 specifically includes the following sub-steps:
  • Step 501 Determine, according to the location information, a first time zone in which the vehicle is currently located.
  • the first time zone in which the vehicle is currently located may be calculated according to the longitude information in the location information.
  • the first time zone in which the vehicle is currently located can be calculated using the following formula: dividing the longitude by 15 gives B = ⌊A / 15⌋ and C = A - 15B,
  • where A represents the longitude information,
  • B represents the quotient,
  • and C represents the remainder.
  • when the remainder C is less than or equal to 7.5, the first time zone is equal to the quotient B; when the remainder C is greater than 7.5, the first time zone is equal to B + 1.
  • for example, when the longitude information is 173° west longitude, dividing 173 by 15 gives a quotient of 11 and a remainder of 8; since the remainder is greater than 7.5,
  • the first time zone is the West 12 zone.
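  • a sketch of the quotient/remainder rule above, using signed longitude (east positive, west negative); the function name is an assumption:

```python
def time_zone_from_longitude(longitude_deg):
    """Signed longitude (east positive, west negative) -> time zone number."""
    quotient, remainder = divmod(abs(longitude_deg), 15)
    zone = quotient + 1 if remainder > 7.5 else quotient
    return int(zone) if longitude_deg >= 0 else -int(zone)

print(time_zone_from_longitude(-173))  # -12: 173 deg W -> West 12 zone
print(time_zone_from_longitude(120))   # 8: 120 deg E -> East 8 zone
```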
  • Step 502 Determine whether the first time zone is a preset reference time zone. If yes, go to step 503. Otherwise, go to step 505.
  • the reference time zone is preset and may be, for example, the East 8 zone.
  • Step 503 Identify the month information in the current time information, and obtain the sunrise time and the sunset time corresponding to the month information.
  • the correspondence between the month information of the reference time zone and the sunrise time and the sunset time may be established in advance; FIG. 7a is a schematic diagram of a sunrise and sunset schedule of the reference time zone in different months in an embodiment of the present disclosure.
  • in an embodiment, the correspondence may be queried according to the month information in the current time information, so as to obtain the sunrise time and the sunset time corresponding to the month information.
  • Step 504: Form a first time period by using the sunrise time and the sunset time.
  • after the sunrise time and the sunset time are obtained, they may be utilized to form the first time period.
  • for example, when the sunrise time corresponding to the month information is 06:00:00
  • and the sunset time is 18:00:00,
  • the first time period formed is 06:00:00 to 18:00:00.
  • Step 505 Obtain a time difference between the first time zone and the reference time zone.
  • since adjacent time zones differ by one hour, when the first time zone is not the reference time zone, the time difference between the first time zone and the reference time zone may be acquired, so that the sunrise time and the sunset time corresponding to the first time zone can be obtained according to the time difference.
  • in an embodiment, the time zone numbers of the western time zones may be marked as negative and those of the eastern time zones as positive; the first time zone may then be compared with the reference time zone to obtain the difference between the two,
  • for example, the time difference is marked as D hours (D is a signed number).
  • Step 506 Identify the month information in the current time information, and obtain the sunrise time and the sunset time corresponding to the month information.
  • in an embodiment, the month information in the current time information may be identified, and the pre-established correspondence between the different month information of the reference time zone and the sunrise time and sunset time may then be queried according to the month information, so as to obtain the sunrise time and the sunset time of the reference time zone
  • corresponding to the month information; for example, the acquired sunrise time corresponding to the month information of the reference time zone is marked as a, and the sunset time as b.
  • Step 507: Adjust the sunrise time and the sunset time by using the time difference.
  • after the time difference D, the sunrise time a, and the sunset time b are acquired, the time difference D can be used to adjust the sunrise time a and the sunset time b, so as to obtain the sunrise time and the sunset time of the first time zone corresponding to the month information in the current time information.
  • FIG. 7b is a schematic diagram of sunrise and sunset schedules of different time zones in different months in the embodiment of the present disclosure; FIG. 7b only takes August as an example.
  • in an embodiment, the sunrise times and the sunset times of the reference time zone in FIG. 7a can be adjusted by the corresponding time differences to obtain the sunrise time and the sunset time of each time zone (including the first time zone) in each month, thereby forming a table of the sunrise
  • times and sunset times of each time zone. After the month information is determined, the sunrise time and sunset time of each time zone can be obtained by looking up this table. The operation is simple and easy to implement.
  • Step 508: Form the first time period by using the adjusted sunrise time and sunset time.
  • in an embodiment, the first time period is formed by using the adjusted sunrise time and sunset time, that is, the first time period is from a+D to b+D.
  • for example, when a is 06:00:00, b is 18:00:00, and the time difference D is +15 hours, the first time period is 21:00:00 to 9:00:00 (wrapping past midnight).
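  • a sketch of the adjustment, wrapping around midnight; the +15 hour difference matches the example above:

```python
def adjust_time(hour, minute, d_hours):
    """Shift a reference-zone sunrise/sunset time by the signed time
    difference D, wrapping around midnight."""
    return (hour + d_hours) % 24, minute

print(adjust_time(6, 0, 15))   # (21, 0): sunrise 06:00 -> 21:00
print(adjust_time(18, 0, 15))  # (9, 0):  sunset 18:00 -> 09:00 next day
```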
  • Step 509: Extract the clock information from the current time information, and determine whether the clock information is in the first time period. If yes, step 510 is performed; otherwise, step 511 is performed.
  • Step 510: Determine that the illumination intensity is the first intensity in the daytime.
  • in an embodiment, when the clock information is in the first time period, the illumination intensity may be determined to be the first intensity in the daytime.
  • Step 511: Determine that the illumination intensity is the second intensity at night.
  • in an embodiment, when the clock information is not in the first time period, the illumination intensity may be determined to be the second intensity at night.
  • in the control method of the dual camera device in the vehicle of the embodiment of the present disclosure, the navigation information of the vehicle is obtained, the current time information of the vehicle is extracted from the navigation information, and the illumination intensity is determined according to the current time information.
  • thereby, the illumination intensity is determined according to the period in which the vehicle is currently driving, so that the opening of the dual camera device can be controlled according to the actual light conditions during driving, realizing a working mode in which the dual camera device is decided by the light: when the light is strong, one
  • camera is turned on, which saves energy.
  • when the light is weak, both cameras are turned on, which effectively ensures the quality of the images captured by the dual camera device, so that when the objects in the images are recognized, the accuracy of object recognition can be improved, thereby ensuring the safety of the vehicle.
  • in an embodiment, the control method of the dual camera device in the vehicle may further include the following steps:
  • Step 601: Receive an image from the dual camera device.
  • in an embodiment, while the on state of the dual camera device on the vehicle is controlled, images may be acquired by the dual camera device; after the dual camera device acquires an image, the image processing chip in the ADAS system may receive the image from the dual camera device.
  • Step 602: Recognize the received image to acquire objects that may exist in the image.
  • when both cameras of the dual camera device are turned on at the same time, the illumination intensity is the second intensity at night. At this time, since the light in the environment where the vehicle is located is weak, in order to improve the accuracy of object recognition in the image, the two images captured by the dual camera device may be image-fused to acquire a target image, and the objects are then recognized from the target image.
  • when only one camera of the dual camera device is turned on, the illumination intensity is the first intensity in the daytime. At this time, since the light in the environment where the vehicle is located is sufficient, the image captured by that camera can be recognized directly to acquire the objects that may exist in the image.
  • the control method of the dual camera device in the vehicle of the embodiment of the present disclosure receives the image from the dual camera device, recognizes the received image, and acquires the objects that may exist in the image. Thereby, the accuracy of object recognition can be improved, ensuring the safety of the vehicle.
  • step 602 may specifically include the following steps:
  • Step 701: Determine whether two images are included in the received image. If yes, step 702 is performed; otherwise, step 704 is performed.
  • the two images include a first image and a second image, corresponding respectively to the first camera and the second camera in the dual camera device.
  • when only one image is included in the received image, the illumination intensity is the first intensity in the daytime, and at this time step 704 can be triggered.
  • Step 702: Perform image fusion on the first image and the second image to acquire a target image.
  • since the resolutions of the two cameras in the dual camera device may be different, before the image fusion of the first image and the second image, the resolutions of the first image and the second image need to be adjusted so that the resolutions of the two images are the same.
  • in an embodiment, the resolution of one image can be adjusted based on the resolution of the other image so that the resolutions of the two images are the same.
  • alternatively, a compromise resolution may be obtained as a target resolution according to the resolution of the first image and the resolution of the second image, and the resolutions of the first image and the second image are then both adjusted to the target resolution.
  • for example, the target resolution may be 1280*960,
  • and the resolutions of the first image and the second image are both adjusted to 1280*960.
  • since the first camera and the second camera in the dual camera device are installed side by side, even though the cameras' fields of view are the same, the two images still cannot completely coincide after resolution adjustment because the positions of the first camera and the second camera differ. Therefore, in the embodiment of the present disclosure, the two images with the same resolution can be registered, and the registered first image and second image are then fused to obtain the target image.
  • in an embodiment, one image may be selected as the reference image, and the other image is then geometrically transformed according to the reference image; the processed image is fused with the reference image, so that the two images completely coincide.
  • Step 703: Identify objects from the target image.
  • in an embodiment, the input image of the camera is a color image
  • whose color space is YUV;
  • during the image fusion calculation, only the Y component in the color space participates in the calculation, and the UV components do not participate.
  • when the objects in the target image are identified, the Y component may be extracted from the fused target image. Whether the target image is a color image or a black-and-white image, extracting the Y component amounts to grayscale processing of the image, which reduces the amount of computation and improves the real-time performance of the system.
  • thereby, a grayscale image of the target image can be obtained.
  • further, histogram equalization processing can be performed on the grayscale image to obtain an equalized grayscale image.
  • in an embodiment, the equalized grayscale image may be split into at least two copies; pedestrian recognition is then performed on one equalized grayscale image to obtain pedestrian objects and the identification information of the pedestrian objects, and vehicle recognition is performed on another equalized grayscale image to obtain vehicle objects and the identification information of the vehicle objects. It should be noted that the two recognition processes are performed at the same time; for example, the two equalized grayscale images can be recognized by different CPUs to improve the real-time performance of the system.
  • the identification information may include: coordinate information, width information, height information, distance information, and the like.
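  • a sketch of the two parallel recognition paths described above, assuming detect_pedestrians and detect_vehicles stand in for the HOG- and Haar-based classifiers discussed below:

```python
import cv2
import threading

def recognize_objects(y_channel, detect_pedestrians, detect_vehicles):
    """Equalize the Y-channel grayscale image, then run pedestrian and
    vehicle recognition on two copies at the same time."""
    equalized = cv2.equalizeHist(y_channel)  # histogram equalization
    results = {}

    def run(name, detector, image):
        results[name] = detector(image)

    threads = [
        threading.Thread(target=run, args=("pedestrians", detect_pedestrians,
                                           equalized.copy())),
        threading.Thread(target=run, args=("vehicles", detect_vehicles,
                                           equalized.copy())),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```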
  • in an embodiment, the Laplacian pyramid decomposition algorithm can be used to perform multi-level scaling on the equalized grayscale image; Histogram of Oriented Gradient (HOG) features are then extracted from the scaled image at each level, and pedestrian objects can be classified and recognized based on the HOG features.
  • similarly, the Laplacian pyramid decomposition algorithm can be used to perform multi-level scaling on the equalized grayscale image; Haar feature extraction is then performed on the scaled image at each level, and vehicle objects can be classified and recognized based on the Haar features.
  • furthermore, a tracking algorithm, such as a Kalman filter, may also be used to track the pedestrian objects and the vehicle objects so as to eliminate misidentified pedestrian objects and vehicle objects.
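  • for the pedestrian path, OpenCV ships a HOG descriptor with a default pedestrian SVM, which can serve as a stand-in for the HOG-based classifier described above; the stride and scale values below are illustrative, not from the disclosure:

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_pedestrians(gray):
    """Return (x, y, w, h) bounding boxes of pedestrian candidates."""
    boxes, _weights = hog.detectMultiScale(gray, winStride=(8, 8), scale=1.05)
    return boxes
```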
  • Step 704: Take the single image as the target image.
  • when it is determined that only one image is included in the received image, that image may be directly used as the target image and recognized to acquire each object in the image, that is, step 703 is triggered.
  • Step 705: Extract a region of interest from the target image.
  • when it is determined that only one image is included in the received image, the region of interest may be extracted from that image; the region of interest may be, for example, a sky region.
  • alternatively, the region of interest may be extracted from the fused target image to further confirm the illumination condition of the environment in which the vehicle is currently located.
  • Step 706: Acquire the brightness average value of the region of interest.
  • in an embodiment, the brightness value of each pixel in the region of interest may be determined, and the brightness average value of the region of interest may then be acquired according to the brightness values of all the pixels in the region of interest.
  • Step 707: Determine whether the brightness average is higher than a preset threshold. If yes, go to step 709; otherwise, go to step 708.
  • after the brightness average is acquired, it may be determined whether the brightness average is higher than the preset threshold.
  • when the brightness average is higher than the preset threshold, it indicates that the lighting condition of the vehicle's current environment is good, and at this time no processing may be performed.
  • when the brightness average is lower than or equal to the preset threshold, it indicates that the lighting condition of the vehicle's current environment is poor, and at this time step 708 can be triggered.
  • Step 708: Generate feedback information and feed it back to the vehicle.
  • when the brightness average value is lower than or equal to the preset threshold, it indicates that the illumination condition of the vehicle's current environment is poor; at this time, feedback information may be formed and fed back to the vehicle to control the low beam or the high beam of the vehicle to turn on.
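  • a sketch of steps 706 to 708, assuming the region of interest is given as an (x, y, w, h) rectangle and that the threshold and feedback string are placeholder values:

```python
import numpy as np

def check_roi_brightness(y_channel, roi, threshold=100):
    """Average the Y values inside the region of interest (e.g. a sky
    region); return feedback information when lighting looks poor."""
    x, y, w, h = roi
    mean_y = float(np.mean(y_channel[y:y + h, x:x + w]))  # step 706
    if mean_y <= threshold:                               # step 707
        return "request_headlights_on"                    # step 708 feedback
    return None                                           # lighting is good
```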
  • in the control method of the dual camera device in the vehicle of the embodiment of the present disclosure, when two images are included in the received image, the first image and the second image are fused to acquire the target image, and the objects are then identified from the target image, which can improve the accuracy of object recognition in the night mode.
  • by extracting the region of interest from the target image, obtaining the brightness average of the region of interest, and generating feedback information to the vehicle when the brightness average is lower than or equal to the preset threshold, the illumination of the environment in which the vehicle is currently located can be further confirmed.
  • step 702 specifically includes the following sub-steps:
  • Step 801: Adjust the resolution of the first image and/or the second image so that the resolutions of the two images are the same.
  • the resolution of another image may be adjusted based on the resolution of one of the two images such that the resolutions of the two images are the same.
  • one of the first image and the second image may be selected as a reference image, and then the resolution of the other image may be adjusted according to the resolution of the reference image.
  • when the reference image is the first image, the resolution of the second image may be adjusted so that the resolution of the second image is the same as that of the first image; when the reference image is the second image, the resolution of the first
  • image may be adjusted so that the resolution of the first image is the same as that of the second image.
  • alternatively, the image with the smaller resolution may be selected from the first image and the second image as the reference image.
  • for example, when the resolution of the first image is smaller, the first image may be used as
  • the reference image, and the second image can then be scaled down so that the resolutions of the two images are the same.
  • alternatively, a target resolution may be acquired according to the resolution of the first image and the resolution of the second image, and the resolutions of the first image and the second image are both adjusted to the target
  • resolution. For example, when the resolution of the first image is 1600*1200 and the resolution of the second image is 1024*768, the target resolution may be 1280*960, and the resolutions of the first image and the second image are both adjusted to 1280*960.
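  • a sketch of the target-resolution adjustment using OpenCV; the interpolation choices are conventional for the 1600*1200 / 1024*768 example (area for shrinking, linear for enlarging) and are not specified by the disclosure:

```python
import cv2

def match_resolution(first, second, target=(1280, 960)):
    """Resize both images to a common (width, height) target resolution."""
    first_r = cv2.resize(first, target, interpolation=cv2.INTER_AREA)      # shrink 1600*1200
    second_r = cv2.resize(second, target, interpolation=cv2.INTER_LINEAR)  # enlarge 1024*768
    return first_r, second_r
```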
  • Step 802: Register the first image and the second image having the same resolution.
  • one of the two images with the same resolution may be selected as the reference image, and then the other image may be geometrically transformed according to the reference image, so that the processed image may well coincide with the reference image.
  • in an embodiment, the transform coefficients for performing an affine transformation on the other image may be acquired according to the reference image, and the other image is then affine transformed according to the transform coefficients to obtain the registered first image and second image.
  • the transform coefficients are obtained by calibrating the dual camera device in advance.
  • the following takes the case where the first image is the reference image
  • and the camera that captures the first image is the first camera as an example. The second image may be geometrically transformed according to the first image captured by the first camera, so that the processed second image coincides better with the first image. That is, the transform coefficients for performing the affine transform on the second image are acquired according to the first image, and the second image is then affine transformed according to the transform coefficients to obtain the registered first image and second image.
  • the calibration process of the transform coefficients may be as follows:
  • in an embodiment, the calibration template can be made as shown in FIG. 11 (the calibration template of FIG. 11 is only an example and can be made according to actual conditions when implemented) and then printed out on paper. The calibration template is placed in front of the dual camera device, and the distance between the calibration template and the dual camera device is adjusted so that the black rectangular frames at the four corners of the calibration template fall into the corner areas of the images captured by the dual camera device. The images captured by the dual camera device can then be acquired, and the coordinates of all the vertices of the black rectangular frames at the four corners are solved by the "corner point detection" method.
  • in an embodiment, the vertex coordinates of all the black rectangular frames on the image captured by the first camera and the corresponding vertex coordinates of the black rectangular frames on the image captured by the second camera may be substituted into the affine transformation matrix equation, formula (2): x' = m1·x + m2·y + m3, y' = m4·x + m5·y + m6.
  • stacking formula (2) over all vertex pairs yields the over-determined linear system of formula (3).
  • in formula (2), x and y represent the vertex coordinates of a black rectangular frame on the image captured by the first camera,
  • x' and y' represent the corresponding vertex coordinates of that black rectangular frame on the image captured by the second camera,
  • and m1, m2, m3, m4, m5 and m6 are the transform coefficients of the affine transformation.
  • in formula (3), k indexes the vertex coordinate pairs of the black rectangular frames (the number of vertices in FIG. 11 is 28); xk and yk represent the vertex coordinates of the k-th vertex on the image captured by the first camera, and xk' and yk' represent the corresponding vertex coordinates on the image captured by the second camera.
  • with 28 vertex pairs and six unknowns, the transform coefficients m1, m2, m3, m4, m5 and m6 of the affine transformation can be solved by the least squares method.
  • the second image captured by the second camera may be affine transformed according to the transform coefficients to obtain the first image and the second image after registration.
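  • a sketch of the least squares solution for m1..m6 from the k calibration vertex pairs, using NumPy; the function name is an assumption:

```python
import numpy as np

def solve_affine(pts_first, pts_second):
    """Least-squares estimate of m1..m6 mapping vertices (x, y) of the
    first camera's image onto vertices (x', y') of the second camera's."""
    P = np.asarray(pts_first, dtype=np.float64)   # shape (k, 2)
    Q = np.asarray(pts_second, dtype=np.float64)  # shape (k, 2)
    A = np.hstack([P, np.ones((len(P), 1))])      # rows [x, y, 1]
    mx, *_ = np.linalg.lstsq(A, Q[:, 0], rcond=None)  # m1, m2, m3
    my, *_ = np.linalg.lstsq(A, Q[:, 1], rcond=None)  # m4, m5, m6
    return np.vstack([mx, my])  # 2x3 matrix, usable with cv2.warpAffine
```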
  • Step 803: Fuse the registered first image and second image to obtain the target image.
  • in an embodiment, the fusion coefficients of the two images are first calculated.
  • specifically, the MSD method may be used to calculate the fusion coefficients of the registered first image and second image, and the target image is then obtained based on the fusion coefficients.
  • multi-scale decomposition is performed on the registered first image and second image to obtain two sets of multi-scale decomposition coefficients: C1 = MSD(image1) and C2 = MSD(image2).
  • the two sets of multi-scale decomposition coefficients are then fused according to a preset fusion rule to obtain the fusion coefficient: Cf = fuse(C1, C2).
  • finally, the multi-scale inverse transform is used to reconstruct the target image from the fusion coefficient, for example: image_r = MSD⁻¹(Cf),
  • where image_r represents the fused target image.
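  • a sketch of the decompose-fuse-reconstruct pipeline, reusing the build_pyramids sketch from earlier; the max-magnitude rule for the detail bands and averaging for the base band are one common preset fusion rule, not necessarily the disclosure's:

```python
import cv2
import numpy as np

def fuse_images(img1, img2, levels=4):
    """Fuse two registered, same-size grayscale images via Laplacian
    pyramids and reconstruct the target image by the inverse transform."""
    _, lap1 = build_pyramids(img1, levels)  # from the earlier MSD sketch
    _, lap2 = build_pyramids(img2, levels)
    detail = [np.where(np.abs(a) >= np.abs(b), a, b)     # keep stronger detail
              for a, b in zip(lap1[:-1], lap2[:-1])]
    image_r = (lap1[-1] + lap2[-1]) / 2.0                # average the base band
    for lap in reversed(detail):                         # inverse transform
        image_r = cv2.pyrUp(image_r, dstsize=lap.shape[1::-1]) + lap
    return np.clip(image_r, 0, 255).astype(np.uint8)
```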
  • in the control method of the dual camera device in the vehicle of the embodiment of the present disclosure, the resolutions of the two images are adjusted to be the same, the first image and the second image having the same resolution
  • are registered, and the registered first image and second image are fused to obtain the target image.
  • the present disclosure also proposes a control device for a dual camera device in a vehicle.
  • FIG. 12 is a schematic structural diagram of a control device for a dual camera device in a vehicle according to an embodiment of the present disclosure.
  • the control device 1200 of the dual camera device in the vehicle includes: an acquisition module 1201 and a control module 1202.
  • the obtaining module 1201 is configured to obtain the light intensity of the environment in which the vehicle is currently located.
  • the acquiring module 1201 is specifically configured to acquire navigation information of the vehicle, extract current time information of the vehicle from the navigation information, and determine the light intensity according to the current time information.
  • in an embodiment, the obtaining module 1201 is specifically configured to: when the current time information is in the first time period, determine that the illumination intensity is the first intensity in the daytime; and when the current time information is not in the first time period, determine that the illumination intensity is the second intensity at night.
  • the obtaining module 1201 is further configured to extract location information from the navigation information while extracting current time information from the navigation information.
  • in an embodiment, the acquiring module 1201 is specifically configured to: determine, according to the location information, a first time zone in which the vehicle is currently located; determine whether the first time zone is a preset reference time zone; if the first time zone is the reference time zone, identify the month information in the current time information and obtain the sunrise time and the sunset time corresponding to the month information; form the first time period by using the sunrise time and the sunset time; extract the clock information from the current time information and determine whether the clock information is within the first time period; and if it is within the first time period, determine that the illumination intensity is the first intensity in the daytime.
  • in an embodiment, the acquiring module 1201 is further configured to: when the first time zone is not the reference time zone, acquire the time difference between the first time zone and the reference time zone; identify the month information in the current time information and obtain the sunrise time and the sunset time corresponding to the month information; adjust the sunrise time and the sunset time by using the time difference; and form the first time period by using the adjusted sunrise time and sunset time.
  • the acquiring module 1201 is specifically configured to detect a state of the vehicle light; and determine the light intensity according to the detected state of the light.
  • in an embodiment, the acquiring module 1201 is specifically configured to: when the vehicle light state indicates that the low beam or the high beam of the vehicle is in the off state, determine that the illumination intensity is the first intensity in the daytime; and when the vehicle light state indicates that the low beam or the high beam of the vehicle is in the activated state, determine that the illumination intensity is the second intensity at night.
  • the acquiring module 1201 is specifically configured to obtain an illumination intensity signal from an ambient light sensor on the vehicle; and determine an illumination intensity according to the illumination intensity signal.
  • in an embodiment, the acquiring module 1201 is specifically configured to: when the illumination intensity signal exceeds a preset threshold, determine that the illumination intensity is the first intensity in the daytime; and when the illumination intensity signal does not exceed the preset threshold, determine that the illumination intensity is the second intensity at night.
  • the control module 1202 is configured to control the on state of the dual camera device on the vehicle according to the illumination intensity.
  • in an embodiment, the control device 1200 of the dual camera device in the vehicle may further include: a receiving module 1203 and an identification module 1204.
  • the receiving module 1203 is configured to receive an image from the dual camera device.
  • the identification module 1204 is configured to recognize the received image and acquire objects that may exist in the image.
  • in an embodiment, the identification module 1204 is specifically configured to: determine whether two images are included in the received image,
  • where the two images include the first image and the second image, corresponding respectively to the first camera and the second camera of the dual camera device; when it is determined that the received image includes two images, perform image fusion on the first image and the second image to acquire a target image; and identify the objects from the target image.
  • in an embodiment, the identification module 1204 is further configured to: adjust the resolution of the first image and/or the second image so that the resolutions of the two images are the same; register the first image and the second image having the same resolution; and fuse the registered first image and second image to obtain the target image.
  • the identification module 1204 is further configured to select one of the first image and the second image as the reference image; adjust the resolution of the other image according to the resolution of the reference image; or, according to the resolution of the first image Rate and resolution of the second image to obtain a target resolution; and adjust the resolution of the first image and the second image to the target resolution.
  • the identification module 1204 is further configured to: select one of the first image and the second image with the same resolution as the reference image; and acquire, according to the reference image, a transform coefficient that performs affine transformation on the other image; The transform coefficient is obtained by calibrating the dual camera device in advance; the other image is affine transformed according to the transform coefficient to obtain the first image and the second image after registration.
  • in an embodiment, the identification module 1204 is further configured to: perform multi-scale decomposition on the registered first image and second image to obtain two sets of multi-scale decomposition coefficients; fuse the two sets of multi-scale
  • decomposition coefficients according to a preset fusion rule to obtain the fusion coefficient; and perform the multi-scale inverse transform according to the fusion coefficient to reconstruct the target image.
  • in an embodiment, the identification module 1204 is further configured to: perform grayscale processing on the target image to obtain a grayscale image of the target image; perform histogram equalization processing on the grayscale image to obtain an equalized grayscale image; split the equalized grayscale image to form at least two equalized grayscale images; perform pedestrian recognition on one equalized grayscale image to acquire pedestrian objects and the identification information of the pedestrian objects; and perform vehicle recognition on another equalized grayscale image to acquire vehicle objects and the identification information of the vehicle objects.
  • in an embodiment, the identification module 1204 is further configured to: when it is determined that only one image is included in the received image, extract a region of interest from that image; obtain the brightness average value of the region of interest; determine whether the brightness average is higher than a preset threshold; and if it is lower than or equal to the preset threshold, generate feedback information and feed it back to the vehicle.
  • in an embodiment, the identification module 1204 is further configured to: extract a region of interest from the target image; obtain the brightness average value of the region of interest; determine whether the brightness average is higher than the preset threshold; and if it is lower than or equal to the preset threshold, form feedback information and feed it back to the vehicle.
  • in the control device of the dual camera device in the vehicle of the embodiment of the present disclosure, the illumination intensity of the environment in which the vehicle is currently located is acquired, and the on state of the dual camera device on the vehicle is controlled according to the illumination intensity.
  • acquiring images with two cameras overcomes the problem that the existing ADAS system, which uses a single visible light camera, captures relatively blurred images when the light is weak. Further, according to the illumination intensity of the environment in which the vehicle is currently located,
  • the opening of the dual camera device is controlled, that is, the opening of the dual camera device is controlled according to the actual light conditions when the vehicle is running, so that the working mode of the dual camera device is decided by the light: a single camera is turned on when the light is strong, thereby saving energy; when the light is weak,
  • both cameras are turned on, which effectively ensures the quality of the images captured by the dual camera device, so that when the objects in the images are recognized, the accuracy of object recognition can be improved, thereby ensuring the safety of the vehicle.
  • the present disclosure further provides a computer device including a processor and a memory, where the processor runs a program corresponding to executable program code stored in the memory by reading the executable program code, so as to implement the control method for a dual camera device in a vehicle proposed in the foregoing embodiments of the present disclosure.
  • the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature.
  • "a plurality" means at least two, for example two or three, unless specifically defined otherwise.
  • Any process or method description in the flowcharts or otherwise described herein may be understood to represent a module, segment or portion of code comprising one or more executable instructions for implementing the steps of a custom logic function or process.
  • the scope of the preferred embodiments of the present disclosure includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved, as will be understood by those skilled in the art to which the embodiments of the present disclosure pertain.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • more specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CDROM).
  • the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • portions of the present disclosure can be implemented in hardware, software, firmware, or a combination thereof.
  • in the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system.
  • for example, if implemented in hardware, as in another embodiment, they may be implemented by any one of the following techniques well known in the art, or a combination thereof: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
  • each functional unit in various embodiments of the present disclosure may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • the above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present disclosure have been shown and described above, it should be understood that the foregoing embodiments are exemplary and are not to be construed as limiting the present disclosure; those of ordinary skill in the art may make variations, modifications, substitutions, and alterations to the foregoing embodiments within the scope of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)
  • Studio Devices (AREA)

Abstract

The present disclosure provides a control method and apparatus for a dual camera device in a vehicle. The method includes: acquiring the illumination intensity of the environment in which the vehicle is currently located; and controlling the on state of the dual camera device on the vehicle according to the illumination intensity. Two cameras are provided to capture images, overcoming the problem that existing ADAS systems, which use a visible-light camera, produce blurred images under weak light. Moreover, the dual camera device is switched on according to the illumination intensity of the vehicle's current environment, i.e., according to the actual light conditions while the vehicle is driving, so that the working mode of the two cameras is decided by the light: one camera is turned on when the light is strong, saving energy, and both cameras are turned on when the light is weak, effectively guaranteeing the quality of the images captured by the dual camera device. The accuracy of object recognition in the images is thereby improved, which in turn ensures driving safety.

Description

Control method and apparatus for a dual camera device in a vehicle
Cross-reference to related applications
The present disclosure claims priority to Chinese Patent Application No. 201711047626.8, entitled "Control method and apparatus for a dual camera device in a vehicle", filed by BYD Company Limited (比亚迪股份有限公司) on October 31, 2017.
Technical field
The present disclosure relates to the field of vehicle control technologies, and in particular to a control method and a control apparatus for a dual camera device in a vehicle.
Background
As the number of vehicles on the road keeps increasing, the incidence of traffic accidents increases with it. To effectively protect the lives and property of drivers and passengers, vehicle manufacturers are all committed to developing more reliable safety assistance systems.
In the related art, a vehicle's Advanced Driver Assistant System (ADAS) collects data about the environment outside the vehicle by visual means and then performs object recognition on the collected data. Specifically, a visible-light camera captures images of the outside of the vehicle, and object recognition is then performed on the captured images.
Summary
The present disclosure provides a control method and apparatus for a dual camera device in a vehicle, in which two cameras are provided to capture images so as to overcome the problem that existing ADAS systems using a visible-light camera produce blurred images under weak light, and in which the switching-on of the dual camera device is controlled according to the illumination intensity of the vehicle's current environment, i.e., according to the actual light conditions while the vehicle is driving, so that the working mode of the two cameras is decided by the light: one camera is turned on when the light is strong, saving energy, and both cameras are turned on when the light is weak, effectively guaranteeing the quality of the captured images, improving the accuracy of object recognition and in turn ensuring driving safety. This solves the technical problem in the prior art that a visible-light camera can capture high-quality images only in well-lit scenes, whereas in poorly lit scenes the captured images are blurred and noisy, so that in subsequent object recognition the false-recognition and missed-recognition rates are high, directly affecting driving safety.
An embodiment of a first aspect of the present disclosure provides a control method for a dual camera device in a vehicle, including:
acquiring the illumination intensity of the environment in which the vehicle is currently located; and
controlling the on state of the dual camera device on the vehicle according to the illumination intensity.
In the control method for a dual camera device in a vehicle of the embodiments of the present disclosure, the illumination intensity of the environment in which the vehicle is currently located is acquired, and the on state of the dual camera device on the vehicle is controlled according to the illumination intensity. In this embodiment, two cameras are provided to capture images, overcoming the problem that existing ADAS systems using a visible-light camera produce blurred images under weak light; moreover, the switching-on of the dual camera device is controlled according to the illumination intensity of the vehicle's current environment, i.e., according to the actual light conditions while the vehicle is driving, so that the working mode of the two cameras is decided by the light: one camera is turned on when the light is strong, saving energy, and both cameras are turned on when the light is weak, effectively guaranteeing the quality of the images captured by the dual camera device, so that the accuracy of object recognition in the images is improved and driving safety is thereby ensured.
An embodiment of a second aspect of the present disclosure provides a control apparatus for a dual camera device in a vehicle, including:
an acquisition module configured to acquire the illumination intensity of the environment in which the vehicle is currently located; and
a control module configured to control the on state of the dual camera device on the vehicle according to the illumination intensity.
In the control apparatus for a dual camera device in a vehicle of the embodiments of the present disclosure, the illumination intensity of the environment in which the vehicle is currently located is acquired, and the on state of the dual camera device on the vehicle is controlled according to the illumination intensity. In this embodiment, two cameras are provided to capture images, overcoming the problem that existing ADAS systems using a visible-light camera produce blurred images under weak light; moreover, the switching-on of the dual camera device is controlled according to the actual light conditions while the vehicle is driving, so that the working mode of the two cameras is decided by the light: one camera is turned on when the light is strong, saving energy, and both cameras are turned on when the light is weak, effectively guaranteeing the quality of the captured images, improving the accuracy of object recognition and thereby ensuring driving safety.
An embodiment of a third aspect of the present disclosure provides a computer device, including a processor and a memory;
where the processor runs a program corresponding to executable program code stored in the memory by reading the executable program code, so as to implement the control method for a dual camera device in a vehicle according to the embodiment of the first aspect of the present disclosure.
Additional aspects and advantages of the present disclosure will be set forth in part in the following description, and will in part become apparent from the following description or be learned by practice of the present disclosure.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present disclosure more clearly, the following briefly introduces the drawings needed in the embodiments. Obviously, the drawings in the following description show some embodiments of the present disclosure, and those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a first control method for a dual camera device in a vehicle according to an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of a second control method for a dual camera device in a vehicle according to an embodiment of the present disclosure;
FIG. 3 is a schematic flowchart of a third control method for a dual camera device in a vehicle according to an embodiment of the present disclosure;
FIG. 4 is a schematic flowchart of a fourth control method for a dual camera device in a vehicle according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a control system of a dual camera device in an embodiment of the present disclosure;
FIG. 6 is a schematic flowchart of a fifth control method for a dual camera device in a vehicle according to an embodiment of the present disclosure;
FIG. 7a is a schematic sunrise and sunset timetable of the reference time zone in different months in an embodiment of the present disclosure;
FIG. 7b is a schematic sunrise and sunset timetable of different time zones in different months in an embodiment of the present disclosure;
FIG. 8 is a schematic flowchart of a sixth control method for a dual camera device in a vehicle according to an embodiment of the present disclosure;
FIG. 9 is a schematic flowchart of a seventh control method for a dual camera device in a vehicle according to an embodiment of the present disclosure;
FIG. 10 is a schematic flowchart of an eighth control method for a dual camera device in a vehicle according to an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of a calibration template of the dual camera device in an embodiment of the present disclosure;
FIG. 12 is a schematic structural diagram of a control apparatus for a dual camera device in a vehicle according to an embodiment of the present disclosure;
FIG. 13 is a schematic structural diagram of another control apparatus for a dual camera device in a vehicle according to an embodiment of the present disclosure.
Detailed description of the embodiments
Embodiments of the present disclosure are described in detail below, and examples of the embodiments are shown in the drawings, where the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary and are intended to explain the present disclosure; they should not be construed as limiting the present disclosure.
The control method and apparatus for a dual camera device in a vehicle of the embodiments of the present disclosure are described below with reference to the drawings. Before the embodiments are described in detail, commonly used technical terms are first introduced for ease of understanding:
YUV is a color encoding method in which "Y" denotes luminance (or luma), i.e., the gray level, while "U" and "V" denote chrominance (or chroma), which describe the color and saturation of the image and specify the color of a pixel. A characteristic of the YUV color space is that its luminance signal Y is separated from its chrominance signals U and V. If there is only the Y component and no U and V components, the image is a black-and-white grayscale image.
Multi-scale decomposition (MSD) refers to scaling an input image at multiple scales to generate reduced images at multiple resolutions, and then analyzing and processing the scaled images at the various scales. MSD can separate the high- and low-frequency detail information contained in an image into the scaled images at the various scales, after which the information in the different frequency bands of the image can be analyzed and processed.
FIG. 1 is a schematic flowchart of a first control method for a dual camera device in a vehicle according to an embodiment of the present disclosure.
In embodiments of the present disclosure, the dual camera device includes a first camera and a second camera. The first camera and the second camera may be mounted side by side on the vehicle, and their resolutions may differ. It should be noted that the first camera and the second camera capture the same field of view, so that the images they capture can be processed subsequently. For example, the first camera may be a visible-light camera and the second camera may be an infrared camera. The resolution of the infrared camera may be lower than that of the visible-light camera, and its description of the detail in a scene is relatively poor; therefore, a high-definition camera may be chosen as the visible-light camera. This ensures that under sufficient light the image captured by the visible-light camera describes the scene details clearly, while under weak light the image captured by the infrared camera describes the scene details clearly.
As shown in FIG. 1, the control method for a dual camera device in a vehicle includes the following steps:
Step 101: acquire the illumination intensity of the environment in which the vehicle is currently located.
In embodiments of the present disclosure, the illumination intensity includes a first intensity corresponding to daytime and a second intensity corresponding to nighttime.
In embodiments of the present disclosure, an ambient light sensor may be pre-installed on the vehicle to collect an illumination intensity signal of the environment in which the vehicle is located; the illumination intensity signal can be obtained from the ambient light sensor, and the illumination intensity can then be determined from the signal. Alternatively, the illumination intensity may be determined from the lamp state of the vehicle. Alternatively, the current time information of the vehicle may be obtained from the vehicle's navigation information and the illumination intensity determined from the current time information. No limitation is imposed on this.
Step 102: control the on state of the dual camera device on the vehicle according to the illumination intensity.
It can be understood that when the illumination intensity is the first intensity of daytime, the lighting conditions are good and the light is sufficient, so the visible-light camera can capture relatively clear, low-noise images; when the illumination intensity corresponds to night, the light is weak and the quality of the images captured by the visible-light camera is low, often blurred and noisy, or even entirely black. Therefore, in embodiments of the present disclosure, since the quality of the captured images differs with the illumination intensity, the on states of the different cameras in the dual camera device can be controlled accordingly.
In embodiments of the present disclosure, when the illumination intensity is the first intensity of daytime, only one camera of the dual camera device may be turned on, for example the camera with the higher resolution, such as the visible-light camera. When the illumination intensity is the second intensity of nighttime, the first camera and the second camera of the dual camera device may be turned on simultaneously, for example the visible-light camera and the infrared camera, thereby ensuring that high-quality images can be acquired.
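By way of illustration only, a minimal sketch of the decision in step 102 (the Intensity enum and the enable/disable camera interface are assumptions of this sketch, not part of the disclosure):

    from enum import Enum

    class Intensity(Enum):
        DAY_FIRST = 1      # "first intensity" for daytime
        NIGHT_SECOND = 2   # "second intensity" for nighttime

    def control_dual_camera(intensity, visible_cam, infrared_cam):
        """Turn on one camera in strong light, both cameras in weak light."""
        if intensity is Intensity.DAY_FIRST:
            visible_cam.enable()     # higher-resolution visible-light camera only
            infrared_cam.disable()   # saves energy
        else:
            visible_cam.enable()     # both cameras, so their images can be fused
            infrared_cam.enable()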
In the control method for a dual camera device in a vehicle of the embodiments of the present disclosure, the illumination intensity of the environment in which the vehicle is currently located is acquired, and the on state of the dual camera device on the vehicle is controlled according to the illumination intensity. In this embodiment, two cameras are provided to capture images, overcoming the problem that existing ADAS systems using a visible-light camera produce blurred images under weak light; moreover, the switching-on of the dual camera device is controlled according to the actual light conditions while the vehicle is driving, so that the working mode of the two cameras is decided by the light: one camera is turned on when the light is strong, saving energy, and both cameras are turned on when the light is weak, effectively guaranteeing the quality of the captured images, improving the accuracy of object recognition and thereby ensuring driving safety.
As a possible implementation of the embodiments of the present disclosure, referring to FIG. 2 and building on the embodiment shown in FIG. 1, step 101 specifically includes the following sub-steps:
Step 201: acquire the navigation information of the vehicle.
In embodiments of the present disclosure, the image processing chip in the vehicle's ADAS system may control the CAN controller on the vehicle and then collect the body CAN messages in an interrupt-driven manner; the collected CAN messages can then be parsed to obtain the CAN messages related to the vehicle's navigation information.
In embodiments of the present disclosure, multiple central processing units (CPUs) may be integrated inside the image processing chip, and the clock frequencies of the integrated CPUs may be divided by magnitude into low, middle, and high grades. A low-grade clock may be around 200 MHz, a middle-grade clock 500-700 MHz, and a high-grade clock above 1 GHz. In practice, some image processing chips integrate CPUs of only two clock grades; for example, a chip may integrate only low- and middle-frequency CPUs, or only middle- and high-frequency CPUs. In general, a CPU with a relatively low clock frequency on the image processing chip can be chosen to control the CAN controller and collect the body CAN messages in an interrupt-driven manner; this CPU then distributes the received CAN messages to the other CPUs on the image processing chip, which parse them to obtain the CAN messages related to the vehicle's navigation information.
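As a rough illustration only, the following sketch uses the open-source python-can library (not the image processing chip's firmware described above) to receive body CAN frames and filter out navigation-related ones; the arbitration ID 0x123 and the socketcan channel name are purely hypothetical:

    import can

    NAV_FRAME_ID = 0x123  # hypothetical ID of the navigation-information frame

    def collect_navigation_frames(channel="can0"):
        bus = can.interface.Bus(channel=channel, interface="socketcan")
        try:
            while True:
                msg = bus.recv(timeout=1.0)  # blocking receive, akin to an RX interrupt
                if msg is not None and msg.arbitration_id == NAV_FRAME_ID:
                    yield bytes(msg.data)    # raw payload; parsed by another CPU/thread
        finally:
            bus.shutdown()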
Step 202: extract the current time information of the vehicle from the navigation information.
In embodiments of the present disclosure, the navigation information includes current time information, position information, and the like. The current time information includes year, month, and day information as well as clock information; the position information includes longitude information and latitude information.
In embodiments of the present disclosure, after the navigation information is obtained, the CAN messages related to the navigation information can be parsed to extract the current time information and the position information contained in the navigation information.
Step 203: determine the illumination intensity according to the current time information.
In embodiments of the present disclosure, the sunrise time and sunset time of the region indicated by the position information in the navigation information can be determined, and the illumination intensity is then determined by checking whether the clock information in the current time information falls within the period formed by the sunrise time and the sunset time. When the clock information falls within that period, the illumination intensity is the first intensity of daytime; when it does not, the current illumination intensity is the second intensity of nighttime.
For example, suppose the region indicated by the position information is East zone 8, where the sunrise time is 06:00:00 and the sunset time is 18:00:00. When the clock information is within 06:00:00-18:00:00, the current illumination intensity is the first intensity of daytime; when it is within 18:00:01-05:59:59, the current illumination intensity is the second intensity of nighttime. If the clock information extracted from the current time information is 15:32:45, the current illumination intensity is determined to be the first intensity of daytime.
In the control method of the embodiments of the present disclosure, the navigation information of the vehicle is acquired, the current time information of the vehicle is extracted from it, and the illumination intensity is determined from the current time information. In this embodiment, because the time information in the navigation information is real-time, determining the illumination intensity from the current time information guarantees the accuracy of the determination.
In practical use, the vehicle's navigation system may fail, or the vehicle may enter an underground parking garage with poor satellite signal, in which case the ADAS system cannot receive navigation information. Therefore, in embodiments of the present disclosure, the illumination intensity may also be determined from the lamp state of the vehicle; this process is described in detail below with reference to FIG. 3.
FIG. 3 is a schematic flowchart of a third control method for a dual camera device in a vehicle according to an embodiment of the present disclosure.
As shown in FIG. 3, building on the embodiment shown in FIG. 1, step 101 specifically includes the following sub-steps:
Step 301: detect the lamp state of the vehicle.
In embodiments of the present disclosure, the lamps on the vehicle may be on or off. In general, a vehicle needs its lamps on when driving at night, while during the day they are not needed and may be off. Therefore, the strength of the ambient light in which the vehicle is currently driving can be inferred by detecting the lamp state of the vehicle. In embodiments of the present disclosure, the lamp state may include the state of the low beam and the state of the high beam.
In embodiments of the present disclosure, the collected CAN messages can be parsed to obtain the CAN messages related to the lamp state. After the CAN messages related to the lamp state are obtained, they can be parsed to extract the state of the low beam or the high beam.
In embodiments of the present disclosure, the lamps on the vehicle may be switched on manually by the driver, or switched on under the control of the vehicle's ambient light sensor when it senses that the light has weakened.
Step 302: determine the illumination intensity according to the detected lamp state.
In embodiments of the present disclosure, when the lamp state indicates that the vehicle's low beam or high beam is off, the light in the vehicle's environment is sufficient, and the illumination intensity can be determined to be the first intensity of daytime; when the lamp state indicates that the low beam or high beam is on, the light in the vehicle's environment is weak, and the illumination intensity can be determined to be the second intensity of nighttime.
In the control method of the embodiments of the present disclosure, the lamp state of the vehicle is detected, and the illumination intensity is determined from the detected lamp state. In this way, when the ADAS system cannot receive navigation information, the illumination intensity can be determined from the lamp state of the vehicle, improving the flexibility of the control method for the dual camera device.
As another possible implementation of the embodiments of the present disclosure, the illumination intensity signal may also be obtained from a relevant sensor on the vehicle and the illumination intensity determined from that signal. This process is described in detail below with reference to FIG. 4.
FIG. 4 is a schematic flowchart of a fourth control method for a dual camera device in a vehicle according to an embodiment of the present disclosure.
As shown in FIG. 4, building on the embodiment shown in FIG. 1, step 101 specifically includes the following sub-steps:
Step 401: acquire an illumination intensity signal from the ambient light sensor on the vehicle.
In embodiments of the present disclosure, the ambient light sensor on the vehicle can collect, in real time, the illumination intensity signal of the environment in which the vehicle is currently located; after the sensor collects the signal, the control unit in the vehicle can acquire the illumination intensity signal from the sensor.
Step 402: determine the illumination intensity according to the illumination intensity signal.
In embodiments of the present disclosure, a boundary value between the first intensity of daytime and the second intensity of nighttime can be set, denoted here as the preset threshold. When the illumination intensity signal exceeds the preset threshold, the light in the vehicle's environment is sufficient, and the illumination intensity can be determined to be the first intensity of daytime; when the signal does not exceed the preset threshold, the light in the vehicle's environment is weak, and the illumination intensity can be determined to be the second intensity of nighttime.
In the control method of the embodiments of the present disclosure, the illumination intensity signal is obtained from the ambient light sensor on the vehicle and the illumination intensity is determined from it, which is easy to implement and simple to operate. In addition, because the ambient light sensor has high sensitivity, the accuracy of the determined illumination intensity can be guaranteed, and hence the precision of the control of the dual camera device.
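A minimal sketch of the threshold comparison in step 402 (the lux value of the threshold is an assumed calibration constant, not given in the disclosure):

    LUX_THRESHOLD = 1000.0  # hypothetical day/night boundary in lux

    def is_first_intensity(lux_signal: float) -> bool:
        """True -> first intensity (daytime); False -> second intensity (night)."""
        return lux_signal > LUX_THRESHOLD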
As an example, see FIG. 5, which is a schematic structural diagram of a control system of a dual camera device in an embodiment of the present disclosure. FIG. 5 includes: a camera 2011, a camera 2012, an image processing chip 202, and an actuator 203. The image processing chip 202 includes: an image acquisition unit 2021, an image processing and recognition unit 2022, a system decision unit 2023, and a system control unit 2024.
Both camera 2011 and camera 2012 are connected to the image processing chip 202. The main difference between them is that the system control unit 2024 running in the image processing chip 202 can switch camera 2012 on and off, and the system control unit 2024 itself decides when to switch camera 2012 on and when to switch it off.
The image acquisition unit 2021 can also be controlled by the system control unit 2024: when the light is strong, only camera 2011 is switched on, saving energy; when the light is weak, camera 2011 and camera 2012 are switched on simultaneously, effectively guaranteeing the quality of the images captured by the dual camera device.
The image processing and recognition unit 2022 can also be controlled by the system control unit 2024: when the light is strong, only the images captured by camera 2011 are analyzed and processed, and objects such as vehicles and pedestrians are recognized; when the light is weak, the images captured by camera 2011 and camera 2012 can be fused, after which the fused image is analyzed and processed and objects such as vehicles and pedestrians are recognized.
After the image processing and recognition unit 2022 recognizes the objects, the system decision unit 2023 generates a safe-driving strategy from the recognition result and then controls the actuator 203 according to that strategy. The actuator 203 can issue alarms in forms including sound and light, and perform operations such as making the steering wheel vibrate or braking automatically.
To clearly illustrate the previous embodiment, see FIG. 6: building on the embodiment shown in FIG. 2, step 203 specifically includes the following sub-steps:
Step 501: determine, according to the position information, the first time zone in which the vehicle is currently located.
In embodiments of the present disclosure, the first time zone in which the vehicle is currently located can be calculated from the longitude information in the position information.
It can be understood that there are 24 time zones in total, each spanning 15°; therefore, in embodiments of the present disclosure, the following formula can be used to calculate the first time zone in which the vehicle is currently located:
A / 15° = B remainder C; (1)
In formula (1), A denotes the longitude information, B denotes the quotient, and C denotes the remainder.
When the remainder C is less than 7.5, the first time zone equals the quotient B; when the remainder is greater than 7.5, the first time zone equals B + 1.
For example, when the longitude is 173° W, 173° / 15° = 11 with remainder 8, so the first time zone is West zone 12.
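A small sketch of formula (1); representing west longitudes as negative numbers is a convention of this sketch, not of the disclosure:

    def time_zone_from_longitude(longitude_deg: float) -> int:
        """Signed time zone: east positive, west negative."""
        sign = 1 if longitude_deg >= 0 else -1
        quotient, remainder = divmod(abs(longitude_deg), 15.0)  # B and C in formula (1)
        return sign * (int(quotient) + (1 if remainder > 7.5 else 0))

    # Example from the text: 173 deg W -> 173 / 15 = 11 remainder 8 -> West zone 12.
    assert time_zone_from_longitude(-173.0) == -12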
Step 502: determine whether the first time zone is a preset reference time zone; if yes, perform step 503; otherwise, perform step 505.
In embodiments of the present disclosure, the reference time zone is set in advance; it may be, for example, East zone 8.
Step 503: identify the month information in the current time information, and obtain the sunrise time and sunset time corresponding to the month information.
In embodiments of the present disclosure, the correspondence between the months and the sunrise and sunset times of the reference time zone can be established in advance. For example, see FIG. 7a, a schematic sunrise and sunset timetable of the reference time zone in different months in an embodiment of the present disclosure.
When the first time zone is the reference time zone, the above correspondence can be looked up with the month information in the current time information to obtain the sunrise time and sunset time corresponding to that month.
Step 504: form a first time period from the sunrise time and the sunset time.
In embodiments of the present disclosure, after the sunrise time and sunset time corresponding to the month information are obtained, the first time period can be formed from them. For example, if the sunrise time corresponding to the month information is 06:00:00 and the sunset time is 18:00:00, the first time period formed is 06:00:00-18:00:00.
Step 505: obtain the time difference between the first time zone and the reference time zone.
It can be understood that each time-zone step corresponds to a one-hour difference in zone time. Therefore, when the first time zone is not the reference time zone, the time difference between the first time zone and the reference time zone can be obtained, and the sunrise and sunset times of the first time zone can then be derived from that difference.
In embodiments of the present disclosure, since eastern zones are ahead of western zones, western zones can be marked as negative numbers and eastern zones as positive numbers; the difference between the first time zone and the reference time zone then gives the time difference between them, denoted for example as D hours (D being a signed number).
For example, when the first time zone is West zone 1 and the reference time zone is East zone 8, the time difference between them is (-1) - 8 = -9 hours, i.e., D = -9, and the current time in the first time zone is 9 hours behind the reference time zone.
Step 506: identify the month information in the current time information, and obtain the sunrise time and sunset time corresponding to the month information.
In embodiments of the present disclosure, the month information in the current time information can be identified, and the pre-established correspondence between the months and the sunrise and sunset times of the reference time zone can then be looked up with that month information to obtain the sunrise and sunset times of the reference time zone for that month. For example, denote the obtained sunrise time of the reference time zone for that month as hour a and the sunset time as hour b.
Step 507: adjust the sunrise time and the sunset time using the time difference.
In embodiments of the present disclosure, after the time difference D, the sunrise time a, and the sunset time b are obtained, the sunrise time a and the sunset time b can be adjusted using the time difference D to obtain the sunrise and sunset times corresponding to the month information in the current time information of the first time zone.
For example, denote the sunrise time corresponding to the month information of the first time zone as hour c and the sunset time as hour d; then c = a + D and d = b + D.
Continuing the example in step 505, the current time in the first time zone is 9 hours behind the reference time zone and D = -9, so the adjusted sunrise time is hour a - 9 and the adjusted sunset time is hour b - 9.
As an example, see FIG. 7b, a schematic sunrise and sunset timetable of different time zones in different months in an embodiment of the present disclosure; FIG. 7b illustrates August only. After the time differences between the various zones and the reference time zone are determined, the sunrise and sunset times of the reference time zone in FIG. 7a can be used to adjust the sunrise and sunset times of each zone, thereby forming the sunrise and sunset times of every zone (including the first time zone). Once the month information has been determined, the sunrise and sunset times of each zone can be obtained by a table lookup, which is simple to operate and easy to implement.
Step 508: form the first time period from the adjusted sunrise time and sunset time.
In embodiments of the present disclosure, the first time period is formed from the adjusted sunrise and sunset times, i.e., the first time period runs from hour a + D to hour b + D.
For example, when a is 6, b is 18, and D is -9 as above, the first time period is 21:00:00-9:00:00.
Step 509: extract the clock information from the current time information and determine whether the clock information falls within the first time period; if yes, perform step 510; otherwise, perform step 511.
Step 510: determine that the illumination intensity is the first intensity of daytime.
In embodiments of the present disclosure, when the clock information falls within the first time period, the illumination intensity can be determined to be the first intensity of daytime.
Step 511: determine that the illumination intensity is the second intensity of nighttime.
In embodiments of the present disclosure, when the clock information does not fall within the first time period, the illumination intensity can be determined to be the second intensity of nighttime.
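Putting steps 505-511 together, a sketch of the day/night decision; the sunrise/sunset lookup table below is a placeholder standing in for the timetables of FIGS. 7a and 7b:

    REFERENCE_ZONE = 8         # e.g. East zone 8 as the preset reference time zone
    SUN_TABLE = {8: (6, 18)}   # month -> (sunrise hour a, sunset hour b), reference zone

    def is_daytime(first_zone: int, month: int, clock_hour: float) -> bool:
        a, b = SUN_TABLE[month]
        diff = first_zone - REFERENCE_ZONE              # time difference D (signed)
        sunrise, sunset = (a + diff) % 24, (b + diff) % 24  # c = a + D, d = b + D
        if sunrise <= sunset:
            return sunrise <= clock_hour < sunset
        # The adjusted period may wrap past midnight, e.g. 21:00-09:00 when D = -9.
        return clock_hour >= sunrise or clock_hour < sunset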
In the control method of the embodiments of the present disclosure, the navigation information of the vehicle is acquired, the current time information of the vehicle is extracted from the navigation information, and the illumination intensity is determined from the current time information. In this embodiment, the illumination intensity is determined from the time period in which the vehicle is currently driving, so that the switching-on of the dual camera device can be controlled according to the actual light conditions while driving: the working mode of the two cameras is decided by the light, one camera is turned on when the light is strong, saving energy, and both cameras are turned on when the light is weak, effectively guaranteeing the quality of the images captured by the dual camera device, improving the accuracy of object recognition and thereby ensuring driving safety.
As a possible implementation of the embodiments of the present disclosure, see FIG. 8: building on the embodiments shown in FIGS. 1-6, the control method for a dual camera device in a vehicle may further include the following steps:
Step 601: receive images from the dual camera device.
In embodiments of the present disclosure, once the on state of the dual camera device on the vehicle is controlled, images can be acquired through the dual camera device; after the dual camera device acquires the images, the image processing chip in the ADAS system can receive the images from the dual camera device.
Step 602: recognize the received images to obtain objects that may exist in the images.
It can be understood that when both cameras of the dual camera device are on, the illumination intensity is the second intensity of nighttime; in this case, because the light in the vehicle's environment is weak, the two images captured by the dual camera device can be fused to obtain a target image, and objects can then be recognized from the target image, improving the accuracy of object recognition.
When only one camera of the dual camera device is on, the illumination intensity is the first intensity of daytime; in this case, because the light in the vehicle's environment is sufficient, the image captured by that camera can be recognized directly to obtain the objects that may exist in the image.
In the control method of the embodiments of the present disclosure, images are received from the dual camera device and recognized to obtain the objects that may exist in them. This improves the accuracy of object recognition and thereby ensures driving safety.
To clearly illustrate the above embodiment, see FIG. 9: building on the embodiment shown in FIG. 8, step 602 may specifically include the following steps:
Step 701: determine whether the received images include two images; if yes, perform step 702; otherwise, perform step 704.
In embodiments of the present disclosure, the two images include a first image and a second image, corresponding respectively to the first camera and the second camera of the dual camera device.
In embodiments of the present disclosure, when the received images include two images, both cameras are on and the illumination intensity is the second intensity of nighttime; to improve the accuracy of object recognition in the images, step 702 can be triggered. When the received images include only one image, the illumination intensity is the first intensity of daytime, and step 704 can be triggered.
Step 702: perform image fusion on the first image and the second image to obtain a target image.
In embodiments of the present disclosure, since the resolutions of the two cameras of the dual camera device may differ, before the first image and the second image are fused their resolutions need to be adjusted so that the two resolutions are the same.
For example, the resolution of one of the two images can be adjusted based on the resolution of the other so that the two resolutions are the same. Alternatively, a compromise resolution can be derived from the resolutions of the first image and the second image as the target resolution, and the resolutions of both images can then be adjusted to the target resolution simultaneously. For example, when the resolution of the first image is 1600*1200 and that of the second image is 1024*768, the target resolution may be 1280*960, and the resolutions of the first image and the second image are both adjusted to 1280*960.
It should be noted that although the first camera and the second camera of the dual camera device are mounted side by side and capture the same field of view, the two images still cannot coincide completely after resolution adjustment because the positions of the two cameras differ. Therefore, in embodiments of the present disclosure, the two images of the same resolution can be registered, and the registered first and second images can then be fused to obtain the target image.
In embodiments of the present disclosure, one image can be selected as the base image, the other image is then geometrically transformed according to the base image, and the transformed image is fused with the base image so that the two images coincide completely.
Step 703: recognize objects from the target image.
In general, the camera's input image is a color image whose color space is YUV. To reduce the amount of computation in the image fusion process, only the Y component of the color space is used in the fusion calculation; the U and V components do not take part.
In embodiments of the present disclosure, when the objects in the target image are recognized, the Y component can be extracted from the fused target image. Whether the target image is a color image or a black-and-white image, extracting the Y component is the image's grayscale processing step, which reduces the amount of computation and improves the real-time performance of the system.
After the Y component is extracted from the target image, i.e., after the target image is converted to grayscale, the grayscale image of the target image is obtained. To increase the contrast of the grayscale image and the variation of its gray tones, making the grayscale image clearer, in embodiments of the present disclosure histogram equalization can be applied to the grayscale image to obtain an equalized grayscale image.
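A minimal OpenCV sketch of this grayscale-plus-equalization step; treating the fused target image as a BGR array is an assumption of the sketch:

    import cv2

    def equalized_gray(target_image):
        y = cv2.cvtColor(target_image, cv2.COLOR_BGR2YUV)[:, :, 0]  # Y component only
        return cv2.equalizeHist(y)  # raises contrast before recognition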
In embodiments of the present disclosure, because the recognition rules for pedestrians, vehicles, and other objects differ, after the equalized grayscale image is obtained it can be split into at least two paths of equalized grayscale images; pedestrian recognition can then be performed on one path to obtain pedestrian objects and their identification information, while vehicle recognition is performed on another path to obtain vehicle objects and their identification information. It should be noted that the two recognition processes run simultaneously; for example, different CPUs can recognize the two paths of equalized grayscale images at the same time to improve the real-time performance of the system.
In embodiments of the present disclosure, the identification information may include coordinate information, width information, height information, distance information, and the like.
As a possible implementation, for the recognition of pedestrian objects, the Laplacian pyramid decomposition algorithm can be used to scale the equalized grayscale image at multiple levels, and Histogram of Oriented Gradient (hog) features are then extracted from the scaled image at each level; classification and recognition can then be performed based on the hog features to identify the pedestrian objects. For the recognition of vehicle objects, the Laplacian pyramid decomposition algorithm can likewise be used to scale the equalized grayscale image at multiple levels, and Haar features are then extracted from the scaled image at each level; classification and recognition can then be performed based on the Haar features to identify the vehicle objects.
It should be noted that, to improve the accuracy of pedestrian or vehicle recognition, after the pedestrian or vehicle objects in the target image are recognized, a tracking algorithm for pedestrian and vehicle objects, for example a Kalman filter, can also be used to track the pedestrian and vehicle objects and eliminate falsely recognized ones.
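As a hedged stand-in for the two recognition paths (the disclosure's pyramid-plus-hog and pyramid-plus-Haar schemes), OpenCV's stock HOG pedestrian detector and a Haar cascade can be sketched; OpenCV ships no vehicle cascade, so the vehicle_cascade.xml file is assumed to be supplied separately:

    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    vehicle_cascade = cv2.CascadeClassifier("vehicle_cascade.xml")  # assumed trained file

    def detect_objects(equalized_gray_image):
        # detectMultiScale scans an internal image pyramid, echoing the multi-level scaling above.
        pedestrians, _ = hog.detectMultiScale(equalized_gray_image)
        vehicles = vehicle_cascade.detectMultiScale(equalized_gray_image)
        # Each rectangle supplies the coordinate/width/height identification information.
        return pedestrians, vehicles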
Step 704: use the single image as the target image.
In embodiments of the present disclosure, when it is determined that the received images include only one image, that image can be used directly as the target image for recognition to obtain each object in the image, i.e., step 703 is triggered.
Step 705: extract a region of interest from the target image.
In embodiments of the present disclosure, when it is determined that the received images include only one image, a region of interest can be extracted from that image; the region of interest may be, for example, a sky region. Alternatively, after the target image is obtained in step 702, a region of interest can also be extracted from the target image to further confirm the lighting conditions of the environment in which the vehicle is currently located.
Step 706: obtain the mean brightness of the region of interest.
In embodiments of the present disclosure, after the region of interest is extracted, the brightness value of each pixel in it can be determined, and the mean brightness of the region of interest can then be obtained from the brightness values of its pixels.
Step 707: determine whether the mean brightness is higher than a preset threshold; if yes, perform step 709; otherwise, perform step 708.
In embodiments of the present disclosure, it can be determined whether the mean brightness is higher than the preset threshold. When the mean brightness is higher than the preset threshold, the lighting conditions of the vehicle's current environment are good, and no action need be taken. When the mean brightness is lower than or equal to the preset threshold, the lighting conditions of the vehicle's current environment are poor, and step 708 can be triggered.
Step 708: generate feedback information and feed it back to the vehicle.
In embodiments of the present disclosure, when the mean brightness is lower than or equal to the preset threshold, the lighting conditions of the vehicle's current environment are poor; feedback information can then be generated and fed back to the vehicle to control the on state of the vehicle's low beam or high beam.
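A sketch of steps 705-708; taking the upper third of the frame as the sky region of interest and the 8-bit threshold value are assumptions of the sketch:

    import numpy as np

    BRIGHTNESS_THRESHOLD = 60  # hypothetical preset threshold (8-bit scale)

    def needs_lamp_feedback(gray_image: np.ndarray) -> bool:
        roi = gray_image[: gray_image.shape[0] // 3, :]   # upper third as the ROI
        return float(roi.mean()) <= BRIGHTNESS_THRESHOLD  # True -> feed back to vehicle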
Step 709: take no action.
In the control method of the embodiments of the present disclosure, when the received images include two images, the first image and the second image are fused to obtain the target image, and objects are then recognized from the target image, which improves the accuracy of object recognition in night mode. By extracting a region of interest from the target image, obtaining its mean brightness, and generating feedback information for the vehicle when the mean brightness is lower than or equal to the preset threshold, the lighting conditions of the environment in which the vehicle is currently located can be further confirmed.
As a possible implementation of the embodiments of the present disclosure, see FIG. 10: building on the embodiment shown in FIG. 9, step 702 specifically includes the following sub-steps:
Step 801: adjust the resolution of the first image and/or the second image so that the two images have the same resolution.
As one possible implementation of the embodiments of the present disclosure, the resolution of one of the two images can be adjusted based on the resolution of the other so that the two resolutions are the same. In embodiments of the present disclosure, one of the first image and the second image can be selected as the reference image, and the resolution of the other image is then adjusted according to the resolution of the reference image. For example, when the reference image is the first image, the resolution of the second image can be adjusted to match that of the first image; when the reference image is the second image, the resolution of the first image can be adjusted to match that of the second image.
As another example, the one of the first image and the second image with the smaller resolution can be selected as the reference image; for instance, when the resolution of the first image is lower than that of the second image, the first image can be used as the reference image and the second image can then be scaled down to reduce its resolution so that the two resolutions are the same. This reduces the amount of computation and improves the real-time performance of the system.
As another possible implementation of the embodiments of the present disclosure, a target resolution can be derived from the resolutions of the first image and the second image, and the resolutions of both images are then adjusted to the target resolution simultaneously. For example, when the resolution of the first image is 1600*1200 and that of the second image is 1024*768, the target resolution may be 1280*960, and the resolutions of the first image and the second image are both adjusted to 1280*960.
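A sketch of step 801 with OpenCV, taking the option of using the lower-resolution image as the reference to reduce computation:

    import cv2

    def match_resolution(img_a, img_b):
        """Return (reference, resized_other) with identical resolutions."""
        if img_a.shape[0] * img_a.shape[1] <= img_b.shape[0] * img_b.shape[1]:
            ref, other = img_a, img_b
        else:
            ref, other = img_b, img_a
        h, w = ref.shape[:2]
        return ref, cv2.resize(other, (w, h))  # cv2.resize expects (width, height)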
Step 802: register the first image and the second image having the same resolution.
In embodiments of the present disclosure, one of the two images of the same resolution can be selected as the base image, and the other image is then geometrically transformed according to the base image so that the transformed image coincides well with the base image.
As a possible implementation, transform coefficients for performing an affine transformation on the other image can be obtained according to the base image, and the affine transformation is then applied to the other image according to those coefficients to obtain the registered first and second images; the transform coefficients are obtained by calibrating the dual camera device in advance.
In this embodiment of the present disclosure, the first image is taken as the base image, and the camera that captured the first image is the first camera. Therefore, the second image can be geometrically transformed according to the first image captured by the first camera so that the transformed second image coincides well with the first image; that is, the transform coefficients for the affine transformation of the second image are obtained according to the first image, and the second image is then affine-transformed according to those coefficients to obtain the registered first and second images.
In embodiments of the present disclosure, the calibration process for the transform coefficients may be as follows:
A calibration template can be made as shown in FIG. 11 (the template in FIG. 11 is only an example; in a concrete implementation it can be made according to the actual situation) and printed on paper. The calibration template is then placed directly in front of the dual camera device, and the distance between the template and the device is adjusted so that the black rectangular boxes in the four corners of the template all fall into the four corner regions of the images captured by the dual camera device. The images captured by the dual camera device can then be collected, and all vertex coordinates of the black rectangular boxes in the four corners are solved with a corner-detection method.
In embodiments of the present disclosure, the vertex coordinates of all the black rectangular boxes in the image captured by the first camera, together with the corresponding vertex coordinates of the black rectangular boxes in the image captured by the second camera, can be substituted into the affine transformation matrix equation shown in formula (2), from which formula (3) is then derived:
$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} m_1 & m_2 & m_3 \\ m_4 & m_5 & m_6 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \quad (2)$
In formula (2), x and y denote the vertex coordinates of a black rectangular box in the image captured by the first camera, x' and y' denote the vertex coordinates of the corresponding black rectangular box in the image captured by the second camera, and $m_1, m_2, m_3, m_4, m_5, m_6$ are the transform coefficients of the affine transformation.
$\begin{bmatrix} x'_1 \\ y'_1 \\ \vdots \\ x'_k \\ y'_k \end{bmatrix} = \begin{bmatrix} x_1 & y_1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & x_1 & y_1 & 1 \\ \vdots & & & & & \vdots \\ x_k & y_k & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & x_k & y_k & 1 \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \\ m_3 \\ m_4 \\ m_5 \\ m_6 \end{bmatrix} \quad (3)$
In formula (3), k denotes the number of vertices of the black rectangular boxes (k is 28 in FIG. 11), $x_k$ and $y_k$ denote the coordinates of the k-th vertex in the image captured by the first camera, and $x'_k$ and $y'_k$ denote the corresponding vertex coordinates in the image captured by the second camera.
Finally, the transform coefficients $m_1, m_2, m_3, m_4, m_5, m_6$ of the affine transformation can be solved with the least squares method.
After the transform coefficients of the affine transformation are obtained, the affine transformation can be applied to the second image captured by the second camera according to those coefficients, yielding the registered first image and second image.
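A sketch of the least-squares solution of formula (3) with NumPy; pts1 and pts2 are the matched corner coordinates detected on the first-camera and second-camera calibration images:

    import numpy as np

    def solve_affine(pts1: np.ndarray, pts2: np.ndarray) -> np.ndarray:
        """pts1, pts2: (k, 2) arrays of matching vertex coordinates; returns a 2x3 matrix."""
        k = len(pts1)
        A = np.zeros((2 * k, 6))
        b = np.zeros(2 * k)
        A[0::2, 0:2], A[0::2, 2] = pts1, 1.0   # rows for x' = m1*x + m2*y + m3
        A[1::2, 3:5], A[1::2, 5] = pts1, 1.0   # rows for y' = m4*x + m5*y + m6
        b[0::2], b[1::2] = pts2[:, 0], pts2[:, 1]
        m, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares solution for m1..m6
        return m.reshape(2, 3)

Solving in the second-to-first direction, i.e. solve_affine(pts2, pts1), yields a 2x3 matrix that cv2.warpAffine can apply directly to warp the second image into registration with the first.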
Step 803: fuse the registered first image and second image to obtain the target image.
In embodiments of the present disclosure, when the registered first image and second image are fused, the fusion coefficients of the two images must first be computed. For example, the MSD method can be used to compute the fusion coefficients of the registered first and second images, and the target image can then be obtained from the fusion coefficients.
In embodiments of the present disclosure, multi-scale decomposition can be applied to the registered first image and second image to obtain two sets of multi-scale decomposition coefficients:
$\{C_i^{(1)}\} = \mathrm{MSD}(image_1), \quad \{C_i^{(2)}\} = \mathrm{MSD}(image_2) \quad (4)$
In formula (4), i = 1, 2, ..., n, where n denotes the number of decomposition levels, $\{C_i^{(1)}\}$ denotes the multi-scale decomposition coefficients of the first image, and $\{C_i^{(2)}\}$ denotes the multi-scale decomposition coefficients of the second image.
After the two sets of multi-scale decomposition coefficients are obtained, they can be fused according to the preset fusion rule to obtain the fusion coefficients:
$C_i^{(F)} = \theta\left(C_i^{(1)}, C_i^{(2)}\right) \quad (5)$
In formula (5), $C_i^{(F)}$ denotes the fusion coefficients and $\theta$ denotes the preset fusion rule.
After the fusion coefficients $C_i^{(F)}$ are obtained, the target image can be reconstructed in reverse by the multi-scale inverse transform according to the fusion coefficients, for example as follows:
$image_r = \mathrm{MSD}^{-1}\left(\{C_i^{(F)}\}\right) \quad (6)$
In formula (6), $image_r$ denotes the fused target image.
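A hedged sketch of formulas (4)-(6), taking a Laplacian pyramid as the multi-scale decomposition and "keep the larger-magnitude coefficient" as the preset fusion rule theta (the disclosure does not fix a particular decomposition or rule); the registered inputs are assumed to be single-channel Y-component arrays:

    import cv2
    import numpy as np

    def laplacian_pyramid(img, levels=4):
        gp = [img.astype(np.float32)]
        for _ in range(levels):
            gp.append(cv2.pyrDown(gp[-1]))
        # Formula (4): one coefficient layer per scale, plus the coarse residual.
        lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1]) for i in range(levels)]
        return lp + [gp[-1]]

    def fuse_images(img1, img2, levels=4):
        lp1 = laplacian_pyramid(img1, levels)
        lp2 = laplacian_pyramid(img2, levels)
        # Formula (5): the fusion rule theta, here a per-coefficient magnitude test.
        fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(lp1, lp2)]
        # Formula (6): the multi-scale inverse transform reconstructs the target image.
        out = fused[-1]
        for layer in reversed(fused[:-1]):
            out = cv2.pyrUp(out, dstsize=layer.shape[1::-1]) + layer
        return np.clip(out, 0, 255).astype(np.uint8)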
In the control method of the embodiments of the present disclosure, the resolution of the first image and/or the second image is adjusted so that the two images have the same resolution, the first and second images of the same resolution are registered, and the registered first and second images are fused to obtain the target image. In this way the two images can be made to coincide well, which in turn improves the accuracy of image recognition.
To implement the above embodiments, the present disclosure further provides a control apparatus for a dual camera device in a vehicle.
FIG. 12 is a schematic structural diagram of a control apparatus for a dual camera device in a vehicle according to an embodiment of the present disclosure.
As shown in FIG. 12, the control apparatus 1200 for a dual camera device in a vehicle includes an acquisition module 1201 and a control module 1202.
The acquisition module 1201 is configured to acquire the illumination intensity of the environment in which the vehicle is currently located.
As a possible implementation, the acquisition module 1201 is specifically configured to acquire the navigation information of the vehicle, extract the current time information of the vehicle from the navigation information, and determine the illumination intensity according to the current time information.
In embodiments of the present disclosure, the acquisition module 1201 is specifically configured to determine that the illumination intensity is the first intensity of daytime when the current time information falls within the first time period, and that the illumination intensity is the second intensity of nighttime when the current time information does not fall within the first time period.
In embodiments of the present disclosure, the acquisition module 1201 is further configured to extract the position information from the navigation information at the same time as the current time information is extracted.
In embodiments of the present disclosure, the acquisition module 1201 is specifically configured to determine, according to the position information, the first time zone in which the vehicle is currently located; determine whether the first time zone is a preset reference time zone; if the first time zone is the reference time zone, identify the month information in the current time information and obtain the sunrise time and sunset time corresponding to the month information; form the first time period from the sunrise time and the sunset time; extract the clock information from the current time information and determine whether the clock information falls within the first time period; and, if it does, determine that the illumination intensity is the first intensity of daytime.
In embodiments of the present disclosure, the acquisition module 1201 is further configured to obtain the time difference between the first time zone and the reference time zone when the first time zone is not the reference time zone; identify the month information in the current time information and obtain the sunrise and sunset times corresponding to the month information; adjust the sunrise and sunset times using the time difference; and form the first time period from the adjusted sunrise and sunset times.
As another possible implementation, the acquisition module 1201 is specifically configured to detect the lamp state of the vehicle and determine the illumination intensity according to the detected lamp state.
In embodiments of the present disclosure, the acquisition module 1201 is specifically configured to determine that the illumination intensity is the first intensity of daytime when the lamp state indicates that the vehicle's low beam or high beam is off, and that the illumination intensity is the second intensity of nighttime when the lamp state indicates that the low beam or high beam is on.
As yet another possible implementation, the acquisition module 1201 is specifically configured to acquire an illumination intensity signal from the ambient light sensor on the vehicle and determine the illumination intensity from that signal.
In embodiments of the present disclosure, the acquisition module 1201 is specifically configured to determine that the illumination intensity is the first intensity of daytime when the illumination intensity signal exceeds a preset threshold, and that the illumination intensity is the second intensity of nighttime when the signal does not exceed the preset threshold.
The control module 1202 is configured to control the on state of the dual camera device on the vehicle according to the illumination intensity.
In a possible implementation of the embodiments of the present disclosure, see FIG. 13: building on the embodiment shown in FIG. 12, the control apparatus 1200 for a dual camera device in a vehicle may further include a receiving module 1203 and an identification module 1204.
The receiving module 1203 is configured to receive images from the dual camera device.
The identification module 1204 is configured to recognize the received images and obtain the objects that may exist in the images.
As a possible implementation, the identification module 1204 is specifically configured to determine whether the received images include two images, where the two images include a first image and a second image corresponding respectively to the first camera and the second camera of the dual camera device; when the received images include two images, perform image fusion on the first image and the second image to obtain a target image; and recognize objects from the target image.
In embodiments of the present disclosure, the identification module 1204 is further configured to adjust the resolution of the first image and/or the second image so that the two images have the same resolution; register the first and second images of the same resolution; and fuse the registered first and second images to obtain the target image.
In embodiments of the present disclosure, the identification module 1204 is further configured to select one of the first image and the second image as the reference image and adjust the resolution of the other image according to the resolution of the reference image; or derive a target resolution from the resolutions of the first image and the second image and adjust the resolutions of both images to the target resolution simultaneously.
In embodiments of the present disclosure, the identification module 1204 is further configured to select one of the first and second images of the same resolution as the base image; obtain, according to the base image, the transform coefficients for an affine transformation of the other image, the transform coefficients being obtained by calibrating the dual camera device in advance; and apply the affine transformation to the other image according to the transform coefficients to obtain the registered first and second images.
In embodiments of the present disclosure, the identification module 1204 is further configured to apply multi-scale decomposition to the registered first and second images respectively to obtain two sets of multi-scale decomposition coefficients; fuse the two sets according to the preset fusion rule to obtain the fusion coefficients; and reconstruct the target image by the multi-scale inverse transform according to the fusion coefficients.
In embodiments of the present disclosure, the identification module 1204 is further configured to convert the target image to grayscale to obtain its grayscale image; apply histogram equalization to the grayscale image to obtain an equalized grayscale image; split the equalized grayscale image into at least two paths of equalized grayscale images; perform pedestrian recognition on one path to obtain pedestrian objects and their identification information; and perform vehicle recognition on another path to obtain vehicle objects and their identification information.
In embodiments of the present disclosure, the identification module 1204 is further configured to, when the received images include only one image, extract a region of interest from that image; obtain the mean brightness of the region of interest; determine whether the mean brightness is higher than a preset threshold; and, if it is below the preset threshold, generate feedback information and feed it back to the vehicle.
In embodiments of the present disclosure, the identification module 1204 is further configured to extract a region of interest from the target image; obtain the mean brightness of the region of interest; determine whether the mean brightness is higher than a preset threshold; and, if it is below the preset threshold, generate feedback information and feed it back to the vehicle.
It should be noted that the foregoing explanations of the embodiments of the control method for a dual camera device in a vehicle also apply to the control apparatus 1200 of this embodiment and are not repeated here.
In the control apparatus for a dual camera device in a vehicle of the embodiments of the present disclosure, the illumination intensity of the environment in which the vehicle is currently located is acquired, and the on state of the dual camera device on the vehicle is controlled according to the illumination intensity. In this embodiment, two cameras are provided to capture images, overcoming the problem that existing ADAS systems using a visible-light camera produce blurred images under weak light; moreover, the switching-on of the dual camera device is controlled according to the actual light conditions while the vehicle is driving, so that the working mode of the two cameras is decided by the light: one camera is turned on when the light is strong, saving energy, and both cameras are turned on when the light is weak, effectively guaranteeing the quality of the captured images, improving the accuracy of object recognition and thereby ensuring driving safety.
To implement the above embodiments, the present disclosure further provides a computer device including a processor and a memory, where the processor runs a program corresponding to executable program code stored in the memory by reading the executable program code, so as to implement the control method for a dual camera device in a vehicle proposed in the foregoing embodiments of the present disclosure.
In the description of this specification, a description referring to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples", and the like means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, schematic statements of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, provided there is no mutual contradiction, those skilled in the art may combine the different embodiments or examples described in this specification and the features of the different embodiments or examples.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, "a plurality" means at least two, for example two or three, unless specifically and definitely limited otherwise.
Any process or method description in the flowcharts or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a custom logic function or process, and the scope of the preferred embodiments of the present disclosure includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present disclosure pertain.
The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logic functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any apparatus that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any one of the following techniques well known in the art, or a combination thereof: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
Those of ordinary skill in the art will understand that all or some of the steps carried by the methods of the above embodiments can be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and when executed, the program performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the various embodiments of the present disclosure may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present disclosure have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limiting the present disclosure; those of ordinary skill in the art may make variations, modifications, substitutions, and alterations to the above embodiments within the scope of the present disclosure.

Claims (20)

  1. A control method for a dual camera device in a vehicle, comprising:
    acquiring the illumination intensity of the environment in which the vehicle is currently located; and
    controlling the on state of the dual camera device on the vehicle according to the illumination intensity.
  2. The control method for a dual camera device in a vehicle according to claim 1, wherein acquiring the illumination intensity of the environment in which the vehicle is currently located comprises:
    acquiring navigation information of the vehicle;
    extracting current time information of the vehicle from the navigation information; and
    determining the illumination intensity according to the current time information.
  3. The control method for a dual camera device in a vehicle according to claim 1, wherein acquiring the illumination intensity of the environment in which the vehicle is currently located comprises:
    detecting the lamp state of the vehicle; and
    determining the illumination intensity according to the detected lamp state.
  4. The control method for a dual camera device in a vehicle according to claim 1, wherein acquiring the illumination intensity of the environment in which the vehicle is currently located comprises:
    acquiring an illumination intensity signal from an ambient light sensor on the vehicle; and
    determining the illumination intensity according to the illumination intensity signal.
  5. The control method for a dual camera device in a vehicle according to claim 2, wherein determining the illumination intensity according to the current time information comprises:
    if the current time information falls within a first time period, determining that the illumination intensity is a first intensity of daytime; and
    if the current time information does not fall within the first time period, determining that the illumination intensity is a second intensity of nighttime.
  6. The control method for a dual camera device in a vehicle according to claim 3, wherein determining the illumination intensity according to the detected lamp state comprises:
    if the lamp state indicates that the low beam or high beam of the vehicle is off, determining that the illumination intensity is a first intensity of daytime; and
    if the lamp state indicates that the low beam or high beam of the vehicle is on, determining that the illumination intensity is a second intensity of nighttime.
  7. The control method for a dual camera device in a vehicle according to claim 4, wherein determining the illumination intensity according to the illumination intensity signal comprises:
    if the illumination intensity signal exceeds a preset threshold, determining that the illumination intensity is a first intensity of daytime; and
    if the illumination intensity signal does not exceed the preset threshold, determining that the illumination intensity is a second intensity of nighttime.
  8. The control method for a dual camera device in a vehicle according to claim 2 or 5, wherein the current time information includes month information and clock information, and the method comprises:
    extracting position information from the navigation information at the same time as the current time information is extracted;
    wherein determining the illumination intensity according to the current time information comprises:
    determining, according to the position information, a first time zone in which the vehicle is currently located;
    determining whether the first time zone is a preset reference time zone;
    if the first time zone is the reference time zone, identifying the month information in the current time information and obtaining a sunrise time and a sunset time corresponding to the month information;
    forming a first time period from the sunrise time and the sunset time;
    extracting the clock information from the current time information and determining whether the clock information falls within the first time period; and
    if it falls within the first time period, determining that the illumination intensity is the first intensity of daytime.
  9. The control method for a dual camera device in a vehicle according to claim 8, further comprising:
    if the first time zone is not the reference time zone, obtaining the time difference between the first time zone and the reference time zone;
    identifying the month information in the current time information and obtaining the sunrise time and sunset time corresponding to the month information;
    adjusting the sunrise time and the sunset time using the time difference; and
    forming the first time period from the adjusted sunrise time and sunset time.
  10. The control method for a dual camera device in a vehicle according to any one of claims 1-9, further comprising, after controlling the on state of the dual camera device on the vehicle according to the illumination intensity:
    receiving images from the dual camera device; and
    recognizing the received images to obtain objects that may exist in the images.
  11. The control method for a dual camera device in a vehicle according to claim 10, wherein recognizing the received images to obtain objects that may exist in the images comprises:
    determining whether the received images include two images, wherein the two images include a first image and a second image corresponding respectively to a first camera and a second camera of the dual camera device;
    when it is determined that the received images include two images, performing image fusion on the first image and the second image to obtain a target image; and
    recognizing objects from the target image.
  12. The control method for a dual camera device in a vehicle according to claim 11, wherein performing image fusion on the first image and the second image to obtain a target image comprises:
    adjusting the resolution of the first image and/or the second image so that the two images have the same resolution;
    registering the first image and the second image having the same resolution; and
    fusing the registered first image and second image to obtain the target image.
  13. The control method for a dual camera device in a vehicle according to claim 12, wherein adjusting the resolution of the first image and/or the second image so that the two images have the same resolution comprises:
    selecting one of the first image and the second image as a reference image; and
    adjusting the resolution of the other image according to the resolution of the reference image; or
    obtaining a target resolution according to the resolution of the first image and the resolution of the second image; and
    adjusting the resolutions of both the first image and the second image to the target resolution.
  14. The control method for a dual camera device in a vehicle according to claim 12 or 13, wherein registering the first image and the second image having the same resolution comprises:
    selecting one of the first image and the second image having the same resolution as a base image;
    obtaining, according to the base image, transform coefficients for performing an affine transformation on the other image, wherein the transform coefficients are obtained by calibrating the dual camera device in advance; and
    performing the affine transformation on the other image according to the transform coefficients to obtain the registered first image and second image.
  15. The control method for a dual camera device in a vehicle according to any one of claims 12-14, wherein fusing the registered first image and second image to obtain the target image comprises:
    performing multi-scale decomposition on the registered first image and second image respectively to obtain two sets of multi-scale decomposition coefficients;
    fusing the two sets of multi-scale decomposition coefficients according to a preset fusion rule to obtain fusion coefficients; and
    reconstructing the target image by a multi-scale inverse transform according to the fusion coefficients.
  16. The control method for a dual camera device in a vehicle according to any one of claims 11-15, wherein recognizing objects from the target image comprises:
    performing grayscale processing on the target image to obtain a grayscale image of the target image;
    performing histogram equalization on the grayscale image to obtain an equalized grayscale image;
    splitting the equalized grayscale image into at least two paths of equalized grayscale images;
    performing pedestrian recognition on one path of the equalized grayscale image to obtain pedestrian objects and identification information of the pedestrian objects; and
    performing vehicle recognition on another path of the equalized grayscale image to obtain vehicle objects and identification information of the vehicle objects.
  17. The control method for a dual camera device in a vehicle according to any one of claims 11-16, wherein obtaining the target image further comprises:
    extracting a region of interest from the target image;
    obtaining the mean brightness of the region of interest;
    determining whether the mean brightness is higher than a preset threshold; and
    if it is below the preset threshold, generating feedback information and feeding it back to the vehicle.
  18. The control method for a dual camera device in a vehicle according to any one of claims 11-17, further comprising:
    when it is determined that the received images include only one image, extracting a region of interest from the one image;
    obtaining the mean brightness of the region of interest;
    determining whether the mean brightness is higher than a preset threshold; and
    if it is below the preset threshold, generating feedback information and feeding it back to the vehicle.
  19. A control apparatus for a dual camera device in a vehicle, comprising:
    an acquisition module configured to acquire the illumination intensity of the environment in which the vehicle is currently located; and
    a control module configured to control the on state of the dual camera device on the vehicle according to the illumination intensity.
  20. A computer device, comprising a processor and a memory;
    wherein the processor runs a program corresponding to executable program code stored in the memory by reading the executable program code, so as to implement the control method for a dual camera device in a vehicle according to any one of claims 1-18.
PCT/CN2018/112904 2017-10-31 2018-10-31 Control method and apparatus for dual camera device in vehicle WO2019085930A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711047626.8A CN109729256B (zh) 2017-10-31 2017-10-31 Control method and apparatus for dual camera device in vehicle
CN201711047626.8 2017-10-31

Publications (1)

Publication Number Publication Date
WO2019085930A1 true WO2019085930A1 (zh) 2019-05-09

Family

ID=66294379

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/112904 WO2019085930A1 (zh) 2017-10-31 2018-10-31 Control method and apparatus for dual camera device in vehicle

Country Status (2)

Country Link
CN (1) CN109729256B (zh)
WO (1) WO2019085930A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111815721A (zh) * 2020-06-03 2020-10-23 华人运通(上海)云计算科技有限公司 Vehicle and control method, apparatus, system, and storage medium for anti-glare rearview mirror thereof

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110351491B (zh) * 2019-07-25 2021-03-02 东软睿驰汽车技术(沈阳)有限公司 Fill-light method, apparatus, and system for low-illumination environments
WO2021102672A1 (zh) * 2019-11-26 2021-06-03 深圳市大疆创新科技有限公司 Vehicle vision system and vehicle
CN113421452B (zh) * 2021-06-03 2023-03-17 上海大学 Open-air parking lot recommendation system based on visual analysis
CN114779838B (zh) * 2022-06-20 2022-09-02 鲁冉光电(微山)有限公司 Intelligent angle adjustment and control system for vehicle-mounted camera

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007028630A1 (de) * 2005-09-08 2007-03-15 Johnson Controls Gmbh Driver assistance device for a vehicle and method for visualizing the surroundings of a vehicle
CN101226597A (zh) * 2007-01-18 2008-07-23 中国科学院自动化研究所 Night pedestrian recognition method and system based on thermal-infrared gait
CN102461156A (zh) * 2009-06-03 2012-05-16 弗莱尔系统公司 Infrared camera systems and methods for dual sensor applications
CN102980586A (zh) * 2012-11-16 2013-03-20 北京小米科技有限责任公司 Navigation terminal and navigation method using the navigation terminal
CN107253485A (zh) * 2017-05-16 2017-10-17 北京交通大学 Foreign-object intrusion detection method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202696754U (zh) * 2012-01-19 2013-01-23 迅驰(北京)视讯科技有限公司 Digital video camera
CN105554483B (zh) * 2015-07-16 2018-05-15 宇龙计算机通信科技(深圳)有限公司 Image processing method and terminal


Also Published As

Publication number Publication date
CN109729256A (zh) 2019-05-07
CN109729256B (zh) 2020-10-23

Similar Documents

Publication Publication Date Title
WO2019085930A1 (zh) Control method and apparatus for dual camera device in vehicle
US11409303B2 (en) Image processing method for autonomous driving and apparatus thereof
US8798314B2 (en) Detection of vehicles in images of a night time scene
JP6819996B2 (ja) 交通信号認識方法および交通信号認識装置
US8995723B2 (en) Detecting and recognizing traffic signs
JP4970516B2 (ja) 周囲確認支援装置
US11715180B1 (en) Emirror adaptable stitching
KR101688695B1 (ko) 차량번호 인식 장치 및 그 방법, 그리고 상기 방법을 수행하는 프로그램을 기록한 컴퓨터 판독 가능한 기록 매체
JP5071198B2 (ja) 信号機認識装置,信号機認識方法および信号機認識プログラム
WO2015089867A1 (zh) Traffic violation detection method
WO2019085929A1 (zh) Image processing method and apparatus therefor, and safe driving method
US11436839B2 (en) Systems and methods of detecting moving obstacles
KR101355201B1 (ko) 시-공간적 특징벡터 기반의 차량인식을 활용한 불법 주정차 단속 시스템 및 단속방법
JP2016196233A (ja) 車両用道路標識認識装置
US10462378B2 (en) Imaging apparatus
KR101026778B1 (ko) 차량 영상 검지 장치
KR101319508B1 (ko) 초점열화방법을 이용한 불법 주정차 단속 시스템 및 단속방법
JP5062091B2 (ja) 移動体識別装置、コンピュータプログラム及び光軸方向特定方法
JP2018073275A (ja) 画像認識装置
Hautiere et al. Meteorological conditions processing for vision-based traffic monitoring
KR101370011B1 (ko) 영상 안정화 및 복원방법을 이용한 주행형 자동 단속 시스템 및 단속방법
TWI630818B (zh) Dynamic image feature enhancement method and system
JP6825299B2 (ja) 情報処理装置、情報処理方法およびプログラム
JP2018055591A (ja) 情報処理装置、情報処理方法およびプログラム
CN111666894A (zh) 交通灯和车灯的检测方法及其感测系统、以及车辆

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18872728

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18872728

Country of ref document: EP

Kind code of ref document: A1