CN117528264A - Parameter adjustment method and device and intelligent driving system - Google Patents


Info

Publication number
CN117528264A
CN117528264A (application CN202210887821.6A)
Authority
CN
China
Prior art keywords
scene
information
isp
vehicle
parameter value
Prior art date
Legal status
Pending
Application number
CN202210887821.6A
Other languages
Chinese (zh)
Inventor
蒋丁潮
张毅
曾兆山
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202210887821.6A priority Critical patent/CN117528264A/en
Publication of CN117528264A publication Critical patent/CN117528264A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

This application provides a parameter adjustment method and device, and an intelligent driving system. In the method, an intelligent driving platform acquires an image obtained by an image signal processor (ISP) processing a captured image signal, identifies the scene information corresponding to the image, determines the scene corresponding to the current driving state by combining that scene information with the acquired driving state information of the vehicle, scene rules, and historical scene perception information, and then adjusts the ISP's parameter values according to the determined scene.

Description

Parameter adjustment method and device and intelligent driving system
Technical Field
Embodiments of this application relate to the field of image processing, and in particular to a parameter adjustment method and device and an intelligent driving system.
Background
When a vehicle-mounted camera captures an image, external light is first projected through an optical lens onto a complementary metal-oxide-semiconductor (CMOS) sensor, which converts the optical signal into an electrical signal. The signal is then converted from analog to digital and finally sent to an image signal processor (ISP) for image processing. The ISP-processed image is supplied to the vehicle-mounted display system to show the vehicle's surroundings to the end user, and is also fed into the vehicle-mounted autonomous driving or intelligent assistance system to perceive surrounding-environment information, including obstacle recognition, vehicle detection, traffic sign detection, lane line detection, pedestrian detection, drivable-area recognition, fusion positioning, and the like.
At present, the ISP of a vehicle-mounted platform performs a series of complex operations on the raw image data, including black level correction, lens correction, defective pixel correction, color interpolation, Bayer denoising, white balance correction, color correction, gamma correction, color space conversion (RGB to YUV), color denoising and edge enhancement plus color and contrast enhancement in the YUV color space, automatic exposure control, and so on. It then outputs data in YUV (or RGB) format and transmits it through an I/O interface to a data processing chip for algorithmic processing, or to the vehicle-mounted media system for display. Image processing may differ between camera or chip manufacturers, both in the algorithm modules used and in the parameters configured for each module.
To obtain high-quality images under different illumination environments, the ISP needs to use different parameters and policies for its exposure, white balance, gamma correction, and other algorithm modules. The current ISP can only make a preliminary judgment of how bright or dark the ambient light is, based on statistics such as the ambient light level, the image's average luminance and color temperature, the exposure parameters, and the sensor's sensitivity. This can cause a sharp degradation of the ISP's image quality in some scenes, so that the vehicle-mounted autonomous driving or intelligent assistance system misjudges external information, leading to serious driving safety problems.
Disclosure of Invention
The parameter adjustment method and device and the intelligent driving system provided in this application ensure the reliability of scene recognition and avoid any negative influence of scene recognition on ISP imaging, so that high-quality images can be obtained even when the scene changes while the vehicle is driving.
To achieve the above purpose, this application adopts the following technical solutions.
In a first aspect, the present application provides a parameter adjustment method, which may include: acquiring an image in the running process of a vehicle and running state information of the vehicle; the image is obtained by processing an image signal in the running process of the vehicle by an image signal processor ISP; identifying scene information corresponding to the image; determining a first scene corresponding to the scene information according to the scene information, the running state information of the vehicle, the scene rule and the historical scene perception information; and adjusting the parameter value of the ISP according to the first scene corresponding to the scene information.
The driving state information of the vehicle may include at least one of: vehicle speed, vehicle GPS positioning information, steering wheel angle, acceleration state, deceleration state, brake state, accelerator pedal state, wheel direction, wheel speed, yaw rate and acceleration, pitch rate and acceleration, roll rate and acceleration, and the like. Of course, the driving state information may also include other information, such as the state of the vehicle's lights (e.g., whether the high beams or fog lamps are on), which is not specifically limited in this application.
In this way, scenes encountered while the vehicle is driving can be finely classified and the ISP's parameter values adjusted per scene. The reliability of scene recognition is guaranteed and its negative influence on ISP imaging is avoided, so that high-quality images can be obtained even when the vehicle's scene changes during driving.
In addition, the current scene recognition/detection result is validated using scene rules, vehicle state information (such as GPS position, speed, and direction), historical scene recognition/detection results, and the ISP's environment and image statistics. Meanwhile, adjusting ISP parameters based on scene recognition covers many pain-point scenes of the autonomous-driving vehicle-mounted ISP, such as tunnels, underground garages, forests, low illumination, strong backlight, cloudy days, rainy days, and foggy days. Scene recognition also dynamically identifies and detects the scene type, illumination conditions, road type, and key regions, and users can flexibly configure and manage scenes.
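As a rough illustration of the first-aspect flow (recognize scene, match against rules and history, adjust ISP parameters), a minimal Python sketch might look as follows. All function names, rule structures, and parameter values here are assumptions made for illustration only, not the patent's actual implementation:

```python
def determine_first_scene(scene_info, driving_state, scene_rules, history):
    """Return the first scene whose rule matches the recognized scene
    information and the driving state, provided history also supports it."""
    for scene, rule in scene_rules.items():
        if rule(scene_info, driving_state) and history.get(scene, False):
            return scene
    return None  # no match; the caller may send a matching-failure message

def adjust_isp(current_params, first_scene, per_scene_params):
    """Overwrite the ISP's current parameter values with the preconfigured
    set for the determined scene; unrelated parameters are left untouched."""
    updated = dict(current_params)
    updated.update(per_scene_params.get(first_scene, {}))
    return updated

# Example: a vehicle approaching a tunnel portal at speed.
rules = {"enter_tunnel": lambda s, d: s.get("tunnel_portal") and d["speed_kmh"] > 0}
history = {"enter_tunnel": True}
scene = determine_first_scene({"tunnel_portal": True}, {"speed_kmh": 60},
                              rules, history)
params = adjust_isp({"exposure": 1.0, "gamma": 2.2}, scene,
                    {"enter_tunnel": {"exposure": 2.5}})
```

The key design point sketched here is that scene determination requires agreement between the live recognition result, the rule set, and the historical perception information before any ISP parameter is touched.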
In a specific implementation, determining the first scene corresponding to the scene information according to the scene information, the driving state information of the vehicle, the scene rules, and the historical scene perception information specifically includes: looking up the scene rules and the historical scene perception information according to the scene information and the driving state information of the vehicle; and when the scene information and the driving state information match both the scene rules and the historical scene perception information, determining the first scene corresponding to the scene information.
In some implementations, the method further includes: when the scene information and the driving state information of the vehicle do not match the scene rules and/or the historical scene perception information, sending a matching-failure message to the ISP.
In some specific implementations, after determining the first scene corresponding to the scene information, the method further includes: determining the confidence level of the first scene according to the scene information, the scene rule and the historical scene perception information; when the confidence of the first scene meets a first threshold value, determining that the first scene is correctly identified according to the scene information, the driving state information of the vehicle and the map data.
By determining the confidence of the first scene and combining map data to decide whether the first scene was correctly identified, this application guarantees the reliability of the scene recognition result; one or more sets of configurable parameters are provided for each pain-point scene, flexibly adapting to the environmental conditions of each scene.
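The confidence check described above can be sketched as follows. The confidence value, the threshold, and the dict-based "map data" lookup are all illustrative assumptions; a real system would query an HD map service rather than a dictionary:

```python
def scene_verified(first_scene, confidence, threshold, driving_state, map_data):
    """A scene counts as correctly identified only if its confidence meets
    the first threshold AND the map agrees that the vehicle's current
    position lies in that scene."""
    if confidence < threshold:
        return False
    return map_data.get(driving_state["position"]) == first_scene

# Toy map data: position key -> scene the map says is there (an assumption).
map_data = {("road_seg_42", 120): "enter_tunnel"}
state = {"position": ("road_seg_42", 120)}
ok = scene_verified("enter_tunnel", confidence=0.92, threshold=0.8,
                    driving_state=state, map_data=map_data)
```

Gating the map cross-check behind the threshold mirrors the order stated in the text: confidence first, map confirmation second.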
In some specific implementations, after determining the first scene corresponding to the scene information, the method further includes: updating the historical scene perception information according to the scene information and the driving state information of the vehicle.
In a specific implementation manner, the driving state information of the vehicle includes position information of the vehicle, and the historical scene sensing information is updated according to the scene information and the driving state information of the vehicle, specifically: when the acquisition time of the image signal meets the preset time and the position information of the vehicle meets the preset position, the historical scene perception information is updated according to the scene information and the running state information of the vehicle.
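A hypothetical sketch of this gated history update follows: the historical scene-perception information is written only when the capture time falls within a preset window and the position within a preset region. The interval/bounding-box presets are toy stand-ins for whatever conditions a real implementation would use:

```python
def maybe_update_history(history, scene_info, capture_time, position,
                         preset_window, preset_region):
    """Update historical scene-perception info only when both the preset
    time window and the preset position region are satisfied."""
    t0, t1 = preset_window
    (x0, y0), (x1, y1) = preset_region
    x, y = position
    if t0 <= capture_time <= t1 and x0 <= x <= x1 and y0 <= y <= y1:
        history[position] = scene_info  # record the scene seen at this spot
        return True
    return False

history = {}
updated = maybe_update_history(history, {"scene": "enter_tunnel"},
                               capture_time=100, position=(3.0, 4.0),
                               preset_window=(0, 3600),
                               preset_region=((0, 0), (10, 10)))
```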
In a specific implementation manner, according to a first scene corresponding to the scene information, the parameter value of the ISP is adjusted, which specifically is: searching ISP parameter values corresponding to the first scene; and adjusting the parameter value of the ISP according to the ISP parameter value corresponding to the first scene.
In a specific implementation manner, according to the ISP parameter value corresponding to the first scene, the parameter value of the ISP is adjusted, specifically: the identification of the first scene and the ISP parameter value corresponding to the first scene are sent to the ISP, so that the ISP modifies the ISP parameter value into the ISP parameter value corresponding to the first scene; or, modifying the parameter value of the ISP to the parameter value of the ISP corresponding to the first scene.
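The two delivery modes just described can be sketched as below: (a) send the scene identifier and its parameter values to the ISP, which applies them itself, or (b) modify the ISP's parameter values directly. The parameter table, `Isp` class, and values are illustrative placeholders, not the patent's actual configuration:

```python
SCENE_PARAMS = {  # hypothetical preconfigured table: scene -> ISP parameters
    "enter_tunnel": {"exposure": 2.5, "gamma": 1.8},
    "enter_garage": {"exposure": 3.0, "white_balance": "tungsten"},
}

class Isp:
    def __init__(self):
        self.params = {"exposure": 1.0, "gamma": 2.2}
        self.active_scene = None

    def apply_scene(self, scene_id, scene_params):  # mode (a): ISP applies
        self.params.update(scene_params)
        self.active_scene = scene_id

def adjust(isp, first_scene, direct=False):
    scene_params = SCENE_PARAMS[first_scene]  # look up params for the scene
    if direct:                                # mode (b): modify in place
        isp.params.update(scene_params)
    else:
        isp.apply_scene(first_scene, scene_params)

isp = Isp()
adjust(isp, "enter_tunnel")
```

Mode (a) leaves the ISP in control of when the new values take effect; mode (b) assumes the platform has direct write access to the ISP's parameter store.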
In this method, the scene recognition result is further verified through image statistics, further ensuring its reliability, so that one or more sets of configurable parameters can be provided for each pain-point scene, flexibly adapting to the environmental conditions of each scene.
In a second aspect, the present application provides a parameter adjustment device, which may include: a first acquisition unit configured to acquire an image during running of the vehicle, the image being obtained by processing an image signal during running of the vehicle by an image signal processor ISP; a second acquisition unit configured to acquire running state information of the vehicle; the identification unit is used for identifying scene information corresponding to the image; the determining unit is used for determining a first scene corresponding to the scene information according to the scene information, the running state information of the vehicle, the scene rule and the historical scene perception information; and the adjusting unit is used for adjusting the parameter value of the ISP according to the first scene corresponding to the scene information.
Wherein the recognition unit may also be referred to as a scene recognizer. The determination unit may also be referred to as a rule arbiter. The adjustment unit may also be referred to as a parameter adjuster. Of course, the present application is not limited to the above-mentioned names, and may be named as other names having the same or similar functions.
In this way, scenes encountered while the vehicle is driving can be finely classified and the ISP's parameter values adjusted per scene. The reliability of scene recognition is guaranteed and its negative influence on ISP imaging is avoided, so that high-quality images can be obtained even when the vehicle's scene changes during driving.
In addition, the current scene recognition/detection result is validated using scene rules, vehicle state information (such as GPS position, speed, and direction), historical scene recognition/detection results, and the ISP's environment and image statistics. Meanwhile, adjusting ISP parameters based on scene recognition covers many pain-point scenes of the autonomous-driving vehicle-mounted ISP, such as tunnels, underground garages, forests, low illumination, strong backlight, cloudy days, rainy days, and foggy days. Scene recognition also dynamically identifies and detects the scene type, illumination conditions, road type, and key regions, and users can flexibly configure and manage scenes.
In a specific implementation, the determining unit is specifically configured to: according to the scene information and the running state information of the vehicle, respectively searching scene rules and historical scene perception information; and when the scene information and the running state information of the vehicle are matched with the scene rule and the historical scene perception information, determining a first scene corresponding to the scene information.
In a specific implementation, the apparatus further includes: and the sending unit is used for sending a matching failure message to the ISP when the scene information and the running state information of the vehicle are not matched with the scene rule and/or the historical scene perception information.
In a specific implementation manner, the determining unit is further configured to determine a confidence level of the first scene according to the scene information, the scene rule, and the historical scene perception information; when the confidence of the first scene meets a first threshold value, determining that the first scene is correctly identified according to the scene information, the driving state information of the vehicle and the map data.
By determining the confidence of the first scene and combining map data to decide whether the first scene was correctly identified, this application guarantees the reliability of the scene recognition result; one or more sets of configurable parameters are provided for each pain-point scene, flexibly adapting to the environmental conditions of each scene.
In a specific implementation, the apparatus further includes: and the updating unit is used for updating the historical scene perception information according to the scene information and the running state information of the vehicle. Wherein the updating unit may also be referred to as a history scene manager. Of course, the present application is not limited to the above-mentioned names, and may be named as other names having the same or similar functions.
In a specific implementation manner, the running state information of the vehicle includes position information of the vehicle, and the updating unit is specifically configured to: when the acquisition time of the image signal meets the preset time and the position information of the vehicle meets the preset position, the historical scene perception information is updated according to the scene information and the running state information of the vehicle.
In a specific implementation, the adjusting unit is specifically configured to: searching ISP parameter values corresponding to the first scene; and adjusting the parameter value of the ISP according to the ISP parameter value corresponding to the first scene.
In a specific implementation, the adjusting unit is specifically configured to: send the identifier of the first scene and the ISP parameter values corresponding to the first scene to the ISP, so that the ISP modifies its parameter values to those corresponding to the first scene; or modify the ISP's parameter values to those corresponding to the first scene directly.
In this device, the scene recognition result is further verified through image statistics, further ensuring its reliability, so that one or more sets of configurable parameters can be provided for each pain-point scene, flexibly adapting to the environmental conditions of each scene.
In a third aspect, the present application provides an intelligent driving system, which may include: image collector, sensor, image signal processor ISP and intelligent driving platform, wherein: the image collector is used for acquiring an image signal in the running process of the vehicle and sending the image signal to the ISP; the ISP is used for processing the image signals to obtain images and sending the images to the intelligent driving platform; the sensor is used for acquiring the running state information of the vehicle and sending the running state information to the intelligent driving platform; the intelligent driving platform is used for identifying scene information corresponding to the image; determining a first scene corresponding to the scene information according to the scene information, the running state information of the vehicle, the scene rule and the historical scene perception information; and adjusting the parameter value of the ISP according to the first scene corresponding to the scene information.
In one specific implementation, the intelligent driving platform is specifically configured to: according to the scene information and the running state information of the vehicle, respectively searching scene rules and historical scene perception information; and when the scene information and the running state information of the vehicle are matched with the scene rule and the historical scene perception information, determining a first scene corresponding to the scene information.
In a specific implementation, the intelligent driving platform is further configured to: and when the scene information and the running state information of the vehicle are not matched with the scene rules and/or the historical scene perception information, sending a matching failure message to the ISP.
In a specific implementation, the intelligent driving platform is further configured to: determining the confidence level of the first scene according to the scene information, the scene rule and the historical scene perception information; when the confidence of the first scene meets a first threshold value, determining that the first scene is correctly identified according to the scene information, the driving state information of the vehicle and the map data.
In a specific implementation, the intelligent driving platform is further configured to: and updating the historical scene perception information according to the scene information and the running state information of the vehicle.
In a specific implementation manner, the running state information of the vehicle includes position information of the vehicle, and the intelligent driving platform is further configured to: when the acquisition time of the image signal meets the preset time and the position information of the vehicle meets the preset position, the historical scene perception information is updated according to the scene information and the running state information of the vehicle.
In a specific implementation, the intelligent driving platform is further configured to: searching ISP parameter values corresponding to the first scene; and adjusting the parameter value of the ISP according to the ISP parameter value corresponding to the first scene.
In a specific implementation manner, when the parameter value of the ISP is adjusted, the intelligent driving platform is configured to send the identifier of the first scene and the ISP parameter value corresponding to the first scene to the ISP; the ISP is used for acquiring the identification of the first scene; determining that the first scene is correctly identified according to the characteristic information of the image and the identification of the first scene; modifying the parameter value of the ISP into an ISP parameter value corresponding to the first scene; and processing the image and/or a subsequent image signal in the running process of the vehicle according to the adjusted ISP parameter value.
In one particular implementation, the ISP is also used to: when the feature information of the image matches the feature information of the first scene, it is determined that the first scene is correctly identified.
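The ISP-side confirmation in the third aspect, where the ISP applies the new parameter values only if the image's feature information matches the feature information of the identified scene, might be sketched as below. Equality on a luminance-bucket string stands in for real feature matching; every name and value is an assumption:

```python
def isp_confirm_and_apply(isp_params, image_features, scene_id,
                          scene_features, scene_params):
    """Apply the scene's parameter set only when the image's features match
    those registered for the scene; otherwise keep the current parameters."""
    if image_features != scene_features.get(scene_id):
        return isp_params, False  # scene not confirmed: keep old parameters
    updated = dict(isp_params)
    updated.update(scene_params[scene_id])
    return updated, True

params, ok = isp_confirm_and_apply(
    {"exposure": 1.0}, "dark_portal", "enter_tunnel",
    scene_features={"enter_tunnel": "dark_portal"},
    scene_params={"enter_tunnel": {"exposure": 2.5}})
```

This second check on the ISP side means a misidentified scene from the platform does not blindly overwrite the imaging parameters.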
In one particular implementation, the ISP is deployed in an image collector, or in an intelligent driving platform.
In a fourth aspect, the present application provides an intelligent driving platform comprising: a processor and a memory coupled to the processor, the memory for storing computer program code comprising computer instructions that, when read from the memory by the processor, cause the intelligent driving platform to perform the method of the first aspect.
In a fifth aspect, the present application provides a vehicle, which may include the parameter adjustment device according to the second aspect or the intelligent driving system according to the third aspect or the intelligent driving platform according to the fourth aspect.
In a sixth aspect, the present application provides a computer program product comprising computer instructions which, when run on a computer, cause the computer to perform the method according to the first aspect.
In a seventh aspect, the present application provides a computer readable storage medium comprising computer instructions which, when run on a computer, cause the computer to perform the method according to the first aspect.
For the specific implementations and corresponding technical effects of the second through seventh aspects, refer to the specific implementations and technical effects of the first aspect.
In this method, an intelligent driving platform acquires an image obtained by an image signal processor (ISP) processing a captured image signal, identifies the scene information corresponding to the image, determines the scene corresponding to the current driving state by combining that scene information with the acquired driving state information of the vehicle, scene rules, and historical scene perception information, and then adjusts the ISP's parameter values according to the determined scene. In this way, scenes encountered during driving can be finely classified and the ISP's parameter values adjusted per scene; the reliability of scene recognition is guaranteed and its negative influence on ISP imaging is avoided, so that high-quality images can be obtained even when the vehicle's scene changes during driving.
Drawings
Fig. 1 is a schematic diagram of the composition of an intelligent driving system according to an embodiment of this application;
Fig. 2 is a schematic diagram of the composition of another intelligent driving system according to an embodiment of this application;
Fig. 3A is a schematic flowchart of a parameter adjustment method according to an embodiment of this application;
Fig. 3B is a schematic flowchart of a specific implementation of determining a first scene in the parameter adjustment method according to an embodiment of this application;
Fig. 3C is a schematic flowchart of another specific implementation of determining a first scene in the parameter adjustment method according to an embodiment of this application;
Fig. 3D is a schematic flowchart of another parameter adjustment method according to an embodiment of this application;
Fig. 3E is a schematic flowchart of a specific implementation of adjusting the parameter values of an ISP in the parameter adjustment method according to an embodiment of this application;
Fig. 4 is a schematic diagram of the composition of a parameter adjustment device according to an embodiment of this application;
Fig. 5 is a schematic diagram of the composition of yet another intelligent driving system according to an embodiment of this application.
Detailed Description
In order to ensure driving safety, the vehicle needs to present the surrounding environment of the vehicle to the driver (or user) during running of the vehicle. Specifically, a camera on the vehicle is used for collecting images of the surrounding environment of the vehicle, and the images collected by the camera are displayed to a driver through a vehicle-mounted display system.
When a camera of the vehicle captures an image, external light is projected through an optical lens onto a CMOS sensor, which converts the optical signal into an electrical (analog) signal. The CMOS sensor feeds the electrical signal into an analog-to-digital converter to obtain a digital signal, which is input to the image signal processor (ISP) for image processing. The ISP-processed image is supplied to the vehicle-mounted display system. Of course, the YUV (or RGB) format data output by the ISP may also be fed into the vehicle-mounted autonomous driving or intelligent assistance system to perceive surrounding-environment information, including obstacle recognition, vehicle detection, traffic sign detection, lane line detection, pedestrian detection, drivable-area recognition, fusion positioning, and other services. The image processing of different cameras or data processing chips may differ, for example in the algorithm modules used or in the parameters configured for each module.
To obtain high-quality images under different illumination environments, the ISP needs to use different parameters and policies for its exposure, white balance, gamma correction, and other algorithm modules. The current ISP can only make a preliminary judgment of how bright or dark the ambient light is, based on the ambient light level, the image's average luminance and color temperature, the exposure parameters, the sensor's sensitivity, and similar information; it cannot identify the specific scene the vehicle is in, such as a highway, an urban road, or a tunnel. This leads to dramatic degradation of the ISP's imaging quality in some scenes: under-exposure and color temperature problems when entering a tunnel or underground garage during the day, and over-exposure problems when exiting them; color temperature problems on highways and overpasses; image flicker caused by the ISP switching between bright and dark scenes when driving along a tree-lined road; inability to extract target-object information from the image under extremely low illumination; and over-exposure of the strongly lit portion under heavy backlight, making surrounding objects unrecognizable. The reduced image quality in these scenes may cause the vehicle-mounted autonomous driving or intelligent assistance system to misjudge external information, for example failing to identify vehicles and pedestrians at tunnel and underground-garage exits, leading to serious driving safety problems.
To guarantee high-quality image output, a sound informational basis for the autonomous-driving perception algorithms, and driving safety, one current approach divides the ISP into two module groups: the first group performs image rectification and enhancement, and the second performs image enhancement and optimization. Scene recognition is run on the image processed by the first group to determine the required enhancement effect, and the image is then processed by the first group alone, the second group alone, or both groups jointly. However, this ISP processing method has the following disadvantages: 1. the parameter adjustment for each algorithm module is still based on whether an adjustable parameter is below a certain threshold; 2. adjusting the image requires repeated calls to the first and second module groups; 3. there is no reliability guarantee for scene recognition, nor any countermeasure against scene recognition errors.
In another current approach, a camera optimizes shooting for night scenes and portrait night scenes. Specifically, machine learning is used to recognize the current scene: if the result is a night scene, the camera shoots in night mode; if a face is also detected, the result is a portrait night scene and the camera shoots in portrait night mode. However, this approach has the following disadvantages: 1. the applicable scenes are only night scenes and portrait night scenes, which cannot cope with the complex scene changes of the autonomous driving field; 2. the reliability of the scene recognition is not fully verified, and accuracy cannot be ensured from brightness information alone.
To solve the problem that high-quality images cannot be acquired when the scene changes while the vehicle is driving, an embodiment of this application provides a parameter adjustment method applied to an intelligent driving system.
The following describes the parameter adjustment method in different application scenarios in detail.
Scene 1: a tunnel portal.
When the vehicle is about to drive into a tunnel portal, a sensor of the vehicle collects the vehicle's driving state information, which may include vehicle chassis information such as the vehicle's speed, position, and direction. The vehicle's camera captures image signals and sends them to the ISP, which processes them to obtain a first image containing the tunnel portal. An ISP scene recognition module of the vehicle (described below) can recognize the scene information corresponding to the first image and determine at least one pending scene from that scene information, the driving state information, and the historical scene perception information (which may be understood as a correspondence among scene information, driving state information, and scenes). The scene recognition module then determines a target scene from the pending scene(s) and the scene rules; here the target scene is entering a tunnel. The scene recognition module sends the target scene to an ISP scene adaptation module of the vehicle (described below), which adjusts the ISP's parameter values according to the target scene and the ISP parameter values preconfigured for that scene. In this way, the ISP processes the first image and/or subsequent image signals during driving with the adjusted parameters, so that the brightness of the tunnel portal in the resulting image is raised, objects in the portal area are easier to identify, and image quality is improved.
In one effect, the driver can be made to see something inside the tunnel portal; in another effect, things at the tunnel portal can be accurately identified by the vehicle, so that the vehicle can conveniently execute operations such as avoidance and the like.
Scene 2: an underground garage entrance.
Similarly, when a vehicle is about to enter an underground garage, a sensor of the vehicle collects driving state information of the vehicle, which may include vehicle chassis information such as the speed, position, and heading of the vehicle. A camera of the vehicle collects image signals and sends them to the ISP. The ISP processes the collected image signals to obtain a second image that contains the garage entrance. The ISP scene recognition module of the vehicle may recognize scene information corresponding to the second image, and determines at least one pending scene according to the scene information, the driving state information of the vehicle, and the historical scene perception information (the correspondence between scene information, driving state information of the vehicle, and scenes). The scene recognition module determines a target scene, here an entering-garage scene, from the at least one pending scene according to scene rules, and sends it to the ISP scene adaptation module of the vehicle, which adjusts the parameter values of the ISP according to the target scene and the ISP parameter values pre-configured for that scene. In this way, the ISP processes the second image and/or subsequent image signals collected by the camera with the adjusted parameters, so that the brightness of the garage entrance in the resulting image is improved, objects in the garage-entrance area can be identified more easily, and the image quality is improved.
On the one hand, the driver can clearly see objects at the garage entrance; on the other hand, the vehicle can accurately identify objects at the garage entrance, making it easier for the vehicle to perform operations such as avoidance.
In summary, in the embodiments of this application, the ISP parameters are adjusted based on scene recognition, which can cover the numerous application scenarios of a vehicle-mounted ISP for autonomous driving, such as tunnels, underground garages, forests, low illumination, strong backlight, cloudy days, rainy days, and foggy days. In addition, scene recognition dynamically identifies and detects scene types, illumination conditions, road types, and key regions, and a user can flexibly configure and manage the scenes.
The intelligent driving system may take different product forms in the vehicle field, for example: a vehicle-mounted chip, or a vehicle-mounted device (such as a head unit, a vehicle-mounted computing platform, a whole vehicle, or a server, virtual or physical). The parameter adjustment method provided in the embodiments of this application is not limited to intelligent vehicles, although it is particularly applicable to the fields of autonomous driving and intelligent driver assistance. It can also be applied to other fields that require image acquisition, such as intelligent security, smartphones, cameras, intelligent robots, unmanned aerial vehicles, and machine vision.
Control of the vehicle described above can be divided into three major parts: driving, cabin, and whole-vehicle control, corresponding respectively to three major platforms: the intelligent driving platform, the intelligent cabin platform, and the whole-vehicle control platform. The three platforms can communicate with one another by accessing the backbone network of the in-vehicle network. The intelligent driving platform is the hardware platform of the intelligent driving system to which the parameter adjustment method applies, and serves as the vehicle brain that implements functions of autonomous (intelligent) driving such as perception, fusion positioning based on maps and sensors, decision-making, and planning. It is suitable for a variety of application scenarios, such as passenger vehicles (e.g., traffic-jam following, highway cruising, and automated valet parking), commercial vehicles (e.g., port freight and trunk logistics), and working vehicles (e.g., mining trucks, street sweepers, and unmanned delivery vehicles).
In one case, the intelligent driving system involves functions such as communicating with various vehicle-mounted devices (for example, sensors such as cameras, millimeter-wave radars, and lidars), environment perception, information fusion, running decision and planning algorithms, acquiring and using high-precision maps, V2X communication, and control executed on the computation results. These functions impose very high compute requirements, and the requirements keep rising as the level of driving automation increases.
Fig. 1 is a schematic structural diagram of an intelligent driving system provided in this application. The intelligent driving system is applied to a vehicle and includes an intelligent driving platform 110, an image collector 140 (such as a camera), and a plurality of sensors (such as a millimeter-wave radar 120 and a lidar 130). In addition, the intelligent driving system may further include a vehicle domain controller 150 and a telematics box (T-BOX) 160.
Specifically, the intelligent driving platform 110 shown in fig. 1 resembles a box in appearance and is responsible for functions related to autonomous driving such as perception, decision-making, and control. The intelligent driving platform 110 may support various interface protocols to communicate with other devices on the vehicle, for example the sensors and image collectors installed on the vehicle body (such as the millimeter-wave radar 120, the lidar 130, and the camera 140). It obtains the information perceived by these sensors and image collectors, makes driving decisions/plans based on the perceived information, issues operation commands to the vehicle domain controller 150 based on those decisions/plans, and controls the vehicle actuators through the vehicle domain controller 150 to execute corresponding actions, for example: displaying images of the surroundings on the vehicle-mounted display system, accelerating/decelerating, changing lanes, steering, braking, and warning. Alternatively, it may issue operation commands to the vehicle actuators directly based on the driving decisions/plans and control the actuators to execute the corresponding actions. The intelligent driving platform 110 may also communicate with the cloud, other terminals (e.g., vehicles or mobile phones), or roadside devices through the T-BOX 160.
When the parameter adjustment method provided in this application is implemented, as shown in fig. 2, the intelligent driving system in this application may further include an image signal processor (ISP), the image collector 140 shown in fig. 1 may include an ISP scene adaptation module, and the intelligent driving platform 110 shown in fig. 1 may include an ISP scene recognition module. The ISP may be deployed on the image collector 140 or on the intelligent driving platform 110. The ISP scene adaptation module may include a scene parameter manager, an ISP tuner, and an ISP scene fuser. The ISP scene recognition module may include a scene recognizer, a rule discriminator, a scene history result manager, a rule manager, and a map discriminator.
The flow of the parameter adjustment method is described below with reference to the structure shown in fig. 2. The method includes: S201, the ISP receives image signals collected by a camera of the vehicle. S202, the ISP processes the image signals collected during driving to obtain an image and sends the image to the ISP scene recognition module. S203, the ISP scene recognition module acquires the image and the driving state information of the vehicle during driving. S204, the ISP scene recognition module recognizes scene information corresponding to the image. S205, the ISP scene recognition module determines a first scene corresponding to the scene information according to the scene information, the driving state information of the vehicle, the scene rules, and the historical scene perception information. S206, the ISP scene recognition module sends the first scene to the ISP scene adaptation module. S207, the ISP scene adaptation module adjusts the parameter values of the ISP according to the first scene corresponding to the scene information.
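The steps S201-S207 above can be sketched end to end in a few lines of code. This is a minimal illustration only: the function names, the stand-in classifier, and the parameter values are all assumptions for the sketch, not from the patent.

```python
from dataclasses import dataclass


@dataclass
class Frame:
    pixels: bytes  # stand-in for decoded YUV/RGB image data


def isp_process(signal: bytes) -> Frame:
    # S201-S202: the ISP turns the raw camera signal into an image
    return Frame(pixels=signal)


def recognize_scene_info(image: Frame) -> dict:
    # S204: scene recognition; a fixed result stands in for a real classifier
    return {"tunnel_entrance": 0.9, "highway": 0.1}


def determine_first_scene(scene_info, driving_state, scene_rules, history):
    # S205: keep only classifications the scene rules permit,
    # then pick the most probable one (history unused in this toy version)
    candidates = [s for s in scene_info if s in scene_rules]
    return max(candidates, key=scene_info.get) if candidates else None


def adjust_isp_params(scene, preconfigured):
    # S206-S207: look up the parameter set pre-configured for the scene
    return preconfigured.get(scene)


image = isp_process(b"raw-signal")
info = recognize_scene_info(image)
scene = determine_first_scene(info, {"speed_kmh": 60.0},
                              {"tunnel_entrance", "tunnel_exit"}, history={})
params = adjust_isp_params(scene, {"tunnel_entrance": {"gamma": 2.4, "wdr": True}})
```

Here the pipeline resolves the frame to a tunnel-entrance scene and returns that scene's pre-configured parameter set.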
The following describes the parameter adjustment method of the intelligent driving system shown in fig. 2 in detail with reference to fig. 3A to 3E. The method includes steps S301-S308 (some of which are optional), and reference is made specifically to the description of the embodiments below.
S301, the intelligent driving platform acquires an image captured while the vehicle is running, the image being obtained by the ISP processing an image signal collected during driving.
Specifically, as shown in fig. 2, the ISP scene recognition module in the intelligent driving platform acquires an image during the driving process of the vehicle.
The image signal is acquired by a camera of the vehicle during the running of the vehicle. In other words, the camera of the vehicle transmits the acquired image signal to the ISP, which processes the image signal to obtain the image.
The processing performed may include black level compensation, lens shading correction, bad pixel correction, color interpolation (demosaicing), Bayer-domain denoising, automatic white balance (AWB) correction, color correction, gamma correction, color space conversion (RGB to YUV), color noise removal, edge enhancement, color and contrast enhancement, automatic exposure control, wide dynamic range (WDR), dynamic range control (DRC), and the like. The ISP thus obtains an image, for example an image in YUV format. The YUV image may then undergo preprocessing such as format conversion, scaling, and cropping by a digital vision pre-processing (DVPP) module of the parameter adjustment apparatus, which outputs an image in RGB format.
S302, the intelligent driving platform acquires driving state information of the vehicle.
Specifically, as shown in fig. 2, the ISP scene recognition module in the intelligent driving platform obtains the driving state information of the vehicle.
The driving state information may include one or more of vehicle driver information, vehicle gear information, vehicle throttle information, vehicle wheel information, vehicle brake information, vehicle motion information, and vehicle chassis information. For example, the driving state information may include vehicle gear information, vehicle throttle information, vehicle wheel information, vehicle brake information, and vehicle motion information; alternatively, it may include vehicle brake information, vehicle motion information, and vehicle chassis information. As another example, the driving state information may include at least one of: vehicle speed, vehicle GPS positioning information, steering wheel angle, acceleration state, deceleration state, brake state, accelerator pedal state, wheel direction, wheel speed, yaw rate and acceleration, pitch rate and acceleration, and roll rate and acceleration. Of course, the driving state information may also include other information, such as the lamp states of the vehicle (e.g., whether the high beams or fog lights are on), which is not specifically limited in the embodiments of this application. Specifically, the vehicle may collect its driving state information through sensors on the vehicle.
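Since every field above is optional ("one or more of"), the driving state information can be modeled as a record whose fields default to absent. A minimal sketch, with field names chosen here for illustration:

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class DrivingState:
    # All fields optional: the text says the state includes "one or more of" them
    speed_kmh: Optional[float] = None
    gps_position: Optional[Tuple[float, float]] = None  # (lat, lon)
    steering_wheel_angle_deg: Optional[float] = None
    wheel_speed: Optional[float] = None
    brake_active: Optional[bool] = None
    yaw_rate: Optional[float] = None
    high_beam_on: Optional[bool] = None  # example of extra lamp-state info


# A sensor reading that reports only some of the fields
state = DrivingState(speed_kmh=72.0, gps_position=(31.23, 121.47),
                     brake_active=False)
```

Unset fields stay `None`, so downstream code can distinguish "not reported" from a real value.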
S303, the intelligent driving platform identifies scene information corresponding to the image.
The scene information may include classification information of each scene and its probability. The scenes may include illumination scenes, weather scenes, and road scenes. The classification information of an illumination scene may include daytime, street lamps at night, dusk, and the like. The classification information of a weather scene may include sunny, cloudy, rain, snow, fog, and the like. The classification information of a road scene may include urban roads (tree-lined roads, non-tree-lined roads, protected left turns, unprotected right turns, open areas, congested road segments, etc.), highways, elevated roads, tunnels (entering a tunnel, exiting a tunnel, etc.), underground garages (entering a garage, exiting a garage, etc.), strongly backlit scenes, and the like. Key regions within a scene may include: traffic lights, pedestrians, vehicles, tunnel entrances, garage entrances, and strong light/strong backlight.
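One way to picture "classification information of each scene and its probability" is a table of per-dimension probability distributions; picking a label then means taking the most probable class in a dimension. The structure and label names here are illustrative, not from the patent:

```python
# Scene information: one probability distribution per scene dimension
scene_info = {
    "illumination": {"daytime": 0.85, "dusk": 0.10, "night_street_lamp": 0.05},
    "weather":      {"sunny": 0.70, "cloudy": 0.20, "fog": 0.10},
    "road":         {"tunnel_entrance": 0.90, "highway": 0.08, "urban": 0.02},
}


def top_class(info: dict, dimension: str) -> str:
    """Return the most probable classification for one scene dimension."""
    classes = info[dimension]
    return max(classes, key=classes.get)
```

For the sample distributions above, the road dimension resolves to the tunnel-entrance label.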
In a specific implementation, after the image (for example, a YUV image) is obtained in S301, as shown in fig. 2, the ISP sends the image to the ISP scene recognition module of the intelligent driving platform. The ISP scene recognition module receives the image and performs scene recognition and detection on it. Illustratively, the scene recognizer in the ISP scene recognition module periodically extracts image frames from the image data stream generated by the ISP and converts the image format from YUV to RGB, in order to obtain the scene information corresponding to the image.
S304, the intelligent driving platform determines a first scene corresponding to the scene information according to the scene information, the running state information of the vehicle, the scene rule and the historical scene perception information.
A scene rule may be understood as a rule for screening a target scene out of a plurality of scenes.
The historical scene perception information may be understood as the correspondence, recorded from previous perception results, between scene information, the driving state information of the vehicle, and scenes.
Fig. 3B is a schematic flowchart of a specific implementation of determining the first scene in the parameter adjustment method according to an embodiment of this application. As shown in fig. 3B, S304 includes S3041, S3042, and S3043, which may be specifically implemented as follows:
S3041, the intelligent driving platform searches for the scene rules and the historical scene perception information according to the scene information and the driving state information of the vehicle.
For example, as shown in fig. 2, the rule discriminator in the ISP scene recognition module may receive the scene information sent by the scene recognizer and call the scene history result manager to find the historical scene perception information corresponding to that scene information according to the scene information and the driving state information of the vehicle. Likewise, the rule discriminator may call the rule manager to find the scene rule corresponding to the scene information according to the scene information and the driving state information of the vehicle. The rule discriminator may then determine the first scene corresponding to the scene information based on the scene information, the driving state information of the vehicle, the scene rule corresponding to the scene information, and the historical scene perception information, as described in S3042.
S3042, when the scene information and the running state information of the vehicle are matched with the scene rule and the historical scene perception information, the intelligent driving platform determines a first scene corresponding to the scene information.
Here, the scene information and the driving state information of the vehicle matching the scene rules and the historical scene perception information means that a scene rule and a piece of historical scene perception information can be found that match the scene information and the driving state information. For example, the scene information may include: tunnel, tunnel entrance, and strong light/strong backlight, and the driving state information may include vehicle motion information. From the tunnel, tunnel-entrance, and strong-light labels together with the vehicle motion information, target historical scene perception information can be matched, and the scene corresponding to the scene information is then determined to be a tunnel exit according to that target historical scene perception information and the scene rules.
S3043, when the scene information and the running state information of the vehicle are not matched with the scene rules and/or the historical scene perception information, the intelligent driving platform sends a matching failure message to the ISP.
Here, the scene information and the driving state information of the vehicle failing to match the scene rules and/or the historical scene perception information covers three cases: the scene information and the driving state information do not match the scene rules; or they do not match the historical scene perception information; or they match neither the scene rules nor the historical scene perception information.
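Steps S3041-S3043 amount to a lookup that either yields a first scene or reports a match failure to the ISP. A minimal sketch, assuming history is keyed by (scene labels, motion state) and that a candidate must also pass the scene rules; all names and structures are illustrative:

```python
MATCH_FAILED = "match-failure"


def determine_first_scene(scene_labels, driving_state, scene_rules, history):
    """Return (scene, None) on a match, or (None, MATCH_FAILED) otherwise."""
    key = (frozenset(scene_labels), driving_state.get("motion"))
    candidate = history.get(key)            # S3041: look up history
    if candidate is None or candidate not in scene_rules:
        return None, MATCH_FAILED           # S3043: report failure to the ISP
    return candidate, None                  # S3042: first scene determined


# The tunnel-exit example from the text: tunnel + tunnel entrance + strong
# light, plus motion info, matches a history entry whose scene is tunnel exit
history = {(frozenset({"tunnel", "tunnel_entrance", "strong_light"}),
            "moving"): "tunnel_exit"}
rules = {"tunnel_exit", "tunnel_entrance"}

scene, err = determine_first_scene(
    {"tunnel", "tunnel_entrance", "strong_light"},
    {"motion": "moving"}, rules, history)
```

With labels that appear in no history entry, the same function returns the match-failure message instead of a scene.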
Fig. 3C is a schematic flowchart of another implementation of determining the first scene in the parameter adjustment method according to an embodiment of this application. As shown in fig. 3C, S304 includes S3044 and S3045, which may be specifically implemented as follows. S3044, the intelligent driving platform determines the historical scene perception information corresponding to the scene information according to the scene information, the driving state information of the vehicle, and the historical scene perception information. S3045, the intelligent driving platform determines the first scene corresponding to the scene information according to that historical scene perception information, the scene rules, and the driving state information of the vehicle. For example, the scene history result manager may receive the scene information sent by the scene recognizer and determine the historical scene perception information corresponding to it according to the scene information, the driving state information of the vehicle, and the historical scene perception information. The scene history result manager may send the historical scene perception information corresponding to the scene information to the rule manager, which then determines the first scene corresponding to the scene information according to that historical scene perception information, the scene rules, and the driving state information of the vehicle.
In some embodiments, to make scene recognition more accurate, a confidence level needs to be determined for the recognized scene. Specifically, fig. 3D is a schematic flowchart of a parameter adjustment method according to an embodiment of this application. As shown in fig. 3D, after S304, the parameter adjustment method provided in the embodiments of this application may further include:
S305, the intelligent driving platform determines the confidence of the first scene according to the scene information, the scene rules, and the historical scene perception information. In a specific implementation, the rule discriminator may determine the confidence of the first scene according to the scene information, the scene rules, and the historical scene perception information. For example, assume the first scene is a tunnel-entrance scene; this requires that a tunnel entrance is detected and that the previous scene was a tunnel entrance, a highway, an urban road, or the like. The distance from a detected object to the vehicle can be estimated from how many pixels the object occupies: for example, at a resolution of 224×224, a tunnel entrance occupying more than 30×30 pixels is judged to be near; at a resolution of 640×640, one occupying more than 84×84 pixels is judged to be near. The ISP scene recognition module may also include a map discriminator, which receives the confidence of the first scene sent by the rule discriminator.
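The two thresholds quoted in the text (30×30 at 224×224, 84×84 at 640×640) both correspond to roughly 13% of the image side, which suggests a resolution-scaled check. A sketch of that reading (the generalization to other resolutions is an assumption, not stated in the patent):

```python
def is_near(obj_w: int, obj_h: int, img_w: int, img_h: int,
            ratio: float = 30 / 224) -> bool:
    """Judge a detected object 'near' when its bounding box exceeds a fixed
    fraction of the image on both axes (30/224 ~= 84/640 ~= 0.13)."""
    return obj_w > img_w * ratio and obj_h > img_h * ratio
```

At 224×224 the threshold works out to exactly 30 pixels, and at 640×640 to about 86 pixels, close to the 84 quoted in the text.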
S306, when the confidence of the first scene meets a first threshold, the intelligent driving platform determines, according to the scene information, the driving state information of the vehicle, and map data, that the first scene has been recognized correctly.
The first threshold may be set according to the actual situation, for example to 85 or 90; this is not specifically limited in the embodiments of this application. In a specific implementation, as shown in fig. 2, the map discriminator in the ISP scene recognition module determines that the confidence of the first scene meets the first threshold. In that case, the map discriminator determines whether the first scene has been recognized correctly based on the scene information, the driving state information of the vehicle, and the map data: it determines the position of the vehicle in the map dimension from the scene information and the driving state information and, combining the map data, determines the environment at that position, thereby checking whether the recognized first scene is correct.
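The S305-S306 check can be condensed to two gates: the confidence must meet the threshold, and the map environment at the vehicle's position must agree with the recognized scene. A toy sketch in which the map is a plain position-to-environment table; the threshold value, function names, and table structure are all assumptions:

```python
FIRST_THRESHOLD = 0.85  # illustrative; the text suggests values such as 85 or 90


def scene_verified(first_scene: str, confidence: float,
                   position: tuple, map_data: dict) -> bool:
    """Accept the recognized scene only if its confidence meets the threshold
    and the map environment at the vehicle's position agrees with it."""
    if confidence < FIRST_THRESHOLD:
        return False                       # S305: confidence gate
    return map_data.get(position) == first_scene  # S306: map cross-check


map_data = {(31.23, 121.47): "tunnel_entrance"}
scene_verified("tunnel_entrance", 0.92, (31.23, 121.47), map_data)
```

A low confidence or a position whose map environment disagrees both cause the scene to be rejected.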
In the embodiments of this application, by determining the confidence of the first scene and combining it with map data, it can be determined whether the first scene has been recognized correctly, which ensures the reliability of the scene recognition result. One or more groups of configurable parameters can thus be provided for each application scenario, flexibly adapting to the environmental conditions of each scene.
In some embodiments, to make scene recognition more accurate, the historical scene perception information needs to be updated in real time. Specifically, after S304, the parameter adjustment method provided in the embodiments of this application may further include: S307, the intelligent driving platform updates the historical scene perception information according to the scene information and the driving state information of the vehicle. In a specific implementation, as shown in fig. 2, the scene history result manager of the intelligent driving platform updates the historical scene perception information according to the scene information and the driving state information of the vehicle. For example, if the driving state information includes the position of the vehicle, the scene history result manager may update the historical scene perception information when the acquisition time of the image signal satisfies a preset time and the position of the vehicle satisfies a preset position — that is, when the image signal was acquired within the preset time and the vehicle's position information was acquired at the preset position.
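The gating in S307 can be sketched as a conditional update: the history is only touched when both the capture time and the vehicle position satisfy the preset conditions. All names and data shapes here are illustrative assumptions:

```python
def maybe_update_history(history: dict, scene_info: dict, driving_state: dict,
                         capture_time: float, time_window: tuple,
                         allowed_positions: set) -> bool:
    """S307 sketch: update the historical scene perception information only
    when capture time falls in the preset window and the vehicle position is
    one of the preset positions. Returns True if an update happened."""
    t0, t1 = time_window
    position = driving_state["position"]
    if not (t0 <= capture_time <= t1) or position not in allowed_positions:
        return False
    history[position] = scene_info
    return True


history = {}
ok = maybe_update_history(history, {"road": "tunnel_entrance"},
                          {"position": (31.23, 121.47)},
                          capture_time=10.0, time_window=(0.0, 60.0),
                          allowed_positions={(31.23, 121.47)})
```

An out-of-window time or an unlisted position leaves the history untouched.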
S308, the intelligent driving platform adjusts the parameter value of the ISP according to the first scene corresponding to the scene information.
In a specific implementation, as shown in fig. 2, the ISP scene adaptation module in the intelligent driving platform adjusts the parameter values of the ISP according to the first scene corresponding to the scene information. Specifically, the ISP scene adaptation module may send an instruction containing the ISP parameter values to the ISP; alternatively, it may send a scene identifier to the ISP, which, after receiving the scene identifier, adjusts its parameter values according to that identifier.
Fig. 3E is a schematic flowchart of a specific implementation of adjusting the ISP parameter values in the parameter adjustment method according to an embodiment of this application. As shown in fig. 3E, S308 includes S3081 and S3082, which may be specifically implemented as follows. S3081, the intelligent driving platform looks up the ISP parameter values corresponding to the first scene. Specifically, the scene parameter manager in the ISP scene adaptation module pre-configures the ISP parameter values for each preset scene. The ISP scene fuser receives the first scene corresponding to the scene information from the rule discriminator and sends it to the ISP tuner. The ISP tuner receives the first scene and calls the scene parameter manager to look up the ISP parameter values corresponding to it. S3082, the intelligent driving platform adjusts the parameter values of the ISP according to the ISP parameter values corresponding to the first scene. Specifically, the ISP tuner in the ISP scene adaptation module adjusts the parameter values of the ISP according to the ISP parameter values corresponding to the first scene.
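S3081-S3082 reduce to a lookup in a pre-configured scene-to-parameters table followed by applying the result. A minimal sketch — the scene names, parameter names, and numeric values are invented for illustration and are not the patent's actual tuning tables:

```python
# Pre-configured ISP parameter sets per scene (all values illustrative)
SCENE_PARAMS = {
    "tunnel_entrance": {"exposure_gain": 4.0, "gamma": 2.4, "wdr": True},
    "garage_entrance": {"exposure_gain": 6.0, "gamma": 2.6, "wdr": True},
    "default":         {"exposure_gain": 1.0, "gamma": 2.2, "wdr": False},
}


class IspTuner:
    """Sketch of the ISP tuner: S3081 looks the parameters up via the scene
    parameter manager (here a plain dict); S3082 applies them to the ISP."""

    def __init__(self):
        self.isp_params = dict(SCENE_PARAMS["default"])

    def apply_scene(self, first_scene: str) -> dict:
        params = SCENE_PARAMS.get(first_scene, SCENE_PARAMS["default"])  # S3081
        self.isp_params.update(params)                                   # S3082
        return self.isp_params


tuner = IspTuner()
tuner.apply_scene("tunnel_entrance")
```

Falling back to the default set for an unknown scene is a design choice of this sketch, not something the patent specifies.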
The ISP may or may not be deployed on the intelligent driving platform; for these two cases, S3082 may be implemented as follows.
In one specific implementation, where the ISP is not on the intelligent driving platform, S3082 may be implemented as: the ISP tuner sends the identifier of the first scene and the ISP parameter values corresponding to the first scene to the ISP, so that the ISP modifies its parameter values to those values. After receiving them, when the identifier of the first scene matches the pre-stored feature information of the scene, the ISP determines that the first scene has been recognized correctly and adjusts its parameter values to the ISP parameter values corresponding to the first scene. In another specific implementation, where the ISP is on the hardware platform of the intelligent driving platform, S3082 may be implemented as: the ISP tuner directly modifies the parameter values of the ISP to the ISP parameter values corresponding to the first scene.
In the embodiments of this application, the scene recognition result is further verified through image statistics, which further ensures its reliability. One or more groups of configurable parameters are provided for each challenging scene, so that the method can flexibly adapt to the environmental conditions of each scene.
In the various embodiments of the application, if there is no specific description or logical conflict, terms and/or descriptions between the various embodiments are consistent and may reference each other, and features of the various embodiments may be combined to form new embodiments according to their inherent logical relationships.
The embodiments of the present application also provide an apparatus for implementing any of the above methods, for example, an apparatus including a unit (or means) for implementing each step performed by the parameter adjustment apparatus in any of the above methods is provided. As another example, another apparatus is provided that includes means for performing the steps performed by the vehicle in any of the methods above.
For example, please refer to fig. 4, which is a schematic diagram of a parameter adjustment apparatus according to an embodiment of the present application, the apparatus 400 may include:
A first obtaining unit 401, configured to obtain an image captured during the driving of the vehicle, the image being obtained by an image signal processor (ISP) processing an image signal collected during driving; the image signal may be collected by a camera of the vehicle, which sends it to the ISP. Illustratively, the first obtaining unit 401 may perform step S301 described above. The first obtaining unit 401 may be the ISP scene recognition module described above.
A second obtaining unit 402, configured to obtain the driving state information of the vehicle. Illustratively, the second obtaining unit 402 may perform step S302 described above. The second obtaining unit 402 may be the ISP scene recognition module described above; the driving state information may be collected by a sensor of the vehicle, which sends it to the second obtaining unit 402.
An identifying unit 403, configured to identify scene information corresponding to the image; illustratively, the identifying unit 403 may perform the step of S303 described above. The recognition unit 403 may be the above-described ISP scene recognition module.
A determining unit 404, configured to determine the first scene corresponding to the scene information according to the scene information, the driving state information of the vehicle, the scene rules, and the historical scene perception information. Illustratively, the determining unit 404 may perform step S304 described above. The determining unit 404 may be the ISP scene recognition module described above.
An adjusting unit 405, configured to adjust the parameter value of the ISP according to the first scene corresponding to the scene information. For example, the adjustment unit 405 may perform the step of S308 described above. The adjustment unit 405 may be an ISP scene adaptation module as described above.
In a specific implementation, the determining unit 404 is further configured to:
searching scene rules and historical scene perception information according to the scene information and the running state information of the vehicle;
and when the scene information and the running state information of the vehicle are matched with the scene rule and the historical scene perception information, determining a first scene corresponding to the scene information.
For example, the ISP scene recognition module may include a scene history result manager, a rule discriminator, and a rule manager. The rule discriminator may receive the scene information sent by the scene recognizer and call the scene history result manager to find the historical scene perception information corresponding to the scene information according to the scene information and the driving state information of the vehicle. It may likewise call the rule manager to find the scene rule corresponding to the scene information according to the scene information and the driving state information of the vehicle. The rule discriminator may then determine the first scene corresponding to the scene information based on the scene information, the driving state information of the vehicle, the scene rule corresponding to the scene information, and the historical scene perception information.
In some implementations, the apparatus 400 further includes:
A transmitting unit 406, configured to send a match failure message to the ISP when the scene information and the driving state information of the vehicle do not match the scene rule and/or the historical scene perception information.
In some specific implementations, the determining unit 404 is further configured to:
determining the confidence level of the first scene according to the scene information, the scene rule and the historical scene perception information; illustratively, the determining unit 404 may perform the step of S305 described above. The determining unit 404 may be the rule arbiter described above.
When the confidence of the first scene meets a first threshold value, determining that the first scene is correctly identified according to the scene information, the driving state information of the vehicle and the map data.
Illustratively, the determining unit 404 may perform the step of S306 described above. The determining unit 404 may be the map discriminator described above.
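The two-stage check described above (rule-based confidence, then map cross-validation) might be sketched as follows. Reading "meets a first threshold" as "greater than or equal to", and looking up candidate scenes from map data by position, are both assumptions for illustration:

```python
def verify_scene(scene_id, confidence, first_threshold, position, map_data):
    """Accept the first scene only if its rule-based confidence meets the
    first threshold AND the map data lists that scene near the vehicle's
    current position (e.g. a tunnel entry on this road segment)."""
    if confidence < first_threshold:
        return False
    nearby_scenes = map_data.get(position, set())
    return scene_id in nearby_scenes
```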
In some implementations, the apparatus 400 further includes:
an updating unit 407 for updating the historical scene perception information according to the scene information and the running state information of the vehicle. For example, the updating unit 407 may perform the step of S307 described above. The updating unit 407 may be the above-described scene history result manager.
In a specific implementation manner, the running state information of the vehicle includes position information of the vehicle, and the updating unit 407 is further configured to:
when the acquisition time of the image signal meets the preset time and the position information of the vehicle meets the preset position, updating the historical scene perception information according to the scene information and the running state information of the vehicle.
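The gated history update could look roughly like the following; a time-of-day window and a rectangular latitude/longitude region are hypothetical stand-ins for the "preset time" and "preset position":

```python
import datetime

def maybe_update_history(history, scene_id, capture_time, position,
                         preset_window, preset_region):
    """Record the recognized scene only when the image's capture time falls
    inside the preset time window and the vehicle position falls inside the
    preset region; otherwise leave the history untouched."""
    start, end = preset_window                      # (datetime.time, datetime.time)
    (lat_min, lat_max), (lon_min, lon_max) = preset_region
    lat, lon = position
    if (start <= capture_time.time() <= end
            and lat_min <= lat <= lat_max
            and lon_min <= lon <= lon_max):
        history.setdefault(scene_id, []).append((capture_time, position))
        return True
    return False
```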
In a specific implementation, the adjusting unit 405 is further configured to:
searching ISP parameter values corresponding to the first scene;
and adjusting the parameter value of the ISP according to the ISP parameter value corresponding to the first scene.
In a specific implementation, the adjusting unit 405 is further configured to:
sending the identification of the first scene and the ISP parameter value corresponding to the first scene to the ISP, so that the ISP modifies its parameter value to the ISP parameter value corresponding to the first scene; or
and modifying the parameter value of the ISP into the parameter value of the ISP corresponding to the first scene.
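The two delivery paths above (send to the ISP, or modify directly) might be sketched as below. The parameter table contents and the `IspStub` interface are invented for illustration only:

```python
# Hypothetical per-scene ISP settings keyed by scene identification.
ISP_PARAM_TABLE = {
    "tunnel":    {"exposure_ms": 33.0, "gain_db": 12.0, "gamma": 2.4},
    "backlight": {"exposure_ms": 8.0,  "gain_db": 3.0,  "gamma": 1.8},
}

class IspStub:
    """Minimal stand-in for the ISP: it can receive a (scene, params)
    message, or have its parameter values modified directly."""
    def __init__(self):
        self.params = {}
        self.inbox = []
    def receive(self, scene_id, params):
        self.inbox.append((scene_id, params))

def adjust_isp(scene_id, isp, push_to_isp=True):
    """Look up the ISP parameter value for the first scene, then either send
    it to the ISP (path 1) or write it into the ISP directly (path 2)."""
    params = ISP_PARAM_TABLE.get(scene_id)
    if params is None:
        return False
    if push_to_isp:
        isp.receive(scene_id, params)   # path 1: ISP verifies and applies itself
    else:
        isp.params.update(params)       # path 2: direct modification
    return True
```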
The embodiments of the present application also provide a system for implementing any of the above methods, for example, an intelligent driving system is provided that includes a unit (or means) for implementing the steps performed by the parameter adjustment device in any of the above methods. As another example, another intelligent driving system is provided, which includes a unit (or means) for implementing the steps performed by the vehicle in any of the above methods.
For example, referring to fig. 5, which is a schematic diagram of an intelligent driving system according to an embodiment of the present application, the system 500 may include: an image collector 501, a sensor 502, an image signal processor ISP 503 and an intelligent driving platform 504, wherein:
The image collector 501 is used for acquiring an image signal in the running process of the vehicle; the image collector 501 may be a camera, for example.
The ISP 503 is used to process the image signals to obtain images and send the images to the intelligent driving platform 504; illustratively, the image signal processor ISP 503 may perform the step of S302 described above.
The sensor 502 is configured to acquire driving state information of the vehicle, and send the driving state information to the intelligent driving platform 504; illustratively, the sensor 502 may perform the step of S301 described above.
The intelligent driving platform 504 is used for identifying scene information corresponding to the image; illustratively, the scene identifier in the intelligent driving platform 504 may perform the step of S303 described above.
Determining a first scene corresponding to the scene information according to the scene information, the running state information of the vehicle, the scene rule and the historical scene perception information; illustratively, the rule arbiter in the intelligent driving platform 504 may perform the step of S304 described above.
And adjusting the parameter value of the ISP according to the first scene corresponding to the scene information. For example, the parameter adjustment unit in the intelligent driving platform 504 may perform the step of S308 described above.
In some implementations, the intelligent driving platform 504 is specifically configured to:
According to the scene information and the running state information of the vehicle, respectively searching scene rules and historical scene perception information;
and when the scene information and the running state information of the vehicle are matched with the scene rule and the historical scene perception information, determining a first scene corresponding to the scene information. Illustratively, the rule arbiter in the intelligent driving platform 504 may perform the steps of S3041-S3042 described above.
In one particular implementation, the intelligent driving platform 504 is specifically configured to:
and when the scene information and the running state information of the vehicle are not matched with the scene rules and/or the historical scene perception information, sending a matching failure message to the ISP. Illustratively, the rule arbiter in the intelligent driving platform 504 may perform the steps of S3043 described above.
In some implementations, the intelligent driving platform 504 is specifically configured to:
determining the confidence level of the first scene according to the scene information, the scene rule and the historical scene perception information; illustratively, the rule arbiter in the intelligent driving platform 504 may perform the step of S305 described above.
When the confidence of the first scene meets a first threshold value, determining that the first scene is correctly identified according to the scene information, the driving state information of the vehicle and the map data. For example, the map discrimination unit in the intelligent driving platform 504 may perform the step of S306 described above.
In one particular implementation, the intelligent driving platform 504 is specifically configured to:
and updating the historical scene perception information according to the scene information and the running state information of the vehicle. For example, the history scene management unit in the intelligent driving platform 504 may perform the step of S307 described above.
In one specific implementation, the driving state information of the vehicle includes position information of the vehicle, and the intelligent driving platform 504 is specifically configured to:
when the acquisition time of the image signal meets the preset time and the position information of the vehicle meets the preset position, updating the historical scene perception information according to the scene information and the running state information of the vehicle.
In one particular implementation, the intelligent driving platform 504 is specifically configured to:
searching, by a scene parameter management module, for the ISP parameter value corresponding to the first scene;
and adjusting, by a parameter adjustment module, the parameter value of the ISP according to the ISP parameter value corresponding to the first scene.
For example, the parameter adjustment unit in the intelligent driving platform 504 may perform the steps of S3081 and S3082 described above.
In one particular implementation, when adjusting the parameter values of the ISP, the intelligent driving platform 504 is specifically configured to:
the identification of the first scene and the ISP parameter value corresponding to the first scene are sent to the ISP;
The ISP is used for: acquiring the identification of the first scene;
determining that the first scene is correctly identified according to the characteristic information of the image and the identification of the first scene;
modifying the parameter value of the ISP into an ISP parameter value corresponding to the first scene;
and processing the image and/or a subsequent image signal in the running process of the vehicle according to the adjusted ISP parameter value.
For example, the parameter adjustment unit in the intelligent driving platform 504 may perform the steps of S30821, S30822, and S30823 described above.
In one particular implementation, the ISP is also used to:
when the feature information of the image matches the feature information of the first scene, the ISP determines that the first scene is correctly identified.
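The ISP-side verification could be sketched as a tolerance check on image feature statistics before applying the new parameter values. The choice of a mean-luminance style feature dictionary and the 20% relative tolerance are assumptions, not the patented matching method:

```python
def isp_apply_if_verified(isp_params, scene_features, image_features,
                          candidate_params, tolerance=0.2):
    """ISP-side check: apply the candidate ISP parameter values only if every
    measured image statistic is within `tolerance` (relative) of the value
    expected for the announced scene; otherwise keep the current parameters."""
    for key, expected in scene_features.items():
        measured = image_features.get(key)
        if measured is None or abs(measured - expected) > tolerance * abs(expected):
            return False   # identification rejected; parameters unchanged
    isp_params.update(candidate_params)
    return True
```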
In one particular implementation, the ISP is deployed in an image collector, or in an intelligent driving platform.
It should be understood that the division of the units in the above apparatus is only a division by logical function; in actual implementation, the units may be fully or partially integrated into one physical entity, or may be physically separate. Furthermore, the units in the apparatus may be implemented in the form of software invoked by a processor: the apparatus includes, for example, a processor connected to a memory in which instructions are stored, and the processor calls the instructions stored in the memory to implement any of the above methods or the functions of the units of the apparatus, where the processor is, for example, a general-purpose processor, such as a central processing unit (central processing unit, CPU) or a microprocessor, and the memory is a memory inside or outside the apparatus. Alternatively, the units in the apparatus may be implemented in the form of hardware circuits, and the functions of some or all of the units may be implemented through the design of the hardware circuits, which may be understood as one or more processors. For example, in one implementation, the hardware circuit is an application-specific integrated circuit (ASIC), and the functions of some or all of the above units are implemented through the design of the logical relationships of the elements within the circuit. For another example, in another implementation, the hardware circuit may be implemented by a programmable logic device (programmable logic device, PLD), for example, a field programmable gate array (field programmable gate array, FPGA), which may include a large number of logic gates whose connection relationships are configured through a configuration file, so as to implement the functions of some or all of the above units.
All units of the above device may be realized in the form of processor calling software, or in the form of hardware circuits, or in part in the form of processor calling software, and in the rest in the form of hardware circuits.
In the embodiments of the present application, the processor is a circuit with signal processing capability. In one implementation, the processor may be a circuit with instruction reading and running capability, such as a central processing unit (CPU), a microprocessor, a graphics processing unit (GPU, which may be understood as a kind of microprocessor), or a digital signal processor (DSP). In another implementation, the processor may implement a certain function through the logical relationship of a hardware circuit, where the logical relationship is fixed or reconfigurable; for example, the processor is a hardware circuit implemented as an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), such as an FPGA. In a reconfigurable hardware circuit, the process in which the processor loads a configuration file to complete the configuration of the hardware circuit may be understood as a process in which the processor loads instructions to implement the functions of some or all of the above units. Furthermore, the processor may be a hardware circuit designed for artificial intelligence, which may be understood as an ASIC, such as a neural-network processing unit (NPU), a tensor processing unit (tensor processing unit, TPU), or a deep learning processing unit (deep learning processing unit, DPU).
It will be seen that each of the units in the above apparatus may be one or more processors (or processing circuits) configured to implement the above method, for example: CPU, GPU, NPU, TPU, DPU, microprocessor, DSP, ASIC, FPGA, or a combination of at least two of these processor forms.
Furthermore, the units in the above apparatus may be integrated together in whole or in part, or may be implemented independently. In one implementation, these units are integrated together and implemented in the form of a system-on-a-chip (SOC). The SOC may include at least one processor for implementing any of the methods above or for implementing the functions of the units of the apparatus, where the at least one processor may be of different types, including, for example, a CPU and an FPGA, a CPU and an artificial intelligence processor, a CPU and a GPU, and the like.
Optionally, in this possible design, for all relevant content related to each step of the method embodiments shown in fig. 3A to 3E, reference may be made to the functional description of the corresponding functional module, which is not repeated herein. The intelligent driving system described in this possible design is used to perform the functions of the intelligent driving system in the parameter adjustment method shown in fig. 3A to 3E, and thus can achieve the same effects as the above-described parameter adjustment method.
An embodiment of the present application provides a vehicle, including: a processor and a memory coupled to the processor, where the memory is used to store computer program code, the computer program code includes computer instructions, and when the processor reads the computer instructions from the memory, the vehicle is enabled to execute the parameter adjustment methods shown in fig. 3A-3E.
The vehicle provided by the embodiment of the application comprises a parameter adjusting device shown in fig. 4 or an intelligent driving system shown in fig. 5.
An embodiment of the present application provides an intelligent driving platform, including: a processor and a memory coupled to the processor, where the memory is used to store computer program code, the computer program code includes computer instructions, and when the processor reads the computer instructions from the memory, the intelligent driving platform is enabled to execute the parameter adjustment methods shown in fig. 3A-3E.
An embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to execute the parameter adjustment methods shown in fig. 3A to 3E.
An embodiment of the present application provides a computer-readable storage medium, including computer instructions that, when executed on a terminal, cause the terminal to execute the parameter adjustment methods shown in fig. 3A to 3E.
The chip system provided by the embodiment of the application comprises one or more processors, and when the one or more processors execute instructions, the one or more processors execute the parameter adjustment methods shown in fig. 3A to 3E.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone.
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of the present application, unless otherwise indicated, "a plurality" means two or more.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
It will be appreciated that the communication device or the like described above, in order to implement the above-described functions, includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The embodiment of the present application may divide the functional modules of the communication device or the like according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated, and in practical application, the above-described functional allocation may be implemented by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to implement all or part of the functions described above. The specific working processes of the above-described systems, devices and units may refer to the corresponding processes in the foregoing method embodiments, which are not described herein.

Claims (28)

1. A method of parameter adjustment, comprising:
acquiring an image in the running process of a vehicle, wherein the image is obtained by processing an image signal in the running process of the vehicle by an image signal processor ISP;
acquiring running state information of the vehicle;
identifying scene information corresponding to the image;
determining a first scene corresponding to the scene information according to the scene information, the running state information of the vehicle, the scene rule and the historical scene perception information;
and adjusting the parameter value of the ISP according to the first scene corresponding to the scene information.
2. The method of claim 1, wherein the determining the first scene corresponding to the scene information based on the scene information, the driving state information of the vehicle, the scene rules, and the historical scene perception information comprises:
according to the scene information and the running state information of the vehicle, searching the scene rule and the historical scene perception information respectively;
and when the scene information and the running state information of the vehicle are matched with the scene rule and the historical scene perception information, determining a first scene corresponding to the scene information.
3. The method as recited in claim 2, further comprising:
and when the scene information and the running state information of the vehicle are not matched with the scene rules and/or the historical scene perception information, sending a matching failure message to the ISP.
4. The method of claim 2, wherein after determining the first scene to which the scene information corresponds, the method further comprises:
determining the confidence level of the first scene according to the scene information, the scene rule and the historical scene perception information;
And when the confidence coefficient of the first scene meets a first threshold value, determining that the first scene is correctly identified according to the scene information, the driving state information of the vehicle and the map data.
5. The method according to any one of claims 1-4, further comprising, after determining the first scene to which the scene information corresponds:
and updating the historical scene perception information according to the scene information and the running state information of the vehicle.
6. The method according to claim 5, wherein the driving state information of the vehicle includes position information of the vehicle, and updating the history scene-aware information based on the scene information and the driving state information of the vehicle includes:
and when the acquisition time of the image signal meets the preset time and the position information of the vehicle meets the preset position, updating the historical scene perception information according to the scene information and the running state information of the vehicle.
7. The method according to any one of claims 1-6, wherein adjusting the parameter value of the ISP according to the first scene corresponding to the scene information comprises:
Searching an ISP parameter value corresponding to the first scene;
and adjusting the parameter value of the ISP according to the parameter value of the ISP corresponding to the first scene.
8. The method of claim 7, wherein adjusting the ISP parameter value according to the ISP parameter value corresponding to the first scene comprises:
sending the identification of the first scene and the ISP parameter value corresponding to the first scene to the ISP so that the ISP can modify the parameter value of the ISP into the ISP parameter value corresponding to the first scene; or
And modifying the parameter value of the ISP into an ISP parameter value corresponding to the first scene.
9. A parameter adjustment device, characterized in that the parameter adjustment device comprises:
the first acquisition unit is used for acquiring an image in the running process of the vehicle, wherein the image is obtained by processing an image signal in the running process of the vehicle by the image signal processor ISP;
a second acquisition unit configured to acquire travel state information of the vehicle;
the identification unit is used for identifying scene information corresponding to the image;
the determining unit is used for determining a first scene corresponding to the scene information according to the scene information, the running state information of the vehicle, the scene rule and the historical scene perception information;
And the adjusting unit is used for adjusting the parameter value of the ISP according to the first scene corresponding to the scene information.
10. The parameter adjustment device according to claim 9, wherein the determination unit is specifically configured to:
according to the scene information and the running state information of the vehicle, searching the scene rule and the historical scene perception information respectively;
and when the scene information and the running state information of the vehicle are matched with the scene rule and the historical scene perception information, determining a first scene corresponding to the scene information.
11. The parameter adjustment device of claim 10, further comprising:
and the sending unit is used for sending a matching failure message to the ISP when the scene information and the running state information of the vehicle are not matched with the scene rule and/or the historical scene perception information.
12. The parameter adjustment device of claim 10, wherein the determination unit is further configured to:
determining the confidence level of the first scene according to the scene information, the scene rule and the historical scene perception information;
and when the confidence coefficient of the first scene meets a first threshold value, determining that the first scene is correctly identified according to the scene information, the driving state information of the vehicle and the map data.
13. The parameter adjustment device according to any one of claims 9 to 12, characterized by further comprising:
and the updating unit is used for updating the historical scene perception information according to the scene information and the running state information of the vehicle.
14. The parameter adjustment device according to claim 13, wherein the running state information of the vehicle includes position information of the vehicle, the updating unit being specifically configured to:
and when the acquisition time of the image signal meets the preset time and the position information of the vehicle meets the preset position, updating the historical scene perception information according to the scene information and the running state information of the vehicle.
15. The parameter adjustment device according to any one of claims 9-14, wherein the adjustment unit is specifically configured to:
searching an ISP parameter value corresponding to the first scene;
and adjusting the parameter value of the ISP according to the parameter value of the ISP corresponding to the first scene.
16. The parameter adjustment device according to claim 15, wherein the adjustment unit is specifically configured to, when adjusting the parameter value of the ISP according to the ISP parameter value corresponding to the first scene:
Sending the identification of the first scene and the ISP parameter value corresponding to the first scene to the ISP so that the ISP can modify the parameter value of the ISP into the ISP parameter value corresponding to the first scene; or
And modifying the parameter value of the ISP into an ISP parameter value corresponding to the first scene.
17. An intelligent driving system, characterized in that the system comprises an image collector, a sensor, an image signal processor ISP and an intelligent driving platform, wherein:
the image collector is used for acquiring an image signal in the running process of the vehicle and sending the image signal to the ISP;
the ISP is used for processing the image signals to obtain images and sending the images to the intelligent driving platform;
the sensor is used for acquiring the running state information of the vehicle and sending the running state information to the intelligent driving platform; the intelligent driving platform is used for:
identifying scene information corresponding to the image;
determining a first scene corresponding to the scene information according to the scene information, the running state information of the vehicle, the scene rule and the historical scene perception information;
and adjusting the parameter value of the ISP according to the first scene corresponding to the scene information.
18. The system of claim 17, wherein the intelligent driving platform is specifically configured to:
according to the scene information and the running state information of the vehicle, searching the scene rule and the historical scene perception information respectively;
and when the scene information and the running state information of the vehicle are matched with the scene rule and the historical scene perception information, determining a first scene corresponding to the scene information.
19. The system of claim 18, wherein the intelligent driving platform is further configured to:
and when the scene information and the running state information of the vehicle are not matched with the scene rules and/or the historical scene perception information, sending a matching failure message to the ISP.
20. The system of claim 18, wherein the intelligent driving platform is further configured to:
determining the confidence level of the first scene according to the scene information, the scene rule and the historical scene perception information;
and when the confidence coefficient of the first scene meets a first threshold value, determining that the first scene is correctly identified according to the scene information, the driving state information of the vehicle and the map data.
21. The system of any one of claims 18-20, wherein the intelligent driving platform is further configured to: update the historical scene perception information according to the scene information and the running state information of the vehicle.
22. The system of claim 21, wherein the travel state information of the vehicle includes location information of the vehicle, the intelligent driving platform further configured to:
and when the acquisition time of the image signal meets the preset time and the position information of the vehicle meets the preset position, updating the historical scene perception information according to the scene information and the running state information of the vehicle.
23. The system of any one of claims 17-22, wherein the intelligent driving platform is further configured to:
searching an ISP parameter value corresponding to the first scene;
and adjusting the parameter value of the ISP according to the parameter value of the ISP corresponding to the first scene.
24. The system of claim 23, wherein when adjusting the parameter value of the ISP,
the intelligent driving platform is used for: sending the identification of the first scene and the ISP parameter value corresponding to the first scene to the ISP;
The ISP is used for: acquiring an identification of the first scene;
determining that the first scene is correctly identified according to the characteristic information of the image and the identification of the first scene;
modifying the parameter value of the ISP into an ISP parameter value corresponding to the first scene;
and processing the image and/or a subsequent image signal in the running process of the vehicle according to the adjusted ISP parameter value.
25. The system of claim 24, wherein the ISP is further configured to:
and when the characteristic information of the image is matched with the characteristic information of the first scene, determining that the first scene is correctly identified.
26. The system of any one of claims 17-25, wherein the ISP is deployed in the image collector or the ISP is deployed in the intelligent driving platform.
27. An intelligent driving platform, characterized by comprising: a processor and a memory coupled to the processor, wherein the memory is configured to store computer program code comprising computer instructions that, when read from the memory and executed by the processor, cause the intelligent driving platform to perform the method of any one of claims 1-8.
28. A vehicle, comprising the intelligent driving platform of claim 27.
CN202210887821.6A 2022-07-26 2022-07-26 Parameter adjustment method and device and intelligent driving system Pending CN117528264A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210887821.6A CN117528264A (en) 2022-07-26 2022-07-26 Parameter adjustment method and device and intelligent driving system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210887821.6A CN117528264A (en) 2022-07-26 2022-07-26 Parameter adjustment method and device and intelligent driving system

Publications (1)

Publication Number Publication Date
CN117528264A true CN117528264A (en) 2024-02-06

Family

ID=89744348

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210887821.6A Pending CN117528264A (en) 2022-07-26 2022-07-26 Parameter adjustment method and device and intelligent driving system

Country Status (1)

Country Link
CN (1) CN117528264A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118175433A (en) * 2024-05-13 2024-06-11 成都云创天下科技有限公司 ISP automatic tuning method based on different scenes in same video picture

Similar Documents

Publication Publication Date Title
KR102554643B1 (en) Multiple operating modes to expand dynamic range
JP6657925B2 (en) In-vehicle camera system and image processing device
US9734425B2 (en) Environmental scene condition detection
US10678255B2 (en) Systems, methods and apparatuses are provided for enhanced surface condition detection based on image scene and ambient light analysis
US10223910B2 (en) Method and apparatus for collecting traffic information from big data of outside image of vehicle
US10634317B2 (en) Dynamic control of vehicle lamps during maneuvers
CN109703460B (en) Multi-camera complex scene self-adaptive vehicle collision early warning device and early warning method
CN113160594B (en) Change point detection device and map information distribution system
EP3779872A1 (en) Operation processing device, object identifying system, learning method, automobile, and lighting appliance for vehicle
CN112750170A (en) Fog feature identification method and device and related equipment
US20220335728A1 (en) Electronic device, method, and computer readable storage medium for detection of vehicle appearance
JP2018041209A (en) Object recognition device, model information generation device, object recognition method, and object recognition program
CN113343738A (en) Detection method, device and storage medium
JP2024503671A (en) System and method for processing by combining visible light camera information and thermal camera information
CN117528264A (en) Parameter adjustment method and device and intelligent driving system
EP4149809B1 (en) Motor-vehicle driving assistance in low meteorological visibility conditions, in particular with fog
CN113870246A (en) Obstacle detection and identification method based on deep learning
US20200118280A1 (en) Image Processing Device
CN110422176B (en) Intelligent transportation system and automobile based on V2X
KR20210100775A (en) Autonomous driving device for detecting road condition and operation method thereof
CN113170080A (en) Information processing apparatus, information processing method, and program
CN113614782A (en) Information processing apparatus, information processing method, and program
KR20210100776A (en) Vehicle for performing autonomous driving using camera and operation method thereof
US20230251649A1 (en) Remote operation system, remote operation control method, and remote operator terminal
CN110550036A (en) Driving assistance apparatus, vehicle, and driving assistance system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination