WO2024113368A1 - Image processing method, device and storage medium

Image processing method, device and storage medium

Info

Publication number
WO2024113368A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
target object
objects
intensity
preset condition
Prior art date
Application number
PCT/CN2022/136298
Other languages
English (en)
French (fr)
Inventor
吕笑宇
马莎
高鲁涛
罗达新
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to PCT/CN2022/136298
Publication of WO2024113368A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing

Definitions

  • the present application relates to the field of optical perception technology, and in particular to an image processing method, device and storage medium.
  • Optical perception technology is an indispensable technology in autonomous driving and assisted driving systems.
  • the main on-board optical perception equipment is the visible light camera.
  • traditional optical imaging technology is easily affected by factors such as lighting changes, shadow occlusion, and different objects sharing the same spectrum, which can lead to a decrease in image quality, availability, and the accuracy of model prediction results.
  • polarization images obtained using polarization imaging technology have the advantages of being less affected by lighting changes and having obvious contrast between different targets.
  • an embodiment of the present application provides an image processing method.
  • the method comprises:
  • acquiring M images, where the M images are images collected in different polarization states, the M images include the target object and other objects, and M is an integer greater than 2;
  • determining a first image and a second image according to the M images, wherein the first image is used to indicate intensity information of the target object and other objects, and the second image is used to indicate polarization information of the target object and other objects;
  • the first image and the second image are fused according to a first fusion method to determine a fused image, wherein the first fusion method is determined according to material information of the target object, or according to an intensity difference between the target object and other objects, or a polarization difference between the target object and other objects.
  • the M images may be collected at the same time or at different times. Different polarization states may indicate different polarization angles during collection.
  • the fused image may be used to perform the target task.
  • In this way, the advantages of polarization images, which are less affected by lighting changes and show obvious contrast between different objects, can be utilized, and the difference between polarization images and traditional intensity images is taken into account to achieve more targeted image processing.
  • Because the first fusion method can be determined based on the material information of the target object, or based on the intensity difference and the polarization degree difference between the target object and other objects, the quality of the fused image can be further improved, polarization image processing can be decoupled from subsequent tasks, and the adaptability of the fused image to traditional task models can be improved so that it can be adapted to a variety of task models.
  • the material information of the target object may include reflectivity, roughness, refractive index and glossiness of the target object.
  • a fusion method of the first image and the second image can be designed more specifically to obtain an image that is more suitable for subsequent tasks and improve the accuracy of the target task results.
  • material information of the target object can be obtained based on the intensity difference between the target object and other objects and the polarization difference between the target object and other objects.
  • the material information of the target object can be obtained in a more targeted manner, so that the obtained material information is more consistent with the actual material condition of the target object, and the process of determining the material information is more real-time.
  • the first fusion method may include one of the following:
  • By using a variety of first fusion methods, different first fusion methods can be flexibly selected to handle different task scenarios in a more targeted manner, so that the fused image can be flexibly adapted to various task models to obtain more accurate task results.
  • when the first preset condition and the second preset condition are met, the first fusion method may be to calculate the difference between the first image and the second image;
  • the first preset condition may include that the reflectivity of the target object is greater than the reflectivity of other objects, and the intensity difference between the target object and the other objects is greater than a first preset threshold;
  • the second preset condition may include that the polarization degree of the target object is less than the polarization degree of other objects, and the polarization degree difference between the target object and the other objects is greater than a second preset threshold.
  • In the present application, by taking the difference between the first image and the second image when the reflectivity of the target object is greater than the reflectivity of other objects and the polarization degree of the target object is less than the polarization degree of other objects, more targeted image fusion can be achieved, the result can be adapted to subsequent tasks, and the fused image can more clearly distinguish the target object, thereby improving the accuracy of the subsequent task execution results.
  • when the third preset condition and the fourth preset condition are met, the first fusion method may be summing the first image and the second image;
  • the third preset condition may include that the intensity of the target object and the intensity of other objects are both less than a third preset threshold;
  • the fourth preset condition may include that the polarization degree of the target object is greater than the polarization degree of other objects, and the difference in polarization degree between the target object and the other objects is greater than a fourth preset threshold.
  • In the present application, by summing the first image and the second image when the reflectivity of the target object and the reflectivity of other objects are both small and the polarization degree of the target object is greater than the polarization degree of other objects, more targeted image fusion can be achieved, the result can be adapted to subsequent tasks, and the fused image can more clearly distinguish the target object, thereby improving the accuracy of the subsequent task execution results.
  • when the fifth preset condition and the sixth preset condition are met, the first fusion method may be to multiply the first image by the second image;
  • the fifth preset condition may include that the intensity difference between the target object and other objects is less than the fifth preset threshold
  • the sixth preset condition may include that the polarization degree of the target object is greater than the polarization degree of other objects, and the polarization degree difference between the target object and other objects is greater than the sixth preset threshold.
  • In the present application, by multiplying the first image and the second image when the difference between the reflectivity of the target object and the reflectivity of other objects is small and the polarization degree of the target object is greater than the polarization degree of other objects, more targeted image fusion can be achieved, the result can be adapted to subsequent tasks, and the fused image can more clearly distinguish the target object, thereby improving the accuracy of the subsequent task execution results.
  • the first fusion method may include one of the following:
  • f can represent the first fusion method
  • I can represent the first image
  • P can represent the second image
  • a1 and a2 can respectively represent the weights corresponding to the first image
  • b1, b2 and c can respectively represent the weights corresponding to the second image.
  • the preference for intensity information and polarization information can be adjusted more flexibly according to task requirements, so as to respond to different task scenarios more specifically and improve the adaptability of the fused image.
  • the first image can be determined based on the average value of the intensities of pixels of M images
  • the second image can be determined by calculating the ratio of the intensity of pixels of the polarized part to the intensity of the overall pixels in the M images.
  • the characteristics of the polarization image can be utilized to integrate the intensity information indicated by the M images to determine the overall intensity information, and to integrate the polarization information indicated by the M images to determine the overall polarization information.
  • an embodiment of the present application provides an image processing device.
  • the device includes:
  • An acquisition module used to acquire M images, where the M images are images collected in different polarization states, the M images include the target object and other objects, and M is an integer greater than 2;
  • a determination module configured to determine a first image and a second image according to the M images, wherein the first image is used to indicate intensity information of the target object and other objects, and the second image is used to indicate polarization information of the target object and other objects;
  • a fusion module is used to fuse the first image and the second image according to a first fusion method to determine a fused image.
  • the first fusion method is determined based on material information of the target object, or based on the intensity difference between the target object and other objects and the polarization difference between the target object and other objects.
  • the M images may be collected at the same time or at different times. Different polarization states may indicate different polarization angles during collection.
  • the fused image may be used to perform the target task.
  • the material information of the target object may include reflectivity, roughness, refractive index and glossiness of the target object.
  • material information of the target object can be obtained based on an intensity difference between the target object and other objects and a polarization difference between the target object and other objects.
  • the first fusion method may include one of the following:
  • when the first preset condition and the second preset condition are met, the first fusion method may be to calculate the difference between the first image and the second image;
  • the first preset condition may include that the reflectivity of the target object is greater than the reflectivity of other objects, and the intensity difference between the target object and the other objects is greater than a first preset threshold;
  • the second preset condition may include that the polarization degree of the target object is less than the polarization degree of other objects, and the polarization degree difference between the target object and the other objects is greater than a second preset threshold.
  • when the third preset condition and the fourth preset condition are met, the first fusion method may be to sum the first image and the second image;
  • the third preset condition may include that the intensity of the target object and the intensity of other objects are both less than a third preset threshold
  • the fourth preset condition may include that the polarization degree of the target object is greater than the polarization degree of other objects, and the difference in polarization degree between the target object and the other objects is greater than the fourth preset threshold
  • when the fifth preset condition and the sixth preset condition are met, the first fusion method may be to multiply the first image by the second image;
  • the fifth preset condition may include that the intensity difference between the target object and other objects is less than the fifth preset threshold
  • the sixth preset condition may include that the polarization degree of the target object is greater than the polarization degree of other objects, and the polarization degree difference between the target object and other objects is greater than the sixth preset threshold.
  • the first fusion method may include one of the following:
  • f can represent the first fusion method
  • I can represent the first image
  • P can represent the second image
  • a1 and a2 can respectively represent the weights corresponding to the first image
  • b1, b2 and c can respectively represent the weights corresponding to the second image.
  • the first image can be determined based on the average value of the intensities of pixels of M images
  • the second image can be determined by calculating the ratio of the intensity of pixels of the polarized part to the intensity of the overall pixels in the M images.
  • an embodiment of the present application provides an image processing device, comprising: a processor; a memory for storing processor executable instructions; wherein the processor is configured to implement the above-mentioned first aspect or one or more of the multiple possible implementation methods of the first aspect when executing the instructions.
  • an embodiment of the present application provides a non-volatile computer-readable storage medium having computer program instructions stored thereon.
  • when the computer program instructions are executed by a processor, the image processing method of the above-mentioned first aspect or of one or more of the possible implementations of the first aspect is implemented.
  • an embodiment of the present application provides a terminal device, which can execute the image processing method of the above-mentioned first aspect or one or several possible implementation methods of the first aspect.
  • an embodiment of the present application provides a computer program product, including a computer-readable code, or a non-volatile computer-readable storage medium carrying a computer-readable code.
  • when the computer-readable code runs in a processor of an electronic device, the processor in the electronic device executes the image processing method of the above-mentioned first aspect or of one or several possible implementations of the first aspect.
  • FIG. 1 is a schematic diagram showing an application scenario according to an embodiment of the present application.
  • FIG. 2 shows a flow chart of an image processing method according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram showing images collected under different polarization states according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram showing a method of acquiring material information according to an embodiment of the present application.
  • FIG. 5(a), FIG. 5(b) and FIG. 5(c) are schematic diagrams showing image fusion according to an embodiment of the present application.
  • FIG. 6(a), FIG. 6(b) and FIG. 6(c) are schematic diagrams showing image fusion according to an embodiment of the present application.
  • FIG. 7(a), FIG. 7(b) and FIG. 7(c) are schematic diagrams showing a method of image fusion according to an embodiment of the present application.
  • FIG. 8(a) and FIG. 8(b) are schematic diagrams showing the lane line detection effect according to an embodiment of the present application.
  • FIG. 9 shows a structural diagram of an image processing device according to an embodiment of the present application.
  • FIG. 10 shows a structural diagram of an electronic device 1000 according to an embodiment of the present application.
  • Optical perception technology is an indispensable technology in autonomous driving and assisted driving systems.
  • the main on-board optical perception equipment is the visible light camera.
  • the traditional optical imaging technology is easily affected by factors such as illumination changes, shadow occlusion, and different objects sharing the same spectrum, which can lead to a decrease in image quality, availability, and the accuracy of model prediction results.
  • the polarization image obtained by polarization imaging technology has the advantages of being less affected by illumination changes and having obvious contrast between different targets.
  • the present application provides an image processing method.
  • the image processing method of the embodiment of the present application determines a first image and a second image for indicating intensity information and polarization information respectively by acquiring images collected under different polarization states.
  • In this way, the advantages of polarization images, which are less affected by lighting changes and show obvious contrast between different objects, are utilized, and the difference between polarization images and traditional intensity images is taken into account.
  • the first image and the second image are fused according to a first fusion method.
  • Because the first fusion method can be determined based on the material information of the target object, or based on the intensity difference and the polarization degree difference between the target object and other objects, the quality of the fused image can be further improved, polarization image processing can be decoupled from subsequent tasks, and the adaptability of the fused image to traditional task models can be improved so that it can be adapted to a variety of task models.
  • FIG1 shows a schematic diagram of an application scenario according to an embodiment of the present application.
  • the image processing system of the embodiment of the present application can be deployed on a server or a terminal device for processing images in a vehicle-mounted scenario.
  • the image processing system of the embodiment of the present application can be applied to obtain images in multiple polarization states collected by an optical sensor (for example, a vehicle-mounted sensor), and the images are processed and fused to obtain a fused image.
  • the fused image can be used to perform corresponding tasks.
  • the fused image can be input into a corresponding software module (such as a neural network model) to perform subsequent tasks (such as traffic light head detection, lane line detection, etc.) to obtain the target task result.
  • the image processing system can also be connected to hardware modules such as new optical imaging sensors and ISP (image signal processor) to perform subsequent tasks.
  • the optical sensor can be installed on a vehicle (for example, a collection vehicle) and may be one or more cameras, such as color, grayscale, infrared, or multi-spectral cameras.
  • Polarization imaging technology can be used through one or more cameras to obtain images in multiple polarization states.
  • the server involved in this application can be located in the cloud or locally, and can be a physical device or a virtual device with a wireless communication function, such as a virtual machine or a container; the wireless communication function can be provided in a chip (system) or in other parts or components of the server. Having a wireless connection function means that the server can be connected to other servers or terminal devices through wireless connection methods such as Wi-Fi and Bluetooth.
  • the server of this application can also have the function of communicating through a wired connection.
  • the terminal device involved in this application may refer to a device with a wireless connection function.
  • the function of wireless connection refers to the ability to connect to other terminal devices or servers through wireless connection methods such as Wi-Fi and Bluetooth.
  • the terminal device of this application may also have the function of communicating through a wired connection.
  • the terminal device of this application may be a touch screen, a non-touch screen, or a screenless device.
  • the touch screen can control the terminal device by clicking and sliding on the display screen with fingers, stylus, etc.
  • the non-touch screen device can be connected to an input device such as a mouse, keyboard, touch panel, etc., and the terminal device can be controlled by the input device.
  • the device without a screen may be a Bluetooth speaker without a screen.
  • the terminal device of this application may be a smart phone, a netbook, a tablet computer, a laptop, a wearable electronic device (such as a smart bracelet, a smart watch, etc.), a TV, a virtual reality device, an audio system, an electronic ink device, and the like.
  • the terminal device of the embodiment of this application may also be a vehicle-mounted terminal device.
  • the processor can be built into the vehicle computer on the vehicle as a vehicle-mounted computing unit, so that the image processing process of the embodiment of this application can be realized in real time on the vehicle side.
  • the image processing system of the embodiment of the present application can also be applied to other scenarios besides the vehicle-mounted scenario, as long as it involves processing of polarized images, and the present application does not impose any restrictions on this.
  • FIG2 is a flow chart of an image processing method according to an embodiment of the present application.
  • the method can be used in the above-mentioned image processing system. As shown in FIG2 , the method includes:
  • Step S201: obtaining M images.
  • the M images may be images captured in different polarization states, for example, images captured by the above optical sensor (which may be one or more).
  • the optical sensor may be set on the vehicle.
  • the M images may include the target object and other objects, and M is an integer greater than 2.
  • Images in different polarization states can be obtained through a variety of polarization imaging methods.
  • the M images can be collected at the same time or at different times; they can be collected by one optical sensor or by multiple optical sensors set at different positions.
  • images can be collected by multi-camera synchronous imaging (i.e., polarizers in different directions are set on M cameras, and the M cameras are set at different positions to collect images simultaneously), single-camera imaging (i.e., a rotatable polarizer is set on one camera, and the orientation of the polarizer is adjusted by rotation to collect images in different polarization states), pixel-level polarization coating camera imaging (i.e., different polarization states are set by the polarization coating on the camera to collect images), and other imaging methods.
  • Different polarization states may refer to different polarization angles (i.e., polarization directions).
  • Referring to FIG. 3, a schematic diagram of images captured under different polarization states according to an embodiment of the present application is shown.
  • For example, when M is 4, the four images may correspond to four different polarization angles of 0°, 45°, 90° and 135°, corresponding respectively to the four images I0 (upper left part), I45 (upper right part), I90 (lower left part) and I135 (lower right part) in FIG. 3.
  • the target object can be determined based on the task to be performed subsequently, and can include one or more object elements in the image, and other objects can include all or part of the object elements in the image other than the target object.
  • In the lane line detection task, the target object can include the lane line (such as the road center line, the lane dividing line, the lane edge line, etc.), and correspondingly, other objects can include objects other than the target object, such as the road environment background.
  • In the task of detecting traffic light heads in a road scene, the target object can include the edge line of the light head, and other objects can include objects other than the light head, such as the environment background.
  • In the task of detecting manhole covers on the road surface, the target object can include the manhole cover, and other objects can include other objects on the road surface except the manhole cover.
  • the characteristics of the polarization image can be used below to respectively determine the intensity and polarization information indicated in the M images.
  • Step S202: determining a first image and a second image according to the M images.
  • a first image and a second image can be determined according to the M images respectively.
  • the first image may be used to indicate intensity information of the target object and other objects.
  • the intensity information may be the intensity of light, for example, the grayscale value of an image pixel in a grayscale image, or the intensity of an image pixel in a color image (which may be reflected as the pixel value of each pixel of the first image).
  • the first image may be determined according to an average value of the intensity of pixels of the M images.
  • the intensity information indicated by the M images can be integrated to determine the overall intensity information.
  • I may correspond to the first image and represent the pixel intensity of the first image.
  • I0, I45, I90 and I135 may represent the pixel intensities of the images collected under the different polarization states. Since light is attenuated when passing through the polarizer, on average by roughly half, the average is computed with a denominator of 2 rather than 4, so that formula (1) can be written as I = (I0 + I45 + I90 + I135) / 2, giving the pixel value of the first image (see the sketch below).
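  • As a minimal sketch of the computation described above (not the reference implementation of this application), the first image can be derived from the four polarization channels as follows, assuming equally sized NumPy arrays named i0, i45, i90 and i135:

```python
import numpy as np

def intensity_image(i0, i45, i90, i135):
    """Sketch of formula (1): average the four polarization channels and
    compensate for the roughly 50% attenuation introduced by the polarizer."""
    channels = [np.asarray(c, dtype=np.float64) for c in (i0, i45, i90, i135)]
    # A plain average would divide by 4; dividing by 2 instead restores the
    # intensity lost at the polarizer, as noted in the description above.
    return sum(channels) / 2.0
```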
  • the second image may be used to indicate polarization information of the target object and other objects.
  • the polarization information may be, for example, a degree of polarization (which may be reflected by a pixel value of each pixel of the second image).
  • the second image can be determined by calculating the ratio of the intensity of the pixels of the polarized part and the intensity of the overall pixels in the M images.
  • the overall polarization information can be determined by integrating the polarization information indicated by the M images.
  • P may correspond to the second image and represent the polarization degree of the second image.
  • I0, I45, I90 and I135 may respectively represent the pixel intensity of the image collected under different polarization states.
  • I may represent the pixel intensity of the first image and may be obtained by formula (1).
  • the pixel value of each pixel point of the second image may be determined by mapping the ratio corresponding to P (i.e., the polarization degree) to a range consistent with the pixel intensity indicated by I (e.g., 0-255).
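  • The exact formula for P is not reproduced in the text, so the following sketch assumes the conventional Stokes-parameter form of the degree of linear polarization (polarized intensity divided by overall intensity), followed by the 0-255 mapping mentioned above; the function and variable names are illustrative only:

```python
import numpy as np

def polarization_image(i0, i45, i90, i135, eps=1e-6):
    """Sketch of the second image: ratio of the polarized pixel intensity to the
    overall pixel intensity, mapped to the 0-255 range of the first image.
    The Stokes-parameter form below is an assumption, not the patent's formula."""
    i0, i45, i90, i135 = (np.asarray(c, dtype=np.float64) for c in (i0, i45, i90, i135))
    s0 = (i0 + i45 + i90 + i135) / 2.0   # overall intensity (matches the first image)
    s1 = i0 - i90                        # linear polarization components
    s2 = i45 - i135
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)   # degree of linear polarization
    return np.clip(dolp, 0.0, 1.0) * 255.0           # map the ratio to pixel values
```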
  • the first image and the second image can be fused in combination with the material characteristics of the target object in the scene, or the difference information between the target object and other objects to determine the fused image and realize the processing of the polarization image, as described below.
  • Step S203: fusing the first image and the second image according to the first fusion method to determine a fused image.
  • the first fusion method may be determined based on material information of the target object, or based on an intensity difference between the target object and other objects, or a polarization degree difference between the target object and other objects.
  • In this way, the advantages of polarization images, which are less affected by lighting changes and show obvious contrast between different objects, can be utilized, and the difference between polarization images and traditional intensity images is taken into account to achieve more targeted image processing.
  • Because the first fusion method can be determined based on the material information of the target object, or based on the intensity difference and the polarization degree difference between the target object and other objects, the quality of the fused image can be further improved, polarization image processing can be decoupled from subsequent tasks, and the adaptability of the fused image to traditional task models can be improved so that it can be adapted to a variety of task models.
  • the material information of the target object may include reflectivity, roughness, refractive index and glossiness of the target object, and may include one or more of them.
  • Reflectivity can be the ratio of the intensity of reflected light to the intensity of incident light.
  • Surfaces of different materials can have different reflectivities.
  • Roughness can describe how rough the surface of an object is; the smaller the surface roughness, the smoother the surface of the corresponding material.
  • the refractive index can represent the ratio of the speed of light in a vacuum to its phase velocity after entering the medium corresponding to the object.
  • the refractive index of the target object is related to the degree of polarization.
  • Glossiness can represent the mirror reflection ability of the surface of an object to light. Mirror reflection can indicate a reflection characteristic with direction selection. The lower the mirror reflectivity of the material surface, the lower the glossiness of the corresponding material surface.
  • a fusion method of the first image and the second image can be designed more specifically to obtain an image that is more suitable for subsequent tasks and improve the accuracy of the target task results.
  • the material information of the target object can be obtained a priori.
  • the reflectivity, roughness, refractive index, glossiness and other information of the surface of the target object material can be collected in advance by sensors such as a gloss meter as a priori input in the fusion process.
  • one or more of reflectivity, roughness, refractive index, glossiness, etc. can be collected according to task requirements.
  • the priori input can be input by the user, or it can be collected and input by a vehicle-mounted sensor (such as a vehicle-mounted gloss meter, etc.).
  • the material information of the target object can also be obtained according to the intensity difference between the target object and other objects and the polarization degree difference between the target object and other objects.
  • the intensity of the target object can be determined based on the pixel values corresponding to the target object area in the first image (for example, the average value of the pixel values in the area); the intensity of other objects can be determined based on the pixel values corresponding to other object areas other than the target object in the first image (for example, the average value of the pixel values in the area).
  • the polarization degree of the target object can be determined based on the pixel values corresponding to the target object area in the second image (for example, the average value of the pixel values in the area); the polarization degree of other objects can be determined based on the pixel values corresponding to other object areas other than the target object in the second image (for example, the average value of the pixel values in the area).
  • the intensity difference between the target object and other objects can be determined based on the intensity information indicated by the first image; the polarization difference between the target object and other objects (e.g., the difference between the two) can also be determined based on the polarization information indicated by the second image.
  • the reflectivity and roughness of the target object can be relatively determined by the intensity difference; the refractive index and glossiness of the target object can be relatively determined by the polarization difference.
  • the material information of the target object can be obtained in a more targeted manner, so that the obtained material information is more consistent with the actual material condition of the target object, and the process of determining the material information is more real-time.
  • Taking FIG. 4 as an example, the target object may be the object indicated in the figure, and the road surface background in the image may represent other objects corresponding to the target object.
  • the user may input the material information of the target object, or the object indicated in the figure may be measured a priori by an on-board sensor to acquire the material information.
  • the intensity difference and polarization difference between the object indicated in the figure and the road background can be determined in the above manner to determine the material information of the target object.
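  • As an illustration of the region-based comparison described above, the following sketch averages the pixel values of the first and second images inside and outside a target-object mask to estimate the intensity difference and the polarization degree difference; the mask and function names are hypothetical:

```python
import numpy as np

def region_differences(first_image, second_image, target_mask):
    """Estimate the intensity difference and polarization degree difference
    between the target object and other objects by averaging pixel values
    inside and outside the (hypothetical) target-object mask."""
    first_image = np.asarray(first_image, dtype=np.float64)
    second_image = np.asarray(second_image, dtype=np.float64)
    target_mask = np.asarray(target_mask, dtype=bool)
    other_mask = ~target_mask

    intensity_diff = first_image[target_mask].mean() - first_image[other_mask].mean()
    polarization_diff = second_image[target_mask].mean() - second_image[other_mask].mean()
    # As described above, a larger intensity difference points to a relatively higher
    # reflectivity (and lower roughness) of the target, while a larger polarization
    # difference points to a relatively higher glossiness / different refractive index.
    return intensity_diff, polarization_diff
```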
  • A variety of first fusion methods can be designed to deal with different task scenarios in a targeted manner. The following is a detailed introduction to how the first fusion method is determined.
  • the first fusion method may include one of the following: calculating the difference between the first image and the second image; summing the first image and the second image; or multiplying the first image by the second image.
  • each of these operations may be performed on the pixel value of the first image and the pixel value of the second image.
  • By using a variety of first fusion methods, different first fusion methods can be flexibly selected to handle different task scenarios in a more targeted manner, so that the fused image can be flexibly adapted to various task models to obtain more accurate task results.
  • different weights can be set for the first image and the second image in the process of finding the difference, sum, and product, so as to flexibly adjust the preference for intensity information and polarization information according to task requirements.
  • f may represent the first fusion method.
  • I may represent the first image (e.g., the pixel value of each pixel of the first image)
  • P may represent the second image (e.g., the pixel value of each pixel of the second image)
  • I and P may be obtained according to the above process.
  • a1 may represent the weight corresponding to the first image
  • b1 may represent the weight corresponding to the second image. The values of a1 and b1 may be preset as needed.
  • a calculation method for summing the first image and the second image can be seen in formula (4):
  • f may represent the first fusion method.
  • I may represent the first image (e.g., the pixel value of each pixel of the first image)
  • P may represent the second image (e.g., the pixel value of each pixel of the second image)
  • I and P may be obtained according to the above process.
  • a2 may represent the weight corresponding to the first image
  • b2 may represent the weight corresponding to the second image. The values of a2 and b2 may be preset as needed.
  • f may represent the first fusion method.
  • I may represent the first image (for example, the pixel value of each pixel of the first image)
  • P may represent the second image (for example, the pixel value of each pixel of the second image)
  • I and P may be obtained according to the above process.
  • c may represent the weight corresponding to the second image (it may also represent the weight corresponding to the first image, or the product of the weight corresponding to the first image and the weight corresponding to the second image). The value of c may be preset as needed.
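  • Since formulas (3) to (5) themselves are not reproduced in the text, the following sketch assumes the natural weighted forms suggested by the definitions of a1, b1, a2, b2 and c above (weighted difference, sum and product of I and P); the exact placement of the weights is an assumption:

```python
import numpy as np

def fuse(first_image, second_image, mode, a1=1.0, b1=1.0, a2=1.0, b2=1.0, c=1.0):
    """Weighted difference / sum / product fusion of the intensity image I and the
    polarization image P, following the descriptions of formulas (3) to (5)."""
    I = np.asarray(first_image, dtype=np.float64)
    P = np.asarray(second_image, dtype=np.float64)
    if mode == "difference":            # formula (3): weights a1 (for I) and b1 (for P)
        fused = a1 * I - b1 * P
    elif mode == "sum":                 # formula (4): weights a2 (for I) and b2 (for P)
        fused = a2 * I + b2 * P
    elif mode == "product":             # formula (5): weight c on the product of I and P
        fused = c * I * P
    else:
        raise ValueError(f"unknown fusion mode: {mode}")
    return np.clip(fused, 0, 255).astype(np.uint8)   # keep the result in pixel range
```

  • For example, a call such as fuse(I, P, "difference", a1=1.0, b1=0.8) would correspond to a scenario where the intensity image is preferred over the polarization image; the weights are illustrative values only.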
  • the preference for intensity information and polarization information can be adjusted more flexibly according to task requirements, so as to respond to different task scenarios more specifically and improve the adaptability of the fused image.
  • Referring to FIG. 5(a), FIG. 5(b) and FIG. 5(c), a schematic diagram of image fusion according to an embodiment of the present application is shown.
  • the figure shows a method for determining a first fusion method in a lane line detection task scenario, wherein Figure 5(a) may represent a first image, Figure 5(b) may represent a second image, and Figure 5(c) may represent a fused image.
  • the target object may be the lane line, and other objects may be road areas outside the lane line.
  • the reflectivity corresponding to the lane line is greater than the reflectivity corresponding to other road areas (reflected in that the pixel value of the lane line area in the first image is greater than the pixel value of other road areas);
  • the polarization degree corresponding to the lane line is less than the polarization degree corresponding to other road areas (reflected in that the pixel value of the lane line area in the second image is less than the pixel value of other road areas).
  • the first fusion method can be to calculate the difference between the first image and the second image.
  • a method of calculating the difference can also be referred to the above formula (3).
  • the first preset condition may include that the reflectivity of the target object is greater than the reflectivity of other objects, and the intensity difference between the target object and the other objects is greater than a first preset threshold (corresponding to the feature that the reflectivity corresponding to the lane line in FIG5 (a) is greater than the reflectivity corresponding to other road areas).
  • the second preset condition may include that the polarization degree of the target object is less than the polarization degree of other objects, and the polarization degree difference between the target object and the other objects is greater than a second preset threshold (corresponding to the feature that the polarization degree corresponding to the lane line in FIG5 (b) is less than the polarization degree corresponding to other road areas).
  • Since the intensity information can reflect the reflectivity (a larger intensity value can represent a larger reflectivity), in one way, whether the first preset condition is met can be determined based on the intensity difference between the target object and other objects determined above, and whether the second preset condition is met can be determined based on the polarization degree difference determined above.
  • the first preset condition may also be that the reflectivity of the target object is greater than a predetermined threshold and/or the roughness of the target object is less than a predetermined threshold; the second preset condition may also be that the glossiness of the target object is less than a predetermined threshold.
  • the present application does not limit the first preset condition and the second preset condition, as long as the target object can be more clearly distinguished in the differenced image.
  • the fused image determined by the above fusion method can more clearly distinguish object elements such as lane lines in the image.
  • In the present application, by taking the difference between the first image and the second image when the reflectivity of the target object is greater than the reflectivity of other objects and the polarization degree of the target object is less than the polarization degree of other objects, more targeted image fusion can be achieved, the result can be adapted to subsequent tasks, and the fused image can more clearly distinguish the target object, thereby improving the accuracy of the subsequent task execution results.
  • Referring to FIG. 6(a), FIG. 6(b) and FIG. 6(c), a schematic diagram of image fusion according to an embodiment of the present application is shown.
  • the figure shows a method for determining the first fusion method in a traffic light head detection task scenario, wherein Figure 6(a) may represent the first image, Figure 6(b) may represent the second image, and Figure 6(c) may represent the fused image.
  • the target object may be the edge line of the traffic light head, and the other object may be the background area outside the traffic light head.
  • the reflectivity corresponding to the edge of the lamp head and the reflectivity corresponding to the background area are both relatively low (as reflected in the smaller pixel values of the lamp head edge and the background area in the first image);
  • the polarization degree corresponding to the edge of the head is greater than the polarization degree corresponding to the background area (as reflected in the larger pixel values of the edge of the head in the second image).
  • the first fusion method can be to sum the first image and the second image.
  • a way of summing can also be referred to the above formula (4).
  • the third preset condition may include that the intensity of the target object and the intensity of other objects are both less than a third preset threshold value (corresponding to the feature that the reflectivity corresponding to the edge of the lamp head and the reflectivity corresponding to the background area in FIG6 (a) are both lower).
  • the fourth preset condition may include that the polarization degree of the target object is greater than the polarization degree of other objects, and the difference in polarization degree between the target object and the other objects is greater than the fourth preset threshold value (corresponding to the feature that the polarization degree corresponding to the edge of the lamp head is greater than the polarization degree corresponding to the background area in FIG6 (b)).
  • Since intensity information can reflect reflectivity (a larger intensity value indicates a larger reflectivity), in one way, whether the third preset condition is met can be determined based on the intensities of the target object and other objects determined above, and whether the fourth preset condition is met can be determined based on the polarization degree difference determined above.
  • the third preset condition may also be that the reflectivity of the target object is less than a predetermined threshold and/or the roughness of the target object is greater than a predetermined threshold; the fourth preset condition may also be that the glossiness of the target object is greater than a predetermined threshold.
  • the present application does not limit the third preset condition and the fourth preset condition, as long as the target object can be more clearly distinguished in the summed image.
  • the fused image determined by the above fusion method can more clearly distinguish object elements such as the traffic light head in the image.
  • In the present application, by summing the first image and the second image when the reflectivity of the target object and the reflectivity of other objects are both small and the polarization degree of the target object is greater than the polarization degree of other objects, more targeted image fusion can be achieved, the result can be adapted to subsequent tasks, and the fused image can more clearly distinguish the target object, thereby improving the accuracy of the subsequent task execution results.
  • Referring to FIG. 7(a), FIG. 7(b) and FIG. 7(c), a schematic diagram of image fusion according to an embodiment of the present application is shown.
  • the figure shows a method for determining the first fusion method in a road manhole cover detection task scenario, wherein Figure 7(a) may represent the first image, Figure 7(b) may represent the second image, and Figure 7(c) may represent the fused image.
  • the target object can be a road manhole cover, and other objects can be other road areas other than the manhole cover.
  • the reflectivity corresponding to the manhole cover is similar to the reflectivity corresponding to other road areas (reflected in that the pixel value of the manhole cover in the first image is similar to the pixel value of other road areas);
  • the polarization degree corresponding to the manhole cover is greater than the polarization degree corresponding to other road areas (reflected in that the pixel value of the manhole cover in the second image is greater than the pixel value of other road areas).
  • the first fusion method is to multiply the first image and the second image.
  • a method for calculating the product can also be found in the above formula (5).
  • the fifth preset condition may include that the intensity difference between the target object and the other objects is less than the fifth preset threshold (corresponding to the feature that the reflectivity corresponding to the manhole cover in FIG. 7 (a) is similar to the reflectivity corresponding to the other road surface areas).
  • the sixth preset condition may include that the polarization degree of the target object is greater than the polarization degree of other objects, and the polarization degree difference between the target object and the other objects is greater than the sixth preset threshold (corresponding to the feature that the polarization degree corresponding to the manhole cover in FIG. 7 (b) is greater than the polarization degree corresponding to the other road surface areas).
  • the intensity information can reflect the reflectivity (the larger the intensity value, the greater the reflectivity), in one way, it can be determined whether the fifth preset condition is met based on the intensity difference between the target object and other objects determined above. It can also be determined whether the sixth preset condition is met based on the polarization degree difference between the target object and other objects determined above.
  • the fifth preset condition may also be that the reflectivity of the target object is less than a predetermined threshold and/or the roughness of the target object is greater than a predetermined threshold; the sixth preset condition may also be that the glossiness of the target object is greater than a predetermined threshold.
  • the present application does not limit the fifth preset condition and the sixth preset condition, as long as the target object can be more clearly distinguished in the image after the product is calculated.
  • the fused image determined by the above fusion method can more clearly distinguish the object elements such as the manhole cover in the image.
  • In the present application, by multiplying the first image and the second image when the difference between the reflectivity of the target object and the reflectivity of other objects is small and the polarization degree of the target object is greater than the polarization degree of other objects, more targeted image fusion can be achieved, the result can be adapted to subsequent tasks, and the fused image can more clearly distinguish the target object, thereby improving the accuracy of the subsequent task execution results.
  • the above three scenarios are only three exemplary task scenarios, and in fact, more task scenarios can be included than the above; and the above only shows some exemplary determination methods of the first fusion method, and in fact, more determination methods of the first fusion method can be included than the above.
  • the first fusion method can be flexibly determined according to different task scenarios, and this application does not limit this.
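  • To tie the preset conditions together, the following sketch shows one possible way to select among the three fusion modes from the region statistics computed earlier; the threshold keys and the fallback choice are illustrative assumptions, since the application leaves the concrete conditions open:

```python
def select_fusion_mode(target_intensity, other_intensity,
                       target_polarization, other_polarization, thresholds):
    """Choose the fusion mode from the preset conditions described above.
    'thresholds' maps hypothetical keys t1..t6 to the first..sixth preset thresholds."""
    intensity_diff = target_intensity - other_intensity
    polarization_diff = target_polarization - other_polarization

    # First and second preset conditions: bright, weakly polarized target -> difference.
    if intensity_diff > thresholds["t1"] and -polarization_diff > thresholds["t2"]:
        return "difference"
    # Third and fourth preset conditions: dark scene, strongly polarized target -> sum.
    if (target_intensity < thresholds["t3"] and other_intensity < thresholds["t3"]
            and polarization_diff > thresholds["t4"]):
        return "sum"
    # Fifth and sixth preset conditions: similar intensity, strongly polarized target -> product.
    if abs(intensity_diff) < thresholds["t5"] and polarization_diff > thresholds["t6"]:
        return "product"
    return "sum"  # fallback for illustration only; the application does not specify one
```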
  • the fused image can be used to perform corresponding image tasks, which may be target detection tasks, semantic segmentation tasks, classification tasks, etc. After the fused image is determined through the embodiment of the present application, it can be input into the corresponding task model to obtain a more accurate task execution result.
  • FIG. 8(a) and FIG. 8(b) are schematic diagrams showing the lane line detection effect according to an embodiment of the present application.
  • FIG. 8(a) may correspond to the visualized detection result obtained by inputting the image into the lane line detection model without processing by the method of the embodiment of the present application.
  • FIG. 8(b) may correspond to the visualized result obtained by inputting the image into the lane line detection model after processing by the method of the embodiment of the present application.
  • the straight line shown in the figure may represent the detected lane line.
  • the image is processed by the method of the embodiment of the present application and then input into the lane line detection model for detection, which can effectively prevent the missed detection and false detection of the lane line, making the lane line detection result more accurate.
  • FIG9 shows a structural diagram of an image processing device according to an embodiment of the present application.
  • the device can be used in the above-mentioned image processing system. As shown in FIG9 , the device includes:
  • An acquisition module 901 is used to acquire M images, where the M images are images collected in different polarization states, the M images include a target object and other objects, and M is an integer greater than 2;
  • a determination module 902 configured to determine a first image and a second image according to the M images, wherein the first image is used to indicate intensity information of the target object and other objects, and the second image is used to indicate polarization information of the target object and other objects;
  • the fusion module 903 is used to fuse the first image and the second image according to a first fusion method to determine a fused image.
  • the first fusion method is determined based on material information of the target object, or based on the intensity difference between the target object and other objects and the polarization difference between the target object and other objects.
  • the M images may be collected at the same time or at different times. Different polarization states may indicate different polarization angles during collection.
  • the fused image may be used to perform the target task.
  • In this way, the advantages of polarization images, which are less affected by lighting changes and show obvious contrast between different objects, can be utilized, and the difference between polarization images and traditional intensity images is taken into account to achieve more targeted image processing.
  • Because the first fusion method can be determined based on the material information of the target object, or based on the intensity difference and the polarization degree difference between the target object and other objects, the quality of the fused image can be further improved, polarization image processing can be decoupled from subsequent tasks, and the adaptability of the fused image to traditional task models can be improved so that it can be adapted to a variety of task models.
  • the material information of the target object may include reflectivity, roughness, refractive index and glossiness of the target object.
  • a fusion method of the first image and the second image can be designed more specifically to obtain an image that is more suitable for subsequent tasks and improve the accuracy of the target task results.
  • the material information of the target object may be obtained according to an intensity difference between the target object and other objects and a polarization degree difference between the target object and other objects.
  • the material information of the target object can be obtained in a more targeted manner, so that the obtained material information is more consistent with the actual material condition of the target object, and the process of determining the material information is more real-time.
  • the first image may be determined based on an average value of pixel intensities of the M images
  • the second image may be determined by calculating a ratio between the intensity of the pixels of the polarized part and the intensity of the overall pixels in the M images.
  • the characteristics of the polarization image can be utilized to integrate the intensity information indicated by the M images to determine the overall intensity information, and to integrate the polarization information indicated by the M images to determine the overall polarization information.
  • the first fusion method may include one of the following:
  • By using a variety of first fusion methods, different first fusion methods can be flexibly selected to handle different task scenarios in a more targeted manner, so that the fused image can be flexibly adapted to various task models to obtain more accurate task results.
  • the first fusion method may include one of the following:
  • f can represent the first fusion method
  • I can represent the first image
  • P can represent the second image
  • a1 and a2 can respectively represent the weights corresponding to the first image
  • b1, b2 and c can respectively represent the weights corresponding to the second image.
  • the preference for intensity information and polarization information can be adjusted more flexibly according to task requirements, so as to respond to different task scenarios more specifically and improve the adaptability of the fused image.
  • when the first preset condition and the second preset condition are met, the first fusion method may be to calculate the difference between the first image and the second image;
  • the first preset condition may include that the reflectivity of the target object is greater than the reflectivity of other objects, and the intensity difference between the target object and the other objects is greater than a first preset threshold;
  • the second preset condition may include that the polarization degree of the target object is less than the polarization degree of other objects, and the polarization degree difference between the target object and the other objects is greater than a second preset threshold.
  • In the present application, by taking the difference between the first image and the second image when the reflectivity of the target object is greater than the reflectivity of other objects and the polarization degree of the target object is less than the polarization degree of other objects, more targeted image fusion can be achieved, the result can be adapted to subsequent tasks, and the fused image can more clearly distinguish the target object, thereby improving the accuracy of the subsequent task execution results.
  • when the third preset condition and the fourth preset condition are met, the first fusion method may be to sum the first image and the second image;
  • the third preset condition may include that the intensity of the target object and the intensity of other objects are both less than a third preset threshold
  • the fourth preset condition may include that the polarization degree of the target object is greater than the polarization degree of other objects, and the difference in polarization degree between the target object and the other objects is greater than the fourth preset threshold
  • In the present application, by summing the first image and the second image when the reflectivity of the target object and the reflectivity of other objects are both small and the polarization degree of the target object is greater than the polarization degree of other objects, more targeted image fusion can be achieved, the result can be adapted to subsequent tasks, and the fused image can more clearly distinguish the target object, thereby improving the accuracy of the subsequent task execution results.
  • when the fifth preset condition and the sixth preset condition are met, the first fusion method may be to multiply the first image and the second image;
  • the fifth preset condition may include that the intensity difference between the target object and other objects is less than the fifth preset threshold
  • the sixth preset condition may include that the polarization degree of the target object is greater than the polarization degree of other objects, and the polarization degree difference between the target object and other objects is greater than the sixth preset threshold.
  • the present application by multiplying the first image and the second image when the difference between the reflectivity of the target object and the reflectivity of other objects is small and the polarization degree of the target object is greater than the polarization degree of other objects, it is possible to achieve more targeted image fusion, adapt to subsequent tasks, and enable the fused image to more clearly distinguish the target object, thereby improving the accuracy of the subsequent task execution results.
  • An embodiment of the present application provides an image processing device, comprising: a processor and a memory for storing processor executable instructions; wherein the processor is configured to implement the above-mentioned image processing method when executing the instructions.
  • An embodiment of the present application provides a terminal device, which can execute the above-mentioned image processing method.
  • An embodiment of the present application provides a non-volatile computer-readable storage medium on which computer program instructions are stored; when the computer program instructions are executed by a processor, the above-mentioned image processing method is implemented.
  • An embodiment of the present application provides a computer program product, including a computer-readable code, or a non-volatile computer-readable storage medium carrying the computer-readable code.
  • when the computer-readable code runs in a processor of an electronic device, the processor in the electronic device executes the above-mentioned image processing method.
  • FIG. 10 shows a block diagram of an electronic device 1000 according to an embodiment of the present application.
  • the electronic device 1000 may be the above-mentioned image processing system.
  • the electronic device 1000 includes at least one processor 1801, at least one memory 1802, and at least one communication interface 1803.
  • the electronic device may also include common components such as an antenna, which will not be described in detail here.
  • Processor 1801 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the above program.
  • Processor 1801 may include one or more processing units; for example, processor 1801 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-network processing unit (neural-network processing unit, NPU), etc.
  • different processing units may be independent devices or integrated in one or more processors.
  • Communication interface 1803 is used to communicate with other electronic devices or communication networks, such as Ethernet, radio access network (RAN), core network, wireless local area network (WLAN), etc.
  • the memory 1802 may be a read-only memory (ROM) or other types of static storage devices that can store static information and instructions, a random access memory (RAM) or other types of dynamic storage devices that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store the desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto.
  • the memory may exist independently and be connected to the processor through a bus. The memory may also be integrated with the processor.
  • the memory 1802 is used to store application code for executing the above solution, and the execution is controlled by the processor 1801.
  • the processor 1801 is used to execute the application code stored in the memory 1802.
  • a computer readable storage medium may be a tangible device that can hold and store instructions used by an instruction execution device.
  • a computer readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • Computer readable storage media include: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanical encoding device, such as a punch card or a raised structure in a groove on which instructions are stored, and any suitable combination of the foregoing.
  • the computer-readable program instructions or codes described herein can be downloaded from a computer-readable storage medium to each computing/processing device, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network can include copper transmission cables, optical fiber transmissions, wireless transmissions, routers, firewalls, switches, gateway computers, and/or edge servers.
  • the network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device.
  • the computer program instructions for performing the operations of the present application may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk, C++, etc., and conventional procedural programming languages such as "C" language or similar programming languages.
  • the computer-readable program instructions may be executed entirely on the user's computer, partially on the user's computer, as a separate software package, partially on the user's computer and partially on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer via any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., via the Internet using an Internet service provider).
  • In some embodiments, an electronic circuit, such as a programmable logic circuit, a field programmable gate array (FPGA) or a programmable logic array (PLA), can be personalized by using state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions, thereby implementing various aspects of the present application.
  • These computer-readable program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer or other programmable data processing apparatus, thereby producing a machine, so that when these instructions are executed by the processor of the computer or other programmable data processing apparatus, an apparatus for implementing the functions/actions specified in one or more blocks of the flowchart and/or block diagram is produced.
  • These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions cause the computer, programmable data processing apparatus and/or other devices to work in a specific way, so that the computer-readable medium storing the instructions includes an article of manufacture, which includes instructions for implementing various aspects of the functions/actions specified in one or more blocks of the flowchart and/or block diagram.
  • Computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device so that a series of operating steps are performed on the computer, other programmable data processing apparatus, or other device to produce a computer-implemented process, thereby causing the instructions executed on the computer, other programmable data processing apparatus, or other device to implement the functions/actions specified in one or more blocks of the flowchart and/or block diagram.
  • each block in the flowchart or block diagram can represent a module, a program segment, or a part of an instruction, which contains one or more executable instructions for implementing the specified logical function.
  • in some alternative implementations, the functions marked in the blocks can also occur in an order different from that marked in the accompanying drawings; for example, two consecutive blocks can actually be executed substantially in parallel, and they can also sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagram and/or flowchart, and the combination of blocks in the block diagram and/or flowchart, can be implemented by hardware (such as circuits or ASICs (Application Specific Integrated Circuits)) that performs the corresponding function or action, or can be implemented by a combination of hardware and software, such as firmware.

Abstract

The present application relates to an image processing method, apparatus and storage medium. The method comprises: acquiring M images, where the M images are images collected in different polarization states, the M images include a target object and other objects, and M is an integer greater than 2; determining a first image and a second image according to the M images, where the first image is used to indicate intensity information of the target object and the other objects, and the second image is used to indicate polarization information of the target object and the other objects; and fusing the first image and the second image according to a first fusion method to determine a fused image, where the first fusion method is determined according to material information of the target object, or according to an intensity difference between the target object and the other objects and a difference in degree of polarization between the target object and the other objects. According to the embodiments of the present application, the quality of the obtained fused image can be further improved, and the adaptability of the fused image to conventional task models can be improved, so as to adapt to a variety of task models.

Description

Image processing method, apparatus and storage medium. Technical Field
The present application relates to the field of optical perception technology, and in particular to an image processing method, apparatus and storage medium.
Background Art
Optical perception technology is an indispensable technology in autonomous driving and assisted driving systems, and the main vehicle-mounted optical perception device at present is the visible light camera. However, in vehicle-mounted scenarios, traditional optical imaging technology is easily affected by factors such as illumination changes, shadow occlusion, and different objects sharing the same spectral signature, which leads to a decrease in image quality, image availability, and the accuracy of model prediction results.
Compared with traditional intensity images, polarization images obtained by polarization imaging technology have the advantages of being less affected by illumination changes and showing obvious contrast between different targets. However, since the features of polarization images differ from those of traditional intensity images, and current methods usually design and train neural network models on the basis of traditional intensity images, directly applying such models to polarization images leads to poor adaptability and fails to reflect the advantages of polarization imaging. Therefore, a new type of image processing method is urgently needed to improve the adaptability between polarization images and models.
Summary of the Invention
In view of this, an image processing method, apparatus and storage medium are proposed.
In a first aspect, an embodiment of the present application provides an image processing method. The method includes:
acquiring M images, where the M images are images collected in different polarization states, the M images include a target object and other objects, and M is an integer greater than 2;
determining a first image and a second image according to the M images, where the first image is used to indicate intensity information of the target object and the other objects, and the second image is used to indicate polarization information of the target object and the other objects;
fusing the first image and the second image according to a first fusion method to determine a fused image, where the first fusion method is determined according to material information of the target object, or according to the intensity difference between the target object and the other objects and the difference in degree of polarization between the target object and the other objects.
The M images may be collected at the same moment or at different moments. Different polarization states may indicate different polarization angles at the time of collection. The fused image may be used to perform a target task.
According to the embodiments of the present application, by acquiring images collected in different polarization states and determining a first image and a second image that respectively indicate intensity information and polarization information, the advantages of polarization images of being less affected by illumination changes and showing obvious contrast between different objects can be utilized, the difference between polarization images and traditional intensity images is taken into account, and the images can be processed in a more targeted manner. By fusing the first image and the second image according to the first fusion method, and because the first fusion method can be determined according to the material information of the target object, or according to the intensity difference and the polarization-degree difference between the target object and the other objects, the quality of the obtained fused image can be further improved, polarization image processing can be decoupled from subsequent tasks, and the adaptability of the fused image to conventional task models can be improved, so as to adapt to a variety of task models.
According to the first aspect, in a first possible implementation of the image processing method, the material information of the target object may include the reflectivity, roughness, refractive index and glossiness of the target object.
According to the embodiments of the present application, by determining the reflectivity, roughness, refractive index and glossiness of the target object, the fusion method of the first image and the second image can be designed in a more targeted manner, so as to obtain an image better adapted to subsequent tasks and improve the accuracy of the target task results.
According to the first aspect or the first possible implementation of the first aspect, in a second possible implementation of the image processing method, the material information of the target object may be obtained according to the intensity difference between the target object and the other objects and the difference in degree of polarization between the target object and the other objects.
According to the embodiments of the present application, by using the intensity difference and the polarization-degree difference between the target object and the other objects, the material information of the target object can be obtained in a more targeted manner, so that the obtained material information better matches the actual material condition of the target object, and the process of determining the material information is more real-time.
According to the first aspect or the first or second possible implementation of the first aspect, in a third possible implementation of the image processing method, the first fusion method may include one of the following:
calculating the difference between the first image and the second image;
summing the first image and the second image;
multiplying the first image and the second image.
According to the embodiments of the present application, through the variety of first fusion methods, different first fusion methods can be flexibly selected to deal with different task scenarios in a more targeted manner, so that the fused image can be flexibly adapted to various task models and more accurate task results can be obtained.
According to the first aspect or the first, second or third possible implementation of the first aspect, in a fourth possible implementation of the image processing method, in a case where a first preset condition and a second preset condition are satisfied, the first fusion method may be to calculate the difference between the first image and the second image;
where the first preset condition may include that the reflectivity of the target object is greater than the reflectivity of the other objects and the intensity difference between the target object and the other objects is greater than a first preset threshold, and the second preset condition may include that the degree of polarization of the target object is less than the degree of polarization of the other objects and the difference in degree of polarization between the target object and the other objects is greater than a second preset threshold.
According to the embodiments of the present application, by calculating the difference between the first image and the second image when the reflectivity of the target object is greater than that of the other objects and the degree of polarization of the target object is less than that of the other objects, the images can be fused in a more targeted manner and adapted to subsequent tasks, the target object can be distinguished more clearly in the fused image, and the accuracy of the results of subsequent task execution can be improved.
According to the first aspect or the first to fourth possible implementations of the first aspect, in a fifth possible implementation of the image processing method, in a case where a third preset condition and a fourth preset condition are satisfied, the first fusion method may be to sum the first image and the second image;
where the third preset condition may include that both the intensity of the target object and the intensity of the other objects are less than a third preset threshold, and the fourth preset condition may include that the degree of polarization of the target object is greater than the degree of polarization of the other objects and the difference in degree of polarization between the target object and the other objects is greater than a fourth preset threshold.
According to the embodiments of the present application, by summing the first image and the second image when both the reflectivity of the target object and the reflectivity of the other objects are relatively small and the degree of polarization of the target object is greater than that of the other objects, the images can be fused in a more targeted manner and adapted to subsequent tasks, the target object can be distinguished more clearly in the fused image, and the accuracy of the results of subsequent task execution can be improved.
According to the first aspect or the first to fifth possible implementations of the first aspect, in a sixth possible implementation of the image processing method, in a case where a fifth preset condition and a sixth preset condition are satisfied, the first fusion method may be to multiply the first image and the second image;
where the fifth preset condition may include that the intensity difference between the target object and the other objects is less than a fifth preset threshold, and the sixth preset condition may include that the degree of polarization of the target object is greater than the degree of polarization of the other objects and the difference in degree of polarization between the target object and the other objects is greater than a sixth preset threshold.
According to the embodiments of the present application, by multiplying the first image and the second image when the difference between the reflectivity of the target object and that of the other objects is small and the degree of polarization of the target object is greater than that of the other objects, the images can be fused in a more targeted manner and adapted to subsequent tasks, the target object can be distinguished more clearly in the fused image, and the accuracy of the results of subsequent task execution can be improved.
According to the first aspect or the first to sixth possible implementations of the first aspect, in a seventh possible implementation of the image processing method, the first fusion method may include one of the following:
f=a1*I-b1*P,
f=a2*I+b2*P,
f=I*c*P;
where f may represent the first fusion method, I may represent the first image, P may represent the second image, a1 and a2 may respectively represent the weights corresponding to the first image, and b1, b2 and c may respectively represent the weights corresponding to the second image.
According to the embodiments of the present application, by setting corresponding weights for the first image and the second image, the preference for intensity information and polarization information can be adjusted more flexibly according to task requirements, so as to respond to different task scenarios in a more targeted manner and improve the adaptability of the fused image.
According to the first aspect or the first to seventh possible implementations of the first aspect, in an eighth possible implementation of the image processing method, the first image may be determined according to the average of the pixel intensities of the M images, and the second image may be determined by calculating the ratio of the intensity of the polarized part of the pixels in the M images to the overall pixel intensity.
According to the embodiments of the present application, the characteristics of polarization images can be utilized to determine the overall intensity information by combining the intensity information indicated by the M images, and to determine the overall polarization information by combining the polarization information indicated by the M images.
In a second aspect, an embodiment of the present application provides an image processing apparatus. The apparatus includes:
an acquisition module, configured to acquire M images, where the M images are images collected in different polarization states, the M images include a target object and other objects, and M is an integer greater than 2;
a determination module, configured to determine a first image and a second image according to the M images, where the first image is used to indicate intensity information of the target object and the other objects, and the second image is used to indicate polarization information of the target object and the other objects;
a fusion module, configured to fuse the first image and the second image according to a first fusion method to determine a fused image, where the first fusion method is determined according to material information of the target object, or according to the intensity difference between the target object and the other objects and the difference in degree of polarization between the target object and the other objects.
The M images may be collected at the same moment or at different moments. Different polarization states may indicate different polarization angles at the time of collection. The fused image may be used to perform a target task.
According to the second aspect, in a first possible implementation of the image processing apparatus, the material information of the target object may include the reflectivity, roughness, refractive index and glossiness of the target object.
According to the second aspect or the first possible implementation of the second aspect, in a second possible implementation of the image processing apparatus, the material information of the target object may be obtained according to the intensity difference between the target object and the other objects and the difference in degree of polarization between the target object and the other objects.
According to the second aspect or the first or second possible implementation of the second aspect, in a third possible implementation of the image processing apparatus, the first fusion method may include one of the following:
calculating the difference between the first image and the second image;
summing the first image and the second image;
multiplying the first image and the second image.
According to the second aspect or the first, second or third possible implementation of the second aspect, in a fourth possible implementation of the image processing apparatus, in a case where a first preset condition and a second preset condition are satisfied, the first fusion method may be to calculate the difference between the first image and the second image;
where the first preset condition may include that the reflectivity of the target object is greater than the reflectivity of the other objects and the intensity difference between the target object and the other objects is greater than a first preset threshold, and the second preset condition may include that the degree of polarization of the target object is less than the degree of polarization of the other objects and the difference in degree of polarization between the target object and the other objects is greater than a second preset threshold.
According to the second aspect or the first to fourth possible implementations of the second aspect, in a fifth possible implementation of the image processing apparatus, in a case where a third preset condition and a fourth preset condition are satisfied, the first fusion method may be to sum the first image and the second image;
where the third preset condition may include that both the intensity of the target object and the intensity of the other objects are less than a third preset threshold, and the fourth preset condition may include that the degree of polarization of the target object is greater than the degree of polarization of the other objects and the difference in degree of polarization between the target object and the other objects is greater than a fourth preset threshold.
According to the second aspect or the first to fifth possible implementations of the second aspect, in a sixth possible implementation of the image processing apparatus, in a case where a fifth preset condition and a sixth preset condition are satisfied, the first fusion method may be to multiply the first image and the second image;
where the fifth preset condition may include that the intensity difference between the target object and the other objects is less than a fifth preset threshold, and the sixth preset condition may include that the degree of polarization of the target object is greater than the degree of polarization of the other objects and the difference in degree of polarization between the target object and the other objects is greater than a sixth preset threshold.
According to the second aspect or the first to sixth possible implementations of the second aspect, in a seventh possible implementation of the image processing apparatus, the first fusion method may include one of the following:
f=a1*I-b1*P,
f=a2*I+b2*P,
f=I*c*P;
where f may represent the first fusion method, I may represent the first image, P may represent the second image, a1 and a2 may respectively represent the weights corresponding to the first image, and b1, b2 and c may respectively represent the weights corresponding to the second image.
According to the second aspect or the first to seventh possible implementations of the second aspect, in an eighth possible implementation of the image processing apparatus, the first image may be determined according to the average of the pixel intensities of the M images, and the second image may be determined by calculating the ratio of the intensity of the polarized part of the pixels in the M images to the overall pixel intensity.
In a third aspect, an embodiment of the present application provides an image processing apparatus, including: a processor; and a memory for storing instructions executable by the processor; where the processor is configured to implement, when executing the instructions, the image processing method of the first aspect or of one or more of the multiple possible implementations of the first aspect.
In a fourth aspect, an embodiment of the present application provides a non-volatile computer-readable storage medium on which computer program instructions are stored, where the computer program instructions, when executed by a processor, implement the image processing method of the first aspect or of one or more of the multiple possible implementations of the first aspect.
In a fifth aspect, an embodiment of the present application provides a terminal device, which can execute the image processing method of the first aspect or of one or more of the multiple possible implementations of the first aspect.
In a sixth aspect, an embodiment of the present application provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying the computer-readable code, where when the computer-readable code runs in an electronic device, a processor in the electronic device executes the image processing method of the first aspect or of one or more of the multiple possible implementations of the first aspect.
These and other aspects of the present application will become more readily apparent from the following description of the embodiment(s).
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features and aspects of the present application together with the specification, and serve to explain the principles of the present application.
FIG. 1 shows a schematic diagram of an application scenario according to an embodiment of the present application.
FIG. 2 shows a flowchart of an image processing method according to an embodiment of the present application.
FIG. 3 shows a schematic diagram of images collected in different polarization states according to an embodiment of the present application.
FIG. 4 shows a schematic diagram of acquiring material information according to an embodiment of the present application.
FIG. 5(a), FIG. 5(b) and FIG. 5(c) show a schematic diagram of image fusion according to an embodiment of the present application.
FIG. 6(a), FIG. 6(b) and FIG. 6(c) show a schematic diagram of image fusion according to an embodiment of the present application.
FIG. 7(a), FIG. 7(b) and FIG. 7(c) show a schematic diagram of image fusion according to an embodiment of the present application.
FIG. 8(a) and FIG. 8(b) show a schematic diagram of a lane line detection effect according to an embodiment of the present application.
FIG. 9 shows a structural diagram of an image processing apparatus according to an embodiment of the present application.
FIG. 10 shows a structural diagram of an electronic device 1000 according to an embodiment of the present application.
Detailed Description of Embodiments
Various exemplary embodiments, features and aspects of the present application will be described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise specified.
The word "exemplary" used here means "serving as an example, embodiment or illustration". Any embodiment described here as "exemplary" is not necessarily to be construed as superior to or better than other embodiments.
In addition, in order to better describe the present application, numerous specific details are given in the following detailed description. Those skilled in the art should understand that the present application can also be implemented without certain specific details. In some instances, methods, means, elements and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the present application.
Optical perception technology is an indispensable technology in autonomous driving and assisted driving systems, and the main vehicle-mounted optical perception device at present is the visible light camera. However, in vehicle-mounted scenarios, traditional optical imaging technology is easily affected by factors such as illumination changes, shadow occlusion, and different objects sharing the same spectral signature, which leads to a decrease in image quality, image availability, and the accuracy of model prediction results. Compared with traditional intensity images, polarization images obtained by polarization imaging technology have the advantages of being less affected by illumination changes and showing obvious contrast between different targets. Since the features of polarization images differ from those of traditional intensity images, and current methods usually design and train neural network models on the basis of traditional intensity images, directly applying such models to polarization images leads to poor adaptability and fails to reflect the advantages of polarization imaging. Therefore, a new type of image processing method is urgently needed to improve the adaptability between polarization images and models.
To solve the above technical problems, the present application provides an image processing method. The image processing method of the embodiments of the present application acquires images collected in different polarization states and determines a first image and a second image that respectively indicate intensity information and polarization information, so that the advantages of polarization images of being less affected by illumination changes and showing obvious contrast between different objects can be utilized, and the difference between polarization images and traditional intensity images is taken into account. By fusing the first image and the second image according to a first fusion method, and because the first fusion method can be determined according to the material information of the target object, or according to the intensity difference and the polarization-degree difference between the target object and the other objects, the quality of the obtained fused image can be further improved, polarization image processing can be decoupled from subsequent tasks, and the adaptability of the fused image to conventional task models can be improved, so as to adapt to a variety of task models.
FIG. 1 shows a schematic diagram of an application scenario according to an embodiment of the present application. As shown in FIG. 1, the image processing system of the embodiments of the present application can be deployed on a server or a terminal device and used to process images in vehicle-mounted scenarios. For example, in autonomous driving or assisted driving scenarios, the image processing system of the embodiments of the present application can be used to acquire images in multiple polarization states collected by an optical sensor (for example, a vehicle-mounted sensor), and to process and fuse the images to obtain a fused image. The fused image can be used to perform a corresponding task; for example, the fused image can be input into a corresponding software module (such as a neural network model) to perform a subsequent task (such as traffic light head detection or lane line detection) and obtain a target task result. The image processing system can also be connected to hardware modules such as a new type of optical imaging sensor and an ISP (image signal processor) to perform subsequent tasks.
In the above scenario, the optical sensor can be installed on a vehicle (for example, a data collection vehicle) and can be one or more cameras, such as color, grayscale, infrared or multispectral cameras. Through one or more cameras, polarization imaging technology can be used to obtain images in multiple polarization states.
The server involved in the present application can be located in the cloud or locally, and can be a physical device or a virtual device, such as a virtual machine or a container, with a wireless communication function, where the wireless communication function can be provided in a chip (system) or other component or assembly of the server. It can refer to a device with a wireless connection function, where the wireless connection function means that it can be connected to other servers or terminal devices through wireless connection methods such as Wi-Fi and Bluetooth; the server of the present application can also have the function of communicating through a wired connection.
The terminal device involved in the present application can refer to a device with a wireless connection function, where the wireless connection function means that it can be connected to other terminal devices or servers through wireless connection methods such as Wi-Fi and Bluetooth; the terminal device of the present application can also have the function of communicating through a wired connection. The terminal device of the present application can have a touch screen, no touch screen, or no screen at all. A device with a touch screen can be controlled by clicking or sliding on the display screen with a finger or a stylus; a device without a touch screen can be connected to input devices such as a mouse, keyboard or touch panel and controlled through the input devices; a device without a screen can be, for example, a Bluetooth speaker without a screen. For example, the terminal device of the present application can be a smartphone, a netbook, a tablet computer, a notebook computer, a wearable electronic device (such as a smart band or a smart watch), a TV, a virtual reality device, a speaker, electronic ink, and so on. The terminal device of the embodiments of the present application can also be a vehicle-mounted terminal device, where the processor can be built, as a vehicle-mounted computing unit, into the head unit of the vehicle, so that the image processing process of the embodiments of the present application can be performed in real time on the vehicle side.
The image processing system of the embodiments of the present application can also be applied to scenarios other than vehicle-mounted scenarios; it can be applied as long as polarization images are to be processed, and the present application does not limit this.
The image processing method of the embodiments of the present application is described in detail below with reference to FIG. 2 to FIG. 8.
FIG. 2 shows a flowchart of an image processing method according to an embodiment of the present application. The method can be used in the above image processing system. As shown in FIG. 2, the method includes:
Step S201: acquire M images.
The M images may be images collected in different polarization states, for example images collected by the above optical sensor (which may be one or more sensors); for example, in vehicle-mounted scenarios such as autonomous driving or assisted driving, the optical sensor may be arranged on the vehicle. The M images may include a target object and other objects, and M is an integer greater than 2.
Images in different polarization states can be obtained through a variety of polarization imaging methods. The M images may be collected at the same moment or at different moments, and may be collected by a single optical sensor or by multiple optical sensors arranged at different positions. For example, the images may be collected through a multi-camera synchronous imaging method (that is, polarizers with different orientations are respectively mounted on M cameras, and the M cameras are arranged at different positions to collect images simultaneously), a single-camera imaging method (that is, a rotatable polarizer is mounted on one camera, and the orientation of the polarizer is adjusted by rotation so as to collect images in different polarization states), a pixel-level polarization-coated camera imaging method (that is, images are collected in different polarization states set by the polarization coating carried on the camera), or other imaging methods.
Different polarization states may refer to different polarization angles (that is, polarization directions). Referring to FIG. 3, a schematic diagram of images collected in different polarization states according to an embodiment of the present application is shown. Taking images collected in four polarization states as an example (that is, M is 4), the four images may correspond to the four different polarization angles of 0°, 45°, 90° and 135° respectively, and may correspond to the four images I0 (upper left), I45 (upper right), I90 (lower left) and I135 (lower right) in FIG. 3 respectively.
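For illustration of the pixel-level polarization-coated (division-of-focal-plane) acquisition mode mentioned above, a minimal sketch of extracting the four polarization channels from a raw sensor frame is given below. The 2×2 super-pixel layout assumed here is only an illustrative convention and is not specified by the embodiments of the present application.

```python
import numpy as np

def split_polarization_mosaic(raw: np.ndarray) -> dict:
    """Split a division-of-focal-plane raw frame into four polarization channels.

    Assumes a 2x2 super-pixel layout of [[90, 45], [135, 0]] degrees, which is
    an assumption for this sketch, not a requirement of the described method.
    """
    return {
        "I90":  raw[0::2, 0::2].astype(np.float64),
        "I45":  raw[0::2, 1::2].astype(np.float64),
        "I135": raw[1::2, 0::2].astype(np.float64),
        "I0":   raw[1::2, 1::2].astype(np.float64),
    }
```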
The target object can be determined according to the task to be performed subsequently, and can include one object element or multiple object elements in the image; the other objects can include all or part of the object elements in the image other than the target object. For example, in a lane line detection task, the target object can include lane lines (such as the road center line, lane division lines and lane edge lines on the road), and correspondingly the other objects can include objects other than the target object, such as the road environment background. For another example, in a traffic light head detection task in a road scene, the target object can include the edge lines of the light head, and the other objects can include objects other than the light head, such as the environment background. For another example, in a road manhole cover detection task, the target object can include the manhole cover, and the other objects can include other objects on the road surface other than the manhole cover.
Next, the characteristics of polarization images can be used to determine, for the M images, the intensity and polarization information indicated therein.
Step S202: determine a first image and a second image according to the M images.
One first image and one second image can be respectively determined according to the M images.
The first image can be used to indicate intensity information of the target object and the other objects. The intensity information can be the intensity of light; for example, in a grayscale image it can be represented by the grayscale values of the image pixels, and in a color image it can be represented by the intensities of the image pixels (which can be embodied as the pixel values of the pixels of the first image).
Optionally, the first image can be determined according to the average of the pixel intensities of the M images. In this way, the intensity information indicated by the M images can be combined to determine the overall intensity information.
Taking M being 4 as an example, one way of calculating the first image can be seen in formula (1):
I = (I0 + I45 + I90 + I135) / 2              Formula (1)
where I can correspond to the first image and represents the pixel intensity of the first image, and I0, I45, I90 and I135 can respectively represent the pixel intensities of the images collected in the different polarization states. Since light is attenuated when passing through a polarizer, on average by about one half, the mean can be further divided by 2 (that is, the original denominator of 4 in formula (1) becomes 2 after dividing by 2) to obtain the pixel values of the first image.
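A minimal sketch of formula (1) for M = 4, assuming the four polarization images are single-channel arrays of equal size:

```python
import numpy as np

def intensity_image(i0, i45, i90, i135):
    """First image I: average intensity of the four polarization images.

    Dividing by 2 (rather than 4) compensates for the roughly one-half
    attenuation introduced by the polarizer, as described for formula (1).
    """
    return (np.asarray(i0) + np.asarray(i45) + np.asarray(i90) + np.asarray(i135)) / 2.0
```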
The second image can be used to indicate polarization information of the target object and the other objects; the polarization information can be, for example, the degree of polarization (which can be embodied by the pixel values of the pixels of the second image).
Optionally, the second image can be determined by calculating the ratio of the intensity of the polarized part of the pixels in the M images to the overall pixel intensity. In this way, the polarization information indicated by the M images can be combined to determine the overall polarization information.
Taking M being 4 as an example, one way of calculating the second image can be seen in formula (2):
P = √((I0 − I90)^2 + (I45 − I135)^2) / I              Formula (2)
where P can correspond to the second image and represents the degree of polarization of the second image, I0, I45, I90 and I135 can respectively represent the pixel intensities of the images collected in the different polarization states, and I can represent the pixel intensity of the first image, which can be obtained through formula (1). The ratio corresponding to P (that is, the degree of polarization) can be mapped to the same range as the pixel intensity indicated by I (such as 0-255) to determine the pixel values of the pixels of the second image.
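A corresponding sketch of formula (2), computing the degree of polarization per pixel as the ratio of the polarized part of the intensity to the overall intensity and mapping it to the same 0-255 range as the first image; the small epsilon guard is an implementation assumption, not part of the formula:

```python
import numpy as np

def polarization_image(i0, i45, i90, i135, eps=1e-6):
    """Second image P: degree of polarization per pixel, scaled to 0-255."""
    intensity = (i0 + i45 + i90 + i135) / 2.0                 # formula (1)
    polarized = np.sqrt((i0 - i90) ** 2 + (i45 - i135) ** 2)  # polarized part
    dop = polarized / (intensity + eps)                       # ratio in [0, 1]
    return np.clip(dop, 0.0, 1.0) * 255.0                     # map to intensity range
```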
After the first image and the second image are obtained, the first image and the second image can be fused in combination with the material characteristics of the target object in the scene, or the difference information between the target object and the other objects, to determine a fused image and realize the processing of the polarization images, as described below.
Step S203: fuse the first image and the second image according to a first fusion method to determine a fused image.
The first fusion method can be determined according to the material information of the target object, or according to the intensity difference between the target object and the other objects and the difference in degree of polarization between the target object and the other objects.
According to the embodiments of the present application, by acquiring images collected in different polarization states and determining a first image and a second image that respectively indicate intensity information and polarization information, the advantages of polarization images of being less affected by illumination changes and showing obvious contrast between different objects can be utilized, the difference between polarization images and traditional intensity images is taken into account, and the images can be processed in a more targeted manner. By fusing the first image and the second image according to the first fusion method, and because the first fusion method can be determined according to the material information of the target object, or according to the intensity difference and the polarization-degree difference between the target object and the other objects, the quality of the obtained fused image can be further improved, polarization image processing can be decoupled from subsequent tasks, and the adaptability of the fused image to conventional task models can be improved, so as to adapt to a variety of task models.
Optionally, the material information of the target object can include the reflectivity, roughness, refractive index and glossiness of the target object, and can include one or more of them.
The reflectivity can be the ratio of the intensity of the reflected light to the intensity of the incident light, and surfaces of different materials can have different reflectivities. The roughness can be the roughness of the object surface; the smaller the surface roughness, the smoother the surface of the corresponding material. The refractive index can represent the ratio of the speed of light in vacuum to the phase velocity of light after it enters the medium corresponding to the object; the refractive index of the target object is related to the degree of polarization. The glossiness can represent the ability of the object surface to specularly reflect light, where specular reflection can indicate a reflection characteristic with directional selectivity; the lower the specular reflectance of the material surface, the lower the glossiness of the corresponding material surface.
According to the embodiments of the present application, by determining the reflectivity, roughness, refractive index and glossiness of the target object, the fusion method of the first image and the second image can be designed in a more targeted manner, so as to obtain an image better adapted to subsequent tasks and improve the accuracy of the target task results.
In a case where the first fusion method is determined according to the material information of the target object, in order to improve the efficiency of acquiring the material information, the material information of the target object can be obtained a priori. For example, information such as the reflectivity, roughness, refractive index and glossiness of the material surface of the target object can be collected in advance by sensors such as a glossmeter and used as prior input in the fusion process, where one or more of the reflectivity, roughness, refractive index and glossiness can be collected according to the task requirements. The prior input can be entered by a user, or can be collected and input by a vehicle-mounted sensor (such as a vehicle-mounted glossmeter).
In order to improve the real-time performance and pertinence of determining the material information, optionally, the material information of the target object can also be obtained according to the intensity difference between the target object and the other objects and the difference in degree of polarization between the target object and the other objects.
The intensity of the target object can be determined according to the pixel values corresponding to the target object region in the first image (for example, the average of the pixel values within the region); the intensity of the other objects can be determined according to the pixel values corresponding to the regions of objects other than the target object in the first image (for example, the average of the pixel values within those regions). The degree of polarization of the target object can be determined according to the pixel values corresponding to the target object region in the second image (for example, the average of the pixel values within the region); the degree of polarization of the other objects can be determined according to the pixel values corresponding to the regions of objects other than the target object in the second image (for example, the average of the pixel values within those regions).
For example, the intensity difference between the target object and the other objects (for example, the difference between the two) can be determined according to the intensity information indicated by the first image, and the difference in degree of polarization between the target object and the other objects (for example, the difference between the two) can be determined according to the polarization information indicated by the second image. In one way of determination, when the environment of the other objects is specific (for example, the other objects are an asphalt road surface), the reflectivity and roughness of the target object can be determined relatively through the intensity difference, and the refractive index and glossiness of the target object can be determined relatively through the polarization-degree difference.
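A minimal sketch of the region-average differences described above, assuming a binary mask of the target-object region is available (for example from a rough prior segmentation); the mask itself is an assumption of this sketch and is not specified by the method:

```python
import numpy as np

def region_differences(first_image, second_image, target_mask):
    """Mean intensity / polarization-degree differences between target and other objects."""
    target = target_mask.astype(bool)
    other = ~target
    intensity_diff = first_image[target].mean() - first_image[other].mean()
    dop_diff = second_image[target].mean() - second_image[other].mean()
    return intensity_diff, dop_diff
```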
According to the embodiments of the present application, by using the intensity difference between the target object and the other objects and the difference in degree of polarization between the target object and the other objects, the material information of the target object can be obtained in a more targeted manner, so that the obtained material information better matches the actual material condition of the target object, and the process of determining the material information is more real-time.
Referring to FIG. 4, a schematic diagram of acquiring material information according to an embodiment of the present application is shown. As shown in FIG. 4, the target object can be the object indicated in the figure, and the road surface background in the image can represent the other objects corresponding to the target object. In a scenario where the material information of the target object is obtained a priori, the material information of the target object can be input by a user, or the object indicated in the figure can be measured a priori by a vehicle-mounted sensor to collect the material information.
When the material information of the target object is determined through the intensity difference between the target object and the other objects and the difference in degree of polarization between the target object and the other objects, the intensity difference and the polarization-degree difference between the object indicated in the figure and the road surface background can be determined in the manner described above, so as to determine the material information of the target object.
Since different target objects exist in different task scenarios and the material characteristics of these target objects are not exactly the same, different first fusion methods can be designed to deal with different task scenarios in a targeted manner. The manner of determining the first fusion method is described in detail below.
Optionally, the first fusion method can include one of the following:
calculating the difference between the first image and the second image;
summing the first image and the second image;
multiplying the first image and the second image;
where the above can mean calculating the difference, the sum or the product of the pixel values of the first image and the pixel values of the second image.
According to the embodiments of the present application, through the variety of first fusion methods, different first fusion methods can be flexibly selected to deal with different task scenarios in a more targeted manner, so that the fused image can be flexibly adapted to various task models and more accurate task results can be obtained.
Different weights can also be set for the first image and the second image in the process of calculating the difference, the sum or the product, so as to flexibly adjust the preference for intensity information and polarization information according to the task requirements.
For example, one way of calculating the difference between the first image and the second image can be seen in formula (3):
f=a1*I-b1*P              Formula (3)
where f can represent the first fusion method, I can represent the first image (for example, the pixel values of the pixels of the first image), P can represent the second image (for example, the pixel values of the pixels of the second image), and I and P can be obtained according to the above process; a1 can represent the weight corresponding to the first image, b1 can represent the weight corresponding to the second image, and the values of a1 and b1 can be preset as needed.
One way of summing the first image and the second image can be seen in formula (4):
f=a2*I+b2*P              Formula (4)
where f can represent the first fusion method, I can represent the first image (for example, the pixel values of the pixels of the first image), P can represent the second image (for example, the pixel values of the pixels of the second image), and I and P can be obtained according to the above process; a2 can represent the weight corresponding to the first image, b2 can represent the weight corresponding to the second image, and the values of a2 and b2 can be preset as needed.
One way of multiplying the first image and the second image can be seen in formula (5):
f=I*c*P              Formula (5)
where f can represent the first fusion method, I can represent the first image (for example, the pixel values of the pixels of the first image), P can represent the second image (for example, the pixel values of the pixels of the second image), and I and P can be obtained according to the above process; c can represent the weight corresponding to the second image (it can also represent the weight corresponding to the first image, or the product of the weight corresponding to the first image and the weight corresponding to the second image), and the value of c can be preset as needed.
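A minimal sketch of the three weighted fusion operations in formulas (3)-(5); the default weight values are placeholders, since the application only states that a1, a2, b1, b2 and c are preset according to task requirements:

```python
def fuse_difference(I, P, a1=1.0, b1=1.0):
    """Formula (3): weighted difference of the intensity and polarization images."""
    return a1 * I - b1 * P

def fuse_sum(I, P, a2=1.0, b2=1.0):
    """Formula (4): weighted sum of the intensity and polarization images."""
    return a2 * I + b2 * P

def fuse_product(I, P, c=1.0 / 255.0):
    """Formula (5): weighted product (c rescales P so the result stays in range)."""
    return I * c * P
```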
According to the embodiments of the present application, by setting corresponding weights for the first image and the second image, the preference for intensity information and polarization information can be adjusted more flexibly according to task requirements, so as to respond to different task scenarios in a more targeted manner and improve the adaptability of the fused image.
The three cases of determining the first fusion method described above are each illustrated by way of example below:
Referring to FIG. 5(a), FIG. 5(b) and FIG. 5(c), a schematic diagram of image fusion according to an embodiment of the present application is shown. The figures show the manner of determining the first fusion method in a lane line detection task scenario, where FIG. 5(a) can represent the first image, FIG. 5(b) can represent the second image, and FIG. 5(c) can represent the fused image.
In the lane line detection task scenario, the target object can be the lane lines, and the other objects can be the road regions other than the lane lines. It can be seen from FIG. 5(a) that in this scenario the reflectivity corresponding to the lane lines is greater than the reflectivity corresponding to the other road regions (embodied in that the pixel values of the lane line regions in the first image are greater than the pixel values of the other road regions); it can be seen from FIG. 5(b) that in this scenario the degree of polarization corresponding to the lane lines is less than the degree of polarization corresponding to the other road regions (embodied in that the pixel values of the lane line regions in the second image are less than the pixel values of the other road regions).
Therefore, based on the above difference characteristics between the target object and the other objects in the intensity and polarization dimensions, optionally, in a case where a first preset condition and a second preset condition are satisfied, the first fusion method can be to calculate the difference between the first image and the second image. One way of calculating the difference can be seen in formula (3) above.
The first preset condition can include that the reflectivity of the target object is greater than the reflectivity of the other objects and the intensity difference between the target object and the other objects is greater than a first preset threshold (corresponding to the characteristic in FIG. 5(a) that the reflectivity corresponding to the lane lines is greater than that corresponding to the other road regions). The second preset condition can include that the degree of polarization of the target object is less than the degree of polarization of the other objects and the difference in degree of polarization between the target object and the other objects is greater than a second preset threshold (corresponding to the characteristic in FIG. 5(b) that the degree of polarization corresponding to the lane lines is less than that corresponding to the other road regions).
For example, since the intensity information can reflect the reflectivity (a larger intensity value can be taken to indicate a larger reflectivity), whether the first preset condition is satisfied can be determined according to the intensity difference between the target object and the other objects determined above, and whether the second preset condition is satisfied can be determined according to the polarization-degree difference between the target object and the other objects determined above.
Optionally, in a case where the material information of the target object has been determined, the first preset condition can also be that the reflectivity of the target object is greater than a predetermined threshold and/or the roughness of the target object is less than a predetermined threshold, and the second preset condition can also be that the glossiness of the target object is less than a predetermined threshold. The present application does not limit the first preset condition and the second preset condition, as long as the target object can be distinguished more clearly in the image obtained after the difference is taken.
Referring to FIG. 5(c), the fused image determined by the above fusion method can distinguish object elements such as the lane lines in the image more clearly.
According to the embodiments of the present application, by calculating the difference between the first image and the second image when the reflectivity of the target object is greater than that of the other objects and the degree of polarization of the target object is less than that of the other objects, the images can be fused in a more targeted manner and adapted to subsequent tasks, the target object can be distinguished more clearly in the fused image, and the accuracy of the results of subsequent task execution can be improved.
Referring to FIG. 6(a), FIG. 6(b) and FIG. 6(c), a schematic diagram of image fusion according to an embodiment of the present application is shown. The figures show the manner of determining the first fusion method in a traffic light head detection task scenario, where FIG. 6(a) can represent the first image, FIG. 6(b) can represent the second image, and FIG. 6(c) can represent the fused image.
In the traffic light head detection task scenario, the target object can be the edge lines of the traffic light head, and the other objects can be the background regions other than the traffic light head. It can be seen from FIG. 6(a) that in this scenario both the reflectivity corresponding to the light head edges and the reflectivity corresponding to the background regions are relatively low (embodied in that both the pixel values of the light head edges and the pixel values of the background regions in the first image are relatively small); it can be seen from FIG. 6(b) that in this scenario the degree of polarization corresponding to the light head edges is greater than the degree of polarization corresponding to the background regions (embodied in that the pixel values of the light head edges in the second image are greater than the pixel values of the background regions).
Therefore, based on the above difference characteristics between the target object and the other objects in the intensity and polarization dimensions, optionally, in a case where a third preset condition and a fourth preset condition are satisfied, the first fusion method can be to sum the first image and the second image. One way of summing can be seen in formula (4) above.
The third preset condition can include that both the intensity of the target object and the intensity of the other objects are less than a third preset threshold (corresponding to the characteristic in FIG. 6(a) that both the reflectivity corresponding to the light head edges and that corresponding to the background regions are relatively low). The fourth preset condition can include that the degree of polarization of the target object is greater than the degree of polarization of the other objects and the difference in degree of polarization between the target object and the other objects is greater than a fourth preset threshold (corresponding to the characteristic in FIG. 6(b) that the degree of polarization corresponding to the light head edges is greater than that corresponding to the background regions).
For example, since the intensity information can reflect the reflectivity (a larger intensity value can be taken to indicate a larger reflectivity), in one way, whether the third preset condition is satisfied can be determined according to the intensity difference between the target object and the other objects determined above and the average pixel value of the first image (for example, the third preset condition is considered satisfied when the average pixel value of the first image is less than a preset threshold and the intensity difference between the target object and the other objects determined above is less than another preset threshold, where these two preset thresholds can each be determined according to the third preset threshold). Whether the fourth preset condition is satisfied can be determined according to the polarization-degree difference between the target object and the other objects determined above.
Optionally, in a case where the material information of the target object has been determined, the third preset condition can also be that the reflectivity of the target object is less than a predetermined threshold and/or the roughness of the target object is greater than a predetermined threshold, and the fourth preset condition can also be that the glossiness of the target object is greater than a predetermined threshold. The present application does not limit the third preset condition and the fourth preset condition, as long as the target object can be distinguished more clearly in the summed image.
Referring to FIG. 6(c), the fused image determined by the above fusion method can distinguish object elements such as the traffic light head in the image more clearly.
According to the embodiments of the present application, by summing the first image and the second image when both the reflectivity of the target object and the reflectivity of the other objects are relatively small and the degree of polarization of the target object is greater than that of the other objects, the images can be fused in a more targeted manner and adapted to subsequent tasks, the target object can be distinguished more clearly in the fused image, and the accuracy of the results of subsequent task execution can be improved.
Referring to FIG. 7(a), FIG. 7(b) and FIG. 7(c), a schematic diagram of image fusion according to an embodiment of the present application is shown. The figures show the manner of determining the first fusion method in a road manhole cover detection task scenario, where FIG. 7(a) can represent the first image, FIG. 7(b) can represent the second image, and FIG. 7(c) can represent the fused image.
In the road manhole cover detection task scenario, the target object can be the manhole cover on the road surface, and the other objects can be the road surface regions other than the manhole cover. It can be seen from FIG. 7(a) that in this scenario the reflectivity corresponding to the manhole cover is similar to the reflectivity corresponding to the other road surface regions (embodied in that the pixel values of the manhole cover in the first image are similar to the pixel values of the other road surface regions); it can be seen from FIG. 7(b) that in this scenario the degree of polarization corresponding to the manhole cover is greater than the degree of polarization corresponding to the other road surface regions (embodied in that the pixel values of the manhole cover in the second image are greater than the pixel values of the other road surface regions).
Therefore, based on the above difference characteristics between the target object and the other objects in the intensity and polarization dimensions, optionally, in a case where a fifth preset condition and a sixth preset condition are satisfied, the first fusion method is to multiply the first image and the second image. One way of multiplying can be seen in formula (5) above.
The fifth preset condition can include that the intensity difference between the target object and the other objects is less than a fifth preset threshold (corresponding to the characteristic in FIG. 7(a) that the reflectivity corresponding to the manhole cover is similar to that corresponding to the other road surface regions). The sixth preset condition can include that the degree of polarization of the target object is greater than the degree of polarization of the other objects and the difference in degree of polarization between the target object and the other objects is greater than a sixth preset threshold (corresponding to the characteristic in FIG. 7(b) that the degree of polarization corresponding to the manhole cover is greater than that corresponding to the other road surface regions).
For example, since the intensity information can reflect the reflectivity (a larger intensity value can be taken to indicate a larger reflectivity), in one way, whether the fifth preset condition is satisfied can be determined according to the intensity difference between the target object and the other objects determined above, and whether the sixth preset condition is satisfied can be determined according to the polarization-degree difference between the target object and the other objects determined above.
Optionally, in a case where the material information of the target object has been determined, the fifth preset condition can also be that the reflectivity of the target object is less than a predetermined threshold and/or the roughness of the target object is greater than a predetermined threshold, and the sixth preset condition can also be that the glossiness of the target object is greater than a predetermined threshold. The present application does not limit the fifth preset condition and the sixth preset condition, as long as the target object can be distinguished more clearly in the product image.
Referring to FIG. 7(c), the fused image determined by the above fusion method can distinguish object elements such as the manhole cover in the image more clearly.
The specific values of the above thresholds can all be set according to actual needs.
According to the embodiments of the present application, by multiplying the first image and the second image when the difference between the reflectivity of the target object and that of the other objects is small and the degree of polarization of the target object is greater than that of the other objects, the images can be fused in a more targeted manner and adapted to subsequent tasks, the target object can be distinguished more clearly in the fused image, and the accuracy of the results of subsequent task execution can be improved.
The above three scenarios are only three exemplary task scenarios, and in practice more task scenarios than the above can be included; moreover, the above only shows some exemplary ways of determining the first fusion method, and in practice more ways of determining the first fusion method than the above can be included. The first fusion method can be flexibly determined according to different task scenarios, and the present application does not limit this.
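Pulling the three exemplary condition sets together, a rule-based selection of the first fusion method might look like the sketch below; the threshold values are illustrative placeholders, since the application leaves their concrete values to be set according to actual needs.

```python
def select_fusion(intensity_t, intensity_o, dop_t, dop_o,
                  th1=30, th2=30, th3=80, th4=30, th5=15, th6=30):
    """Pick the first fusion method from target ('_t') vs other ('_o') statistics.

    Mirrors the three exemplary condition sets: difference, sum, product.
    All thresholds are hypothetical example values.
    """
    if intensity_t - intensity_o > th1 and dop_o - dop_t > th2:
        return "difference"            # first + second preset conditions
    if intensity_t < th3 and intensity_o < th3 and dop_t - dop_o > th4:
        return "sum"                   # third + fourth preset conditions
    if abs(intensity_t - intensity_o) < th5 and dop_t - dop_o > th6:
        return "product"               # fifth + sixth preset conditions
    return "difference"                # fall back to a default choice
```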
The fused image can be used to perform a corresponding image task, and the image task can be a target detection task, a semantic segmentation task, a classification task, or the like. After the fused image is determined through the embodiments of the present application, it can be input into a corresponding task model to obtain a more accurate task execution result.
FIG. 8(a) and FIG. 8(b) show a schematic diagram of a lane line detection effect according to an embodiment of the present application. Taking the lane line detection task as an example, FIG. 8(a) can correspond to the visualized detection result obtained by inputting the image into the lane line detection model without processing the image by the method of the embodiments of the present application, and FIG. 8(b) can correspond to the visualized result obtained by inputting the image into the lane line detection model after the image has been processed by the method of the embodiments of the present application. The straight lines shown in the figures can represent the detected lane lines. It can be seen that, in the presence of interference such as glare and occlusion, processing the image by the method of the embodiments of the present application before inputting it into the lane line detection model for detection can effectively prevent missed detection and false detection of lane lines, making the lane line detection results more accurate.
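Because the fusion step is decoupled from the downstream task, the fused image can be handed to an existing, intensity-trained model without changing the model itself. The sketch below only shows the adapter step of turning the single-channel fused image into the three-channel uint8 input such models typically expect; `run_lane_detector` stands for whatever task model is actually used and is not defined here.

```python
import numpy as np

def to_model_input(fused: np.ndarray) -> np.ndarray:
    """Clip the fused image to 0-255 and replicate it to three channels."""
    img = np.clip(fused, 0, 255).astype(np.uint8)
    return np.stack([img, img, img], axis=-1)

# Hypothetical usage with an existing detector:
# detections = run_lane_detector(to_model_input(fused))
```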
FIG. 9 shows a structural diagram of an image processing apparatus according to an embodiment of the present application. The apparatus can be used in the above image processing system. As shown in FIG. 9, the apparatus includes:
an acquisition module 901, configured to acquire M images, where the M images are images collected in different polarization states, the M images include a target object and other objects, and M is an integer greater than 2;
a determination module 902, configured to determine a first image and a second image according to the M images, where the first image is used to indicate intensity information of the target object and the other objects, and the second image is used to indicate polarization information of the target object and the other objects;
a fusion module 903, configured to fuse the first image and the second image according to a first fusion method to determine a fused image, where the first fusion method is determined according to material information of the target object, or according to the intensity difference between the target object and the other objects and the difference in degree of polarization between the target object and the other objects.
The M images may be collected at the same moment or at different moments. Different polarization states may indicate different polarization angles at the time of collection. The fused image may be used to perform a target task.
According to the embodiments of the present application, by acquiring images collected in different polarization states and determining a first image and a second image that respectively indicate intensity information and polarization information, the advantages of polarization images of being less affected by illumination changes and showing obvious contrast between different objects can be utilized, the difference between polarization images and traditional intensity images is taken into account, and the images can be processed in a more targeted manner. By fusing the first image and the second image according to the first fusion method, and because the first fusion method can be determined according to the material information of the target object, or according to the intensity difference and the polarization-degree difference between the target object and the other objects, the quality of the obtained fused image can be further improved, polarization image processing can be decoupled from subsequent tasks, and the adaptability of the fused image to conventional task models can be improved, so as to adapt to a variety of task models.
Optionally, the material information of the target object can include the reflectivity, roughness, refractive index and glossiness of the target object.
According to the embodiments of the present application, by determining the reflectivity, roughness, refractive index and glossiness of the target object, the fusion method of the first image and the second image can be designed in a more targeted manner, so as to obtain an image better adapted to subsequent tasks and improve the accuracy of the target task results.
Optionally, the material information of the target object can be obtained according to the intensity difference between the target object and the other objects and the difference in degree of polarization between the target object and the other objects.
According to the embodiments of the present application, by using the intensity difference and the polarization-degree difference between the target object and the other objects, the material information of the target object can be obtained in a more targeted manner, so that the obtained material information better matches the actual material condition of the target object, and the process of determining the material information is more real-time.
Optionally, the first image can be determined according to the average of the pixel intensities of the M images, and the second image can be determined by calculating the ratio of the intensity of the polarized part of the pixels in the M images to the overall pixel intensity.
According to the embodiments of the present application, the characteristics of polarization images can be utilized to determine the overall intensity information by combining the intensity information indicated by the M images, and to determine the overall polarization information by combining the polarization information indicated by the M images.
Optionally, the first fusion method can include one of the following:
calculating the difference between the first image and the second image;
summing the first image and the second image;
multiplying the first image and the second image.
According to the embodiments of the present application, through the variety of first fusion methods, different first fusion methods can be flexibly selected to deal with different task scenarios in a more targeted manner, so that the fused image can be flexibly adapted to various task models and more accurate task results can be obtained.
Optionally, the first fusion method can include one of the following:
f=a1*I-b1*P,
f=a2*I+b2*P,
f=I*c*P;
where f can represent the first fusion method, I can represent the first image, P can represent the second image, a1 and a2 can respectively represent the weights corresponding to the first image, and b1, b2 and c can respectively represent the weights corresponding to the second image.
According to the embodiments of the present application, by setting corresponding weights for the first image and the second image, the preference for intensity information and polarization information can be adjusted more flexibly according to task requirements, so as to respond to different task scenarios in a more targeted manner and improve the adaptability of the fused image.
Optionally, in a case where a first preset condition and a second preset condition are satisfied, the first fusion method can be to calculate the difference between the first image and the second image;
where the first preset condition can include that the reflectivity of the target object is greater than the reflectivity of the other objects and the intensity difference between the target object and the other objects is greater than a first preset threshold, and the second preset condition can include that the degree of polarization of the target object is less than the degree of polarization of the other objects and the difference in degree of polarization between the target object and the other objects is greater than a second preset threshold.
According to the embodiments of the present application, by calculating the difference between the first image and the second image when the reflectivity of the target object is greater than that of the other objects and the degree of polarization of the target object is less than that of the other objects, the images can be fused in a more targeted manner and adapted to subsequent tasks, the target object can be distinguished more clearly in the fused image, and the accuracy of the results of subsequent task execution can be improved.
Optionally, in a case where a third preset condition and a fourth preset condition are satisfied, the first fusion method can be to sum the first image and the second image;
where the third preset condition can include that both the intensity of the target object and the intensity of the other objects are less than a third preset threshold, and the fourth preset condition can include that the degree of polarization of the target object is greater than the degree of polarization of the other objects and the difference in degree of polarization between the target object and the other objects is greater than a fourth preset threshold.
According to the embodiments of the present application, by summing the first image and the second image when both the reflectivity of the target object and the reflectivity of the other objects are relatively small and the degree of polarization of the target object is greater than that of the other objects, the images can be fused in a more targeted manner and adapted to subsequent tasks, the target object can be distinguished more clearly in the fused image, and the accuracy of the results of subsequent task execution can be improved.
Optionally, in a case where a fifth preset condition and a sixth preset condition are satisfied, the first fusion method can be to multiply the first image and the second image;
where the fifth preset condition can include that the intensity difference between the target object and the other objects is less than a fifth preset threshold, and the sixth preset condition can include that the degree of polarization of the target object is greater than the degree of polarization of the other objects and the difference in degree of polarization between the target object and the other objects is greater than a sixth preset threshold.
According to the embodiments of the present application, by multiplying the first image and the second image when the difference between the reflectivity of the target object and that of the other objects is small and the degree of polarization of the target object is greater than that of the other objects, the images can be fused in a more targeted manner and adapted to subsequent tasks, the target object can be distinguished more clearly in the fused image, and the accuracy of the results of subsequent task execution can be improved.
An embodiment of the present application provides an image processing apparatus, including: a processor and a memory for storing instructions executable by the processor, where the processor is configured to implement the above image processing method when executing the instructions.
An embodiment of the present application provides a terminal device, which can execute the above image processing method.
An embodiment of the present application provides a non-volatile computer-readable storage medium on which computer program instructions are stored, where the computer program instructions, when executed by a processor, implement the above image processing method.
An embodiment of the present application provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying the computer-readable code, where when the computer-readable code runs in a processor of an electronic device, the processor in the electronic device executes the above image processing method.
FIG. 10 shows a structural diagram of an electronic device 1000 according to an embodiment of the present application. As shown in FIG. 10, the electronic device 1000 may be the above image processing system. The electronic device 1000 includes at least one processor 1801, at least one memory 1802 and at least one communication interface 1803. In addition, the electronic device may also include general-purpose components such as an antenna, which will not be described in detail here.
The components of the electronic device 1000 are described in detail below with reference to FIG. 10.
The processor 1801 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the above solutions. The processor 1801 may include one or more processing units; for example, the processor 1801 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices or may be integrated in one or more processors.
The communication interface 1803 is used to communicate with other electronic devices or communication networks, such as Ethernet, a radio access network (RAN), a core network, a wireless local area network (WLAN), etc.
The memory 1802 may be a read-only memory (ROM) or other types of static storage devices capable of storing static information and instructions, a random access memory (RAM) or other types of dynamic storage devices capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory may exist independently and be connected to the processor through a bus, or the memory may be integrated with the processor.
The memory 1802 is used to store the application program code for executing the above solutions, and execution is controlled by the processor 1801. The processor 1801 is used to execute the application program code stored in the memory 1802.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference can be made to the relevant descriptions of other embodiments.
A computer-readable storage medium may be a tangible device that can hold and store instructions used by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media include: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card or a raised structure in a groove on which instructions are stored, and any suitable combination of the foregoing.
The computer-readable program instructions or code described here can be downloaded from a computer-readable storage medium to each computing/processing device, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device.
The computer program instructions used to perform the operations of the present application may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA) or a programmable logic array (PLA), can be personalized by using the state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions, thereby implementing various aspects of the present application.
Various aspects of the present application are described here with reference to the flowcharts and/or block diagrams of the methods, apparatuses (systems) and computer program products according to the embodiments of the present application. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer or another programmable data processing apparatus to produce a machine, so that when these instructions are executed by the processor of the computer or other programmable data processing apparatus, an apparatus for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams is produced. These computer-readable program instructions can also be stored in a computer-readable storage medium, and these instructions cause the computer, the programmable data processing apparatus and/or other devices to work in a specific manner, so that the computer-readable medium storing the instructions includes an article of manufacture that includes instructions implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions can also be loaded onto a computer, another programmable data processing apparatus or another device, so that a series of operation steps are performed on the computer, the other programmable data processing apparatus or the other device to produce a computer-implemented process, so that the instructions executed on the computer, the other programmable data processing apparatus or the other device implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the accompanying drawings show the possible architectures, functions and operations of the apparatuses, systems, methods and computer program products according to multiple embodiments of the present application. In this regard, each block in the flowcharts or block diagrams can represent a module, a program segment or a part of an instruction, which contains one or more executable instructions for implementing the specified logical function. In some alternative implementations, the functions marked in the blocks can also occur in an order different from that marked in the drawings. For example, two consecutive blocks can actually be executed substantially in parallel, and they can also sometimes be executed in the reverse order, depending on the functions involved.
It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by hardware (such as circuits or ASICs (application-specific integrated circuits)) that performs the corresponding function or action, or can be implemented by a combination of hardware and software, such as firmware.
Although the present invention has been described here in connection with the embodiments, those skilled in the art can, in the course of implementing the claimed invention, understand and realize other variations of the disclosed embodiments by studying the drawings, the disclosure and the appended claims. In the claims, the word "comprising" does not exclude other components or steps, and "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
The embodiments of the present application have been described above. The above description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or the improvement over technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (21)

  1. An image processing method, characterized in that the method comprises:
    acquiring M images, wherein the M images are images collected in different polarization states, the M images comprise a target object and other objects, and M is an integer greater than 2;
    determining a first image and a second image according to the M images, wherein the first image is used to indicate intensity information of the target object and the other objects, and the second image is used to indicate polarization information of the target object and the other objects;
    fusing the first image and the second image according to a first fusion method to determine a fused image, wherein the first fusion method is determined according to material information of the target object, or according to an intensity difference between the target object and the other objects and a difference in degree of polarization between the target object and the other objects.
  2. The method according to claim 1, characterized in that the material information of the target object comprises the reflectivity, roughness, refractive index and glossiness of the target object.
  3. The method according to claim 1 or 2, characterized in that the material information of the target object is obtained according to the intensity difference between the target object and the other objects and the difference in degree of polarization between the target object and the other objects.
  4. The method according to any one of claims 1-3, characterized in that the first fusion method comprises one of the following:
    calculating the difference between the first image and the second image;
    summing the first image and the second image;
    multiplying the first image and the second image.
  5. The method according to any one of claims 1-4, characterized in that, in a case where a first preset condition and a second preset condition are satisfied, the first fusion method is to calculate the difference between the first image and the second image;
    wherein the first preset condition comprises that the reflectivity of the target object is greater than the reflectivity of the other objects and the intensity difference between the target object and the other objects is greater than a first preset threshold, and the second preset condition comprises that the degree of polarization of the target object is less than the degree of polarization of the other objects and the difference in degree of polarization between the target object and the other objects is greater than a second preset threshold.
  6. The method according to any one of claims 1-5, characterized in that, in a case where a third preset condition and a fourth preset condition are satisfied, the first fusion method is to sum the first image and the second image;
    wherein the third preset condition comprises that both the intensity of the target object and the intensity of the other objects are less than a third preset threshold, and the fourth preset condition comprises that the degree of polarization of the target object is greater than the degree of polarization of the other objects and the difference in degree of polarization between the target object and the other objects is greater than a fourth preset threshold.
  7. The method according to any one of claims 1-6, characterized in that, in a case where a fifth preset condition and a sixth preset condition are satisfied, the first fusion method is to multiply the first image and the second image;
    wherein the fifth preset condition comprises that the intensity difference between the target object and the other objects is less than a fifth preset threshold, and the sixth preset condition comprises that the degree of polarization of the target object is greater than the degree of polarization of the other objects and the difference in degree of polarization between the target object and the other objects is greater than a sixth preset threshold.
  8. The method according to any one of claims 1-7, characterized in that the first fusion method comprises one of the following:
    f=a1*I-b1*P,
    f=a2*I+b2*P,
    f=I*c*P;
    wherein f represents the first fusion method, I represents the first image, P represents the second image, a1 and a2 respectively represent weights corresponding to the first image, and b1, b2 and c respectively represent weights corresponding to the second image.
  9. The method according to any one of claims 1-8, characterized in that the first image is determined according to the average of the pixel intensities of the M images, and the second image is determined by calculating the ratio of the intensity of the polarized part of the pixels in the M images to the overall pixel intensity.
  10. An image processing apparatus, characterized in that the apparatus comprises:
    an acquisition module, configured to acquire M images, wherein the M images are images collected in different polarization states, the M images comprise a target object and other objects, and M is an integer greater than 2;
    a determination module, configured to determine a first image and a second image according to the M images, wherein the first image is used to indicate intensity information of the target object and the other objects, and the second image is used to indicate polarization information of the target object and the other objects;
    a fusion module, configured to fuse the first image and the second image according to a first fusion method to determine a fused image, wherein the first fusion method is determined according to material information of the target object, or according to an intensity difference between the target object and the other objects and a difference in degree of polarization between the target object and the other objects.
  11. The apparatus according to claim 10, characterized in that the material information of the target object comprises the reflectivity, roughness, refractive index and glossiness of the target object.
  12. The apparatus according to claim 10 or 11, characterized in that the material information of the target object is obtained according to the intensity difference between the target object and the other objects and the difference in degree of polarization between the target object and the other objects.
  13. The apparatus according to any one of claims 10-12, characterized in that the first fusion method comprises one of the following:
    calculating the difference between the first image and the second image;
    summing the first image and the second image;
    multiplying the first image and the second image.
  14. The apparatus according to any one of claims 10-13, characterized in that, in a case where a first preset condition and a second preset condition are satisfied, the first fusion method is to calculate the difference between the first image and the second image;
    wherein the first preset condition comprises that the reflectivity of the target object is greater than the reflectivity of the other objects and the intensity difference between the target object and the other objects is greater than a first preset threshold, and the second preset condition comprises that the degree of polarization of the target object is less than the degree of polarization of the other objects and the difference in degree of polarization between the target object and the other objects is greater than a second preset threshold.
  15. The apparatus according to any one of claims 10-14, characterized in that, in a case where a third preset condition and a fourth preset condition are satisfied, the first fusion method is to sum the first image and the second image;
    wherein the third preset condition comprises that both the intensity of the target object and the intensity of the other objects are less than a third preset threshold, and the fourth preset condition comprises that the degree of polarization of the target object is greater than the degree of polarization of the other objects and the difference in degree of polarization between the target object and the other objects is greater than a fourth preset threshold.
  16. The apparatus according to any one of claims 10-15, characterized in that, in a case where a fifth preset condition and a sixth preset condition are satisfied, the first fusion method is to multiply the first image and the second image;
    wherein the fifth preset condition comprises that the intensity difference between the target object and the other objects is less than a fifth preset threshold, and the sixth preset condition comprises that the degree of polarization of the target object is greater than the degree of polarization of the other objects and the difference in degree of polarization between the target object and the other objects is greater than a sixth preset threshold.
  17. The apparatus according to any one of claims 10-16, characterized in that the first fusion method comprises one of the following:
    f=a1*I-b1*P,
    f=a2*I+b2*P,
    f=I*c*P;
    wherein f represents the first fusion method, I represents the first image, P represents the second image, a1 and a2 respectively represent weights corresponding to the first image, and b1, b2 and c respectively represent weights corresponding to the second image.
  18. The apparatus according to any one of claims 10-17, characterized in that the first image is determined according to the average of the pixel intensities of the M images, and the second image is determined by calculating the ratio of the intensity of the polarized part of the pixels in the M images to the overall pixel intensity.
  19. An image processing apparatus, characterized by comprising:
    a processor;
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to implement the method according to any one of claims 1-9 when executing the instructions.
  20. A non-volatile computer-readable storage medium on which computer program instructions are stored, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1-9.
  21. A computer program product, comprising computer-readable code, or a non-volatile computer-readable storage medium carrying the computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device executes the method according to any one of claims 1-9.
PCT/CN2022/136298 2022-12-02 2022-12-02 图像处理方法、装置和存储介质 WO2024113368A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/136298 WO2024113368A1 (zh) 2022-12-02 2022-12-02 图像处理方法、装置和存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/136298 WO2024113368A1 (zh) 2022-12-02 2022-12-02 图像处理方法、装置和存储介质

Publications (1)

Publication Number Publication Date
WO2024113368A1 true WO2024113368A1 (zh) 2024-06-06

Family

ID=91322823

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/136298 WO2024113368A1 (zh) 2022-12-02 2022-12-02 图像处理方法、装置和存储介质

Country Status (1)

Country Link
WO (1) WO2024113368A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190052792A1 (en) * 2017-08-11 2019-02-14 Ut-Battelle, Llc Optical array for high-quality imaging in harsh environments
CN111369464A (zh) * 2020-03-04 2020-07-03 深圳市商汤科技有限公司 Method and apparatus for removing reflections from an image, electronic device and storage medium
CN112488939A (zh) * 2020-11-27 2021-03-12 深圳市中博科创信息技术有限公司 Image processing method, terminal device and storage medium
CN115219026A (zh) * 2022-07-14 2022-10-21 中国科学院长春光学精密机械与物理研究所 Polarization intelligent sensing system and sensing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22966954

Country of ref document: EP

Kind code of ref document: A1